path | concatenated_notebook
---|---|
Deep Learning Specialization/4. Convolutional Neural Networks/Keras_Tutorial_v2a.ipynb | ###Markdown
Keras tutorial - Emotion Detection in Images of FacesWelcome to the first assignment of week 2. In this assignment, you will:1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. 2. See how you can in a couple of hours build a deep learning algorithm. Why are we using Keras? * Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. * Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. * Being able to go from idea to result with the least possible delay is key to finding good models. * However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you would still implement in TensorFlow rather than in Keras. * That being said, Keras will work fine for many common models. Updates If you were working on the notebook before this update...* The current notebook is version "v2a".* You can find your original work saved in the notebook with the previous version name ("v2").* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Changed back-story of model to "emotion detection" from "happy house."* Cleaned/organized wording of instructions and commentary.* Added instructions on how to set `input_shape`* Added explanation of "objects as functions" syntax.* Clarified explanation of variable naming convention.* Added hints for steps 1,2,3,4 Load packages* In this exercise, you'll work on the "Emotion detection" model, which we'll explain below. * Let's load the required packages.
###Code
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Note**: As you can see, we've imported a lot of functions from Keras. You can use them by calling them directly in your code. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`. In other words, unlike TensorFlow, you don't have to create the graph and then make a separate `sess.run()` call to evaluate those variables. 1 - Emotion Tracking* A nearby community health clinic is helping the local residents monitor their mental health. * As part of their study, they are asking volunteers to record their emotions throughout the day.* To help the participants more easily track their emotions, you are asked to create an app that will classify their emotions based on some pictures that the volunteers will take of their facial expressions.* As a proof-of-concept, you first train your model to detect if someone's emotion is classified as "happy" or "not happy."To build and train this model, you have gathered pictures of some volunteers in a nearby neighborhood. The dataset is labeled.Run the following code to normalize the dataset and learn about its shapes.
###Code
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
_____no_output_____
###Markdown
**Details of the "Face" dataset**:- Images are of shape (64,64,3)- Training: 600 pictures- Test: 150 pictures 2 - Building a model in KerasKeras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.Here is an example of a model in Keras:```pythondef model(input_shape): """ input_shape: The height, width and channels as a tuple. Note that this does not include the 'batch' as a dimension. If you have a batch like 'X_train', then you can provide the input_shape using X_train.shape[1:] """ Define the input placeholder as a tensor with shape input_shape. Think of this as your input image! X_input = Input(input_shape) Zero-Padding: pads the border of X_input with zeroes X = ZeroPadding2D((3, 3))(X_input) CONV -> BN -> RELU Block applied to X X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') return model``` Variable naming convention* Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. * Instead of creating unique variable names for each step and each layer, such as ```X = ...Z1 = ...A1 = ...```* Keras re-uses and overwrites the same variable at each step:```X = ...X = ...X = ...```* The exception is `X_input`, which we kept separate since it's needed later. Objects as functions* Notice how there are two pairs of parentheses in each statement. For example:```X = ZeroPadding2D((3, 3))(X_input)```* The first is a constructor call which creates an object (ZeroPadding2D).* In Python, objects can be called as functions. Search for 'python object as function and you can read this blog post [Python Pandemonium](https://medium.com/python-pandemonium/function-as-objects-in-python-d5215e6d1b0d). See the section titled "Objects as functions."* The single line is equivalent to this:```ZP = ZeroPadding2D((3, 3)) ZP is an object that can be called as a functionX = ZP(X_input) ``` **Exercise**: Implement a `HappyModel()`. * This assignment is more open-ended than most. * Start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. * Later, come back and try out other model architectures. * For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. * You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`. **Note**: Be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
###Code
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
"""
Implementation of the HappyModel.
Arguments:
input_shape -- shape of the images of the dataset
(height, width, channels) as a tuple.
Note that this does not include the 'batch' as a dimension.
If you have a batch like 'X_train',
then you can provide the input_shape using
X_train.shape[1:]
Returns:
model -- a Model() instance in Keras
"""
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
# exercise (including the later portions of this notebook) once. The come back also try out other
# network architectures as well.
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:1. Create the model by calling the function above 2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])` 3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)` 4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)` If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/). Step 1: create the model. **Hint**: The `input_shape` parameter is a tuple (height, width, channels). It excludes the batch number. Try `X_train.shape[1:]` as the `input_shape`.
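Before going through the graded cells one at a time, here is a compact, purely illustrative sketch of what the four steps look like chained together (the epoch and batch-size values below are arbitrary examples, not the expected answers):

```python
# Purely illustrative summary of the four steps (hyperparameter values are arbitrary examples):
happyModel = HappyModel(X_train.shape[1:])                                               # 1. create
happyModel.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])   # 2. compile
happyModel.fit(x=X_train, y=Y_train, epochs=10, batch_size=16)                           # 3. train
loss, acc = happyModel.evaluate(x=X_test, y=Y_test)                                      # 4. evaluate
```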
###Code
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Step 2: compile the model

**Hint**: Optimizers you can try include `'adam'`, `'sgd'` or others. See the documentation for [optimizers](https://keras.io/optimizers/). The "happiness detection" task is a binary classification problem, so the loss function you can use is `'binary_crossentropy'`. Note that `'categorical_crossentropy'` won't work with your data set as it's formatted, because the data is an array of 0 or 1 rather than two arrays (one for each category). Documentation for [losses](https://keras.io/losses/)
###Code
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Step 3: train the model**Hint**: Use the `'X_train'`, `'Y_train'` variables. Use integers for the epochs and batch_size**Note**: If you run `fit()` again, the `model` will continue to train with the parameters it has already learned instead of reinitializing them.
###Code
### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs = 5, batch_size = 5)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Step 4: evaluate model **Hint**: Use the `'X_test'` and `'Y_test'` variables to evaluate the model's performance.
###Code
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x=X_test, y=Y_test)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
_____no_output_____
###Markdown
Expected performance

If your `happyModel()` function worked, its accuracy should be better than random guessing (50% accuracy). To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and the "adam" optimizer.

Tips for improving your model

If you have not yet achieved a very good accuracy (>= 80%), here are some tips:

- Use blocks of CONV->BATCHNORM->RELU such as:
```python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
```
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You can then flatten the volume and use a fully-connected layer.
- Use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
- Change your optimizer. We find 'adam' works well.
- If you get memory issues, lower your batch_size (e.g. 12).
- Run more epochs until you see the train accuracy no longer improves.

**Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. Normally, you'll want separate dev and test sets. The dev set is used for parameter tuning, and the test set is used once to estimate the model's performance in production.

3 - Conclusion

Congratulations, you have created a proof of concept for "happiness detection"!

Key Points to remember
- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures.
- Remember the four steps in Keras: 1. Create 2. Compile 3. Fit/Train 4. Evaluate/Test

4 - Test with your own image (Optional)

Congratulations on finishing this assignment. You can now take a picture of your face and see if it can classify whether your expression is "happy" or "not happy". To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is not happy, 1 is happy)!

The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
###Code
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
###Output
_____no_output_____
###Markdown
5 - Other useful functions in Keras (Optional)

Two other basic features of Keras that you'll find useful are:
- `model.summary()`: prints the details of your layers in a table with the sizes of their inputs/outputs
- `plot_model()`: plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.

Run the following code.
###Code
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
###Output
_____no_output_____ |
Daily Assignments/DA26/DA26.ipynb | ###Markdown
Daily Assignment 26 - EEP 118
###Code
# Add Preamble code here (load packages and dataset)
###Output
_____no_output_____
###Markdown
Time Series

Using 41 years of data from 1947 to 1987, we analyze the real wage rate as a function of the average labor productivity in the US.

**1.** Looking at the following graph over time, what potential problem could a regression of wage on labor productivity raise? (*hint: see Lecture notes*) Enter written answer for 1. here

**2.** What are the two procedures that could be employed to solve this problem? (*hint: see Lecture notes*) Enter written answer for 2. here

**3.** Use the results of the regressions (1)-(3) given below to support the argument that you have just developed.

| Model | (1) | (2) | (3) | (4) |
|----------|---------|----------|---------------------------------|----------------------------------|
| Variables | Real Wage | Real Wage | Real Wage (t) - Real Wage (t-1) | Real Wage (t) - Real Wage (t-1) |
| Productivity | 0.0360$^{***}$ | 0.111$^{***}$ | | |
| | (0.00246) | (0.0103) | | |
| Year | | -0.111$^{***}$ | | |
| | | (0.0151) | | |
| Prod(t) - Prod(t-1) | | | 0.0463$^{***}$ | 0.0133 |
| | | | (0.00985) | (0.0113) |
| Constant | 1.528$^{***}$ | 213.3$^{***}$ | -0.0217 | 0.0689$^{***}$ |
| | (0.208) | (28.78) | (0.0185) | (0.0220) |
| Observations | 41 | 41 | 40 | 25 |
| R-squared | 0.846 | 0.936 | 0.368 | 0.057 |
| Years of Observations | 1947-87 | 1947-87 | 1947-87 | 1947-72 |

Each column presents a different regression model. The y variable is listed as the column title and the x variables are the row titles. Standard errors are in parentheses.

Enter written answer for 3. here

**4.** Fully interpret the result of regression (3) (Remember that this means SSS) (*hint: see Lecture notes*) Enter written answer for 4. here

**5.** Compute a confidence interval for the effect of productivity on wage from the results of regression (4) estimated from the first 25 years.
###Code
# add any code for 5. here
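# One possible sketch for 5. (not necessarily the intended solution): a 95% confidence
# interval for the Prod(t) - Prod(t-1) coefficient in regression (4), using the point
# estimate 0.0133 and standard error 0.0113 from the table above
# (25 observations, 2 estimated parameters -> 23 degrees of freedom).
from scipy import stats

b, se, dof = 0.0133, 0.0113, 25 - 2
t_crit = stats.t.ppf(0.975, dof)  # two-sided 95% critical value
print("95% CI:", (b - t_crit * se, b + t_crit * se))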
###Output
_____no_output_____
###Markdown
Enter written answer for 5. here **6.** Can the results from question 5. be interpreted as suggesting that productivity has at most a small effect on wage? (Think of what you have controlled for in this regression). (*hint: see Lecture notes*) Enter written answer for 6. here **7.** Replicate the plot and regression output below using the Stata file `EARNS.dta` (*hint: see Lecture notes and Coding Bootcamps*)
###Code
# add any code for 7. here
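# One possible sketch for 7. (column names below are assumptions, not checked against
# EARNS.dta): load the Stata file with pandas and regress the real wage on productivity
# with statsmodels, then plot both series over time.
import pandas as pd
import statsmodels.formula.api as smf

earns = pd.read_stata('EARNS.dta')
print(earns.columns)  # inspect the actual variable names first
# model = smf.ols('wage ~ productivity', data=earns).fit()   # hypothetical column names
# print(model.summary())
# earns.plot(x='year', y=['wage', 'productivity'])           # hypothetical column names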
###Output
_____no_output_____ |
Phase2/knn-dt.ipynb | ###Markdown
Data Prep
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("../Phase1/main_data.csv", index_col=0)
df = df.dropna()
df
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
X = df.drop('criticality', axis=1).values
Y = df['criticality'].values
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, stratify=Y, test_size=0.2, random_state=42)
# shuffled_data = shuffle(df.values, random_state=0)
# shuffled_data.shape
# test_index = int(shuffled_data.shape[0] * 0.8)
# X_train, X_test = shuffled_data[:test_index, :-1], shuffled_data[test_index:, :-1]
# Y_train, Y_test = shuffled_data[:test_index, -1], shuffled_data[test_index:, -1]
# shuffled_data.shape
print(X_train.shape, X_test.shape)
###Output
(6568, 10) (1643, 10)
###Markdown
Decision Tree
###Code
from sklearn import tree
dt_clf = tree.DecisionTreeClassifier(random_state=0, max_depth=10)
dt_clf.fit(X_train, Y_train)
np.sum(dt_clf.predict(X_test) == Y_test) / len(X_test)
###Output
_____no_output_____
###Markdown
Validation Curve
###Code
#validation curve
from sklearn.model_selection import validation_curve
max_depth_list = range(5, 20)
train_scores, valid_scores = validation_curve(tree.DecisionTreeClassifier(), X_train, Y_train,
param_name="max_depth",
param_range=max_depth_list,
cv=5,
scoring = 'accuracy',
verbose=1, n_jobs=-1
)
print(train_scores.shape)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
xlabel = 'max depth'
ylabel = 'Accuracy'
plt_title = 'Validation Curve, DTClassifier'
fig = plt.figure()
ax = fig.add_subplot(111, xlabel=xlabel, ylabel=ylabel, title=plt_title)
plt.semilogx(max_depth_list, train_scores_mean, label="Training score",
color="darkorange", lw=2)
plt.fill_between(max_depth_list, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=2)
plt.semilogx(max_depth_list, valid_scores_mean, label="Validation score",
color="navy", lw=2)
plt.fill_between(max_depth_list, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.2,
color="navy", lw=2)
plt.legend(loc="best")
plt.savefig('results/validation_curve_DT.jpg', bbox_inches='tight', dpi=200)
plt.show()
###Output
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done 75 out of 75 | elapsed: 1.8s finished
###Markdown
Learning Curve
###Code
#learning curve
from sklearn.model_selection import learning_curve
max_depth_list = range(5, 20)
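# note: learning_curve interprets integer train_sizes as absolute numbers of training
# examples, so each point below is fit on only 5-19 samples (these are sample counts,
# not tree depths)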
ns_list, train_scores, validation_scores = learning_curve(
estimator = tree.DecisionTreeClassifier(),
X = X_train, y = Y_train,
train_sizes = max_depth_list,
scoring = 'accuracy',
n_jobs= -1
)
train_scores_mean = train_scores.mean(axis = 1)
train_scores_std = np.std(train_scores, axis=1)
validation_scores_mean = validation_scores.mean(axis = 1)
validation_scores_std = validation_scores.std(axis = 1)
xlabel = 'Max Depth'
ylabel = 'Accuracy'
plt_title = 'Learning Curve, DecisionTreeClassifier'
fig = plt.figure()
ax = fig.add_subplot(111, xlabel=xlabel, ylabel=ylabel, title=plt_title)
ax.plot(max_depth_list, train_scores_mean, label = 'Training Score')
ax.plot(max_depth_list, validation_scores_mean, label = 'Validation Score')
plt.fill_between(max_depth_list, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkblue", lw=2)
plt.fill_between(max_depth_list, validation_scores_mean - validation_scores_std,
validation_scores_mean + validation_scores_std, alpha=0.2,
color="darkgreen", lw=2)
ax.legend(loc=0)
plt.savefig("results/learning_curve_DT.jpg", bbox_inches='tight', dpi=200)
plt.show()
###Output
_____no_output_____
###Markdown
KNN (Nearest Neighbor Classification)**This takes a little bit of time**
###Code
from sklearn import neighbors
from sklearn.pipeline import Pipeline
print(sorted(neighbors.KDTree.valid_metrics))
nca = neighbors.NeighborhoodComponentsAnalysis(random_state=42)
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
nca_pipe = Pipeline([('nca', nca), ('knn', knn)])
nca_pipe.fit(X_train, Y_train)
nca_pipe.score(X_test, Y_test)
#validation curve
from sklearn.model_selection import validation_curve
n_neighbors_list = range(3, 15)
train_scores, valid_scores = validation_curve(neighbors.KNeighborsClassifier(metric='l1'), X_train, Y_train,
param_name="n_neighbors",
param_range=n_neighbors_list,
cv=5,
scoring = 'accuracy',
verbose=1, n_jobs=-1
)
print(train_scores.shape)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
valid_scores_mean = np.mean(valid_scores, axis=1)
valid_scores_std = np.std(valid_scores, axis=1)
xlabel = '# of Neighbors'
ylabel = 'Accuracy'
plt_title = 'Validation Curve, KNN'
fig = plt.figure()
ax = fig.add_subplot(111, xlabel=xlabel, ylabel=ylabel, title=plt_title)
plt.semilogx(n_neighbors_list, train_scores_mean, label="Training score",
color="darkorange", lw=2)
plt.fill_between(n_neighbors_list, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=2)
plt.semilogx(n_neighbors_list, valid_scores_mean, label="Validation score",
color="navy", lw=2)
plt.fill_between(n_neighbors_list, valid_scores_mean - valid_scores_std,
valid_scores_mean + valid_scores_std, alpha=0.2,
color="navy", lw=2)
plt.legend(loc="best")
plt.savefig('results/validation_curve_KNN.jpg', bbox_inches='tight', dpi=200)
plt.show()
#learning curve
from sklearn.model_selection import learning_curve
n_neighbors_list = range(3, 15)
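# note: these values are passed to learning_curve as train_sizes, i.e. absolute numbers
# of training examples (3-14 samples per point); with fewer samples than n_neighbors=5
# this causes the "Expected n_neighbors <= n_samples" errors shown in the output below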
ns_list, train_scores, validation_scores = learning_curve(
estimator = neighbors.KNeighborsClassifier(metric='l1', n_jobs=4),
X = X_train, y = Y_train,
train_sizes = n_neighbors_list,
scoring = 'accuracy',
n_jobs= -1
)
train_scores_mean = train_scores.mean(axis = 1)
train_scores_std = np.std(train_scores, axis=1)
validation_scores_mean = validation_scores.mean(axis = 1)
validation_scores_std = validation_scores.std(axis = 1)
xlabel = '# of Neighbors'
ylabel = 'Accuracy'
plt_title = 'Learning Curve, KNN'
fig = plt.figure()
ax = fig.add_subplot(111, xlabel=xlabel, ylabel=ylabel, title=plt_title)
ax.plot(n_neighbors_list, train_scores_mean, label = 'Training Score')
ax.plot(n_neighbors_list, validation_scores_mean, label = 'Validation Score')
plt.fill_between(n_neighbors_list, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkblue", lw=2)
plt.fill_between(n_neighbors_list, validation_scores_mean - validation_scores_std,
validation_scores_mean + validation_scores_std, alpha=0.2,
color="darkgreen", lw=2)
ax.legend(loc=0)
plt.savefig("results/learning_curve_KNN.jpg", bbox_inches='tight', dpi=200)
plt.show()
###Output
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 3, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:771: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
Traceback (most recent call last):
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/model_selection/_validation.py", line 762, in _score
scores = scorer(estimator, X_test, y_test)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 216, in __call__
return self._score(
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 258, in _score
y_pred = method_caller(estimator, "predict", X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/metrics/_scorer.py", line 68, in _cached_call
return getattr(estimator, method)(*args, **kwargs)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_classification.py", line 216, in predict
neigh_dist, neigh_ind = self.kneighbors(X)
File "/home/phoenix/Apps/anaconda3/envs/physics/lib/python3.9/site-packages/sklearn/neighbors/_base.py", line 724, in kneighbors
raise ValueError(
ValueError: Expected n_neighbors <= n_samples, but n_samples = 4, n_neighbors = 5
warnings.warn(
###Markdown
MetricsCouldn't think of any other metrics
###Code
# confusion metrics
from sklearn.metrics import confusion_matrix
print("Decision Tree:\n", confusion_matrix(y_true=Y_test, y_pred=dt_clf.predict(X_test), normalize='true', labels=[1, 2, 3]))
print("\n\n")
print("NCA Pipeline:\n", confusion_matrix(y_true=Y_test, y_pred=nca_pipe.predict(X_test), normalize='true', labels=[1, 2, 3]))
###Output
Decision Tree:
[[0.9132948 0.06936416 0.01734104]
[0.31410256 0.53846154 0.1474359 ]
[0.00929752 0.02892562 0.96177686]]
NCA Pipeline:
[[0.9132948 0.05780347 0.02890173]
[0.35897436 0.47435897 0.16666667]
[0.02169421 0.02995868 0.94834711]]
|
lorenz.ipynb | ###Markdown
Lorenz attractor simulation
###Code
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
def lorenz(t, u, σ, ρ, β):
x, y, z = u
dudt = [
σ*(y - x),
x*(ρ - z) - y,
x*y - β*z
]
return dudt
###Output
_____no_output_____
###Markdown
Set up and solve the initial value problem
###Code
σ = 10
β = 8/3
ρ = 28
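# σ = 10, ρ = 28, β = 8/3 is the classic parameter choice for which the Lorenz system is chaotic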
dt = 0.01
u0 = np.random.rand(3)
t_eval = np.arange(0, 80+dt, dt)
t = 0, t_eval[-1]
sol = solve_ivp(lorenz, t, u0, args=(σ, ρ, β), t_eval=t_eval)
###Output
_____no_output_____
###Markdown
Visualize trajectory
###Code
x, y, z = sol.y
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot3D(x, y, z, 'gray')
pass
###Output
_____no_output_____
###Markdown
Generate training data
###Code
sol.y.shape
fig,ax = plt.subplots(1,1,subplot_kw={'projection': '3d'}, figsize=(12,12))
inputs = []
outputs = []
for j in range(100):
u0 = 30*np.random.rand(3) - 0.5
t_eval = np.arange(0, 80+dt, dt)
t = 0, t_eval[-1]
sol = solve_ivp(lorenz, t, u0, args=(σ, ρ, β), t_eval=t_eval)
inputs.append(sol.y[:, :-1].T)
outputs.append(sol.y[:, 1:].T)
ax.plot3D(*sol.y)
ax.plot3D(*sol.y[:,0], 'ro')
ax.view_init(18, -113)
np.save('inputs.npy', np.array(inputs))
np.save('outputs.npy', np.array(outputs))
###Output
_____no_output_____ |
_notebooks/2020-01-01-gol.ipynb | ###Markdown
Conway's Game of Life> The most famous cellular automaton- toc: true - badges: true- comments: false- categories: [jupyter] > youtube: https://youtu.be/lelsVltLZe4 IntroductionThis is a (slightly) modified version of [Glowing Python]( http://glowingpython.blogspot.co.il/2015/10/game-of-life-with-python.html)'s code. I make it available here because it features a few nice things:* how to make a movie using matplotlib.animation* how to write a generator (function with yield)* how to plot a sparse array (```spy```) The code
###Code
import numpy as np
from matplotlib import pyplot as plt
import matplotlib.animation as manimation
def life(X, steps):
"""
Conway's Game of Life.
- X, matrix with the initial state of the game.
- steps, number of generations.
"""
def roll_it(x, y):
# rolls the matrix X in a given direction
# x=1, y=0 left; x=-1, y=0 right;
return np.roll(np.roll(X, y, axis=0), x, axis=1)
for _ in range(steps):
# count the number of neighbours
# the universe is considered toroidal
Y = roll_it(1, 0) + roll_it(0, 1) + \
roll_it(-1, 0) + roll_it(0, -1) + \
roll_it(1, 1) + roll_it(-1, -1) + \
roll_it(1, -1) + roll_it(-1, 1)
# game of life rules
X = np.logical_or(np.logical_and(X, Y == 2), Y == 3)
X = X.astype(int)
yield X
dimensions = (90, 160) # height, width
X = np.zeros(dimensions) # Y by X dead cells
middle_y = dimensions[0] // 2  # integer division so these can be used as array indices
middle_x = dimensions[1] // 2
N_iterations = 600
# acorn initial condition
# http://www.conwaylife.com/w/index.php?title=Acorn
X[middle_y, middle_x:middle_x+2] = 1
X[middle_y, middle_x+4:middle_x+7] = 1
X[middle_y+1, middle_x+3] = 1
X[middle_y+2, middle_x+1] = 1
FFMpegWriter = manimation.writers['ffmpeg']
metadata = dict(title='Game of life', artist='Acorn initial condition')
writer = FFMpegWriter(fps=10, metadata=metadata)
fig = plt.figure()
fig.patch.set_facecolor('black')
with writer.saving(fig, "game_of_life.mp4", 300): # last argument: dpi
plt.spy(X, origin='lower')
plt.axis('off')
writer.grab_frame()
plt.clf()
for i, x in enumerate(life(X, N_iterations)):
plt.title("iteration: {:03d}".format(i + 1))
plt.spy(x, origin='lower')
plt.axis('off')
writer.grab_frame()
plt.clf()
###Output
_____no_output_____ |
week05_nlp/part1_common.ipynb | ###Markdown
Everything below could be done with plain spacy; filtering strings by hand in a DL course is a bit odd. Homework part I: Prohibited Comment Classification (3 points)![img](https://github.com/yandexdataschool/nlp_course/raw/master/resources/banhammer.jpg)__In this notebook__ you will build an algorithm that classifies social media comments into normal or toxic.Like in many real-world cases, you only have a small (10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical nlp methods and an embedding-based approach.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
__Note:__ it is generally a good idea to split data into train/test before anything is done to them. It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat evaluation. Preprocessing and tokenizationComments contain raw text with punctuation, upper/lowercase letters and even newline symbols. To simplify all further steps, we'll split text into space-separated tokens using one of nltk's tokenizers.
###Code
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "fuck you" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = <YOUR CODE>
texts_test = <YOUR CODE>
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
###Output
_____no_output_____
###Markdown
Solving it: bag of words![img](http://www.novuslight.com/uploads/n/BagofWords.jpg)One traditional approach to such a problem is to use bag-of-words features:1. build a vocabulary of frequent words (use train data only)2. for each training sample, count the number of times a word occurs in it (for each word in the vocabulary).3. consider this count a feature for some classifier. __Note:__ in practice, you can compute such features using sklearn. Please don't do that in the current assignment, though.* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
###Code
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurrences (highest first)
k = 10000
<YOUR CODE>
bow_vocabulary = <YOUR CODE>
print('example features:', sorted(bow_vocabulary)[::100])
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
<YOUR CODE>
return np.array(<...>, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
###Output
_____no_output_____
###Markdown
Machine learning stuff: fit, predict, evaluate. You know the drill.
###Code
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Solving it better: word vectorsLet's try another approach: instead of counting per-word frequencies, we shall map all words to pre-trained word vectors and average over them to get text features. This should give us two key advantages: (1) we now have 10^2 features instead of 10^4 and (2) our model can generalize to words that are not in the training dataset. We begin with a standard approach with pre-trained word vectors. However, you may also try* training embeddings from scratch on relevant (unlabeled) data* multiplying word vectors by inverse word frequency in the dataset (like tf-idf)* concatenating several embeddings * call `gensim.downloader.info()['models'].keys()` to get a list of available models* clustering words by their word vectors and trying a bag of cluster_ids__Note:__ loading the pre-trained model may take a while. It's a perfect opportunity to refill your cup of tea/coffee and grab some extra cookies. Or binge-watch some TV series if your internet connection is slow.
###Code
import gensim.downloader
embeddings = gensim.downloader.load("fasttext-wiki-news-subwords-300")
# If you're low on RAM or download speed, use "glove-wiki-gigaword-100" instead. Ignore all further asserts.
def vectorize_sum(comment):
"""
implement a function that converts preprocessed comment to a sum of token vectors
"""
embedding_dim = embeddings.wv.vectors.shape[1]
features = np.zeros([embedding_dim], dtype='float32')
<YOUR CODE>
return features
assert np.allclose(
vectorize_sum("who cares anymore . they attack with impunity .")[::70],
np.array([ 0.0108616 , 0.0261663 , 0.13855131, -0.18510573, -0.46380025])
)
X_train_wv = np.stack([vectorize_sum(text) for text in texts_train])
X_test_wv = np.stack([vectorize_sum(text) for text in texts_test])
wv_model = LogisticRegression().fit(X_train_wv, y_train)
for name, X, y, model in [
('bow train', X_train_bow, y_train, bow_model),
('bow test ', X_test_bow, y_test, bow_model),
('vec train', X_train_wv, y_train, wv_model),
('vec test ', X_test_wv, y_test, wv_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
assert roc_auc_score(y_test, wv_model.predict_proba(X_test_wv)[:, 1]) > 0.92, "something's wrong with your features"
###Output
_____no_output_____
###Markdown
Homework part I: Prohibited Comment Classification (3 points)![img](https://github.com/yandexdataschool/nlp_course/raw/master/resources/banhammer.jpg)__In this notebook__ you will build an algorithm that classifies social media comments into normal or toxic. Like in many real-world cases, you only have a small (10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical NLP methods and an embedding-based approach.
###Code
import pandas as pd
data = pd.read_csv("comments.tsv", sep='\t')
texts = data['comment_text'].values
target = data['should_ban'].values
data[50::200]
from sklearn.model_selection import train_test_split
texts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)
###Output
_____no_output_____
###Markdown
__Note:__ it is generally a good idea to split data into train/test before anything is done to them. It guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algorithm can cheat evaluation. Preprocessing and tokenizationComments contain raw text with punctuation, upper/lowercase letters and even newline symbols. To simplify all further steps, we'll split the text into space-separated tokens using one of the nltk tokenizers.
###Code
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
preprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))
text = 'How to be a grown-up at work: replace "fuck you" with "Ok, great!".'
print("before:", text,)
print("after:", preprocess(text),)
# task: preprocess each comment in train and test
texts_train = list(map(lambda x: ' '.join(tokenizer.tokenize((x.lower()))), texts_train))
texts_test = list(map(lambda x: ' '.join(tokenizer.tokenize((x.lower()))), texts_test))
assert texts_train[5] == 'who cares anymore . they attack with impunity .'
assert texts_test[89] == 'hey todds ! quick q ? why are you so gay'
assert len(texts_test) == len(y_test)
###Output
_____no_output_____
###Markdown
Solving it: bag of words![img](http://www.novuslight.com/uploads/n/BagofWords.jpg)One traditional approach to such a problem is to use bag-of-words features:1. build a vocabulary of frequent words (use train data only)2. for each training sample, count the number of times a word occurs in it (for each word in the vocabulary).3. consider this count a feature for some classifier. __Note:__ in practice, you can compute such features using sklearn. Please don't do that in the current assignment, though.* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`
###Code
# task: find up to k most frequent tokens in texts_train,
# sort them by number of occurrences (highest first)
k = 10000
from collections import Counter
tokens = ' '.join(texts_train).split()
bow_vocabulary = list(map(lambda pair: pair[0], sorted(Counter(tokens).items(), key=lambda item: item[1], reverse=True)[:k]))
print('example features:', sorted(bow_vocabulary)[::100])
def text_to_bow(text):
""" convert text string to an array of token counts. Use bow_vocabulary. """
tokens = tokenizer.tokenize(text.lower())
counts = []
for token in bow_vocabulary:
if token in tokens:
counts.append(tokens.count(token))
else:
counts.append(0)
return np.array(counts, 'float32')
X_train_bow = np.stack(list(map(text_to_bow, texts_train)))
X_test_bow = np.stack(list(map(text_to_bow, texts_test)))
k_max = len(set(' '.join(texts_train).split()))
assert X_train_bow.shape == (len(texts_train), min(k, k_max))
assert X_test_bow.shape == (len(texts_test), min(k, k_max))
assert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))
assert len(bow_vocabulary) <= min(k, k_max)
assert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')
###Output
_____no_output_____
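###Markdown
The note above points to sklearn's `CountVectorizer` as the production way to get these features. Outside the graded cells, a sketch of the equivalent (assuming the same whitespace tokenization; the column order will differ from the manual `bow_vocabulary`, so only aggregate counts are compared here):
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
# analyzer=str.split reproduces the "space-separated tokens" convention used above
vectorizer = CountVectorizer(analyzer=str.split, max_features=k)
X_train_sk = vectorizer.fit_transform(texts_train)   # scipy sparse matrix
X_test_sk = vectorizer.transform(texts_test)
print(X_train_sk.shape, X_test_sk.shape)
# if no tokens were dropped by max_features, per-document totals agree with X_train_bow
print(np.allclose(np.asarray(X_train_sk.sum(axis=1)).ravel(), X_train_bow.sum(axis=1)))
###Output
_____no_output_____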
###Markdown
Machine learning stuff: fit, predict, evaluate. You know the drill.
###Code
from sklearn.linear_model import LogisticRegression
bow_model = LogisticRegression().fit(X_train_bow, y_train)
from sklearn.metrics import roc_auc_score, roc_curve
for name, X, y, model in [
('train', X_train_bow, y_train, bow_model),
('test ', X_test_bow, y_test, bow_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
###Output
_____no_output_____
###Markdown
Solving it better: word vectorsLet's try another approach: instead of counting per-word frequencies, we shall map all words to pre-trained word vectors and average over them to get text features. This should give us two key advantages: (1) we now have 10^2 features instead of 10^4 and (2) our model can generalize to words that are not in the training dataset. We begin with a standard approach with pre-trained word vectors. However, you may also try* training embeddings from scratch on relevant (unlabeled) data* multiplying word vectors by inverse word frequency in the dataset (like tf-idf)* concatenating several embeddings * call `gensim.downloader.info()['models'].keys()` to get a list of available models* clustering words by their word vectors and trying a bag of cluster_ids__Note:__ loading the pre-trained model may take a while. It's a perfect opportunity to refill your cup of tea/coffee and grab some extra cookies. Or binge-watch some TV series if your internet connection is slow.
###Code
import gensim.downloader
embeddings = gensim.downloader.load("fasttext-wiki-news-subwords-300")
# If you're low on RAM or download speed, use "glove-wiki-gigaword-100" instead. Ignore all further asserts.
def vectorize_sum(comment):
"""
implement a function that converts preprocessed comment to a sum of token vectors
"""
embedding_dim = embeddings.wv.vectors.shape[1]
features = np.zeros([embedding_dim], dtype='float32')
tokens = comment.split()
for token in tokens:
if token in embeddings:
features += embeddings[token]
return features
assert np.allclose(
vectorize_sum("who cares anymore . they attack with impunity .")[::70],
np.array([ 0.0108616 , 0.0261663 , 0.13855131, -0.18510573, -0.46380025])
)
X_train_wv = np.stack([vectorize_sum(text) for text in texts_train])
X_test_wv = np.stack([vectorize_sum(text) for text in texts_test])
wv_model = LogisticRegression().fit(X_train_wv, y_train)
for name, X, y, model in [
('bow train', X_train_bow, y_train, bow_model),
('bow test ', X_test_bow, y_test, bow_model),
('vec train', X_train_wv, y_train, wv_model),
('vec test ', X_test_wv, y_test, wv_model)
]:
proba = model.predict_proba(X)[:, 1]
auc = roc_auc_score(y, proba)
plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))
plt.plot([0, 1], [0, 1], '--', color='black',)
plt.legend(fontsize='large')
plt.grid()
assert roc_auc_score(y_test, wv_model.predict_proba(X_test_wv)[:, 1]) > 0.92, "something's wrong with your features"
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
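###Markdown
One of the suggested extensions above is weighting each word vector by inverse document frequency before summing, so that very frequent tokens contribute less. A rough sketch of that idea, reusing the preprocessed texts and the loaded `embeddings` (the smoothing in the idf formula and `max_iter=1000`, added to avoid the convergence warning above, are arbitrary choices, not part of the assignment):
###Code
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
# document frequency of each token, computed on the training texts only
doc_freq = Counter(token for text in texts_train for token in set(text.split()))
n_docs = len(texts_train)
idf = {token: np.log((1 + n_docs) / (1 + df)) + 1 for token, df in doc_freq.items()}
def vectorize_idf_sum(comment):
    """Sum of token vectors, each scaled by its smoothed idf (1.0 for unseen tokens)."""
    features = np.zeros(embeddings.vector_size, dtype='float32')
    for token in comment.split():
        if token in embeddings:
            features += idf.get(token, 1.0) * embeddings[token]
    return features
X_train_idf = np.stack([vectorize_idf_sum(text) for text in texts_train])
X_test_idf = np.stack([vectorize_idf_sum(text) for text in texts_test])
idf_model = LogisticRegression(max_iter=1000).fit(X_train_idf, y_train)
print('idf-weighted test AUC:', roc_auc_score(y_test, idf_model.predict_proba(X_test_idf)[:, 1]))
###Output
_____no_output_____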
|
Pipelines/ETLPipelines/15_scaling_exercise/.ipynb_checkpoints/15_scaling_exercise-solution-checkpoint.ipynb | ###Markdown
Scaling DataIn this exercise, you'll practice scaling data. Sometimes, you'll see the terms **standardization** and **normalization** used interchangeably when referring to feature scaling. However, these are slightly different operations. Standardization refers to scaling a set of values so that they have a mean of zero and a standard deviation of one. Normalization refers to scaling a set of values so that the range is between zero and one. In this exercise, you'll practice implementing standardization and normalization in code. There are libraries, like scikit-learn, that can do this for you; however, in data engineering, you might not always have these tools available. Run this first cell to read in the World Bank GDP and population data. This code cell also filters the data for 2016 and filters out the aggregated values like 'World' and 'OECD Members'.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# read in the projects data set and do basic wrangling
gdp = pd.read_csv('../data/gdp_data.csv', skiprows=4)
gdp.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
population = pd.read_csv('../data/population_data.csv', skiprows=4)
population.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# Reshape the data sets so that they are in long format
gdp_melt = gdp.melt(id_vars=['Country Name'],
var_name='year',
value_name='gdp')
# Use back fill and forward fill to fill in missing gdp values
gdp_melt['gdp'] = gdp_melt.sort_values('year').groupby('Country Name')['gdp'].fillna(method='ffill').fillna(method='bfill')
population_melt = population.melt(id_vars=['Country Name'],
var_name='year',
value_name='population')
# Use back fill and forward fill to fill in missing population values
population_melt['population'] = population_melt.sort_values('year').groupby('Country Name')['population'].fillna(method='ffill').fillna(method='bfill')
# merge the population and gdp data together into one data frame
df_country = gdp_melt.merge(population_melt, on=('Country Name', 'year'))
# filter data for the year 2016
df_2016 = df_country[df_country['year'] == '2016']
# filter out values that are not countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# remove non countries from the data
df_2016 = df_2016[~df_2016['Country Name'].isin(non_countries)]
# show the first ten rows
print('first ten rows of data')
df_2016.head(10)
###Output
_____no_output_____
###Markdown
Exercise - Normalize the DataTo normalize data, you take a feature, like gdp, and use the following formula$x_{normalized} = \frac{x - x_{min}}{x_{max} - x_{min}}$where * x is a value of gdp* x_max is the maximum gdp in the data* x_min is the minimum GDP in the dataFirst, write a function that outputs the x_min and x_max values of an array. The inputs are an array of data (like the GDP data). The outputs are the x_min and x_max values
###Code
def x_min_max(data):
# TODO: Complete this function called x_min_max()
# The input is an array of data as an input
# The outputs are the minimum and maximum of that array
minimum = min(data)
maximum = max(data)
return minimum, maximum
# this should give the result (36572611.88531479, 18624475000000.0)
x_min_max(df_2016['gdp'])
###Output
_____no_output_____
###Markdown
Next, write a function that normalizes a data point. The inputs are an x value, a minimum value, and a maximum value. The output is the normalized data point
###Code
def normalize(x, x_min, x_max):
# TODO: Complete this function
# The input is a single value
# The output is the normalized value
return (x - x_min) / (x_max - x_min)
###Output
_____no_output_____
###Markdown
Why are you making these separate functions? Let's say you are training a machine learning model and using normalized GDP as a feature. As new data comes in, you'll want to make predictions using the new GDP data. You'll have to normalize this incoming data. To do that, you need to store the x_min and x_max from the training set. Hence the x_min_max() function gives you the minimum and maximum values, which you can then store in a variable.A good way to keep track of the minimum and maximum values would be to use a class. In this next section, fill out the Normalizer() class code to make a class that normalizes a data set and stores min and max values.
###Code
class Normalizer():
# TODO: Complete the normalizer class
# The normalizer class receives a dataframe as its only input for initialization
# For example, the data frame might contain gdp and population data in two separate columns
# Follow the TODOs in each section
def __init__(self, dataframe):
# TODO: complete the init function.
# Assume the dataframe has an unknown number of columns like [['gdp', 'population']]
# iterate through each column calculating the min and max for each column
# append the results to the params attribute list
# For example, take the gdp column and calculate the minimum and maximum
# Put these results in a list [minimum, maximum]
# Append the list to the params variable
# Then take the population column and do the same
# HINT: You can put your x_min_max() function as part of this class and use it
self.params = []
for column in dataframe.columns:
            self.params.append(self.x_min_max(dataframe[column]))
    def x_min_max(self, data):
# TODO: complete the x_min_max method
# HINT: You can use the same function defined earlier in the exercise
minimum = min(data)
maximum = max(data)
return minimum, maximum
def normalize_data(self, x):
# TODO: complete the normalize_data method
# The function receives a data point as an input and then outputs the normalized version
# For example, if an input data point of [gdp, population] were used. Then the output would
# be the normalized version of the [gdp, population] data point
# Put the results in the normalized variable defined below
# Assume that the columns in the dataframe used to initialize an object are in the same
# order as this data point x
# HINT: You cannot use the normalize_data function defined earlier in the exercise.
# You'll need to iterate through the individual values in the x variable
# Use the params attribute where the min and max values are stored
normalized = []
for i, value in enumerate(x):
x_max = self.params[i][1]
x_min = self.params[i][0]
normalized.append((x[i] - x_min) / (x_max - x_min))
return normalized
###Output
_____no_output_____
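###Markdown
The introduction also distinguishes standardization (mean zero, standard deviation one) from normalization, but the exercise only implements the latter. A sketch of the standardization counterpart, following the same store-the-training-statistics pattern as `Normalizer` (this class is not part of the original exercise):
###Code
class Standardizer():
    # Store each column's mean and standard deviation at initialization so that
    # new incoming data can be standardized with the *training* statistics.
    def __init__(self, dataframe):
        self.params = []
        for column in dataframe.columns:
            values = dataframe[column]
            self.params.append((values.mean(), values.std()))
    def standardize_data(self, x):
        # x is a single data point with values in the same column order
        standardized = []
        for i, value in enumerate(x):
            mean, std = self.params[i]
            standardized.append((value - mean) / std)
        return standardized
# quick check on the same columns used for the Normalizer below
gdp_standardizer = Standardizer(df_2016[['gdp', 'population']])
print(gdp_standardizer.standardize_data([13424475000000.0, 1300000000]))
###Output
_____no_output_____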
###Markdown
Run the code cells below to check your results
###Code
gdp_normalizer = Normalizer(df_2016[['gdp', 'population']])
# This cell should output: [(36572611.88531479, 18624475000000.0), (11097.0, 1378665000.0)]
gdp_normalizer.params
# This cell should output [0.7207969507229194, 0.9429407193285986]
gdp_normalizer.normalize_data([13424475000000.0, 1300000000])
###Output
_____no_output_____ |
Escalabilidad.ipynb | ###Markdown
W5 - Scalability* Oscar Juárez - 17315* Parallel and Distributed Computing* Date: 11/02/2021 An interactive Python notebook for analyzing whether a program is scalable, based on its speedup, its efficiency, and a growing problem size (the "domain"). **Helper functions used throughout the program**
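The helper functions below implement the usual textbook model: the serial time is taken to be $T_s = n$ (the problem size, called the domain here) and the parallel time on $p$ cores is $T_p = n/p + \log p$, which is what `TParalelo` computes. The quantities plotted later are then $S(n, p) = \frac{T_s}{T_p} = \frac{n}{n/p + \log p}$ and $E(n, p) = \frac{S(n, p)}{p} = \frac{n}{n + p \log p}$.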
###Code
from math import log
import plotly.express as px
import pandas as pd
def SpeedupLineal(Ts, Tp):
return Ts/Tp
def Eficiencia(S, p):
return S/p
def TParalelo(n, p):
return n/p + log(p)
def ObtenerDatos(p):
for n in range(10, 330, 10):
tparalelo = TParalelo(n, p)
speedup = SpeedupLineal(n, tparalelo)
yield speedup, Eficiencia(speedup, p)
###Output
_____no_output_____
###Markdown
**Generating the data**
###Code
dicc = {}
dicc['Dominio'] = [val for val in range(10, 330 ,10)]
for n in [1, 2, 4, 8, 16, 32]:
data = list(ObtenerDatos(n))
dicc[f'S{n}'] = [i[0] for i in data]
dicc[f'E{n}'] = [i[1] for i in data]
df = pd.DataFrame(dicc)
###Output
_____no_output_____
###Markdown
Speedup for each number of cores across the domain
###Code
fig = px.line(df, x='Dominio', y=['S1','S2', 'S4', 'S8','S16', 'S32'],
labels={
'value': 'Speedup',
'variable': 'Núcleos'
},
title='Speedup de Cada Núcleo por Dominio')
fig.show()
###Output
_____no_output_____
###Markdown
Comments on the plot: The speedup grows with the problem size and levels off as it approaches the number of cores, so adding cores pays off mainly for larger problems. With enough tasks to distribute, execution clearly improves, and the largest core counts give the biggest gains once there is enough work to keep them busy. Efficiency for each number of cores across the domain
###Code
fig = px.line(df, x='Dominio', y=['E1','E2', 'E4', 'E8','E16', 'E32'],
labels={
'value': 'Eficiencia',
'variable': 'Núcleos'
},
title='Eficiencia de Cada Núcleo por Dominio')
fig.show()
###Output
_____no_output_____
###Markdown
Comments on the plot: Per-core efficiency also depends on how much work there is. For a very small domain, the efficiency of a large core count is low; as the number of tasks grows, the efficiency of each configuration improves toward its peak. Proposing a value of k: The procedure is repeated, this time with a growth factor k = 2.5 for the number of processors, taking the 32 cores used above as the base.
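With the model above, $E(n, p) = \frac{n}{n + p \log p}$, so holding the efficiency constant while the core count grows from $p$ to $kp$ requires the problem size $n$ to grow roughly in proportion to $kp\log(kp)$ (the isoefficiency relation). The plot below, for $kp = 32 \times 2.5 = 80$ cores, shows how large the domain has to be before the efficiency recovers.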
###Code
k = 2.5
p = 32
dicc = {}
data = list(ObtenerDatos(p*k))
dicc['Dominio'] = [val for val in range(10,330,10)]
dicc['Eficiencia'] = [i[1] for i in data]
df = pd.DataFrame(dicc)
###Output
_____no_output_____
###Markdown
**Plotting the efficiency again**
###Code
fig = px.line(df, x='Dominio', y='Eficiencia',
labels={'value': 'Eficiencia'},
title='Eficiencia de 32 Núcleos por Dominio, con un Factor k de Crecimiento')
fig.show()
###Output
_____no_output_____ |
1_3_Types_of_Features_Image_Segmentation/.ipynb_checkpoints/2. Contour detection and features-checkpoint.ipynb | ###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
# The method, cv2.threshold, returns two outputs.
#The first is the threshold that was used and the second output is the thresholded image.
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
# The outputs are a list of contours and the hierarchy.
# The hierarchy is useful if you have many contours nested within one another.
# The hierarchy defines their relationship to one another.
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
# -1: all of the contours
all_contours = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(all_contours)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for cnt in contours:
# Fit an ellipse to a contour and extract the angle from that ellipse
(x,y), (Ma,ma), angle = cv2.fitEllipse(cnt)
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
:param selectec_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
# Find the bounding rectangle of a selected contour
x,y,w,h = cv2.boundingRect(selected_contour)
# Draw the bounding rectangle on a copy of the original image
box_image = np.copy(contours_image)
box_image = cv2.rectangle(box_image, (x,y), (x+w,y+h), (200,0,200),2)
plt.imshow(box_image)
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
cropped_image = cropped_image[y: y + h, x: x + w]
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = contours[1]
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
if(selected_contour is not None):
plt.imshow(cropped_image)
###Output
_____no_output_____
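###Markdown
The features cell above only extracts the orientation, but the same contours expose the other properties mentioned earlier (area, perimeter) plus a centroid via image moments. A short sketch, just for inspection (the exact numbers depend on the threshold used):
###Code
for i, cnt in enumerate(contours):
    area = cv2.contourArea(cnt)            # enclosed area, in pixels
    perimeter = cv2.arcLength(cnt, True)   # perimeter of the closed contour
    M = cv2.moments(cnt)
    if M['m00'] > 0:                       # centroid from the image moments
        cx, cy = M['m10'] / M['m00'], M['m01'] / M['m00']
        print('contour %d: area=%.0f, perimeter=%.1f, centroid=(%.0f, %.0f)' % (i, area, perimeter, cx, cy))
###Output
_____no_output_____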
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, it's orientation (the direction that most of the contour is pointing in), it's perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with it's thumb up, should have a higher (closer to 90 degrees) orientation than the hand with it's thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
_____no_output_____
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
:param selectec_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = None
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, it's orientation (the direction that most of the contour is pointing in), it's perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with it's thumb up, should have a higher (closer to 90 degrees) orientation than the hand with it's thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for contour in contours:
(x,y), (MA,ma), angle = cv2.fitEllipse(contour)
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
:param selectec_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
x, y, w, h = cv2.boundingRect(selected_contour)
cropped_image = cropped_image[y: y + h, x: x + w]
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = None
selected_index = np.argmax(np.array(angles))
selected_contour = contours[selected_index]
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY_INV)
#retval, binary = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
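###Markdown
The threshold above was tuned by hand (230 here, with a non-inverted variant left commented out). An alternative worth knowing is letting Otsu's method pick the threshold from the grayscale histogram; a sketch (the resulting mask may differ slightly from the hand-tuned one):
###Code
# pass 0 as the threshold and add THRESH_OTSU; the chosen value is returned
otsu_thresh, binary_otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
print('Otsu threshold:', otsu_thresh)
plt.imshow(binary_otsu, cmap='gray')
###Output
_____no_output_____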
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
print(contours[1])
# print(len(contours))
###Output
[[[172 60]]
[[169 63]]
[[169 65]]
[[168 66]]
[[168 67]]
[[167 68]]
[[167 69]]
[[166 70]]
[[166 71]]
[[165 72]]
[[165 73]]
[[164 74]]
[[164 75]]
[[163 76]]
[[163 77]]
[[162 78]]
[[162 79]]
[[160 81]]
[[160 82]]
[[159 83]]
[[159 84]]
[[157 86]]
[[157 87]]
[[156 88]]
[[156 89]]
[[153 92]]
[[153 93]]
[[143 103]]
[[142 103]]
[[134 111]]
[[134 112]]
[[132 114]]
[[132 115]]
[[128 119]]
[[128 120]]
[[115 133]]
[[115 134]]
[[112 137]]
[[112 138]]
[[111 139]]
[[111 140]]
[[110 141]]
[[108 141]]
[[107 142]]
[[105 142]]
[[104 143]]
[[ 96 143]]
[[ 95 142]]
[[ 78 142]]
[[ 77 141]]
[[ 76 141]]
[[ 75 142]]
[[ 74 141]]
[[ 69 141]]
[[ 68 140]]
[[ 64 140]]
[[ 63 139]]
[[ 57 139]]
[[ 56 138]]
[[ 51 138]]
[[ 50 137]]
[[ 43 137]]
[[ 42 136]]
[[ 38 136]]
[[ 37 135]]
[[ 30 135]]
[[ 29 134]]
[[ 25 134]]
[[ 24 133]]
[[ 19 133]]
[[ 18 132]]
[[ 13 132]]
[[ 12 131]]
[[ 9 131]]
[[ 8 130]]
[[ 4 130]]
[[ 3 129]]
[[ 0 129]]
[[ 0 193]]
[[ 1 193]]
[[ 2 194]]
[[ 15 194]]
[[ 16 195]]
[[ 29 195]]
[[ 30 196]]
[[ 31 196]]
[[ 32 195]]
[[ 34 195]]
[[ 35 196]]
[[ 41 196]]
[[ 42 197]]
[[ 43 196]]
[[ 57 196]]
[[ 58 197]]
[[ 77 197]]
[[ 78 198]]
[[ 85 198]]
[[ 86 199]]
[[105 199]]
[[106 200]]
[[107 200]]
[[108 201]]
[[109 201]]
[[110 202]]
[[112 202]]
[[113 203]]
[[115 203]]
[[116 204]]
[[117 204]]
[[118 205]]
[[120 205]]
[[121 206]]
[[122 206]]
[[123 207]]
[[124 207]]
[[125 208]]
[[127 208]]
[[128 209]]
[[129 209]]
[[130 210]]
[[132 210]]
[[133 211]]
[[135 211]]
[[136 212]]
[[139 212]]
[[140 213]]
[[144 213]]
[[145 214]]
[[163 214]]
[[164 215]]
[[167 215]]
[[168 216]]
[[170 216]]
[[171 215]]
[[173 215]]
[[174 214]]
[[176 214]]
[[177 213]]
[[179 213]]
[[180 212]]
[[182 212]]
[[183 211]]
[[184 211]]
[[185 210]]
[[186 210]]
[[188 208]]
[[188 207]]
[[190 205]]
[[190 204]]
[[191 203]]
[[191 198]]
[[194 195]]
[[194 194]]
[[197 191]]
[[197 189]]
[[198 188]]
[[198 187]]
[[199 186]]
[[199 181]]
[[198 180]]
[[198 179]]
[[204 173]]
[[204 172]]
[[205 171]]
[[205 168]]
[[204 167]]
[[204 166]]
[[203 165]]
[[203 163]]
[[202 162]]
[[202 160]]
[[201 159]]
[[201 158]]
[[200 157]]
[[200 155]]
[[201 154]]
[[201 153]]
[[202 152]]
[[202 151]]
[[203 150]]
[[203 148]]
[[204 147]]
[[204 144]]
[[203 143]]
[[203 142]]
[[201 140]]
[[201 138]]
[[200 137]]
[[200 136]]
[[197 133]]
[[196 133]]
[[195 132]]
[[194 132]]
[[193 131]]
[[190 131]]
[[189 132]]
[[184 132]]
[[183 133]]
[[177 133]]
[[176 132]]
[[175 132]]
[[174 131]]
[[173 131]]
[[170 128]]
[[169 128]]
[[168 127]]
[[167 127]]
[[165 125]]
[[165 124]]
[[164 123]]
[[164 122]]
[[163 121]]
[[163 120]]
[[162 119]]
[[162 117]]
[[163 116]]
[[163 114]]
[[164 113]]
[[164 111]]
[[166 109]]
[[166 108]]
[[168 106]]
[[168 105]]
[[170 103]]
[[170 102]]
[[172 100]]
[[172 99]]
[[174 97]]
[[174 96]]
[[175 95]]
[[175 94]]
[[178 91]]
[[178 90]]
[[179 89]]
[[179 88]]
[[180 87]]
[[180 86]]
[[181 85]]
[[181 81]]
[[182 80]]
[[182 66]]
[[181 65]]
[[181 64]]
[[178 61]]
[[177 61]]
[[176 60]]]
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, it's orientation (the direction that most of the contour is pointing in), it's perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with it's thumb up, should have a higher (closer to 90 degrees) orientation than the hand with it's thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for selected_contour in contours:
(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.08085632324219, 82.78831481933594]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
x,y,w,h = cv2.boundingRect(selected_contour)
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
cropped_image = cropped_image[y: y + h, x: x + w]
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = contours[1]
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
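###Markdown
A note on the cell above (added as an aside): `cv2.findContours` returns two values in OpenCV 2.x and 4.x but three values in OpenCV 3.x, which is why later copies of this cell unpack `retval, contours, hierarchy`. The sketch below works with either signature.
###Code
# Version-robust unpacking of cv2.findContours (handles both 2- and 3-value return signatures).
outputs = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contours = outputs[0] if len(outputs) == 2 else outputs[1]
###Output
_____no_output_____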
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for i in range(len(contours)):
(x,y), (MA,ma), angle = cv2.fitEllipse(contours[i])
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
x,y,w,h = cv2.boundingRect(selected_contour)
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
image = np.copy(image)
cropped_image = image[y: y + h, x: x + w]
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = contours[1]
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
print(retval)
plt.imshow(binary, cmap='gray')
###Output
225.0
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
_____no_output_____
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = None
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for i in range(len(contours)):
(x,y), (MA,ma), angle = cv2.fitEllipse(contours[i])
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
# Find the bounding rectangle of a selected contour
x,y,w,h = cv2.boundingRect(selected_contour)
# Draw the bounding rectangle as a purple box
box_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
# Crop using the dimensions of the bounding rectangle (x, y, w, h)
cropped_image = image[y: y + h, x: x + w]
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = contours[1]
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for i in range(len(contours)):
(x,y), (MA,ma), angle = cv2.fitEllipse(contours[i])
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
x,y,w,h = cv2.boundingRect(selected_contour)
## TODO: Crop the image using the dimensions of the bounding rectangle
box_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)
# Make a copy of the image to crop
cropped_image = np.copy(image)
cropped_image = cropped_image[y: y + h, x: x + w]
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = contours[1]
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for each in contours:
(x,y), (MA,ma), angle = cv2.fitEllipse(each)
angles.append(angle)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = None
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(image,contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
for contour in contours:
ellipse = cv2.fitEllipse(contour)
(x,y), (MA,ma), angle = ellipse
cv2.ellipse(image, ellipse, (255,0,0), 2,cv2.LINE_AA)
angles.append(angle)
plt.imshow(image)
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours_image,contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
Angles of each contour (in degrees): [61.35833740234375, 82.27550506591797]
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = None
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____
###Markdown
Finding Contours Import resources and display image
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/thumbs_up_down.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Produce a binary image for finding contours
###Code
# Convert to grayscale
gray = cv2.cvtColor(image,cv2.COLOR_RGB2GRAY)
# Create a binary thresholded image
retval, binary = cv2.threshold(gray, 225, 255, cv2.THRESH_BINARY_INV)
plt.imshow(binary, cmap='gray')
###Output
_____no_output_____
###Markdown
Find and draw the contours
###Code
# Find contours from thresholded, binary image
retval, contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw all contours on a copy of the original image
contours_image = np.copy(image)
contours_image = cv2.drawContours(contours_image, contours, -1, (0,255,0), 3)
plt.imshow(contours_image)
###Output
_____no_output_____
###Markdown
Contour FeaturesEvery contour has a number of features that you can calculate, including the area of the contour, its orientation (the direction that most of the contour is pointing in), its perimeter, and many other properties outlined in [OpenCV documentation, here](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_properties/py_contour_properties.html).In the next cell, you'll be asked to identify the orientations of both the left and right hand contours. The orientation should give you an idea of which hand has its thumb up and which one has its thumb down! OrientationThe orientation of an object is the angle at which an object is directed. To find the angle of a contour, you should first find an ellipse that fits the contour and then extract the `angle` from that shape. ```python Fit an ellipse to a contour and extract the angle from that ellipse(x,y), (MA,ma), angle = cv2.fitEllipse(selected_contour)```**Orientation values**These orientation values are in degrees measured from the x-axis. A value of zero means a flat line, and a value of 90 means that a contour is pointing straight up!So, the orientation angles that you calculated for each contour should be able to tell us something about the general position of the hand. The hand with its thumb up should have a higher (closer to 90 degrees) orientation than the hand with its thumb down. TODO: Find the orientation of each contour
###Code
## TODO: Complete this function so that
## it returns the orientations of a list of contours
## The list should be in the same order as the contours
## i.e. the first angle should be the orientation of the first contour
def orientations(contours):
"""
Orientation
:param contours: a list of contours
:return: angles, the orientations of the contours
"""
# Create an empty list to store the angles in
# Tip: Use angles.append(value) to add values to this list
angles = []
return angles
# ---------------------------------------------------------- #
# Print out the orientation values
angles = orientations(contours)
print('Angles of each contour (in degrees): ' + str(angles))
###Output
_____no_output_____
###Markdown
Bounding RectangleIn the next cell, you'll be asked to find the bounding rectangle around the *left* hand contour, which has its thumb up, then use that bounding rectangle to crop the image and better focus on that one hand!```python Find the bounding rectangle of a selected contourx,y,w,h = cv2.boundingRect(selected_contour) Draw the bounding rectangle as a purple boxbox_image = cv2.rectangle(contours_image, (x,y), (x+w,y+h), (200,0,200),2)```And to crop the image, select the correct width and height of the image to include.```python Crop using the dimensions of the bounding rectangle (x, y, w, h)cropped_image = image[y: y + h, x: x + w] ``` TODO: Crop the image around a contour
###Code
## TODO: Complete this function so that
## it returns a new, cropped version of the original image
def left_hand_crop(image, selected_contour):
"""
Left hand crop
:param image: the original image
    :param selected_contour: the contour that will be used for cropping
:return: cropped_image, the cropped image around the left hand
"""
## TODO: Detect the bounding rectangle of the left hand contour
## TODO: Crop the image using the dimensions of the bounding rectangle
# Make a copy of the image to crop
cropped_image = np.copy(image)
return cropped_image
## TODO: Select the left hand contour from the list
## Replace this value
selected_contour = None
# ---------------------------------------------------------- #
# If you've selected a contour
if(selected_contour is not None):
# Call the crop function with that contour passed in as a parameter
cropped_image = left_hand_crop(image, selected_contour)
plt.imshow(cropped_image)
###Output
_____no_output_____ |
docs/source/Basics/S1S2.ipynb | ###Markdown
S1 S2 function computation
###Code
# sphinx_gallery_thumbnail_path = '../images/Basics_S1S2.png'
def run(Plot, Save):
from PyMieSim.Scatterer import Sphere
from PyMieSim.Source import PlaneWave
Source = PlaneWave(Wavelength = 450e-9,
Polarization = 0,
Amplitude = 1)
Scat = Sphere(Diameter = 600e-9,
Source = Source,
Index = 1.4)
S1S2 = Scat.S1S2(Num=200)
if Plot:
S1S2.Plot()
if Save:
from pathlib import Path
dir = f'docs/images/{Path(__file__).stem}'
S1S2.SaveFig(Directory=dir)
if __name__ == '__main__':
run(Plot=True, Save=False)
###Output
_____no_output_____ |
example/Surrogates/PCE/PCE_Example3.ipynb | ###Markdown
Polynomial Chaos Expansion Example 3 Author: Katiana Kontolati \Date: December 8, 2020 In this example, PCE is used to generate a surrogate model for a given set of 2D data. Six-hump camel function $$ f(\textbf{x}) = \Big(4-2.1x_1^2 + \frac{x_1^4}{3} \Big)x_1^2 + x_1x_2 + (-4 + 4x_2^2)x_2^2$$**Description:** Dimensions: 2**Input Domain:** This function is evaluated on the hypercube $x_1 \in [-3, 3], x_2 \in [-2, 2]$.**Global minimum:** $f(\textbf{x}^*)=-1.0316,$ at $\textbf{x}^* = (0.0898, -0.7126)$ and $(-0.0898, 0.7126)$.**Reference:** Molga, M., & Smutnicki, C. Test functions for optimization needs (2005). Retrieved June 2013, from http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf. Import necessary libraries.
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from UQpy.Distributions import Uniform, JointInd
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from UQpy.Surrogates import *
###Output
_____no_output_____
###Markdown
Define the function.
###Code
def function(x,y):
return (4-2.1*x**2 + x**4/3)*x**2 + x*y + (-4+4*y**2)*y**2
###Output
_____no_output_____
###Markdown
Create a distribution object, generate samples and evaluate the function at the samples.
###Code
np.random.seed(1)
dist_1 = Uniform(loc=-2, scale=4)
dist_2 = Uniform(loc=-1, scale=2)
marg = [dist_1, dist_2]
joint = JointInd(marginals=marg)
n_samples = 250
x = joint.rvs(n_samples)
y = function(x[:,0], x[:,1])
###Output
_____no_output_____
###Markdown
Visualize the 2D function.
###Code
xmin, xmax = -2,2
ymin, ymax = -1,1
X1 = np.linspace(xmin, xmax, 50)
X2 = np.linspace(ymin, ymax, 50)
X1_, X2_ = np.meshgrid(X1, X2) # grid of points
f = function(X1_, X2_)
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X1_, X2_, f, rstride=1, cstride=1, cmap='gnuplot2', linewidth=0, antialiased=False)
ax.set_title('True function')
ax.set_xlabel('$x_1$', fontsize=15)
ax.set_ylabel('$x_2$', fontsize=15)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(20, 50)
fig.colorbar(surf, shrink=0.5, aspect=7)
plt.show()
###Output
_____no_output_____
###Markdown
Visualize training data.
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
ax.scatter(x[:,0], x[:,1], y, s=20, c='r')
ax.set_title('Training data')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(20,50)
ax.set_xlabel('$x_1$', fontsize=15)
ax.set_ylabel('$x_2$', fontsize=15)
#ax.set_xlim(-10,10)
#ax.set_ylim(-6,6)
#ax.set_zlim(-1,1.5)
plt.show()
###Output
_____no_output_____
###Markdown
Create an object from the PCE class.
###Code
max_degree = 6
polys = Polynomials(dist_object=joint, degree=max_degree)
###Output
_____no_output_____
###Markdown
Compute PCE coefficients using least squares regression.
###Code
lstsq = PolyChaosLstsq(poly_object=polys)
pce = PCE(method=lstsq)
pce.fit(x,y)
###Output
_____no_output_____
###Markdown
Compute PCE coefficients using LASSO.
###Code
lasso = PolyChaosLasso(poly_object=polys, learning_rate=0.1, iterations=1000, penalty=0.1)
pce2 = PCE(method=lasso)
pce2.fit(x,y)
###Output
_____no_output_____
###Markdown
Compute PCE coefficients with Ridge regression.
###Code
ridge = PolyChaosRidge(poly_object=polys, learning_rate=0.01, iterations=1000, penalty=0.1)
pce3 = PCE(method=ridge)
pce3.fit(x,y)
###Output
_____no_output_____
###Markdown
PCE surrogate is used to predict the behavior of the function at new samples.
###Code
n_test_samples = 20000
x_test = joint.rvs(n_test_samples)
y_test = pce.predict(x_test)
###Output
_____no_output_____
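###Markdown
For comparison (an added aside, not in the original), the LASSO and Ridge surrogates built above can be evaluated on the same test points to see how closely their predictions agree with the least-squares PCE.
###Code
# Evaluate the LASSO and Ridge PCE surrogates on the same test samples.
y_test_lasso = pce2.predict(x_test)
y_test_ridge = pce3.predict(x_test)
print('Max |lstsq - LASSO| prediction difference:', np.abs(y_test - y_test_lasso).max())
print('Max |lstsq - Ridge| prediction difference:', np.abs(y_test - y_test_ridge).max())
###Output
_____no_output_____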
###Markdown
Plot PCE prediction.
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
ax.scatter(x_test[:,0], x_test[:,1], y_test, s=1)
ax.set_title('PCE predictor')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(20,50)
ax.set_xlim(-2,2)
ax.set_ylim(-1,1)
ax.set_xlabel('$x_1$', fontsize=15)
ax.set_ylabel('$x_2$', fontsize=15)
#ax.set_zlim(0,136)
plt.show()
###Output
_____no_output_____
###Markdown
Error Estimation Validation error.
###Code
n_samples = 150
x_val = joint.rvs(n_samples)
y_val = function(x_val[:,0], x_val[:,1])
error = ErrorEstimation(surr_object=pce)
error2 = ErrorEstimation(surr_object=pce2)
error3 = ErrorEstimation(surr_object=pce3)
print('Error from least squares regression is: ', error.validation(x_val, y_val))
print('Error from LASSO regression is: ', error2.validation(x_val, y_val))
print('Error from Ridge regression is: ', error3.validation(x_val, y_val))
###Output
Error from least squares regression is: 0.0
Error from LASSO regression is: 2e-07
Error from Ridge regression is: 4e-07
###Markdown
Moment Estimation Returns mean and variance of the PCE surrogate.
###Code
n_mc = 1000000
x_mc = joint.rvs(n_mc)
y_mc = function(x_mc[:,0], x_mc[:,1])
mu = np.mean(y_mc)
print('Moments from least squares regression :', MomentEstimation(surr_object=pce).get())
print('Moments from LASSO regression :', MomentEstimation(surr_object=pce2).get())
print('Moments from Ridge regression :', MomentEstimation(surr_object=pce3).get())
print('Moments from Monte Carlo integration: ', (round((1/n_mc)*np.sum(y_mc),6), round((1/n_mc)*np.sum((y_mc-mu)**2),6)))
###Output
Moments from least squares regression : (1.1276, 1.3807)
Moments from LASSO regression : (1.1275, 1.3796)
Moments from Ridge regression : (1.1275, 1.3794)
Moments from Monte Carlo integration: (1.127527, 1.380522)
###Markdown
Polynomial Chaos Expansion Example 3 Author: Katiana Kontolati \Date: December 8, 2020 In this example, PCE is used to generate a surrogate model for a given set of 2D data. Six-hump camel function $$ f(\textbf{x}) = \Big(4-2.1x_1^2 + \frac{x_1^4}{3} \Big)x_1^2 + x_1x_2 + (-4 + 4x_2^2)x_2^2$$**Description:** Dimensions: 2**Input Domain:** This function is evaluated on the hypercube $x_1 \in [-3, 3], x_2 \in [-2, 2]$.**Global minimum:** $f(\textbf{x}^*)=-1.0316,$ at $\textbf{x}^* = (0.0898, -0.7126)$ and $(-0.0898, 0.7126)$.**Reference:** Molga, M., & Smutnicki, C. Test functions for optimization needs (2005). Retrieved June 2013, from http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf. Import necessary libraries.
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from UQpy.Distributions import Uniform, JointInd
from UQpy.Surrogates import *
###Output
_____no_output_____
###Markdown
Define the function.
###Code
def function(x,y):
return (4-2.1*x**2 + x**4/3)*x**2 + x*y + (-4+4*y**2)*y**2
###Output
_____no_output_____
###Markdown
Create a distribution object, generate samples and evaluate the function at the samples.
###Code
np.random.seed(1)
dist_1 = Uniform(loc=-2, scale=4)
dist_2 = Uniform(loc=-1, scale=2)
marg = [dist_1, dist_2]
joint = JointInd(marginals=marg)
n_samples = 250
x = joint.rvs(n_samples)
y = function(x[:,0], x[:,1])
###Output
_____no_output_____
###Markdown
Visualize the 2D function.
###Code
xmin, xmax = -2,2
ymin, ymax = -1,1
X1 = np.linspace(xmin, xmax, 50)
X2 = np.linspace(ymin, ymax, 50)
X1_, X2_ = np.meshgrid(X1, X2) # grid of points
f = function(X1_, X2_)
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X1_, X2_, f, rstride=1, cstride=1, cmap='gnuplot2', linewidth=0, antialiased=False)
ax.set_title('True function')
ax.set_xlabel('$x_1$', fontsize=15)
ax.set_ylabel('$x_2$', fontsize=15)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(20, 50)
fig.colorbar(surf, shrink=0.5, aspect=7)
plt.show()
###Output
_____no_output_____
###Markdown
Visualize training data.
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
ax.scatter(x[:,0], x[:,1], y, s=20, c='r')
ax.set_title('Training data')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(20,50)
ax.set_xlabel('$x_1$', fontsize=15)
ax.set_ylabel('$x_2$', fontsize=15)
#ax.set_xlim(-10,10)
#ax.set_ylim(-6,6)
#ax.set_zlim(-1,1.5)
plt.show()
###Output
_____no_output_____
###Markdown
Create an object from the PCE class.
###Code
max_degree = 6
polys = Polynomials(dist_object=joint, degree=max_degree)
###Output
_____no_output_____
###Markdown
Compute PCE coefficients using least squares regression.
###Code
lstsq = PolyChaosLstsq(poly_object=polys)
pce = PCE(method=lstsq)
pce.fit(x,y)
###Output
_____no_output_____
###Markdown
Compute PCE coefficients using LASSO.
###Code
lasso = PolyChaosLasso(poly_object=polys, learning_rate=0.1, iterations=1000, penalty=0.1)
pce2 = PCE(method=lasso)
pce2.fit(x,y)
###Output
_____no_output_____
###Markdown
Compute PCE coefficients with Ridge regression.
###Code
ridge = PolyChaosRidge(poly_object=polys, learning_rate=0.01, iterations=1000, penalty=0.1)
pce3 = PCE(method=ridge)
pce3.fit(x,y)
###Output
_____no_output_____
###Markdown
PCE surrogate is used to predict the behavior of the function at new samples.
###Code
n_test_samples = 20000
x_test = joint.rvs(n_test_samples)
y_test = pce.predict(x_test)
###Output
_____no_output_____
###Markdown
Plot PCE prediction.
###Code
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
ax.scatter(x_test[:,0], x_test[:,1], y_test, s=1)
ax.set_title('PCE predictor')
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
ax.view_init(20,50)
ax.set_xlim(-2,2)
ax.set_ylim(-1,1)
ax.set_xlabel('$x_1$', fontsize=15)
ax.set_ylabel('$x_2$', fontsize=15)
#ax.set_zlim(0,136)
plt.show()
###Output
_____no_output_____
###Markdown
Error Estimation Validation error.
###Code
n_samples = 150
x_val = joint.rvs(n_samples)
y_val = function(x_val[:,0], x_val[:,1])
error = ErrorEstimation(surr_object=pce)
error2 = ErrorEstimation(surr_object=pce2)
error3 = ErrorEstimation(surr_object=pce3)
print('Error from least squares regression is: ', error.validation(x_val, y_val))
print('Error from LASSO regression is: ', error2.validation(x_val, y_val))
print('Error from Ridge regression is: ', error3.validation(x_val, y_val))
###Output
Error from least squares regression is: 0.0
Error from LASSO regression is: 3e-07
Error from Ridge regression is: 4e-07
###Markdown
Moment Estimation Returns mean and variance of the PCE surrogate.
###Code
n_mc = 1000000
x_mc = joint.rvs(n_mc)
y_mc = function(x_mc[:,0], x_mc[:,1])
mu = np.mean(y_mc)
print('Moments from least squares regression :', MomentEstimation(surr_object=pce).get())
print('Moments from LASSO regression :', MomentEstimation(surr_object=pce2).get())
print('Moments from Ridge regression :', MomentEstimation(surr_object=pce3).get())
print('Moments from Monte Carlo integration: ', (round((1/n_mc)*np.sum(y_mc),6), round((1/n_mc)*np.sum((y_mc-mu)**2),6)))
###Output
Moments from least squares regression : (1.127619, 1.380713)
Moments from LASSO regression : (1.127542, 1.380272)
Moments from Ridge regression : (1.12751, 1.379393)
Moments from Monte Carlo integration: (1.127527, 1.380522)
|
Python_Notebook_Data_Analysis.ipynb | ###Markdown
OverviewIn this week's independent project, you will be working as a Data Scientist for MTN Cote d'Ivoire, a leading telecom company, and you will be solving the following research question.- Currently MTN Cote d'Ivoire would like to upgrade its technology infrastructure for its mobile users in Ivory Coast. Studying the given dataset, how should MTN Cote d'Ivoire go about upgrading its infrastructure within the given cities?Your final deliverable will be a Data Report comprising the following sections:1. Business Understanding 2. Data Understanding 3. Data Preparation 4. Analysis 5. Recommendation 6. EvaluationYou can use the CRISP-DM methodology to guide you while working on the Data Report. Your Data Report will also need to give an objective account, with insights coming mainly from the dataset; however, you can refer to external sources for supporting information. Below are some questions that can get you started:1. Which cities were the most used during the three days?2. Which cities were the most used during business hours and home hours?3. Which was the most used city overall for the three days? etc. The telecom data provided for this project is only a sample (i.e. for only three days). The data files that you will need for this project are as follows:1. cells_geo_description.xlsx [Link]2. cells_geo.csv [Link]3. CDR_description.xlsx [Link]4. CDR 20120507 [http://bit.ly/TelecomDataset1]5. CDR 20120508 [http://bit.ly/TelecomDataset2]6. CDR 20120509 [http://bit.ly/TelecomDataset3]You will use Python for your analysis. Importing Libraries to be used.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
We will need to import the libraries we are going to use. Loading DatasetsWe will load the datasets required for data preparation and analysis.
###Code
#Loading first dataset
df1 = pd.read_csv("/content/cells_geo.csv", delimiter=";")
df1.head()
#Loading second dataset
df2 = pd.read_csv('/content/Telcom_dataset.csv')
df2.head(10)
df2.rename(columns={"PRODUTC": "PRODUCT", "DATETIME": "DATE_TIME"}, inplace=True)
df2
###Output
_____no_output_____
###Markdown
The lines of code above were used to rename some columns so that they match the other datasets.
###Code
#Loading third dataset
df3 = pd.read_csv('/content/Telcom_dataset2.csv')
df3.head(10)
df3.rename(columns={"DW_A_NUMBER": "DW_A_NUMBER_INT", "DW_B_NUMBER": "DW_B_NUMBER_INT"}, inplace=True)
df3
###Output
_____no_output_____
###Markdown
The lines of code above were used to rename some columns so that they match the other datasets.
###Code
#Loading fourth dataset
df4 = pd.read_csv('/content/Telcom_dataset3.csv')
df4.head(10)
df4.rename(columns={"CELLID": "CELL_ID", "SIET_ID": "SITE_ID"}, inplace=True)
df4
###Output
_____no_output_____
###Markdown
The lines of code above were used to rename some columns so that they match the other datasets.
###Code
df_mer = pd.concat([df2,df3,df4])
df_mer
###Output
_____no_output_____
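###Markdown
After `pd.concat`, each of the three daily files keeps its own 0..n row labels, so the index of the combined frame repeats. Resetting it (a small added step) gives every record a unique label before any index-based operations.
###Code
# Give every record a unique row label after concatenation.
df_mer = df_mer.reset_index(drop=True)
df_mer.index.is_unique
###Output
_____no_output_____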
###Markdown
The following line of code merges the combined Telecom dataframe with the cells geo dataset.
###Code
merged = df_mer.merge(df1,left_index=True, right_index=True, how = 'outer')
merged
###Output
_____no_output_____
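###Markdown
The index-based merge above pairs rows by position only, which lines up CDR records with cell geography only if the two files happen to be row-aligned. The sketch below is an added aside; `CELL_ID` is an assumed column name, not confirmed by the dataset description.
###Code
# Inspect which columns the CDR data and the cell geography data share.
print(df_mer.columns.intersection(df1.columns))
# Hypothetical key-based join, assuming a shared CELL_ID column exists:
# merged = df_mer.merge(df1, on='CELL_ID', how='left')
###Output
_____no_output_____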
###Markdown
The following line of code shows the number of null values in each column.
###Code
merged.isna().sum()
###Output
_____no_output_____
###Markdown
The following line of code shows the number of duplicated rows in the merged dataset.
###Code
merged.duplicated().sum()
###Output
_____no_output_____
###Markdown
The following lines of code drop rows with missing values and duplicate rows from the merged dataset.
###Code
merged.dropna(inplace=True)
merged.drop_duplicates(inplace=True)
merged
# The most used cities over the three days, ranked by the number of activity records per city.
cities_used = merged.groupby('VILLES').size().sort_values(ascending=False)
print(cities_used.head(10))
# The business-hours vs. home-hours breakdown needs the hour extracted from DATE_TIME; see the sketch in the next cell.
# The cities with the highest total product value (assumes VALUE is a numeric billing amount).
cities_product = merged.groupby('VILLES')['VALUE'].sum().sort_values(ascending=False)
print(cities_product.head(10))
###Output
_____no_output_____ |
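###Markdown
A possible way to tackle the business-hours vs. home-hours question, sketched under two assumptions that are not given in the dataset description: `DATE_TIME` can be parsed by pandas, and business hours are taken as 08:00-17:00.
###Code
# Hedged sketch: assumes DATE_TIME parses with pd.to_datetime and business hours = 08:00-17:00.
merged['HOUR'] = pd.to_datetime(merged['DATE_TIME'], errors='coerce').dt.hour
valid = merged.dropna(subset=['HOUR'])
business_hours = valid[valid['HOUR'].between(8, 17)]
home_hours = valid[~valid['HOUR'].between(8, 17)]
print(business_hours.groupby('VILLES').size().sort_values(ascending=False).head(10))
print(home_hours.groupby('VILLES').size().sort_values(ascending=False).head(10))
###Output
_____no_output_____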
docs/python/Plots/grid-spec.ipynb | ###Markdown
---title: "Subplots using GridSpec"author: "Charles"date: 2020-08-12description: "-"type: technical_notedraft: false---
###Code
import matplotlib.pyplot as plt
from matplotlib.pyplot import GridSpec
fig2 = plt.figure(constrained_layout=True)
spec2 = GridSpec(ncols=2, nrows=2, figure=fig2)
f2_ax1 = fig2.add_subplot(spec2[0, 0])
f2_ax2 = fig2.add_subplot(spec2[0, 1])
f2_ax3 = fig2.add_subplot(spec2[1, 0])
f2_ax4 = fig2.add_subplot(spec2[1, 1])
###Output
_____no_output_____ |
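###Markdown
GridSpec slices can also make a single axes span several grid cells; the cell below is an added example (not part of the original note) that puts one wide plot on the top row and two plots underneath.
###Code
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig3 = plt.figure(constrained_layout=True)
spec3 = GridSpec(ncols=2, nrows=2, figure=fig3)
f3_ax1 = fig3.add_subplot(spec3[0, :])  # spans both columns of the top row
f3_ax2 = fig3.add_subplot(spec3[1, 0])
f3_ax3 = fig3.add_subplot(spec3[1, 1])
###Output
_____no_output_____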
mdmap_totals_by_category.ipynb | ###Markdown
Summarize total counts of trash by high-level categories for MDMAP dataset
###Code
import pandas as pd
import numpy as np
import json
import seaborn as sns
###Output
_____no_output_____
###Markdown
Import `category_map.csv` and create a dictionary:
###Code
cat_map = pd.read_csv('category_map.csv')
catdict = {key:value for key,value in zip(cat_map['Column Name'], cat_map['High-Level Category'])}
###Output
_____no_output_____
###Markdown
Import cleaned MDMAP_Accumulation data:
###Code
mdmap_all = pd.read_csv('data_processed/mdmap_accumulation_totalarea_zerosremoved.csv')
###Output
_____no_output_____
###Markdown
Map MDMAP trash subcategories to their corresponding `High-Level Category`:
###Code
# Create a dataframe of the subcategories:
mdmap_subset = mdmap_all[['UniqueId',
'Hard Plastic Fragments',
'Foamed Plastic Fragments',
'Filmed Plastic Fragments',
'Food Wrappers',
'Plastic Beverage Bottles',
'Other Jugs/Containers',
'Bottle/Container Caps',
'Cigar Tips',
'Cigarettes',
'Disposable Cigarette Lighters',
'6-Pack Rings',
'Bags',
'Plastic Rope/Net',
'Buoys & Floats',
'Fishing Lures & Line',
'Cups',
'Plastic Utensils',
'Straws',
'Balloons Mylar',
'Personal Care Products',
'Plastic Other',
'Metal',
'Aluminum/Tin Cans',
'Aerosol Cans',
'Metal Fragments',
'Metal Other',
'Glass',
'Glass Beverage Bottles',
'Jars',
'Glass Fragments',
'Glass Other',
'Rubber',
'Flip Flops',
'Rubber Gloves',
'Tires',
'Balloons Latex',
'Rubber Fragments',
'Rubber Other',
'Processed Lumber',
'Cardboard Cartons',
'Paper and Cardboard',
'Paper Bags',
'Lumber/Building Material',
'Processed Lumber Other',
'Cloth/Fabric',
'Clothing & Shoes',
'Gloves (non-rubber)',
'Towels/Rags',
'Rope/Net Pieces (non-nylon)',
'Fabric Pieces',
'Cloth/Fabric Other',
'Unclassified']] # Removed Total Debris and Debris Description
mdmap_long = pd.melt(mdmap_all, id_vars=['UniqueId'],
value_vars=['Hard Plastic Fragments',
'Foamed Plastic Fragments',
'Filmed Plastic Fragments',
'Food Wrappers',
'Plastic Beverage Bottles',
'Other Jugs/Containers',
'Bottle/Container Caps',
'Cigar Tips',
'Cigarettes',
'Disposable Cigarette Lighters',
'6-Pack Rings',
'Bags',
'Plastic Rope/Net',
'Buoys & Floats',
'Fishing Lures & Line',
'Cups',
'Plastic Utensils',
'Straws',
'Balloons Mylar',
'Personal Care Products',
'Plastic Other',
'Metal',
'Aluminum/Tin Cans',
'Aerosol Cans',
'Metal Fragments',
'Metal Other',
'Glass',
'Glass Beverage Bottles',
'Jars',
'Glass Fragments',
'Glass Other',
'Rubber',
'Flip Flops',
'Rubber Gloves',
'Tires',
'Balloons Latex',
'Rubber Fragments',
'Rubber Other',
'Processed Lumber',
'Cardboard Cartons',
'Paper and Cardboard',
'Paper Bags',
'Lumber/Building Material',
'Processed Lumber Other',
'Cloth/Fabric',
'Clothing & Shoes',
'Gloves (non-rubber)',
'Towels/Rags',
'Rope/Net Pieces (non-nylon)',
'Fabric Pieces',
'Cloth/Fabric Other',
'Unclassified'],
var_name='Subcategory', value_name='Count')
mdmap_long['Category'] = mdmap_long['Subcategory'].map(catdict)
# Remove non-numerical values under Count
mdmap_int = mdmap_long[mdmap_long.applymap(np.isreal).Count]
mdmap_int.info()
mdmap_int = mdmap_int.dropna()
mdmap_int.info()
mdmap_int.Count = mdmap_int.Count.astype('int64')
# Calculate the total counts by the High-Level Category:
mdmap_group = mdmap_int.groupby(['UniqueId','Category'], as_index=False).sum()
mdmap_group.head()
mdmap_totals = mdmap_group.pivot(index='UniqueId',
columns='Category',
values='Count')
###Output
_____no_output_____
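###Markdown
An added sanity check (not part of the original notebook): it is worth confirming that every subcategory was found in `catdict`, since any unmapped name would get a NaN `Category` and be dropped later.
###Code
# Hypothetical check: list any subcategories that did not map to a high-level category.
unmapped = mdmap_long.loc[mdmap_long['Category'].isna(), 'Subcategory'].unique()
print('Unmapped subcategories:', unmapped)
###Output
_____no_output_____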
###Markdown
Merge the key location and survey stats with the trash totals:
###Code
# Store the key location and survey stats in a dataframe:
mdmap_orig = mdmap_all[['UniqueId',
'Organization',
'Date',
'Survey Year',
'Survey ID',
'Latitude Start',
'Longitude Start',
'Latitude End',
'Longitude End',
'Width',
'Length',
'TotalArea',
'Total Debris',
'Plastic']]
mdmap_orig.columns = ['UniqueId',
'Organization',
'Date',
'Survey Year',
'Survey ID',
'Latitude Start',
'Longitude Start',
'Latitude End',
'Longitude End',
'Width',
'Length',
'TotalArea',
'Total Debris',
'Plastic Count']
# Merge with map_totals:
# Test:
mdmap_final = pd.merge(mdmap_orig, mdmap_totals, how='outer', on='UniqueId', indicator=True)
mdmap_final.groupby('_merge').count()
mdmap_final = mdmap_final.drop(columns='_merge')
# Calculate debris relative to beach size
mdmap_final['Cloth Per Sq Meter'] = mdmap_final['Cloth']/mdmap_final['TotalArea']
mdmap_final['Fishing Gear Per Sq Meter'] = mdmap_final['Fishing Gear']/mdmap_final['TotalArea']
mdmap_final['Glass Per Sq Meter'] = mdmap_final['Glass']/mdmap_final['TotalArea']
mdmap_final['Metal Per Sq Meter'] = mdmap_final['Metal']/mdmap_final['TotalArea']
mdmap_final['Other Per Sq Meter'] = mdmap_final['Other']/mdmap_final['TotalArea']
mdmap_final['Plastic Per Sq Meter'] = mdmap_final['Plastic']/mdmap_final['TotalArea']
mdmap_final['Processed Lumber Per Sq Meter'] = mdmap_final['Processed Lumber']/mdmap_final['TotalArea']
mdmap_final['Rubber Per Sq Meter'] = mdmap_final['Rubber']/mdmap_final['TotalArea']
mdmap_final['Total Debris Per Sq Meter'] = mdmap_final['Total Debris']/mdmap_final['TotalArea']
mdmap_final.head()
###Output
_____no_output_____
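###Markdown
Equivalently (an added sketch, not part of the original notebook), the per-square-meter columns above could be generated with a short loop over the category names instead of repeating the division line by line.
###Code
# Hypothetical loop version of the per-square-meter calculation above.
for cat in ['Cloth', 'Fishing Gear', 'Glass', 'Metal', 'Other',
            'Plastic', 'Processed Lumber', 'Rubber', 'Total Debris']:
    mdmap_final[cat + ' Per Sq Meter'] = mdmap_final[cat] / mdmap_final['TotalArea']
###Output
_____no_output_____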
###Markdown
How well does the computed plastic count align with the recorded plastic count?
###Code
sns.regplot(x=mdmap_final["Plastic Count"], y=mdmap_final["Plastic"])
###Output
_____no_output_____
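###Markdown
As an added follow-up (not in the original notebook), the agreement visible in the scatter plot can be quantified with a correlation coefficient between the recorded and recomputed plastic counts.
###Code
# Hypothetical check: Pearson correlation between 'Plastic Count' (recorded) and 'Plastic' (computed).
print(mdmap_final['Plastic Count'].corr(mdmap_final['Plastic']))
###Output
_____no_output_____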
###Markdown
Save to file.
###Code
mdmap_final.to_csv('data_processed/mdmap_totals_by_category.csv', index=False)
###Output
_____no_output_____ |
015_pca_dim_reduction_sol.ipynb | ###Markdown
PCA - Principal Component AnalysisWhen dealing with text we looked at the truncated SVD algorithm that could reduce the massive datasets generated from encoding text down to a subset of features. PCA is a similar concept: we can take high-dimensional feature sets and reduce them down to a subset of features used for prediction. PCA is a very common method for dimensionality reduction. PCA Concepts PCA reduces dimensionality by breaking down the variance in the data into its "principal components", then keeping only those components that do the best job in explaining said variance. We can understand this well with an example, in 2D. We'll create something that looks like an example from simple linear regression type of data - we have a bunch of points, each point is located by its X and Y values.
###Code
#make some random numbers
plt.rcParams['figure.figsize'] = 12,6
fig, ax = plt.subplots(1, 2)
X = np.dot(np.random.rand(2, 2), np.random.randn(2, 200)).T
sns.regplot(data=X, x=X[:,0], y=X[:,1], ci=0, ax=ax[0])
ax[0].set_ylabel('Y')
ax[0].set_xlabel('X')
tmpPCA = PCA(2)
tmpData = tmpPCA.fit_transform(X)
sns.regplot(data=tmpData, x=tmpData[:,0], y=tmpData[:,1], ci=0, ax=ax[1])
ax[1].set_ylabel('PC 2')
ax[1].set_xlabel('PC 1')
plt.show()
###Output
_____no_output_____
###Markdown
Principal ComponentsIn normal analysis, each of these points is defined by their X and Y values: X - how far left and right the point is. Y - how far up and down the point is. Together these points explain all of the position data of the points. Once we look at PCA, we can also think of these points being defined by two components: Along the regression line. The majority of the variance in Y is explained by the position along this line. Perpendicular to the regression line. Some smaller part of the variance in Y is explained by how "far off" it is from the regression line.In essence, we can explain the position of our points mostly by examining where it is along the regression line (component 1), along with a little info on how far off it is from that line. These two components can explain our data - "A" amount "up and down" the line, along with "B" amount "off the line". This also explains the position of the points, but does so with different values than X and Y. If we look at the plot of the PCA components, PC1 (plotted as X) has a wide range, or lots of variance. PC2 (plotted as Y) has a small range, or a small amount of variance. Animated ExampleSee: https://setosa.io/ev/principal-component-analysis/ PCA and EigenvectorsThe components generated by the PCA are called eigenvectors. We don't need to worry about much of the math, but this PCA can be calculated by hand with some linear math. We can skip that; computers are good at math. PCA and Dimensionality ReductionOnce we've established the components, reducing the dimensions of our feature set is simple - just reduce the components that matter least to 0. In our example, we'd ignore the "off the line" component that is responsible for only a little bit of the position of our points, and keep the "up the line" component that explains the majority of the position of our points. In the XY system, both X and Y are very important in specifying where a point is, X somewhat more important than Y. In our component system, the "up the line" component provides the majority of the information on our points, with the "off the line" component only adding a little bit of info. This is the key to the dimensionality reduction - if we feature selected away the Y value, we would lose substantial information on the location of the points. If we PCA-away the "off the line" component, we only lose a small amount of information! So we can describe this data "pretty well" with only 1/2 the number of features if we describe the data with the components over the original features. When dealing with large numbers of features, this can allow us to reduce them down to a much smaller number of components, without missing out on too much information describing the real data. The true benefit of PCA is if there are a lot of features. We can do something like the example here to grab the "best" components, drop the rest, and have a smaller feature set with a comparable level of accuracy. Colinearity and Multi-colinearityOne of the other benefits of PCA is that it reduces colinearity between features. The components that PCA generates are orthogonal to each other - the colinearity is reduced to effectively 0. Dimension Reduction in Multiple DimensionsThis 2D example is simple to picture. The same concept applies when we have data with lots of dimensions. We can break the data down into components, remove the least impactful, and end up with a feature set that captures most of the variance in our target with fewer inputs. 
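As a rough illustration of that eigenvector view (an added sketch, not part of the original notebook), the components can be recovered directly from the covariance matrix of centered data with numpy:
###Code
# Added sketch: PCA "by hand" via eigendecomposition of the covariance matrix.
# The eigenvectors are the principal directions; the eigenvalues are proportional
# to the variance explained by each component.
import numpy as np
rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 2))  # correlated 2D data
centered = pts - pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
order = np.argsort(eigvals)[::-1]
print("explained variance ratio:", eigvals[order] / eigvals.sum())
print("principal directions (columns):\n", eigvecs[:, order])
###Output
_____no_output_____
###Markdown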
Example with Real DataThis dataset is one of the sklearn samples, containing measurements from people with and without breast cancer. The classification of cancer/no cancer is the target.
###Code
def sklearn_to_df(sklearn_dataset):
df = pd.DataFrame(sklearn_dataset.data, columns=sklearn_dataset.feature_names)
df['target'] = pd.Series(sklearn_dataset.target)
return df
df = sklearn_to_df(load_breast_cancer())
y1 = df["target"]
X1 = df.drop(columns="target")
df.head()
###Output
_____no_output_____
###Markdown
Pre PCA TestWe can run a test to approximate the accuracy without doing PCA. We don't want accuracy to drop too much after the PCA process. This is our baseline.
###Code
pre_model = LogisticRegression()
pre_scale = MinMaxScaler()
pre_pipe = Pipeline([("scale", pre_scale), ("model", pre_model)])
print("Estimated Initial Accuracy:", np.mean(cross_val_score(pre_pipe, X1, y1)))
###Output
Estimated Initial Accuracy: 0.9613414066138798
###Markdown
Original Dimensionality and CorrelationOne classification target, along with 30 features. We can look for correlation between those features.
###Code
# Check Original Correlation
plt.rcParams['figure.figsize'] = 15,5
sns.heatmap(X1.corr(), cmap="BuPu")
# Calculate VIF for Multicolinearity
from statsmodels.stats.outliers_influence import variance_inflation_factor
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X1.values, i) for i in range(X1.shape[1])]
vif["features"] = X1.columns
vif.sort_values("VIF Factor", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Colinearity ResultsLooks like there is a lot of correlation going on. The heatmap shows many values that are pretty correlated, and the VIF shows some really high values. Recall, VIF values over about 10 are really large. For the model, we'll be sure to use a logistic regression, which is heavily impacted by the colinearity. Feel free to play with the number of components and observe the results.
###Code
#Check accuracy
X_train1, X_test1, y_train1, y_test1 = train_test_split(X1, y1)
can_pca = PCA()
can_model = LogisticRegression()
can_steps = [
("scale", MinMaxScaler()),
("pca", can_pca),
("can_model", can_model)
]
can_pipe = Pipeline(steps=can_steps)
can_params = {
"pca__n_components":[15]
}
clf1 = GridSearchCV(estimator=can_pipe, param_grid=can_params, cv=5, n_jobs=-1)
clf1.fit(X_train1, y_train1.ravel())
print(clf1.score(X_test1, y_test1))
best1 = clf1.best_estimator_
print(best1)
###Output
0.9790209790209791
Pipeline(steps=[('scale', MinMaxScaler()), ('pca', PCA(n_components=15)),
('can_model', LogisticRegression())])
###Markdown
Results - We Have Accuracy!Accuracy looks pretty good, even though we've reduced the number of features. How is the information on our target (the variance) distributed amongst our components?
###Code
# Get PCA Info
comps1 = best1.named_steps['pca'].components_
ev1 = best1.named_steps['pca'].explained_variance_ratio_
plt.rcParams['figure.figsize'] = 6,6
plt.plot(np.cumsum(ev1))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
###Output
_____no_output_____
###Markdown
What is in the PCA Components?We can also reconstruct the importance of the contributions of the different features to the components.
###Code
labels = []
for i in range(len(comps1)):
label = "PC-"+str(i)
labels.append(label)
PCA1_res_comps = pd.DataFrame(comps1,columns=X1.columns, index = labels)
PCA1_res_comps.head()
###Output
_____no_output_____
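###Markdown
An added illustration (not in the original notebook): the loadings table can be summarized by listing, for each component, the features with the largest absolute loadings.
###Code
# Added sketch: top-3 features by absolute loading for each principal component.
PCA1_res_comps.abs().apply(lambda row: row.nlargest(3).index.tolist(), axis=1)
###Output
_____no_output_____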
###Markdown
Results of PCAPCA allows us to reduce down the original 30 feature set to a much smaller number, while still making accurate predictions. In this case, it looks like we can get about 90% of the explained variance in the data by using around 6 or so components. Yay, that's cool! PCA and Feature SelectionPCA is not a feature selection technique. PCA does do a similar thing to feature selection in reducing the size of our feature set that goes into a model, but it is technically different. Feature selection removes features. PCA removes components, which are created from features but are not, themselves, features. In PCA, the features are being transformed for the components to be created, and each component includes portions of multiple features - for example, in the scatter plot above, both the "up the line" and "off the line" components contain parts of the X and Y features. If we drop the "off the line" feature when doing PCA we aren't really eliminating any features - we still need X and Y to calculate each of our components. In the breast cancer example, each of those features still contributes to the components, but the actual predictors are far reduced. ExamplePredict if people have diabetes (Outcome) using PCA to help.
###Code
df = pd.read_csv("data/diabetes.csv")
df.head()
#Get data
y = df["Outcome"]
X = df.drop(columns={"Outcome"})
X_train, X_test, y_train, y_test = train_test_split(X, y)
#Model and grid search of components.
scaler = MinMaxScaler()
logistic = LogisticRegression(max_iter=10000, tol=0.1)
pca_dia = PCA()
pipe = Pipeline(steps=[("scaler", scaler), ("pca", pca_dia), ("logistic", logistic)])
param_grid = {
"pca__n_components": [8]
}
grid = GridSearchCV(pipe, param_grid, n_jobs=4)
grid.fit(X_train, y_train)
best2 = grid.best_estimator_
print("Best parameter (CV score=%0.3f):" % grid.best_score_)
print(grid.best_params_)
###Output
Best parameter (CV score=0.759):
{'pca__n_components': 8}
###Markdown
Plot Component ImportanceWe can plot the effectiveness with different numbers of components.
###Code
comps2 = best2.named_steps['pca'].components_
ev2 = best2.named_steps['pca'].explained_variance_ratio_
plt.rcParams['figure.figsize'] = 6,6
plt.plot(np.cumsum(ev2))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
###Output
_____no_output_____
###Markdown
PCA with Images - Big Dimensions!One common example of something with a large feature set is images - even our simple set of handwritten numbers had 784 features for each digit. Generating models from all 70,000 of those simple images could take forever, and those are about the most simple images we can imagine!Reducing the dimensions of very large images can be highly beneficial, especially if we can keep the useful bits that we need to do identification. Faces, PCA, and YouThis dataset is a more complex set of images than the digits we used previously. It is a set of a bunch of faces of past world leaders, our goal being to make a model that will recognize each person from their picture.
###Code
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
###Output
['Ariel Sharon' 'Colin Powell' 'Donald Rumsfeld' 'George W Bush'
'Gerhard Schroeder' 'Hugo Chavez' 'Junichiro Koizumi' 'Tony Blair']
(1348, 62, 47)
###Markdown
Starting Dimensions and PCA DimensionsWe start with ~1350 images, each 62 x 47 pixels, color depth of 1 - resulting in a feature set that is around 3000 columns wide. We can fit the data to a PCA transformation, and chop the feature set down to a much smaller number of components.
###Code
# Generate PCA and inversed face-sets
pca150 = PCA(150).fit(faces.data)
components150 = pca150.transform(faces.data)
projected150 = pca150.inverse_transform(components150)
pca15 = PCA(15).fit(faces.data)
components15 = pca15.transform(faces.data)
projected15 = pca15.inverse_transform(components15)
###Output
_____no_output_____
###Markdown
Picture Some PicturesWe can look at what the pictures look like in their original state, and after the PCA process has reduced their dimensions by various amounts.
###Code
# Plot faces and PCA faces
fig, ax = plt.subplots(3, 12, figsize=(12, 6),
subplot_kw={'xticks':[], 'yticks':[]},
gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(12):
ax[0,i].imshow(faces.data[i].reshape(62, 47), cmap='bone')
ax[1,i].imshow(projected150[i].reshape(62, 47), cmap='bone')
ax[2,i].imshow(projected15[i].reshape(62, 47), cmap='bone')
ax[0, 0].set_ylabel('Original')
ax[1, 0].set_ylabel('150-dim')
ax[2, 0].set_ylabel('15-dim')
###Output
_____no_output_____
###Markdown
Amount of Variance Captured in ComponentsWe can look at our PCA'd data and see that while the images are much less clear and defined, they are pretty similar on the whole! We can probably still do a good job of IDing the people, even though we have roughly 1/20 (or 1/200) the number of features as we started with. Cool. Even with the 15 component set, the images are still somewhat able to be recognized. The PCA allows us to call up the details on how much of the variance was captured in each component. The first few contain lots of the useful info; once we reach 20 components we have about ~75% or so of the original variance.
###Code
plt.plot(np.cumsum(pca150.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
###Output
_____no_output_____
###Markdown
Scree Plot and Number of ComponentsOne question we're left with is how many components should we keep? This answer varies, common suggestions are enough to capture somewhere around 80% to 95% of the explained variance. These metrics are somewhat arbitrary - testing different numbers of components will likely make sense in many cases. One method to choose the number of features is a scree plot. This is a plot that shows the contribution of each component. The scree plot shows the same information as the graph above, but formatted differently. The idea of a scree plot is to find the "elbow", or where the plot levels out. This flattening point is approximately where you should cut off the number of components - the idea being that you capture all the components that make a substantial difference, and let the ones that make a small difference go. Personally, I think the cumulative plot above is easier to view, but scree plots are pretty common.
###Code
#Scree Plot
PC_values = np.arange(pca150.n_components_) + 1
plt.plot(PC_values, pca150.explained_variance_ratio_, 'o-', linewidth=2, color='blue')
plt.title('Scree Plot')
plt.xlabel('Principal Component')
plt.ylabel('Variance Explained')
plt.show()
###Output
_____no_output_____
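###Markdown
A related shortcut (added here as an illustration, not part of the original notebook): scikit-learn's PCA accepts a float between 0 and 1 for `n_components`, in which case it keeps just enough components to reach that fraction of explained variance.
###Code
# Added sketch: let PCA choose the number of components for ~95% explained variance.
pca95 = PCA(n_components=0.95, svd_solver='full').fit(faces.data)
print("components kept for 95% explained variance:", pca95.n_components_)
###Output
_____no_output_____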
###Markdown
Predictions with PCAWe can try to make some predictions and see what the results are with PCA'd data. We'll use a multinomial HP to tell our regression to directly predict multiple classes with our friend the softmax.
###Code
#Get data
y = faces.target
X = faces.data
X_train, X_test, y_train, y_test = train_test_split(X, y)
#Model and grid search of components.
scaler = MinMaxScaler()
logistic = LogisticRegression(max_iter=10000, tol=0.1, multi_class="multinomial")
pca_dia = PCA()
pipe = Pipeline(steps=[("scaler", scaler), ("pca", pca_dia), ("logistic", logistic)])
param_grid = {
"pca__n_components": [130]
}
grid = GridSearchCV(pipe, param_grid, n_jobs=-1)
grid.fit(X_train, y_train.ravel())
print("Best parameter (CV score=%0.3f):" % grid.best_score_)
print(grid.best_params_)
print("Test Score:", grid.score(X_test, y_test))
###Output
Best parameter (CV score=0.821):
{'pca__n_components': 130}
Test Score: 0.8308605341246291
###Markdown
Kernel PCASimilarly to support vector machines, we can use a kernel transformation to make PCA better suit data with non-linear relationships. The concept is the same as with the SVMs - we can provide a kernel that does a transformation, then the linear algebra of PCA can be executed on the transformed data. The implementation is very simple - we replace PCA with KernelPCA, and provide the kernel we want to use. We can see if a different kernel is better than the original... Try with a grid search of the different kernels other than linear. Also, for the polynomial kernel, try with multiple values in the grid search. Documentation is: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.KernelPCA.html
###Code
# Use Kernel PCA
from sklearn.decomposition import KernelPCA
#Get data
y = faces.target
X = faces.data
X_train, X_test, y_train, y_test = train_test_split(X, y)
#Model and grid search of components.
scaler = MinMaxScaler()
logistic = LogisticRegression(max_iter=10000, tol=0.1, multi_class="multinomial")
pca_dia = KernelPCA()
pipe = Pipeline(steps=[("scaler", scaler), ("pca", pca_dia), ("logistic", logistic)])
param_grid = {
"pca__n_components": [150],
"pca__kernel": ["poly", "rbf", "sigmoid", "cosine"],
"pca__degree": [2,3,4,5,6,7,8,9,10,11,12,13,14,15]
}
grid = GridSearchCV(pipe, param_grid, n_jobs=-1)
grid.fit(X_train, y_train.ravel())
print("Best parameter (CV score=%0.3f):" % grid.best_score_)
print(grid.best_params_)
print("Test Score:", grid.score(X_test, y_test))
###Output
Best parameter (CV score=0.790):
{'pca__degree': 14, 'pca__kernel': 'poly', 'pca__n_components': 150}
Test Score: 0.7863501483679525
###Markdown
Sparse PCASparse PCA is another implementation of PCA that includes L1 regularization - resulting in some of the values being regularized down to 0. The end result of this is that you end up with a subset of the features being used to construct the components. The others are feature selected out, just like Lasso regression.
###Code
from sklearn.decomposition import SparsePCA
sPCA = SparsePCA(15)
sparse = sPCA.fit_transform(X1)
comps3 = sPCA.components_
labels = []
for i in range(len(comps3)):
label = "PC-"+str(i)
labels.append(label)
PCA3_res_comps = pd.DataFrame(comps3, columns=X1.columns, index = labels)
PCA3_res_comps.head()
PCA3_res_comps.describe().T.sort_values("mean")
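# Added illustrative check (not in the original cell): the L1 penalty drives many
# loadings to exactly zero, so each sparse component uses only a subset of the features.
print((comps3 == 0).sum(axis=1))  # number of zeroed-out feature loadings per component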
###Output
_____no_output_____ |
DeepLearning/ipython(guide)/0_Best_Result_classification_1_just_accuracy_CNN.ipynb | ###Markdown
A neural network consisting of a CNN layer (Yoon Kim, 2014) and 4 fully connected layers.
Source: https://github.com/jojonki/cnn-for-sentence-classification
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/MyDrive/sharif/DeepLearning/ipython(guide)')
import numpy as np
import codecs
import os
import random
import pandas
from keras import backend as K
from keras.models import Model
from keras.layers.embeddings import Embedding
from keras.layers import Input, Dense, Lambda, Permute, Dropout
from keras.layers import Conv2D, MaxPooling1D
from keras.optimizers import SGD
import ast
import re
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
import gensim
from keras.models import load_model
from keras.callbacks import EarlyStopping, ModelCheckpoint
limit_number = 750
data = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv',index_col=0)
data = data.dropna().reset_index(drop=True)
X = data["body"].values.tolist()
y = pandas.read_csv('../Data/limited_to_'+str(limit_number)+'.csv')
labels = []
tag=[]
for item in y['tag']:
labels += [i for i in re.sub('\"|\[|\]|\'| |=','',item.lower()).split(",") if i!='' and i!=' ']
tag.append([i for i in re.sub('\"|\[|\]|\'| |=','',item.lower()).split(",") if i!='' and i!=' '])
labels = list(set(labels))
mlb = MultiLabelBinarizer()
Y=mlb.fit_transform(tag)
len(labels)
sentence_maxlen = max(map(len, (d for d in X)))
print('sentence maxlen', sentence_maxlen)
freq_dist = pandas.read_csv('../Data/FreqDist_sorted.csv',index_col=False)
vocab=[]
for item in freq_dist["word"]:
try:
word=re.sub(r"[\u200c-\u200f]","",item.replace(" ",""))
vocab.append(word)
except:
pass
print(vocab[10])
vocab = sorted(vocab)
vocab_size = len(vocab)
print('vocab size', len(vocab))
w2i = {w:i for i,w in enumerate(vocab)}
# i2w = {i:w for i,w in enumerate(vocab)}
print(w2i["زبان"])
def vectorize(data, sentence_maxlen, w2i):
vec_data = []
for d in data:
vec = [w2i[w] for w in d if w in w2i]
pad_len = max(0, sentence_maxlen - len(vec))
vec += [0] * pad_len
vec_data.append(vec)
# print(d)
vec_data = np.array(vec_data)
return vec_data
vecX = vectorize(X, sentence_maxlen, w2i)
vecY=Y
X_train, X_test, y_train, y_test = train_test_split(vecX, vecY, test_size=0.2)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25)
print('train: ', X_train.shape , '\ntest: ', X_test.shape , '\nval: ', X_val.shape ,"\ny_tain:",y_train.shape )
# print(vecX[0])
embd_dim = 300
###Output
_____no_output_____
###Markdown
***If the word2vec model has not been generated before, run the next block.***
###Code
# embed_model = gensim.models.Word2Vec(X, size=embd_dim, window=5, min_count=5)
# embed_model.save('word2vec_model')
###Output
_____no_output_____
###Markdown
***Otherwise, we can run the next block.***
###Code
embed_model=gensim.models.Word2Vec.load('word2vec_model')
word2vec_embd_w = np.zeros((vocab_size, embd_dim))
for word, i in w2i.items():
if word in embed_model.wv.vocab:
embedding_vector =embed_model[word]
# words not found in embedding index will be all-zeros.
word2vec_embd_w[i] = embedding_vector
from keras.layers import LSTM
def Net(vocab_size, embd_size, sentence_maxlen, glove_embd_w):
sentence = Input((sentence_maxlen,), name='SentenceInput')
# embedding
embd_layer = Embedding(input_dim=vocab_size,
output_dim=embd_size,
weights=[word2vec_embd_w],
trainable=False,
name='shared_embd')
embd_sentence = embd_layer(sentence)
embd_sentence = Permute((2,1))(embd_sentence)
embd_sentence = Lambda(lambda x: K.expand_dims(x, -1))(embd_sentence)
# cnn
cnn = Conv2D(1,
kernel_size=(5, sentence_maxlen),
activation='relu')(embd_sentence)
cnn = Lambda(lambda x: K.sum(x, axis=3))(cnn)
cnn = MaxPooling1D(3)(cnn)
cnn = Lambda(lambda x: K.sum(x, axis=2))(cnn)
hidden1=Dense(400,activation="relu")(cnn)
hidden2=Dense(300,activation="relu")(hidden1)
hidden3=Dense(200,activation="relu")(hidden2)
hidden4=Dense(150,activation="relu")(hidden3)
out = Dense(len(labels), activation='sigmoid')(hidden4)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model = Model(inputs=sentence, outputs=out, name='sentence_claccification')
model.compile(optimizer=sgd, loss='binary_crossentropy',metrics=["accuracy"])
return model
model = Net(vocab_size, embd_dim, sentence_maxlen,word2vec_embd_w)
print(model.summary())
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=50) # Stop training after 50 epochs without an improvement in validation loss
mc = ModelCheckpoint('best_cnn_4fc.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True) # Save the model weights from the epoch with the lowest validation loss
model.fit(X_train, y_train, batch_size=32,epochs=50,verbose=1,validation_data=(X_val, y_val),callbacks=[es,mc]) # You can train for more epochs; early stopping halts after 50 epochs without a better validation loss
###Output
Epoch 1/50
403/403 [==============================] - 675s 2s/step - loss: 0.6377 - accuracy: 0.0230 - val_loss: 0.5882 - val_accuracy: 0.0263
Epoch 00001: val_loss improved from inf to 0.58824, saving model to best_cnn_4fc.h5
Epoch 2/50
403/403 [==============================] - 670s 2s/step - loss: 0.5466 - accuracy: 0.0285 - val_loss: 0.5083 - val_accuracy: 0.0263
Epoch 00002: val_loss improved from 0.58824 to 0.50832, saving model to best_cnn_4fc.h5
Epoch 3/50
403/403 [==============================] - 667s 2s/step - loss: 0.4757 - accuracy: 0.0312 - val_loss: 0.4459 - val_accuracy: 0.0263
Epoch 00003: val_loss improved from 0.50832 to 0.44592, saving model to best_cnn_4fc.h5
Epoch 4/50
403/403 [==============================] - 665s 2s/step - loss: 0.4201 - accuracy: 0.0278 - val_loss: 0.3966 - val_accuracy: 0.0263
Epoch 00004: val_loss improved from 0.44592 to 0.39665, saving model to best_cnn_4fc.h5
Epoch 5/50
403/403 [==============================] - 661s 2s/step - loss: 0.3759 - accuracy: 0.0288 - val_loss: 0.3573 - val_accuracy: 0.0263
Epoch 00005: val_loss improved from 0.39665 to 0.35726, saving model to best_cnn_4fc.h5
Epoch 6/50
403/403 [==============================] - 659s 2s/step - loss: 0.3404 - accuracy: 0.0288 - val_loss: 0.3254 - val_accuracy: 0.0263
Epoch 00006: val_loss improved from 0.35726 to 0.32538, saving model to best_cnn_4fc.h5
Epoch 7/50
403/403 [==============================] - 663s 2s/step - loss: 0.3115 - accuracy: 0.0288 - val_loss: 0.2993 - val_accuracy: 0.0263
Epoch 00007: val_loss improved from 0.32538 to 0.29927, saving model to best_cnn_4fc.h5
Epoch 8/50
97/403 [======>.......................] - ETA: 7:49 - loss: 0.2965 - accuracy: 0.0290
###Markdown
***If the model was generated before, it can be loaded instead:***
###Code
# model = load_model('CNN_1_just_accuracy.h5')
# model.save('CNN_1_just_accuracy.h5')
model.save('CNN_1_just_accuracy.h5')
X_test.shape , X_train.shape
pred=model.predict(X_test)
# For evaluation: if the predicted probability exceeds the decision threshold, the sample is assigned to that class.
print(pred[0])#example
y_pred=[]
measure = .23#9*(np.mean(pred[0]) + .5*np.sqrt(np.var(pred[0])))
for l in pred:
temp=[]
for value in l:
if value >= measure:
temp.append(1)
else:
temp.append(0)
y_pred.append(temp)
3*(np.mean(pred[0]) + .5*np.sqrt(np.var(pred[0])))
from sklearn.metrics import classification_report,accuracy_score
print("accuracy=",accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
_____no_output_____
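###Markdown
An added illustration (not part of the original notebook): instead of fixing the decision threshold at 0.23, it could be tuned on the validation split, e.g. by picking the value that maximizes the micro-averaged F1 score.
###Code
# Hypothetical threshold sweep; assumes model, X_val and y_val are still in scope.
from sklearn.metrics import f1_score
val_pred = model.predict(X_val)
best_t, best_f1 = 0.5, 0.0
for t in np.arange(0.05, 0.55, 0.05):
    f1 = f1_score(y_val, (val_pred >= t).astype(int), average='micro')
    if f1 > best_f1:
        best_t, best_f1 = t, f1
print('Best threshold:', best_t, 'micro-F1:', best_f1)
###Output
_____no_output_____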
###Markdown
ROC Curve
###Code
from sklearn.metrics import roc_curve,auc
import matplotlib.pyplot as plt
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(Y[0].shape[0]):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], pred[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
for i in range(Y[0].shape[0]):
plt.figure()
plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
PolyRegression/Polynomial_Regression.ipynb | ###Markdown
USING POLYNOMIAL REGRESSION
###Code
# Training for Polynomial Regression
from sklearn.preprocessing import PolynomialFeatures
poly_reg= PolynomialFeatures(degree=4)
X_poly=poly_reg.fit_transform(X)
from sklearn.linear_model import LinearRegression
lin_reg=LinearRegression()
lin_reg.fit(X_poly,Y)
# Visualization of Polynomial Regression Results
plt.scatter(X, Y, color='red')
plt.plot(X, lin_reg.predict(poly_reg.fit_transform(X)), color='blue')
plt.title('TRUTH OR BLUFF (Polynomial Regression)')
plt.xlabel('Position Level')
plt.ylabel('Salary')
plt.show()
#Predicting results with polynomial regression
n= float(input("Enter the level of position:"))
lin_reg.predict(poly_reg.fit_transform([[n]]))
###Output
Enter the level of position:6.5
###Markdown
USING LINEAR REGRESSION
###Code
# Training dataset for linear regression
from sklearn.linear_model import LinearRegression
lin_reg_2=LinearRegression()
lin_reg_2.fit(X,Y)
# Visualization of Regression Results
plt.scatter(X, Y, color='red')
plt.plot(X, lin_reg_2.predict(X), color='blue')
plt.title('TRUTH OR BLUFF (Linear Regression)')
plt.xlabel('Position Level')
plt.ylabel('Salary')
plt.show()
#Predicting results with Linear regression
m= float(input("Enter the level of position:"))
lin_reg_2.predict([[m]])
###Output
Enter the level of position:6.5
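###Markdown
An added comparison (not part of the original notebook): the two fits can also be compared quantitatively with the R² score on the training data, which makes the "truth or bluff" contrast explicit.
###Code
# Hypothetical goodness-of-fit comparison on the training data.
from sklearn.metrics import r2_score
print('Linear regression R^2  :', r2_score(Y, lin_reg_2.predict(X)))
print('Polynomial (deg 4) R^2 :', r2_score(Y, lin_reg.predict(poly_reg.fit_transform(X))))
###Output
_____no_output_____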
|
Shareable_HRC_ML_Task_Model_Hamdan.ipynb | ###Markdown
Author: Chaudhary Hamdan- Error (RMSE) is around 10, which is quite low
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
df = pd.read_csv('https://raw.githubusercontent.com/hamdan-codes/none-but-some/main/dataset.csv')
df.head()
df.corr()
train_val_data = df[df['clear_date'].notnull()]
test_data = df[df['clear_date'].isnull()]
le_business_code = preprocessing.LabelEncoder()
le_business_code.fit(df['business_code'].unique())
le_cust_number = preprocessing.LabelEncoder()
le_cust_number.fit(df['cust_number'].unique())
le_name_customer = preprocessing.LabelEncoder()
le_name_customer.fit(df['name_customer'].unique())
le_document_type = preprocessing.LabelEncoder()
le_document_type.fit(df['document_type'].unique())
le_cust_payment_terms = preprocessing.LabelEncoder()
le_cust_payment_terms.fit(df['cust_payment_terms'].unique())
def transform(df):
# To transform object datatype into numeric data
df['business_code'] = le_business_code.transform(df['business_code'])
df['cust_number'] = le_cust_number.transform(df['cust_number'])
df['name_customer'] = le_name_customer.transform(df['name_customer'])
df['document_type'] = le_document_type.transform(df['document_type'])
df['cust_payment_terms'] = le_cust_payment_terms.transform(df['cust_payment_terms'])
def preprocessToFeed(df):
# Preprocess Data to feed to model
rep = {'USD' : 1.0, 'CAD' : 0.79}
df.replace(to_replace=rep, inplace=True)
df['total_open_amount'] *= df['invoice_currency']
df.drop(axis=1,
columns=[
'area_business', 'posting_id', 'invoice_id', 'invoice_currency', 'document_create_date',
'document_create_date.1', 'baseline_create_date', 'buisness_year', 'posting_date',
'isOpen'
],
inplace=True
)
df['clear_date'] = pd.to_datetime(df['clear_date'], format='%d-%m-%Y 00:00')
df['due_in_date'] = df.due_in_date.astype('int64')
df['due_in_date'] = pd.to_datetime(df['due_in_date'], format='%Y%m%d')
transform(df)
def preprocessTestData(df):
# To preprocess Test Data to feed directly to model prediction
preprocessToFeed(df)
df.drop(columns=['clear_date'], inplace=True)
train_val_data.head()
preprocessToFeed(train_val_data)
train_val_data.head()
train_val_data['delay'] = (train_val_data.clear_date - train_val_data.due_in_date)
train_val_data.drop(columns=['clear_date'], inplace=True)
train_val_data.info()
train_val_data['delay'] = (train_val_data['delay'] / np.timedelta64(1,'D')).astype(int)
train_val_data.corr()
x_train, x_val, y_train, y_val = train_test_split(
train_val_data.drop(columns=['due_in_date', 'delay']),
train_val_data['delay'],
test_size=0.2,
random_state=0
)
x_train.shape, y_train.shape, x_val.shape, y_val.shape
model = LinearRegression()
model.fit(x_train, y_train)
y_val_pred = model.predict(x_val)
y_val_pred = y_val_pred + 0.5
y_val_pred = y_val_pred.astype('int')
x = y_val - y_val_pred
error = (x.dot(x) / len(x)) ** 0.5
print('Error (Root mean squared error):', error)
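# Added illustrative baseline (not in the original notebook): RMSE of a naive model
# that always predicts zero delay, to put the ~10-day RMSE above in context.
baseline_rmse = (y_val.dot(y_val) / len(y_val)) ** 0.5
print('Baseline RMSE (always predict zero delay):', baseline_rmse)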
def predToOutput(y):
# Function to convert predicted output in desired format
y = y + 0.5
y = y.astype('int')
y = pd.to_timedelta(y, unit='D') + test_data['due_in_date']
y = y.dt.strftime("%d-%m-%Y 00:00")
return y
preprocessTestData(test_data)
# Preprocess and feed data to predict
y_pred = model.predict(test_data.drop(columns=['due_in_date']))
y_pred = predToOutput(y_pred)
y_pred
print('Output:')
print(y_pred)
###Output
_____no_output_____ |
tensor2tensor/visualization/TransformerVisualization.ipynb | ###Markdown
Create Your Own Visualizations!Instructions:1. Install tensor2tensor and train up a Transformer model following the instructions in the repository https://github.com/tensorflow/tensor2tensor.2. Update cell 3 to point to your checkpoint; it is currently set up to read from the default checkpoint location that would be created from following the instructions above.3. If you used custom hyperparameters then update cell 4.4. Run the notebook!
###Code
import os
import tensorflow as tf
from tensor2tensor import problems
from tensor2tensor.bin import t2t_decoder # To register the hparams set
from tensor2tensor.utils import registry
from tensor2tensor.utils import trainer_lib
from tensor2tensor.visualization import attention
from tensor2tensor.visualization import visualization
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'
}
});
###Output
_____no_output_____
###Markdown
HParams
###Code
# PUT THE MODEL YOU WANT TO LOAD HERE!
CHECKPOINT = os.path.expanduser('~/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu')
# HParams
problem_name = 'translate_ende_wmt32k'
data_dir = os.path.expanduser('~/t2t_data/')
model_name = "transformer"
hparams_set = "transformer_base_single_gpu"
###Output
_____no_output_____
###Markdown
Visualization
###Code
visualizer = visualization.AttentionVisualizer(hparams_set, model_name, data_dir, problem_name, beam_size=1)
tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step')
sess = tf.train.MonitoredTrainingSession(
checkpoint_dir=CHECKPOINT,
save_summaries_secs=0,
)
input_sentence = "I have two dogs."
output_string, inp_text, out_text, att_mats = visualizer.get_vis_data_from_string(sess, input_sentence)
print(output_string)
###Output
INFO:tensorflow:Saving checkpoints for 1 into /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt.
###Markdown
Interpreting the Visualizations- The layers drop down allows you to view the different Transformer layers, 0-indexed of course. - Tip: The first layer, last layer and 2nd to last layer are usually the most interpretable.- The attention dropdown allows you to select different pairs of encoder-decoder attentions: - All: Shows all types of attentions together. NOTE: There is no relation between heads of the same color - between the decoder self attention and decoder-encoder attention since they do not share parameters. - Input - Input: Shows only the encoder self-attention. - Input - Output: Shows the decoder’s attention on the encoder. NOTE: Every decoder layer attends to the final layer of the encoder so the visualization will show the attention on the final encoder layer regardless of what layer is selected in the drop down. - Output - Output: Shows only the decoder self-attention. NOTE: The visualization might be slightly misleading in the first layer since the text shown is the target of the decoder; the input to the decoder at layer 0 is this text with a GO symbol prepended.- The colored squares represent the different attention heads. - You can hide or show a given head by clicking on its color. - Double clicking a color will hide all other colors, double clicking on a color when it’s the only head showing will show all the heads again.- You can hover over a word to see the individual attention weights for just that position. - Hovering over the words on the left will show what that position attended to. - Hovering over the words on the right will show what positions attended to it.
###Code
attention.show(inp_text, out_text, *att_mats)
###Output
_____no_output_____
###Markdown
Create Your Own Visualizations!Instructions:1. Install tensor2tensor and train up a Transformer model following the instructions in the repository https://github.com/tensorflow/tensor2tensor.2. Update cell 3 to point to your checkpoint; it is currently set up to read from the default checkpoint location that would be created by following the instructions above.3. If you used custom hyperparameters, then update cell 4.4. Run the notebook!
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import tensorflow as tf
import numpy as np
from tensor2tensor.utils import trainer_utils as utils
from tensor2tensor.visualization import attention
from tensor2tensor.utils import decoding
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'
}
});
###Output
_____no_output_____
###Markdown
Data
###Code
import os
# PUT THE MODEL YOU WANT TO LOAD HERE!
PROBLEM = 'translate_ende_wmt32k'
MODEL = 'transformer'
HPARAMS = 'transformer_base_single_gpu'
DATA_DIR=os.path.expanduser('~/t2t_data')
TRAIN_DIR=os.path.expanduser('~/t2t_train/%s/%s-%s' % (PROBLEM, MODEL, HPARAMS))
print(TRAIN_DIR)
FLAGS = tf.flags.FLAGS
FLAGS.problems = PROBLEM
FLAGS.hparams_set = HPARAMS
FLAGS.data_dir = DATA_DIR
FLAGS.model = MODEL
FLAGS.schedule = 'train_and_evaluate'
hparams = utils.create_hparams(FLAGS.hparams_set, FLAGS.data_dir)
# SET EXTRA HYPER PARAMS HERE!
#hparams.null_slot = True
utils.add_problem_hparams(hparams, PROBLEM)
num_datashards = utils.devices.data_parallelism().n
mode = tf.estimator.ModeKeys.EVAL
input_fn = utils.input_fn_builder.build_input_fn(
mode=mode,
hparams=hparams,
data_dir=DATA_DIR,
num_datashards=num_datashards,
worker_replicas=FLAGS.worker_replicas,
worker_id=FLAGS.worker_id,
batch_size=1)
inputs, target = input_fn()
features = inputs
features['targets'] = target
def encode(string):
subtokenizer = hparams.problems[0].vocabulary['inputs']
return [subtokenizer.encode(string) + [1] + [0]]
def decode(ids):
return hparams.problems[0].vocabulary['targets'].decode(np.squeeze(ids))
def to_tokens(ids):
ids = np.squeeze(ids)
subtokenizer = hparams.problems[0].vocabulary['targets']
tokens = []
for _id in ids:
if _id == 0:
tokens.append('<PAD>')
elif _id == 1:
tokens.append('<EOS>')
else:
tokens.append(subtokenizer._subtoken_id_to_subtoken_string(_id))
return tokens
###Output
_____no_output_____
###Markdown
Model
###Code
model_fn=utils.model_builder.build_model_fn(
MODEL,
problem_names=[PROBLEM],
train_steps=FLAGS.train_steps,
worker_id=FLAGS.worker_id,
worker_replicas=FLAGS.worker_replicas,
eval_run_autoregressive=FLAGS.eval_run_autoregressive,
decode_hparams=decoding.decode_hparams(FLAGS.decode_hparams))
est_spec = model_fn(features, target, mode, hparams)
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
beam_out = model_fn(features, target, tf.contrib.learn.ModeKeys.INFER, hparams)
###Output
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:Beam Decoding with beam size 4
INFO:tensorflow:Doing model_fn_body took 1.393 sec.
INFO:tensorflow:This model_fn took 1.504 sec.
###Markdown
Session
###Code
sv = tf.train.Supervisor(
logdir=TRAIN_DIR,
global_step=tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step'))
sess = sv.PrepareSession(config=tf.ConfigProto(allow_soft_placement=True))
sv.StartQueueRunners(
sess,
tf.get_default_graph().get_collection(tf.GraphKeys.QUEUE_RUNNERS))
###Output
INFO:tensorflow:Restoring parameters from /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt-1
INFO:tensorflow:Starting standard services.
INFO:tensorflow:Starting queue runners.
INFO:tensorflow:Saving checkpoint to path /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt
###Markdown
Visualization
###Code
# Get the attention tensors from the graph.
# This needs to be done using the training graph since the inference uses a tf.while_loop
# and you can't fetch tensors from inside a while_loop.
enc_atts = []
dec_atts = []
encdec_atts = []
for i in range(hparams.num_hidden_layers):
enc_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/encoder/layer_%i/self_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
dec_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/decoder/layer_%i/self_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
encdec_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/decoder/layer_%i/encdec_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
enc_atts.append(enc_att)
dec_atts.append(dec_att)
encdec_atts.append(encdec_att)
###Output
_____no_output_____
###Markdown
Test translation from the dataset
###Code
inp, out, logits = sess.run([inputs['inputs'], target, est_spec.predictions['predictions']])
print("Input: ", decode(inp[0]))
print("Gold: ", decode(out[0]))
logits = np.squeeze(logits[0])
tokens = np.argmax(logits, axis=1)
print("Gold out: ", decode(tokens))
###Output
INFO:tensorflow:global_step/sec: 0
Input: For example, during the 2008 general election in Florida, 33% of early voters were African-Americans, who accounted however for only 13% of voters in the State.
Gold: Beispielsweise waren bei den allgemeinen Wahlen 2008 in Florida 33% der Wähler, die im Voraus gewählt haben, Afro-Amerikaner, obwohl sie nur 13% der Wähler des Bundesstaates ausmachen.
Gold out: So waren 33 den allgemeinen Wahlen im in der a 33 % der Frühjungdie nur Land die wurden, die ro- Amerikaner, die sie nur 13 % der Wähler im Staates staats betra.
INFO:tensorflow:Recording summary at step 250000.
###Markdown
Visualize Custom Sentence
###Code
eng = "I have three dogs."
inp_ids = encode(eng)
beam_decode = sess.run(beam_out.predictions['outputs'], {
inputs['inputs']: np.expand_dims(np.expand_dims(inp_ids, axis=2), axis=3),
})
trans = decode(beam_decode[0])
print(trans)
output_ids = beam_decode
# Get attentions
np_enc_atts, np_dec_atts, np_encdec_atts = sess.run([enc_atts, dec_atts, encdec_atts], {
inputs['inputs']: np.expand_dims(np.expand_dims(inp_ids, axis=2), axis=3),
target: np.expand_dims(np.expand_dims(output_ids, axis=2), axis=3),
})
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
###Output
_____no_output_____
###Markdown
Interpreting the Visualizations- The layers drop-down allows you to view the different Transformer layers, 0-indexed of course. - Tip: The first layer, last layer and 2nd to last layer are usually the most interpretable.- The attention dropdown allows you to select different pairs of encoder-decoder attentions: - All: Shows all types of attentions together. NOTE: There is no relation between heads of the same color - between the decoder self-attention and decoder-encoder attention since they do not share parameters. - Input - Input: Shows only the encoder self-attention. - Input - Output: Shows the decoder's attention on the encoder. NOTE: Every decoder layer attends to the final layer of the encoder, so the visualization will show the attention on the final encoder layer regardless of what layer is selected in the drop-down. - Output - Output: Shows only the decoder self-attention. NOTE: The visualization might be slightly misleading in the first layer since the text shown is the target of the decoder; the input to the decoder at layer 0 is this text with a GO symbol prepended.- The colored squares represent the different attention heads. - You can hide or show a given head by clicking on its color. - Double-clicking a color will hide all other colors; double-clicking a color when it's the only head showing will show all the heads again.- You can hover over a word to see the individual attention weights for just that position. - Hovering over the words on the left will show what that position attended to. - Hovering over the words on the right will show what positions attended to it.
###Code
inp_text = to_tokens(inp_ids)
out_text = to_tokens(output_ids)
attention.show(inp_text, out_text, np_enc_atts, np_dec_atts, np_encdec_atts)
###Output
_____no_output_____
###Markdown
Create Your Own Visualizations!Instructions:1. Install tensor2tensor and train up a Transformer model following the instructions in the repository https://github.com/tensorflow/tensor2tensor.2. Update cell 3 to point to your checkpoint; it is currently set up to read from the default checkpoint location that would be created by following the instructions above.3. If you used custom hyperparameters, then update cell 4.4. Run the notebook!
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import tensorflow as tf
import numpy as np
from tensor2tensor.utils import t2t_model
from tensor2tensor.utils import decoding
from tensor2tensor.utils import devices
from tensor2tensor.utils import trainer_lib
from tensor2tensor.visualization import attention
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'
}
});
###Output
_____no_output_____
###Markdown
Data
###Code
import os
# PUT THE MODEL YOU WANT TO LOAD HERE!
PROBLEM = 'translate_ende_wmt32k'
MODEL = 'transformer'
HPARAMS = 'transformer_base_single_gpu'
DATA_DIR=os.path.expanduser('~/t2t_data')
TRAIN_DIR=os.path.expanduser('~/t2t_train/%s/%s-%s' % (PROBLEM, MODEL, HPARAMS))
print(TRAIN_DIR)
FLAGS = tf.flags.FLAGS
FLAGS.problems = PROBLEM
FLAGS.hparams_set = HPARAMS
FLAGS.data_dir = DATA_DIR
FLAGS.model = MODEL
FLAGS.schedule = 'train_and_evaluate'
hparams = trainer_lib.create_hparams(FLAGS.hparams_set, data_dir=FLAGS.data_dir, problem_name=PROBLEM)
hparams.use_fixed_batch_size = True
hparams.batch_size = 1
# SET EXTRA HYPER PARAMS HERE!
#hparams.null_slot = True
mode = tf.estimator.ModeKeys.EVAL
problem = hparams.problem_instances[0]
inputs, target = problem.input_fn(
mode=mode,
hparams=hparams,
data_dir=DATA_DIR)
features = inputs
features['targets'] = target
def encode(string):
subtokenizer = hparams.problems[0].vocabulary['inputs']
return [subtokenizer.encode(string) + [1] + [0]]
def decode(ids):
return hparams.problems[0].vocabulary['targets'].decode(np.squeeze(ids))
def to_tokens(ids):
ids = np.squeeze(ids)
subtokenizer = hparams.problems[0].vocabulary['targets']
tokens = []
for _id in ids:
if _id == 0:
tokens.append('<PAD>')
elif _id == 1:
tokens.append('<EOS>')
else:
tokens.append(subtokenizer._subtoken_id_to_subtoken_string(_id))
return tokens
###Output
_____no_output_____
###Markdown
Model
###Code
decode_hparams = decoding.decode_hparams(FLAGS.decode_hparams)
model_fn = t2t_model.T2TModel.make_estimator_model_fn(
MODEL,
hparams,
decode_hparams=decode_hparams)
est_spec = model_fn(features, target, mode)
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
beam_out = model_fn(features, target, tf.contrib.learn.ModeKeys.INFER)
###Output
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:Beam Decoding with beam size 4
INFO:tensorflow:Doing model_fn_body took 1.393 sec.
INFO:tensorflow:This model_fn took 1.504 sec.
###Markdown
Session
###Code
sv = tf.train.Supervisor(
logdir=TRAIN_DIR,
global_step=tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step'))
sess = sv.PrepareSession(config=tf.ConfigProto(allow_soft_placement=True))
sv.StartQueueRunners(
sess,
tf.get_default_graph().get_collection(tf.GraphKeys.QUEUE_RUNNERS))
###Output
INFO:tensorflow:Restoring parameters from /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt-1
INFO:tensorflow:Starting standard services.
INFO:tensorflow:Starting queue runners.
INFO:tensorflow:Saving checkpoint to path /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt
###Markdown
Visualization
###Code
# Get the attention tensors from the graph.
# This needs to be done using the training graph since the inference uses a tf.while_loop
# and you can't fetch tensors from inside a while_loop.
enc_atts = []
dec_atts = []
encdec_atts = []
for i in range(hparams.num_hidden_layers):
enc_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/encoder/layer_%i/self_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
dec_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/decoder/layer_%i/self_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
encdec_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/decoder/layer_%i/encdec_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
enc_atts.append(enc_att)
dec_atts.append(dec_att)
encdec_atts.append(encdec_att)
###Output
_____no_output_____
###Markdown
Test translation from the dataset
###Code
inp, out, logits = sess.run([inputs['inputs'], target, est_spec.predictions['predictions']])
print("Input: ", decode(inp[0]))
print("Gold: ", decode(out[0]))
logits = np.squeeze(logits[0])
tokens = np.argmax(logits, axis=1)
print("Gold out: ", decode(tokens))
###Output
INFO:tensorflow:global_step/sec: 0
Input: For example, during the 2008 general election in Florida, 33% of early voters were African-Americans, who accounted however for only 13% of voters in the State.
Gold: Beispielsweise waren bei den allgemeinen Wahlen 2008 in Florida 33% der Wähler, die im Voraus gewählt haben, Afro-Amerikaner, obwohl sie nur 13% der Wähler des Bundesstaates ausmachen.
Gold out: So waren 33 den allgemeinen Wahlen im in der a 33 % der Frühjungdie nur Land die wurden, die ro- Amerikaner, die sie nur 13 % der Wähler im Staates staats betra.
INFO:tensorflow:Recording summary at step 250000.
###Markdown
Visualize Custom Sentence
###Code
eng = "I have three dogs."
inp_ids = encode(eng)
beam_decode = sess.run(beam_out.predictions['outputs'], {
inputs['inputs']: np.expand_dims(np.expand_dims(inp_ids, axis=2), axis=3),
})
trans = decode(beam_decode[0])
print(trans)
output_ids = beam_decode
# Get attentions
np_enc_atts, np_dec_atts, np_encdec_atts = sess.run([enc_atts, dec_atts, encdec_atts], {
inputs['inputs']: np.expand_dims(np.expand_dims(inp_ids, axis=2), axis=3),
target: np.expand_dims(np.expand_dims(output_ids, axis=2), axis=3),
})
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
###Output
_____no_output_____
###Markdown
Interpreting the Visualizations- The layers drop-down allows you to view the different Transformer layers, 0-indexed of course. - Tip: The first layer, last layer and 2nd to last layer are usually the most interpretable.- The attention dropdown allows you to select different pairs of encoder-decoder attentions: - All: Shows all types of attentions together. NOTE: There is no relation between heads of the same color - between the decoder self-attention and decoder-encoder attention since they do not share parameters. - Input - Input: Shows only the encoder self-attention. - Input - Output: Shows the decoder's attention on the encoder. NOTE: Every decoder layer attends to the final layer of the encoder, so the visualization will show the attention on the final encoder layer regardless of what layer is selected in the drop-down. - Output - Output: Shows only the decoder self-attention. NOTE: The visualization might be slightly misleading in the first layer since the text shown is the target of the decoder; the input to the decoder at layer 0 is this text with a GO symbol prepended.- The colored squares represent the different attention heads. - You can hide or show a given head by clicking on its color. - Double-clicking a color will hide all other colors; double-clicking a color when it's the only head showing will show all the heads again.- You can hover over a word to see the individual attention weights for just that position. - Hovering over the words on the left will show what that position attended to. - Hovering over the words on the right will show what positions attended to it.
###Code
inp_text = to_tokens(inp_ids)
out_text = to_tokens(output_ids)
attention.show(inp_text, out_text, np_enc_atts, np_dec_atts, np_encdec_atts)
###Output
_____no_output_____
###Markdown
Create Your Own Visualizations!Instructions:1. Install tensor2tensor and train up a Transformer model following the instructions in the repository https://github.com/tensorflow/tensor2tensor.2. Update cell 3 to point to your checkpoint; it is currently set up to read from the default checkpoint location that would be created by following the instructions above.3. If you used custom hyperparameters, then update cell 4.4. Run the notebook!
###Code
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import tensorflow as tf
import numpy as np
from tensor2tensor.tpu import tpu_trainer_lib
from tensor2tensor.utils import t2t_model
from tensor2tensor.utils import decoding
from tensor2tensor.utils import devices
from tensor2tensor.visualization import attention
%%javascript
require.config({
paths: {
d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min'
}
});
###Output
_____no_output_____
###Markdown
Data
###Code
import os
# PUT THE MODEL YOU WANT TO LOAD HERE!
PROBLEM = 'translate_ende_wmt32k'
MODEL = 'transformer'
HPARAMS = 'transformer_base_single_gpu'
DATA_DIR=os.path.expanduser('~/t2t_data')
TRAIN_DIR=os.path.expanduser('~/t2t_train/%s/%s-%s' % (PROBLEM, MODEL, HPARAMS))
print(TRAIN_DIR)
FLAGS = tf.flags.FLAGS
FLAGS.problems = PROBLEM
FLAGS.hparams_set = HPARAMS
FLAGS.data_dir = DATA_DIR
FLAGS.model = MODEL
FLAGS.schedule = 'train_and_evaluate'
hparams = tpu_trainer_lib.create_hparams(FLAGS.hparams_set, data_dir=FLAGS.data_dir, problem_name=PROBLEM)
hparams.use_fixed_batch_size = True
hparams.batch_size = 1
# SET EXTRA HYPER PARAMS HERE!
#hparams.null_slot = True
mode = tf.estimator.ModeKeys.EVAL
problem = hparams.problem_instances[0]
inputs, target = problem.input_fn(
mode=mode,
hparams=hparams,
data_dir=DATA_DIR)
features = inputs
features['targets'] = target
def encode(string):
subtokenizer = hparams.problems[0].vocabulary['inputs']
return [subtokenizer.encode(string) + [1] + [0]]
def decode(ids):
return hparams.problems[0].vocabulary['targets'].decode(np.squeeze(ids))
def to_tokens(ids):
ids = np.squeeze(ids)
subtokenizer = hparams.problems[0].vocabulary['targets']
tokens = []
for _id in ids:
if _id == 0:
tokens.append('<PAD>')
elif _id == 1:
tokens.append('<EOS>')
else:
tokens.append(subtokenizer._subtoken_id_to_subtoken_string(_id))
return tokens
###Output
_____no_output_____
###Markdown
Model
###Code
decode_hparams = decoding.decode_hparams(FLAGS.decode_hparams)
model_fn = t2t_model.T2TModel.make_estimator_model_fn(
MODEL,
hparams,
decode_hparams=decode_hparams)
est_spec = model_fn(features, target, mode)
with tf.variable_scope(tf.get_variable_scope(), reuse=True):
beam_out = model_fn(features, target, tf.contrib.learn.ModeKeys.INFER)
###Output
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:Beam Decoding with beam size 4
INFO:tensorflow:Doing model_fn_body took 1.393 sec.
INFO:tensorflow:This model_fn took 1.504 sec.
###Markdown
Session
###Code
sv = tf.train.Supervisor(
logdir=TRAIN_DIR,
global_step=tf.Variable(0, dtype=tf.int64, trainable=False, name='global_step'))
sess = sv.PrepareSession(config=tf.ConfigProto(allow_soft_placement=True))
sv.StartQueueRunners(
sess,
tf.get_default_graph().get_collection(tf.GraphKeys.QUEUE_RUNNERS))
###Output
INFO:tensorflow:Restoring parameters from /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt-1
INFO:tensorflow:Starting standard services.
INFO:tensorflow:Starting queue runners.
INFO:tensorflow:Saving checkpoint to path /usr/local/google/home/llion/t2t_train/translate_ende_wmt32k/transformer-transformer_base_single_gpu/model.ckpt
###Markdown
Visualization
###Code
# Get the attention tensors from the graph.
# This needs to be done using the training graph since the inference uses a tf.while_loop
# and you can't fetch tensors from inside a while_loop.
enc_atts = []
dec_atts = []
encdec_atts = []
for i in range(hparams.num_hidden_layers):
enc_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/encoder/layer_%i/self_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
dec_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/decoder/layer_%i/self_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
encdec_att = tf.get_default_graph().get_operation_by_name(
"body/model/parallel_0/body/decoder/layer_%i/encdec_attention/multihead_attention/dot_product_attention/attention_weights" % i).values()[0]
enc_atts.append(enc_att)
dec_atts.append(dec_att)
encdec_atts.append(encdec_att)
###Output
_____no_output_____
###Markdown
Test translation from the dataset
###Code
inp, out, logits = sess.run([inputs['inputs'], target, est_spec.predictions['predictions']])
print("Input: ", decode(inp[0]))
print("Gold: ", decode(out[0]))
logits = np.squeeze(logits[0])
tokens = np.argmax(logits, axis=1)
print("Gold out: ", decode(tokens))
###Output
INFO:tensorflow:global_step/sec: 0
Input: For example, during the 2008 general election in Florida, 33% of early voters were African-Americans, who accounted however for only 13% of voters in the State.
Gold: Beispielsweise waren bei den allgemeinen Wahlen 2008 in Florida 33% der Wähler, die im Voraus gewählt haben, Afro-Amerikaner, obwohl sie nur 13% der Wähler des Bundesstaates ausmachen.
Gold out: So waren 33 den allgemeinen Wahlen im in der a 33 % der Frühjungdie nur Land die wurden, die ro- Amerikaner, die sie nur 13 % der Wähler im Staates staats betra.
INFO:tensorflow:Recording summary at step 250000.
###Markdown
Visualize Custom Sentence
###Code
eng = "I have three dogs."
inp_ids = encode(eng)
beam_decode = sess.run(beam_out.predictions['outputs'], {
inputs['inputs']: np.expand_dims(np.expand_dims(inp_ids, axis=2), axis=3),
})
trans = decode(beam_decode[0])
print(trans)
output_ids = beam_decode
# Get attentions
np_enc_atts, np_dec_atts, np_encdec_atts = sess.run([enc_atts, dec_atts, encdec_atts], {
inputs['inputs']: np.expand_dims(np.expand_dims(inp_ids, axis=2), axis=3),
target: np.expand_dims(np.expand_dims(output_ids, axis=2), axis=3),
})
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
###Output
_____no_output_____
###Markdown
Interpreting the Visualizations- The layers drop-down allows you to view the different Transformer layers, 0-indexed of course. - Tip: The first layer, last layer and 2nd to last layer are usually the most interpretable.- The attention dropdown allows you to select different pairs of encoder-decoder attentions: - All: Shows all types of attentions together. NOTE: There is no relation between heads of the same color - between the decoder self-attention and decoder-encoder attention since they do not share parameters. - Input - Input: Shows only the encoder self-attention. - Input - Output: Shows the decoder's attention on the encoder. NOTE: Every decoder layer attends to the final layer of the encoder, so the visualization will show the attention on the final encoder layer regardless of what layer is selected in the drop-down. - Output - Output: Shows only the decoder self-attention. NOTE: The visualization might be slightly misleading in the first layer since the text shown is the target of the decoder; the input to the decoder at layer 0 is this text with a GO symbol prepended.- The colored squares represent the different attention heads. - You can hide or show a given head by clicking on its color. - Double-clicking a color will hide all other colors; double-clicking a color when it's the only head showing will show all the heads again.- You can hover over a word to see the individual attention weights for just that position. - Hovering over the words on the left will show what that position attended to. - Hovering over the words on the right will show what positions attended to it.
###Code
inp_text = to_tokens(inp_ids)
out_text = to_tokens(output_ids)
attention.show(inp_text, out_text, np_enc_atts, np_dec_atts, np_encdec_atts)
###Output
_____no_output_____ |
docs/larq/tutorials/binarynet_cifar10.ipynb | ###Markdown
BinaryNet on CIFAR10In this example we demonstrate how to use Larq to build and train BinaryNet on the CIFAR10 dataset to achieve a validation accuracy of approximately 83% on laptop hardware.On an Nvidia GTX 1050 Ti Max-Q it takes approximately 200 minutes to train. For simplicity, compared to the original papers [BinaryConnect: Training Deep Neural Networks with binary weights during propagations](https://arxiv.org/abs/1511.00363) and [Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1](https://arxiv.org/abs/1602.02830), we do not implement learning rate scaling or image whitening.
###Code
pip install larq
import tensorflow as tf
import larq as lq
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Import CIFAR10 DatasetWe download and normalize the CIFAR10 dataset.
###Code
num_classes = 10
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()
train_images = train_images.reshape((50000, 32, 32, 3)).astype("float32")
test_images = test_images.reshape((10000, 32, 32, 3)).astype("float32")
# Normalize pixel values to be between -1 and 1
train_images, test_images = train_images / 127.5 - 1, test_images / 127.5 - 1
train_labels = tf.keras.utils.to_categorical(train_labels, num_classes)
test_labels = tf.keras.utils.to_categorical(test_labels, num_classes)
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 38s 0us/step
###Markdown
Build BinaryNetHere we build the BinaryNet model layer by layer using the [Keras Sequential API](https://www.tensorflow.org/guide/keras).
###Code
# All quantized layers except the first will use the same options
kwargs = dict(input_quantizer="ste_sign",
kernel_quantizer="ste_sign",
kernel_constraint="weight_clip",
use_bias=False)
model = tf.keras.models.Sequential([
# In the first layer we only quantize the weights and not the input
lq.layers.QuantConv2D(128, 3,
kernel_quantizer="ste_sign",
kernel_constraint="weight_clip",
use_bias=False,
input_shape=(32, 32, 3)),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantConv2D(128, 3, padding="same", **kwargs),
tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantConv2D(256, 3, padding="same", **kwargs),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantConv2D(256, 3, padding="same", **kwargs),
tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantConv2D(512, 3, padding="same", **kwargs),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantConv2D(512, 3, padding="same", **kwargs),
tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
tf.keras.layers.Flatten(),
lq.layers.QuantDense(1024, **kwargs),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantDense(1024, **kwargs),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
lq.layers.QuantDense(10, **kwargs),
tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
tf.keras.layers.Activation("softmax")
])
###Output
_____no_output_____
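###Markdown
The `ste_sign` quantizer used above binarizes weights and activations to -1/+1 in the forward pass and relies on a straight-through estimator for the gradient. As a rough conceptual sketch only (not larq's actual implementation), its behavior is similar to the following custom-gradient function:
###Code
import tensorflow as tf
# Conceptual sketch of a sign quantizer with a straight-through estimator.
# Illustrative only; larq's "ste_sign" may differ in details.
@tf.custom_gradient
def ste_sign_sketch(x):
    def grad(dy):
        # Straight-through estimator: pass the gradient through unchanged
        # where |x| <= 1 and block it elsewhere.
        return dy * tf.cast(tf.abs(x) <= 1.0, dy.dtype)
    # Forward pass: binarize to -1/+1 (mapping 0 to +1).
    return tf.where(x >= 0, tf.ones_like(x), -tf.ones_like(x)), grad
###Output
_____no_output_____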
###Markdown
One can output a summary of the model:
###Code
lq.models.summary(model)
###Output
+sequential stats---------------------------------------------------------------------------------------------+
| Layer Input prec. Outputs # 1-bit # 32-bit Memory 1-bit MACs 32-bit MACs |
| (bit) x 1 x 1 (kB) |
+-------------------------------------------------------------------------------------------------------------+
| quant_conv2d - (-1, 30, 30, 128) 3456 0 0.42 0 3110400 |
| batch_normalization - (-1, 30, 30, 128) 0 256 1.00 0 0 |
| quant_conv2d_1 1 (-1, 30, 30, 128) 147456 0 18.00 132710400 0 |
| max_pooling2d - (-1, 15, 15, 128) 0 0 0 0 0 |
| batch_normalization_1 - (-1, 15, 15, 128) 0 256 1.00 0 0 |
| quant_conv2d_2 1 (-1, 15, 15, 256) 294912 0 36.00 66355200 0 |
| batch_normalization_2 - (-1, 15, 15, 256) 0 512 2.00 0 0 |
| quant_conv2d_3 1 (-1, 15, 15, 256) 589824 0 72.00 132710400 0 |
| max_pooling2d_1 - (-1, 7, 7, 256) 0 0 0 0 0 |
| batch_normalization_3 - (-1, 7, 7, 256) 0 512 2.00 0 0 |
| quant_conv2d_4 1 (-1, 7, 7, 512) 1179648 0 144.00 57802752 0 |
| batch_normalization_4 - (-1, 7, 7, 512) 0 1024 4.00 0 0 |
| quant_conv2d_5 1 (-1, 7, 7, 512) 2359296 0 288.00 115605504 0 |
| max_pooling2d_2 - (-1, 3, 3, 512) 0 0 0 0 0 |
| batch_normalization_5 - (-1, 3, 3, 512) 0 1024 4.00 0 0 |
| flatten - (-1, 4608) 0 0 0 0 0 |
| quant_dense 1 (-1, 1024) 4718592 0 576.00 4718592 0 |
| batch_normalization_6 - (-1, 1024) 0 2048 8.00 0 0 |
| quant_dense_1 1 (-1, 1024) 1048576 0 128.00 1048576 0 |
| batch_normalization_7 - (-1, 1024) 0 2048 8.00 0 0 |
| quant_dense_2 1 (-1, 10) 10240 0 1.25 10240 0 |
| batch_normalization_8 - (-1, 10) 0 20 0.08 0 0 |
| activation - (-1, 10) 0 0 0 ? ? |
+-------------------------------------------------------------------------------------------------------------+
| Total 10352000 7700 1293.75 510961664 3110400 |
+-------------------------------------------------------------------------------------------------------------+
+sequential summary---------------------------+
| Total params 10.4 M |
| Trainable params 10.4 M |
| Non-trainable params 7.7 k |
| Model size 1.26 MiB |
| Model size (8-bit FP weights) 1.24 MiB |
| Float-32 Equivalent 39.52 MiB |
| Compression Ratio of Memory 0.03 |
| Number of MACs 514 M |
| Ratio of MACs that are binarized 0.9939 |
+---------------------------------------------+
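###Markdown
The memory figures in the summary follow directly from the parameter counts, assuming 1 bit per binarized weight and 32 bits (4 bytes) per floating-point parameter. A quick sanity check (illustrative, not part of the original notebook):
###Code
binary_params = 10_352_000   # 1-bit kernel weights from the summary above
float_params = 7_700         # 32-bit batch-norm parameters from the summary above
model_size_mib = (binary_params / 8 + float_params * 4) / 2**20
float32_equivalent_mib = (binary_params + float_params) * 4 / 2**20
print(model_size_mib)                           # ~1.26 MiB
print(float32_equivalent_mib)                   # ~39.52 MiB
print(model_size_mib / float32_equivalent_mib)  # compression ratio ~0.03
###Output
_____no_output_____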
###Markdown
Model TrainingCompile and train the model:
###Code
model.compile(
tf.keras.optimizers.Adam(lr=0.01, decay=0.0001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
trained_model = model.fit(
train_images,
train_labels,
batch_size=50,
epochs=100,
validation_data=(test_images, test_labels),
shuffle=True
)
###Output
Train on 50000 samples, validate on 10000 samples
Epoch 1/100
50000/50000 [==============================] - 131s 3ms/step - loss: 1.5733 - acc: 0.4533 - val_loss: 1.6368 - val_acc: 0.4244
Epoch 2/100
50000/50000 [==============================] - 125s 3ms/step - loss: 1.1485 - acc: 0.6387 - val_loss: 1.8497 - val_acc: 0.3764
Epoch 3/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.9641 - acc: 0.7207 - val_loss: 1.5696 - val_acc: 0.4794
Epoch 4/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.8452 - acc: 0.7728 - val_loss: 1.5765 - val_acc: 0.4669
Epoch 5/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.7553 - acc: 0.8114 - val_loss: 1.0653 - val_acc: 0.6928
Epoch 6/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.6841 - acc: 0.8447 - val_loss: 1.0944 - val_acc: 0.6880
Epoch 7/100
50000/50000 [==============================] - 125s 3ms/step - loss: 0.6356 - acc: 0.8685 - val_loss: 0.9909 - val_acc: 0.7317
Epoch 8/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.5907 - acc: 0.8910 - val_loss: 0.9453 - val_acc: 0.7446
Epoch 9/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.5610 - acc: 0.9043 - val_loss: 0.9441 - val_acc: 0.7460
Epoch 10/100
50000/50000 [==============================] - 125s 3ms/step - loss: 0.5295 - acc: 0.9201 - val_loss: 0.8892 - val_acc: 0.7679
Epoch 11/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.5100 - acc: 0.9309 - val_loss: 0.8808 - val_acc: 0.7818
Epoch 12/100
50000/50000 [==============================] - 126s 3ms/step - loss: 0.4926 - acc: 0.9397 - val_loss: 0.8404 - val_acc: 0.7894
Epoch 13/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.4807 - acc: 0.9470 - val_loss: 0.8600 - val_acc: 0.7928
Epoch 14/100
50000/50000 [==============================] - 126s 3ms/step - loss: 0.4661 - acc: 0.9529 - val_loss: 0.9046 - val_acc: 0.7732
Epoch 15/100
50000/50000 [==============================] - 125s 3ms/step - loss: 0.4588 - acc: 0.9571 - val_loss: 0.8505 - val_acc: 0.7965
Epoch 16/100
50000/50000 [==============================] - 126s 3ms/step - loss: 0.4558 - acc: 0.9593 - val_loss: 0.8748 - val_acc: 0.7859
Epoch 17/100
50000/50000 [==============================] - 126s 3ms/step - loss: 0.4434 - acc: 0.9649 - val_loss: 0.9109 - val_acc: 0.7656
Epoch 18/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.4449 - acc: 0.9643 - val_loss: 0.8532 - val_acc: 0.7971
Epoch 19/100
50000/50000 [==============================] - 126s 3ms/step - loss: 0.4349 - acc: 0.9701 - val_loss: 0.8677 - val_acc: 0.7951
Epoch 20/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.4351 - acc: 0.9698 - val_loss: 0.9145 - val_acc: 0.7740
Epoch 21/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4268 - acc: 0.9740 - val_loss: 0.8308 - val_acc: 0.8065
Epoch 22/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4243 - acc: 0.9741 - val_loss: 0.8229 - val_acc: 0.8075
Epoch 23/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4201 - acc: 0.9764 - val_loss: 0.8411 - val_acc: 0.8062
Epoch 24/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.4190 - acc: 0.9769 - val_loss: 0.8649 - val_acc: 0.7951
Epoch 25/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4139 - acc: 0.9787 - val_loss: 0.8257 - val_acc: 0.8071
Epoch 26/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4154 - acc: 0.9779 - val_loss: 0.8041 - val_acc: 0.8205
Epoch 27/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4128 - acc: 0.9798 - val_loss: 0.8296 - val_acc: 0.8115
Epoch 28/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.4121 - acc: 0.9798 - val_loss: 0.8241 - val_acc: 0.8074
Epoch 29/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.4093 - acc: 0.9807 - val_loss: 0.8575 - val_acc: 0.7913
Epoch 30/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.4048 - acc: 0.9826 - val_loss: 0.8118 - val_acc: 0.8166
Epoch 31/100
50000/50000 [==============================] - 126s 3ms/step - loss: 0.4041 - acc: 0.9837 - val_loss: 0.8375 - val_acc: 0.8082
Epoch 32/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.4045 - acc: 0.9831 - val_loss: 0.8604 - val_acc: 0.8091
Epoch 33/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4047 - acc: 0.9823 - val_loss: 0.8797 - val_acc: 0.7931
Epoch 34/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.4023 - acc: 0.9842 - val_loss: 0.8694 - val_acc: 0.8020
Epoch 35/100
50000/50000 [==============================] - 125s 3ms/step - loss: 0.3995 - acc: 0.9858 - val_loss: 0.8161 - val_acc: 0.8186
Epoch 36/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3976 - acc: 0.9859 - val_loss: 0.8495 - val_acc: 0.7988
Epoch 37/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.4021 - acc: 0.9847 - val_loss: 0.8542 - val_acc: 0.8062
Epoch 38/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.3939 - acc: 0.9869 - val_loss: 0.8347 - val_acc: 0.8122
Epoch 39/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.3955 - acc: 0.9856 - val_loss: 0.8521 - val_acc: 0.7993
Epoch 40/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3907 - acc: 0.9885 - val_loss: 0.9023 - val_acc: 0.7992
Epoch 41/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3911 - acc: 0.9873 - val_loss: 0.8597 - val_acc: 0.8010
Epoch 42/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3917 - acc: 0.9885 - val_loss: 0.8968 - val_acc: 0.7936
Epoch 43/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3931 - acc: 0.9874 - val_loss: 0.8318 - val_acc: 0.8169
Epoch 44/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3897 - acc: 0.9893 - val_loss: 0.8811 - val_acc: 0.7988
Epoch 45/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3876 - acc: 0.9888 - val_loss: 0.8453 - val_acc: 0.8094
Epoch 46/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3876 - acc: 0.9889 - val_loss: 0.8195 - val_acc: 0.8179
Epoch 47/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3891 - acc: 0.9890 - val_loss: 0.8373 - val_acc: 0.8137
Epoch 48/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3902 - acc: 0.9888 - val_loss: 0.8457 - val_acc: 0.8120
Epoch 49/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3864 - acc: 0.9903 - val_loss: 0.9012 - val_acc: 0.7907
Epoch 50/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3859 - acc: 0.9903 - val_loss: 0.8291 - val_acc: 0.8053
Epoch 51/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3830 - acc: 0.9915 - val_loss: 0.8494 - val_acc: 0.8139
Epoch 52/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3828 - acc: 0.9907 - val_loss: 0.8447 - val_acc: 0.8135
Epoch 53/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3823 - acc: 0.9910 - val_loss: 0.8539 - val_acc: 0.8120
Epoch 54/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3832 - acc: 0.9905 - val_loss: 0.8592 - val_acc: 0.8098
Epoch 55/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3823 - acc: 0.9908 - val_loss: 0.8585 - val_acc: 0.8087
Epoch 56/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3817 - acc: 0.9911 - val_loss: 0.8840 - val_acc: 0.7889
Epoch 57/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3827 - acc: 0.9914 - val_loss: 0.8205 - val_acc: 0.8250
Epoch 58/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3818 - acc: 0.9912 - val_loss: 0.8571 - val_acc: 0.8051
Epoch 59/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3811 - acc: 0.9919 - val_loss: 0.8155 - val_acc: 0.8254
Epoch 60/100
50000/50000 [==============================] - 125s 3ms/step - loss: 0.3803 - acc: 0.9919 - val_loss: 0.8617 - val_acc: 0.8040
Epoch 61/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.3793 - acc: 0.9926 - val_loss: 0.8212 - val_acc: 0.8192
Epoch 62/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3825 - acc: 0.9912 - val_loss: 0.8139 - val_acc: 0.8277
Epoch 63/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.3784 - acc: 0.9923 - val_loss: 0.8304 - val_acc: 0.8121
Epoch 64/100
50000/50000 [==============================] - 125s 2ms/step - loss: 0.3809 - acc: 0.9918 - val_loss: 0.7961 - val_acc: 0.8289
Epoch 65/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3750 - acc: 0.9930 - val_loss: 0.8676 - val_acc: 0.8110
Epoch 66/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3789 - acc: 0.9928 - val_loss: 0.8308 - val_acc: 0.8148
Epoch 67/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3783 - acc: 0.9929 - val_loss: 0.8595 - val_acc: 0.8097
Epoch 68/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3758 - acc: 0.9935 - val_loss: 0.8359 - val_acc: 0.8065
Epoch 69/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3784 - acc: 0.9927 - val_loss: 0.8189 - val_acc: 0.8255
Epoch 70/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3786 - acc: 0.9924 - val_loss: 0.8754 - val_acc: 0.8001
Epoch 71/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3749 - acc: 0.9936 - val_loss: 0.8188 - val_acc: 0.8262
Epoch 72/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3758 - acc: 0.9932 - val_loss: 0.8540 - val_acc: 0.8169
Epoch 73/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3740 - acc: 0.9934 - val_loss: 0.8127 - val_acc: 0.8258
Epoch 74/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3749 - acc: 0.9932 - val_loss: 0.8662 - val_acc: 0.8018
Epoch 75/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3721 - acc: 0.9941 - val_loss: 0.8359 - val_acc: 0.8213
Epoch 76/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3746 - acc: 0.9937 - val_loss: 0.8462 - val_acc: 0.8178
Epoch 77/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3741 - acc: 0.9936 - val_loss: 0.8983 - val_acc: 0.7972
Epoch 78/100
50000/50000 [==============================] - 122s 2ms/step - loss: 0.3751 - acc: 0.9933 - val_loss: 0.8525 - val_acc: 0.8173
Epoch 79/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3762 - acc: 0.9931 - val_loss: 0.8190 - val_acc: 0.8201
Epoch 80/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3737 - acc: 0.9940 - val_loss: 0.8441 - val_acc: 0.8196
Epoch 81/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3729 - acc: 0.9935 - val_loss: 0.8151 - val_acc: 0.8267
Epoch 82/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3735 - acc: 0.9938 - val_loss: 0.8405 - val_acc: 0.8163
Epoch 83/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3723 - acc: 0.9939 - val_loss: 0.8225 - val_acc: 0.8243
Epoch 84/100
50000/50000 [==============================] - 123s 2ms/step - loss: 0.3738 - acc: 0.9938 - val_loss: 0.8413 - val_acc: 0.8115
Epoch 85/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3714 - acc: 0.9947 - val_loss: 0.9080 - val_acc: 0.7932
Epoch 86/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3744 - acc: 0.9942 - val_loss: 0.8467 - val_acc: 0.8135
Epoch 87/100
50000/50000 [==============================] - 124s 2ms/step - loss: 0.3705 - acc: 0.9948 - val_loss: 0.8491 - val_acc: 0.8163
Epoch 88/100
50000/50000 [==============================] - 128s 3ms/step - loss: 0.3733 - acc: 0.9944 - val_loss: 0.8005 - val_acc: 0.8214
Epoch 89/100
50000/50000 [==============================] - 134s 3ms/step - loss: 0.3693 - acc: 0.9949 - val_loss: 0.7791 - val_acc: 0.8321
Epoch 90/100
50000/50000 [==============================] - 135s 3ms/step - loss: 0.3724 - acc: 0.9942 - val_loss: 0.8458 - val_acc: 0.8124
Epoch 91/100
50000/50000 [==============================] - 128s 3ms/step - loss: 0.3732 - acc: 0.9947 - val_loss: 0.8315 - val_acc: 0.8164
Epoch 92/100
50000/50000 [==============================] - 127s 3ms/step - loss: 0.3699 - acc: 0.9950 - val_loss: 0.8140 - val_acc: 0.8226
Epoch 93/100
50000/50000 [==============================] - 131s 3ms/step - loss: 0.3694 - acc: 0.9950 - val_loss: 0.8342 - val_acc: 0.8210
Epoch 94/100
50000/50000 [==============================] - 134s 3ms/step - loss: 0.3698 - acc: 0.9946 - val_loss: 0.8938 - val_acc: 0.8019
Epoch 95/100
50000/50000 [==============================] - 133s 3ms/step - loss: 0.3698 - acc: 0.9946 - val_loss: 0.8771 - val_acc: 0.8066
Epoch 96/100
50000/50000 [==============================] - 164s 3ms/step - loss: 0.3712 - acc: 0.9946 - val_loss: 0.8396 - val_acc: 0.8211
Epoch 97/100
50000/50000 [==============================] - 155s 3ms/step - loss: 0.3689 - acc: 0.9949 - val_loss: 0.8728 - val_acc: 0.8112
Epoch 98/100
50000/50000 [==============================] - 133s 3ms/step - loss: 0.3663 - acc: 0.9953 - val_loss: 0.9615 - val_acc: 0.7902
Epoch 99/100
50000/50000 [==============================] - 133s 3ms/step - loss: 0.3714 - acc: 0.9944 - val_loss: 0.8414 - val_acc: 0.8188
Epoch 100/100
50000/50000 [==============================] - 138s 3ms/step - loss: 0.3682 - acc: 0.9956 - val_loss: 0.8055 - val_acc: 0.8266
###Markdown
Model OutputWe can now plot the final validation accuracy and loss:
###Code
plt.plot(trained_model.history['acc'])
plt.plot(trained_model.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
print(np.max(trained_model.history['acc']))
print(np.max(trained_model.history['val_acc']))
plt.plot(trained_model.history['loss'])
plt.plot(trained_model.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
print(np.min(trained_model.history['loss']))
print(np.min(trained_model.history['val_loss']))
###Output
0.3663262344896793
0.7790719392895699
|
dashboard_DEMO.ipynb | ###Markdown
Electricity Capacity Capacity in all regions
###Code
df = dh.get('o_capa')
df = df.loc[[i for i in df.index.levels[0] if i != 'shed']]
alltec = get_table(dh.scenarios,next(iter(dh.scenarios)),'alltec',use_name = True)['alltec'].values
plot_by_tec(
df.groupby(['alltec']).sum(),
alltec,
ylabel='Capacity (GW)',
figsize = (10, 5))
plt.title('Capacity in all modeled regions');
###Output
_____no_output_____
###Markdown
Capacity in focus region
###Code
df = dh.get('o_capa')
df = df.loc[[i for i in df.index.levels[0] if i != 'shed']]
alltec = get_table(dh.scenarios,next(iter(dh.scenarios)),'alltec',use_name = True)['alltec'].values
plot_by_tec(
df.groupby(['alltec', 'r']).sum().xs(focus_region, level='r'),
alltec,
ylabel='Capacity (GW)',
figsize = (10, 5))
plt.title('Capacity in {}'.format(focus_region));
###Output
_____no_output_____
###Markdown
Balance Electricity balance in all regions
###Code
plot_energy_balance(dh,figsize = (10, 5),show_data=False);
###Output
_____no_output_____
###Markdown
Electricity balance in focus region
###Code
plot_energy_balance(dh,focus_region,figsize = (10, 5),show_data=False);
###Output
_____no_output_____
###Markdown
Electricity Prices Baseload Prices of all regions
###Code
dh.get('o_prices').groupby('r').mean().plot.bar(xlabel = '', ylabel = 'Baseload Price (Euro/MWh)', rot = 0,figsize = (10, 5))
plt.legend(loc='center', bbox_to_anchor=(1.1, 0.5))
plt.grid(axis='y')
plt.title('Baseload Price');
###Output
_____no_output_____
###Markdown
Price duration curve of focus region
###Code
pdc_pivot(dh.get('o_prices'), region=focus_region).plot(xlabel='Hour', ylabel= 'Price (Euro/MWh)',figsize = (10, 5))
plt.grid(axis='y')
plt.legend(loc='center', bbox_to_anchor=(1.1, 0.5))
plt.title('Price Duration Curve of {}'.format(focus_region));
###Output
_____no_output_____
###Markdown
Market value
###Code
plot_mv(dh,['solar','wion','wiof'],show_data=False);
###Output
_____no_output_____
###Markdown
Hydrogen Hydrogen balance of all regions
###Code
plot_hydrogen_balance(dh,figsize = (10, 5),show_data=False);
###Output
_____no_output_____
###Markdown
Hydrogen balance of focus region
###Code
plot_hydrogen_balance(dh,focus_region,figsize = (10, 5),show_data=False);
###Output
_____no_output_____
###Markdown
Hydrogen Prices
###Code
dh.get('o_h2price_buy').plot.bar(xlabel = '', ylabel = 'H2 Price (Euro/MWht)', rot = 0,figsize = (10, 5))
plt.legend(loc='center', bbox_to_anchor=(1.1, 0.5))
plt.grid(axis='y')
plt.title('H2 Price Buy');
###Output
_____no_output_____
###Markdown
CO2 Emissions
###Code
plot_co2_emissions(dh,figsize = (10, 5),show_data=False);
###Output
_____no_output_____
###Markdown
Transmissions
###Code
dh.get('o_flow').groupby('r').sum().div(1000).plot.bar(xlabel = '', ylabel = 'Electricity Exports (TWh)', rot = 0,figsize = (10, 5))
plt.legend(loc='center', bbox_to_anchor=(1.1, 0.5))
plt.grid(axis='y')
plt.title('Yearly electricity transmissions');
###Output
_____no_output_____ |
quickstarts/reading-tabular-data.ipynb | ###Markdown
Reading Tabular DataThe Planetary Computer provides tabular data in the [Apache Parquet](https://parquet.apache.org/) file format. Small datasets can be read using [pandas](https://pandas.pydata.org/). For example, we can read the boundary table from the [Forest Inventory and Analysis](https://aka.ms/ai4edata-fia) dataset, which has about 190,000 rows of information about forest health and location in the US.
###Code
import pandas as pd
df = pd.read_parquet(
"az://cpdata/raw/fia/boundary.parquet/part.0.parquet",
storage_options={"account_name": "cpdataeuwest"},
columns=["CN", "AZMLEFT", "AZMCORN"],
)
df
###Output
_____no_output_____
###Markdown
Larger datasets can be read using [Dask](https://dask.org/). For example, the `cpdata/raw/fia/tree.parquet` folder contains about 160 individual Parquet files, totalling about 22 million rows. In this case, pass the path to the directory to `dask.dataframe.read_parquet`.
###Code
import dask.dataframe as dd
df = dd.read_parquet(
"az://cpdata/raw/fia/tree.parquet", storage_options={"account_name": "cpdataeuwest"}
)
df[["SPCD", "CARBON_BG", "CARBON_AG"]]
###Output
_____no_output_____
###Markdown
That lazily loads the data into a Dask DataFrame. We can operate on the DataFrame with pandas-like methods, and call `.compute()` to get the result. In this case, we'll compute the average amount of carbon sequestered above and below ground for each tree, grouped by species type. To cut down on execution time we'll select just the first partition.
###Code
result = (
df[["SPCD", "CARBON_BG", "CARBON_AG"]]
.get_partition(0)
.groupby("SPCD") # group by species
.mean()
.compute()
)
result
###Output
_____no_output_____
###Markdown
Reading Tabular DataThe Planetary Computer provides tabular data in the [Apache Parquet](https://parquet.apache.org/) file format, a standardized, high-performance columnar storage format.When working from Python, there are several options for reading parquet datasets. The right choice depends on the size and kind of the data you're reading. When reading geospatial data, with one or more columns containing vector geometries, we recommend using [geopandas](https://geopandas.org/) for small datasets and [dask-geopandas](https://github.com/geopandas/dask-geopandas) for large datasets. For non-geospatial tabular data, we recommend [pandas](https://pandas.pydata.org/) for small datasets and [Dask](https://dask.org/) for large datasets.Regardless of which library you're using to read the data, we recommend using [STAC](https://stacspec.org/) to discover which datasets are available, and which options should be provided when reading the data.In this example we'll work with data from the US Forest Service's [Forest Inventory and Analysis](https://planetarycomputer.microsoft.com/dataset/fia) dataset. This includes a collection of tables providing information about forest health and location in the United States.
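(An aside on the geospatial path mentioned above: the FIA tables used in this notebook contain no geometry column, so pandas and Dask are sufficient. Purely as a hypothetical sketch, a GeoParquet table that does contain geometries could be read with geopandas; the file name below is a placeholder, not a real asset.)
###Code
import geopandas
# Hypothetical example: "example_geodata.parquet" stands in for a GeoParquet
# dataset with a geometry column; it is not one of the FIA assets used below.
gdf = geopandas.read_parquet("example_geodata.parquet")
gdf.head()
###Output
_____no_output_____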
###Code
import pystac_client
catalog = pystac_client.Client.open(
"https://planetarycomputer.microsoft.com/api/stac/v1"
)
fia = catalog.get_collection("fia")
fia
###Output
_____no_output_____
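###Markdown
For the geospatial case mentioned above, the same pattern applies with geopandas or dask-geopandas in place of pandas or Dask. The cell below is only an illustrative sketch: the container path and account name are made-up placeholders rather than real Planetary Computer assets, and it assumes dask-geopandas is installed and that the Parquet files contain a geometry column.
###Code
import dask_geopandas
# hypothetical geospatial Parquet dataset -- substitute the href and storage options of a real asset
gdf = dask_geopandas.read_parquet(
    "az://example-container/example-geoparquet/",
    storage_options={"account_name": "exampleaccount"},
)
gdf.head()  # computes the first few rows, including the geometry column
###Output
_____no_output_____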
###Markdown
The FIA Collection has a number of items, each of which represents a different table stored in Parquet format.
###Code
list(fia.get_all_items())
###Output
_____no_output_____
###Markdown
To load a single table, get its item and extract the `href` from the `data` asset. The "boundary" table, which provides information about subplots, is relatively small and doesn't contain a geospatial geometry column, so it can be read with pandas.
###Code
import pandas as pd
import planetary_computer
boundary = fia.get_item(id="boundary")
boundary = planetary_computer.sign(boundary)
asset = boundary.assets["data"]
df = pd.read_parquet(
asset.href,
storage_options=asset.extra_fields["table:storage_options"],
columns=["CN", "AZMLEFT", "AZMCORN"],
)
df.head()
###Output
_____no_output_____
###Markdown
There are a few important pieces to highlight: 1. As usual with the Planetary Computer, we signed the STAC item so that we could access the data. See [Using tokens for data access](https://planetarycomputer.microsoft.com/docs/concepts/sas/) for more. 2. We relied on the asset to provide all the information necessary to load the data, like the `href` and the `storage_options`. All we needed to know was the ID of the Collection and Item. 3. We used pandas' and parquet's ability to select subsets of the data with the `columns` keyword. Larger datasets can be read using [Dask](https://dask.org/). For example, the `cpdata/raw/fia/tree.parquet` folder contains about 160 individual Parquet files, totalling about 22 million rows. In this case, pass the path to the directory to `dask.dataframe.read_parquet`.
###Code
import dask.dataframe as dd
tree = planetary_computer.sign(fia.get_item(id="tree"))
asset = tree.assets["data"]
df = dd.read_parquet(
asset.href,
storage_options=asset.extra_fields["table:storage_options"],
columns=["SPCD", "CARBON_BG", "CARBON_AG"],
engine="pyarrow",
)
df
###Output
_____no_output_____
###Markdown
That lazily loads the data into a Dask DataFrame. We can operate on the DataFrame with pandas-like methods, and call `.compute()` to get the result. In this case, we'll compute the average amount of carbon sequestered above and below ground for each tree, grouped by species type. To cut down on execution time we'll select just the first partition.
###Code
result = df.get_partition(0).groupby("SPCD").mean().compute() # group by species
result
###Output
_____no_output_____ |
scotch_reviews.ipynb | ###Markdown
Scotch Reviews Created by Chris Ceder - 12/30/12
###Code
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Activation
scotch_df = pd.read_csv('scotch_review.csv')
scotch_df
ranked_scotch = scotch_df[['name', 'review.point', 'description']].sort_values(by='review.point', ascending=False)
ranked_scotch
def create_sentiment(points):
    # bucket the 0-100 review points into three sentiment classes: low (0), mid (1), high (2)
    if points >= 60 and points < 85:
        return 0
    if points >= 85 and points < 91:
        return 1
    if points >= 91 and points <= 100:
        return 2
ranked_scotch['Sentiment'] = ranked_scotch['review.point'].apply(create_sentiment)
tokenizer = keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(ranked_scotch.description)
tokenizer.get_config()
ranked_scotch = tokenizer.texts_to_sequences(ranked_scotch.description)
ranked_scotch
ranked_scotch = tokenizer.sequences_to_matrix(ranked_scotch)
ranked_scotch
model = Sequential()
model.add(Dense(677, activation='sigmoid'))
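# A possible way to finish and train the 3-class sentiment classifier (a sketch only;
# it assumes the 0/1/2 labels were kept in a separate array, e.g.
# sentiment_labels = ranked_scotch['Sentiment'].values, saved before ranked_scotch
# was overwritten by the token matrix above):
# model.add(Dense(3, activation='softmax'))
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model.fit(ranked_scotch, sentiment_labels, epochs=10, validation_split=0.2)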
###Output
_____no_output_____ |
fpc_methods/SFC_HAE/SFC_HAE ipynbs/HAE_16II_I4.ipynb | ###Markdown
Mounting your google driveYou can use google drive to store and access files e.g. storing and loading data from numpy or CSV files. Use the following command to mount your GDrive and access your files.
###Code
from google.colab import drive
drive.mount('/content/gdrive/')
!pip install ffmpeg
!pip install vtk
import os
# change the current path. The user can adjust the path depend on the requirement
os.chdir("/content/gdrive/MyDrive/Cola-Notebooks/FYP/YF")
import vtktools
! /opt/bin/nvidia-smi
# !unzip csv_data.zip
%matplotlib inline
import numpy as np
import pandas as pd
import scipy
import numpy.linalg as la
import scipy.linalg as sl
import scipy.sparse.linalg as spl
import matplotlib.pyplot as plt
import torch.nn as nn # Neural network module
import scipy.sparse as sp
import scipy.optimize as sop
import progressbar
# making slopes
import torch
from torch.utils.data import TensorDataset
import torch.nn.functional as F
from matplotlib.pyplot import LinearLocator
import matplotlib as mpl
import matplotlib.colors as colors
# create an animation
from matplotlib import animation
from IPython.display import HTML
from matplotlib import animation
import math
import ffmpeg
!pip install pycm livelossplot
%pylab inline
from livelossplot import PlotLosses
from torch.utils.data import DataLoader
import torch.utils.data as Data
import time
import platform
print('python version', platform.python_version())
print('torch version', torch.__version__)
print('numpy version', np.version.version)
import random
def set_seed(seed):
"""
Use this to set ALL the random seeds to a fixed value and take out any randomness from cuda kernels
"""
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.benchmark = True ##uses the inbuilt cudnn auto-tuner to find the fastest convolution algorithms. -
torch.backends.cudnn.enabled = True
return True
device = 'cuda' # set our device to the GPU
print('Cuda installed, running on GPU!')
# These functions are saved in function.py and the note are also added to that file
def saveIndex(path_train, path_valid, path_test,train_index, valid_index, test_index):
# save training and validation loss
np.savetxt(path_train,train_index, delimiter=',')
np.savetxt(path_valid,valid_index, delimiter=',')
np.savetxt(path_test,test_index, delimiter=',')
def getIndex(path_train,path_valid,path_test):
train_index = np.loadtxt(path_train,delimiter=",")
valid_index = np.loadtxt(path_valid,delimiter=",")
test_index = np.loadtxt(path_test,delimiter=",")
return train_index,valid_index,test_index
def saveMode(path_train, path_valid, path_test,mode_train, mode_valid, mode_test):
# save training and validation loss
np.savetxt(path_train,mode_train.cpu().data.numpy(), delimiter=',')
np.savetxt(path_valid,mode_valid.cpu().data.numpy(), delimiter=',')
np.savetxt(path_test,mode_test.cpu().data.numpy(), delimiter=',')
def getMode(path_train,path_valid,path_test):
mode_train = np.loadtxt(path_train,delimiter=",")
mode_valid = np.loadtxt(path_valid,delimiter=",")
mode_test = np.loadtxt(path_test,delimiter=",")
return mode_train,mode_valid,mode_test
def saveCsv(pathcsv,EPOCH):
# save training and validation loss
losses_combined = np.zeros((EPOCH,3))
losses_combined[:,0] = np.asarray(epoch_list)
losses_combined[:,1] = np.asarray(loss_list)
losses_combined[:,2] = np.asarray(loss_valid)
np.savetxt(pathcsv, losses_combined , delimiter=',')
def PlotMSELoss(pathName,name):
epoch = pd.read_csv(pathName,usecols=[0]).values
train_loss = pd.read_csv(pathName,usecols=[1]).values
val_loss = pd.read_csv(pathName,usecols=[2]).values
fig = plt.figure(figsize=(10,7))
axe1 = plt.subplot(111)
axe1.semilogy(epoch,train_loss,label = "train")
axe1.plot(epoch,val_loss,label = "valid")
axe1.legend(loc = "best",fontsize=14)
axe1.set_xlabel("$epoch$",fontsize=14)
axe1.set_ylabel("$MSE loss$",fontsize=14)
axe1.set_title(name,fontsize=14)
def getTotal_decoded(training_decoded,valid_decoded,test_decoded,train_index,valid_index,test_index):
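    # reassemble the decoded train/valid/test snapshots into one array in the original snapshot order, using the saved index splits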
total_decoded = np.zeros((nTotal,nNodes,2))
for i in range(len(train_index)):
total_decoded[int(train_index[i]),:,0] = training_decoded.cpu().detach().numpy()[i,:,0]
total_decoded[int(train_index[i]),:,1] = training_decoded.cpu().detach().numpy()[i,:,1]
for i in range(len(valid_index)):
total_decoded[int(valid_index[i]),:,0] = valid_decoded.cpu().detach().numpy()[i,:,0]
total_decoded[int(valid_index[i]),:,1] = valid_decoded.cpu().detach().numpy()[i,:,1]
for i in range(len(test_index)):
total_decoded[int(test_index[i]),:,0] = test_decoded.cpu().detach().numpy()[i,:,0]
total_decoded[int(test_index[i]),:,1] = test_decoded.cpu().detach().numpy()[i,:,1]
return total_decoded
def getMSELoss(pathName):
epoch = pd.read_csv(pathName,usecols=[0]).values
train_loss = pd.read_csv(pathName,usecols=[1]).values
val_loss = pd.read_csv(pathName,usecols=[2]).values
return train_loss,val_loss,epoch
# def get_clean_vtu(filename):
# "Removes fields and arrays from a vtk file, leaving the coordinates/connectivity information."
# vtu_data = vtktools.vtu(filename)
# clean_vtu = vtktools.vtu()
# clean_vtu.ugrid.DeepCopy(vtu_data.ugrid)
# fieldNames = clean_vtu.GetFieldNames()
# # remove all fields and arrays from this vtu
# for field in fieldNames:
# clean_vtu.RemoveField(field)
# fieldNames = clean_vtu.GetFieldNames()
# vtkdata=clean_vtu.ugrid.GetCellData()
# arrayNames = [vtkdata.GetArrayName(i) for i in range(vtkdata.GetNumberOfArrays())]
# for array in arrayNames:
# vtkdata.RemoveArray(array)
# return clean_vtu
def index_split(train_ratio, valid_ratio, test_ratio, total_num):
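    # randomly permute all snapshot indices, then split them into train/valid/test subsets according to the given ratios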
if train_ratio + valid_ratio + test_ratio != 1:
raise ValueError("Three input ratio should sum to be 1!")
total_index = np.arange(total_num)
rng = np.random.default_rng()
total_index = rng.permutation(total_index)
knot_1 = int(total_num * train_ratio)
knot_2 = int(total_num * valid_ratio) + knot_1
train_index, valid_index, test_index = np.split(total_index, [knot_1, knot_2])
return train_index, valid_index, test_index
def saveNumpy(path,mode):
np.savetxt(path,mode, delimiter=',')
def get1Mode(path):
mode = np.loadtxt(path,delimiter=",")
return mode
def oneMSE(error_autoencoder):
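    # mean squared error of each individual snapshot, averaged over all nodes and both velocity components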
N = error_autoencoder.shape[0]
MSE_list = np.zeros([N,1])
for i in range(N):
MSE_list[i,0] = (error_autoencoder[i,:,:]**2).mean()
return MSE_list
path_train = "/content/gdrive/MyDrive/Cola-Notebooks/FYP/YF/"+"new_FPC_train_index.csv"
path_valid = "/content/gdrive/MyDrive/Cola-Notebooks/FYP/YF/"+"new_FPC_valid_index.csv"
path_test = "/content/gdrive/MyDrive/Cola-Notebooks/FYP/YF/"+"new_FPC_test_index.csv"
# saveIndex(path_train, path_valid, path_test,train_index, valid_index, test_index)
# Load the train_index, valid_index and test_index
train_index,valid_index,test_index= getIndex(path_train,path_valid,path_test)
print(test_index)
###Output
[ 133. 490. 1480. 730. 481. 1382. 440. 750. 1502. 1451. 692. 1094.
1679. 510. 1241. 1101. 543. 1312. 1432. 1988. 1148. 1801. 1519. 367.
1858. 1043. 1175. 1218. 1479. 103. 1363. 800. 258. 1851. 267. 999.
611. 1824. 318. 753. 1413. 727. 1273. 1358. 1090. 838. 250. 1763.
1038. 439. 1199. 334. 1848. 1924. 1013. 271. 936. 600. 1553. 423.
1467. 1658. 929. 1748. 783. 329. 303. 1067. 868. 374. 1102. 1843.
683. 449. 855. 1142. 1393. 194. 1112. 636. 1617. 1910. 1722. 536.
1149. 1765. 468. 1922. 1703. 1311. 341. 110. 1258. 1257. 1711. 93.
1969. 396. 1259. 199. 962. 1704. 462. 1407. 634. 535. 1505. 537.
612. 1707. 1565. 1963. 1955. 3. 1058. 1946. 372. 1653. 1077. 414.
469. 680. 1430. 649. 215. 234. 1692. 653. 1455. 582. 1169. 1138.
411. 518. 865. 1977. 1688. 822. 397. 1388. 1221. 239. 249. 1781.
1751. 915. 278. 1970. 907. 477. 1552. 703. 870. 916. 1650. 561.
1401. 129. 1123. 1804. 1871. 1527. 308. 94. 1911. 1425. 1574. 72.
399. 1410. 1818. 926. 897. 1238. 1628. 498. 1066. 1908. 36. 550.
1010. 524. 996. 732. 1048. 1041. 1474. 1339. 1889. 1289. 1795. 869.
1935. 1837. 684. 380. 967. 1445. 1729. 160.]
###Markdown
Hierarchical autoencoder First subnetwork load data
###Code
os.chdir('/content/gdrive/MyDrive/Cola-Notebooks/FYP/YF')
print(os.getcwd())
# read in the data (1000 csv files)
nTrain = 1600
nValid = 200
nTest = 200
nTotal = nTrain + nValid + nTest
nNodes = 20550 # should really work this out
# The below method to load data is too slow. Therefore, we use load pt file
# [:, :, 2] is speed, [:, :, 3] is u, [:, :, 4] is v
# (speed not really needed)
# [:, :, 0] and [:, :, 1] are the SFC orderings
# training_data = np.zeros((nTrain,nNodes,5))
# for i in range(nTrain):
# data = np.loadtxt('csv_data/data_' +str(int(train_index[i]))+ '.csv', delimiter=',')
# training_data[i,:,:] = data
# training_data = np.array(training_data)
# print('size training data', training_data.shape)
# valid_data = np.zeros((nValid,nNodes,5))
# for i in range(nValid):
# data = np.loadtxt('csv_data/data_' +str(int(valid_index[i]))+ '.csv', delimiter=',')
# valid_data[i,:,:] = data
# valid_data = np.array(valid_data)
# print('size validation data', valid_data.shape)
# test_data = np.zeros((nTest,nNodes,5))
# for i in range(nTest):
# data = np.loadtxt('csv_data/data_' +str(int(test_index[i]))+ '.csv', delimiter=',')
# test_data[i,:,:] = data
# test_data = np.array(test_data)
# print('size test data', test_data.shape)
# total_data = np.zeros((nTotal,nNodes,5))
# for i in range(len(train_index)):
# total_data[int(train_index[i]),:,:] = training_data[i,:,:]
# for i in range(len(valid_index)):
# total_data[int(valid_index[i]),:,:] = valid_data[i,:,:]
# for i in range(len(test_index)):
# total_data[int(test_index[i]),:,:] = test_data[i,:,:]
# print('size total data', total_data.shape)
# Before we save the pt file, we must load the data according to the above method
# torch.save(training_data, '/content/gdrive/MyDrive/FPC_new_random_train.pt')
# torch.save(valid_data, '/content/gdrive/MyDrive/FPC_new_random_valid.pt')
# torch.save(test_data, '/content/gdrive/MyDrive/FPC_new_random_test.pt')
# torch.save(total_data, '/content/gdrive/MyDrive/FPC_new_random_total.pt')
# load the data, this method save the time
training_data = torch.load('/content/gdrive/MyDrive/FPC_new_random_train.pt')
valid_data = torch.load('/content/gdrive/MyDrive/FPC_new_random_valid.pt')
test_data = torch.load('/content/gdrive/MyDrive/FPC_new_random_test.pt')
total_data = torch.load('/content/gdrive/MyDrive/FPC_new_random_total.pt')
print(training_data.shape)
print(valid_data.shape)
print(test_data.shape)
print(total_data.shape)
# rescale the data so that u and v data lies in the range [-1,1] (and speed in [0,1])
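# for a variable with range [mi, ma], the affine map k*x + b with k = r/(ma - mi) and b = 1 - k*ma sends ma to 1 and mi to 1 - r, so r = 1 gives [0, 1] and r = 2 gives [-1, 1]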
ma = np.max(training_data[:, :, 2])
mi = np.min(training_data[:, :, 2])
k = 1./(ma - mi)
b = 1 - k*ma #k*mi
training_data[:, :, 2] = k * training_data[:, :, 2] + b #- b
# this won't be used
ma = np.max(training_data[:, :, 3])
mi = np.min(training_data[:, :, 3])
ku = 2./(ma - mi)
bu = 1 - ku*ma
training_data[:, :, 3] = ku * training_data[:, :, 3] + bu
valid_data[:, :, 3] = ku * valid_data[:, :, 3] + bu
test_data[:, :, 3] = ku * test_data[:, :, 3] + bu
total_data[:, :, 3] = ku * total_data[:, :, 3] + bu
ma = np.max(training_data[:, :, 4])
mi = np.min(training_data[:, :, 4])
kv = 2./(ma - mi)
bv = 1 - kv*ma
training_data[:, :, 4] = kv * training_data[:, :, 4] + bv
valid_data[:, :, 4] = kv * valid_data[:, :, 4] + bv
test_data[:, :, 4] = kv * test_data[:, :, 4] + bv
total_data[:, :, 4] = kv * total_data[:, :, 4] + bv
###Output
_____no_output_____
###Markdown
Network architecture
###Code
# SFC-CAE: one curve with nearest neighbour smoothing and compressing to 2 latent variables
print("compress to 2")
Latent_num = 2
torch.manual_seed(42)
# Hyper-parameters
EPOCH = 3001
BATCH_SIZE = 16
LR = 0.0001
k = nNodes # number of nodes - this has to match training_data.shape[0]
print(training_data.shape) # nTrain by number of nodes by 5
# Data Loader for easy mini-batch return in training
train_loader = Data.DataLoader(dataset = training_data, batch_size = BATCH_SIZE, shuffle = True)
# Standard
class CNN_1(nn.Module):
def __init__(self):
super(CNN_1, self).__init__()
self.encoder_h1 = nn.Sequential(
# input shape (16,4,20550) # The first 16 is the batch size
nn.Tanh(),
nn.Conv1d(4, 8, 16, 4, 9),
# output shape (16, 8, 5139)
nn.Tanh(),
nn.Conv1d(8, 8, 16, 4, 9),
# output shape (16, 8,1286)
nn.Tanh(),
nn.Conv1d(8, 16, 16, 4, 9),
# output shape (16,16,323)
nn.Tanh(),
nn.Conv1d(16, 16, 16, 4, 9),
# output shape (16, 16, 82)
nn.Tanh(),
)
self.fc1 = nn.Sequential(
nn.Linear(16*82, 2),
nn.Tanh(),
)
self.fc2 = nn.Sequential(
nn.Linear(2, 16*82),
nn.Tanh(),
)
self.decoder_h1 = nn.Sequential(
# (16, 16, 82)
nn.Tanh(),
nn.ConvTranspose1d(16, 16, 17, 4, 9), # (16, 16, 323)
nn.Tanh(),
nn.ConvTranspose1d(16, 8, 16, 4, 9), # (16, 8, 1286)
nn.Tanh(),
nn.ConvTranspose1d(8, 8, 17, 4, 9), # (16, 8, 5139)
nn.Tanh(),
nn.ConvTranspose1d(8, 4, 16, 4, 9), # (16, 4, 20550)
nn.Tanh(),
)
# input sparse layers, initialize weight as 0.33, bias as 0
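        # each channel combines a node's value with its two neighbouring entries in the SFC ordering (three learnable weight vectors plus a bias), i.e. a learnable 3-point smoothing stencil; 0.33 ~ 1/3 starts it close to a plain average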
self.weight1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight1_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight1_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias1 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight11 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight11_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight11_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias11 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight2 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight2_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight2_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias2 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight22 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight22_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight22_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias22 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight3 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight3_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight3_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias3 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
self.weight33 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight33_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight33_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias33 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
self.weight4 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight4_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight4_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias4 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
self.weight44 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight44_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight44_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias44 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
# output sparse layers, initialize weight as 0.083, bias as 0
self.weight_out1 = torch.nn.Parameter(torch.FloatTensor(0.083 *torch.ones(k)),requires_grad = True)
self.weight_out1_0 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out1_1 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out11 = torch.nn.Parameter(torch.FloatTensor(0.083 *torch.ones(k)),requires_grad = True)
self.weight_out11_0 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out11_1 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out2 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out2_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out2_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out22 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out22_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out22_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out3 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out3_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out3_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out33 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out33_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out33_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out4 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out4_0= torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out4_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out44 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out44_0= torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out44_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.bias_out1 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.bias_out2 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
def forward(self, x):
# print("X_size",x.size())
# first curve
ToSFC1 = x[:, :, 0] # The first column is the first SFC ordering
ToSFC1Up = torch.zeros_like(ToSFC1)
ToSFC1Down = torch.zeros_like(ToSFC1)
ToSFC1Up[:-1] = ToSFC1[1:]
ToSFC1Up[-1] = ToSFC1[-1]
ToSFC1Down[1:] = ToSFC1[:-1]
ToSFC1Down[0] = ToSFC1[0]
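        # ToSFC1Up / ToSFC1Down are shifted copies of the SFC ordering, used below to gather each node's neighbours along the curve in the sparse smoothing layers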
batch_num = ToSFC1.shape[0]
x1 = x[:, :, 3:5] # The fourth column and fifth column are velocities u and v respectively
#print("x1", x1.shape) # # (16, 20550, 2)
x1_1d = torch.zeros((batch_num, 4, k)).to(device)
# first input sparse layer, then transform to sfc order1
for j in range(batch_num):
x1_1d[j, 0, :] = x1[j, :, 0][ToSFC1[j].long()] * self.weight1 + \
x1[j, :, 0][ToSFC1Up[j].long()] * self.weight1_0 + \
x1[j, :, 0][ToSFC1Down[j].long()] * self.weight1_1 + self.bias1
x1_1d[j, 1, :] = x1[j, :, 0][ToSFC1[j].long()] * self.weight11 + \
x1[j, :, 0][ToSFC1Up[j].long()] * self.weight11_0 + \
x1[j, :, 0][ToSFC1Down[j].long()] * self.weight11_1 + self.bias11
x1_1d[j, 2, :] = x1[j, :, 1][ToSFC1[j].long()] * self.weight2 + \
x1[j, :, 1][ToSFC1Up[j].long()] * self.weight2_0 + \
x1[j, :, 1][ToSFC1Down[j].long()] * self.weight2_1 + self.bias2
x1_1d[j, 3, :] = x1[j, :, 1][ToSFC1[j].long()] * self.weight22 + \
x1[j, :, 1][ToSFC1Up[j].long()] * self.weight22_0 + \
x1[j, :, 1][ToSFC1Down[j].long()] * self.weight22_1 + self.bias22
# first cnn encoder
encoded_1 = self.encoder_h1(x1_1d.view(-1, 4, k)) #(16,4,20550)
# print("encoded", encoded_1.shape)
# flatten and concatenate
encoded_3 = encoded_1.view(-1,16*82)
# print("Before FC", encoded_3.shape)
# fully connection
        encoded = self.fc1(encoded_3) # (b, 2)
# print("After encoder FC,the output of encoder",encoded.shape)
decoded_3 = self.decoder_h1(self.fc2(encoded).view(-1, 16, 82))
# print("The output of decoder: ", decoded_3.shape)
BackSFC1 = torch.argsort(ToSFC1)
BackSFC1Up = torch.argsort(ToSFC1Up)
BackSFC1Down = torch.argsort(ToSFC1Down)
decoded_sp = torch.zeros((batch_num, k, 2)).to(device)
# output sparse layer, resort according to sfc transform
for j in range(batch_num):
decoded_sp[j, :, 0] = decoded_3[j, 0, :][BackSFC1[j].long()]* self.weight_out1 + \
decoded_3[j, 0, :][BackSFC1Up[j].long()] * self.weight_out1_0 + \
decoded_3[j, 0, :][BackSFC1Down[j].long()] * self.weight_out1_1 + \
decoded_3[j, 1, :][BackSFC1[j].long()]* self.weight_out11 + \
decoded_3[j, 1, :][BackSFC1Up[j].long()] * self.weight_out11_0 + \
decoded_3[j, 1, :][BackSFC1Down[j].long()] * self.weight_out11_1 + self.bias_out1
decoded_sp[j, :, 1] = decoded_3[j, 2, :][BackSFC1[j].long()] * self.weight_out3 + \
decoded_3[j, 2, :][BackSFC1Up[j].long()] * self.weight_out3_0 + \
decoded_3[j, 2, :][BackSFC1Down[j].long()] * self.weight_out3_1 + \
decoded_3[j, 3, :][BackSFC1[j].long()] * self.weight_out33 + \
decoded_3[j, 3, :][BackSFC1Up[j].long()] * self.weight_out33_0 + \
decoded_3[j, 3, :][BackSFC1Down[j].long()] * self.weight_out33_1 + self.bias_out2
# resort 1D to 2D
decoded = F.tanh(decoded_sp) # both are BATCH_SIZE by nNodes by 2
return encoded, decoded
###Output
_____no_output_____
###Markdown
Train
###Code
# The first network has been trained at SFC-CAE. Therefore, the mode we can load directly
# autoencoder = torch.load("./SFC_CAE/pkl/II_Eran3000_LV2_B16_n1600_L0.0001.pkl")
# pass training, validation and test data through the autoencoder
# t_predict_0 = time.time()
# mode_1train, training_decoded = autoencoder.to(device)(torch.tensor(training_data).to(device))
# error_autoencoder = (training_decoded.cpu().detach().numpy() - training_data[:,:,3:5])
# print("MSE_err of training data", (error_autoencoder**2).mean())
# mode_1valid, valid_decoded = autoencoder.to(device)(torch.tensor(valid_data).to(device))
# error_autoencoder = (valid_decoded.cpu().detach().numpy() - valid_data[:, :, 3:5])
# print("Mse_err of validation data", (error_autoencoder**2).mean())
# mode_1test, test_decoded = autoencoder.to(device)(torch.tensor(test_data).to(device))
# error_autoencoder = (test_decoded.cpu().detach().numpy() - test_data[:, :, 3:5])
# print("Mse_err of test data", (error_autoencoder**2).mean())
# t_predict_1 = time.time()
# total_decoded = getTotal_decoded(training_decoded,valid_decoded,test_decoded,train_index,valid_index,test_index)
# error_autoencoder = (total_decoded - total_data[:, :, 3:5])
# print("Mse_err of total data", (error_autoencoder**2).mean())
# print(mode_1train.shape)
# print(mode_1valid.shape)
# print(mode_1test.shape)
# print('Predict time:',t_predict_1-t_predict_0)
###Output
_____no_output_____
###Markdown
Save and Plot loss
###Code
pathName = "./SFC_CAE/csv/II_Eran3000_LV2_B16_n1600_L0.0001.csv"
name = "SFC-CAE MSE loss of 2 compression variables"
PlotMSELoss(pathName,name)
###Output
_____no_output_____
###Markdown
Get mode
###Code
Latent_num = 2
torch.manual_seed(42)
BATCH_SIZE = 16
LR = 0.0001
nTrain = 1600
path_train = "./HAE/mode_new/II_mode1_LV"+str(Latent_num)+"_Eran"+str(3000) + "_B"+str(BATCH_SIZE)+"_n"+ str(nTrain)+"_L"+str(LR)+"_train.csv"
path_valid = "./HAE/mode_new/II_mode1_LV"+str(Latent_num)+"_Eran"+str(3000) + "_B"+str(BATCH_SIZE)+"_n"+ str(nTrain)+"_L"+str(LR)+"_valid.csv"
path_test = "./HAE/mode_new/II_mode1_LV"+str(Latent_num)+"_Eran"+str(3000) + "_B"+str(BATCH_SIZE)+"_n"+ str(nTrain)+"_L"+str(LR)+"_test.csv"
print(path_train)
# saveMode(path_train,path_valid,path_test,mode_1train,mode_1valid,mode_1test)
mode_1train,mode_1valid,mode_1test = getMode(path_train,path_valid,path_test)
mode_1train = torch.from_numpy(mode_1train).to(device)
mode_1valid = torch.from_numpy(mode_1valid).to(device)
mode_1test = torch.from_numpy(mode_1test).to(device)
print(mode_1train.shape)
print(mode_1test.shape)
print(mode_1valid.shape)
print(mode_1valid)
###Output
torch.Size([1600, 2])
torch.Size([200, 2])
torch.Size([200, 2])
tensor([[-1.6067e-01, -6.0914e-01],
[-6.7483e-01, -1.3558e-01],
[ 5.6826e-01, -2.5459e-01],
[-5.1274e-01, 4.0481e-01],
[ 5.5193e-01, -4.1480e-01],
[ 5.1502e-01, -3.2300e-02],
[-7.0195e-01, 2.4039e-02],
[-9.6313e-01, 2.5269e-01],
[-1.6802e-01, -6.1879e-01],
[ 1.3573e-02, -1.3445e-01],
[-1.2965e-02, -1.3394e-01],
[-3.0778e-01, -9.2148e-01],
[ 8.5247e-01, -7.3854e-01],
[ 6.7126e-01, -4.4685e-01],
[-8.8079e-01, -2.1218e-01],
[-3.8233e-02, -1.5102e-01],
[ 3.7645e-01, 2.1929e-01],
[ 3.3710e-01, -5.2522e-01],
[-7.3544e-01, -6.6473e-01],
[-6.4132e-01, 5.6740e-01],
[-5.8853e-01, -3.4657e-01],
[-3.8374e-01, 7.0756e-01],
[-1.2801e-01, -6.1552e-01],
[-9.5243e-01, 3.3528e-01],
[-6.3002e-01, -7.5842e-01],
[ 1.3645e-01, 1.0243e-01],
[ 4.9770e-01, -5.4784e-01],
[-4.0562e-01, -7.4432e-01],
[-3.0172e-01, -8.6753e-01],
[-2.5700e-02, -1.3625e-01],
[-1.0756e-02, -1.3325e-01],
[-8.0016e-01, -4.8827e-01],
[ 7.9833e-01, -6.0391e-01],
[ 8.2153e-01, -3.5402e-01],
[-3.2293e-01, 4.6698e-01],
[-9.6494e-01, 8.6302e-02],
[-3.1883e-01, 4.7087e-01],
[-7.5930e-01, -4.7540e-01],
[-2.2399e-01, -5.2497e-01],
[-9.7025e-01, 7.0327e-02],
[-9.0261e-02, -6.4030e-01],
[-6.1310e-01, 6.8782e-01],
[ 6.0632e-02, -6.6193e-01],
[-2.3229e-01, 5.1994e-01],
[ 8.1705e-01, -8.0925e-01],
[-9.5073e-04, -1.3105e-01],
[ 2.8826e-01, 3.7372e-01],
[ 8.8725e-01, -1.0399e-01],
[-9.5391e-01, 3.2673e-01],
[ 1.5141e-02, -1.3863e-01],
[ 8.7408e-01, -7.0220e-01],
[ 6.2016e-01, -1.4862e-01],
[-8.6308e-01, 5.4508e-01],
[-4.6024e-01, 4.9740e-01],
[-4.9620e-02, 6.0094e-01],
[ 1.2572e-01, 3.7247e-01],
[-5.3104e-01, -6.5892e-01],
[ 8.5630e-01, -7.2903e-01],
[ 1.6501e-01, -4.9241e-01],
[ 8.9578e-01, -6.9957e-01],
[-3.3123e-02, -1.4578e-01],
[ 8.4265e-01, -1.4138e-02],
[ 6.0917e-01, -4.2997e-01],
[ 3.2017e-02, -6.4756e-01],
[-6.2615e-01, -9.8961e-02],
[-6.0298e-01, 6.9417e-01],
[-2.4154e-01, -8.0081e-01],
[-7.3134e-01, 6.3471e-01],
[ 1.5478e-01, 3.7448e-01],
[-9.5691e-01, 2.5985e-01],
[ 2.3923e-01, -5.8800e-02],
[ 3.7933e-01, -5.9122e-01],
[-6.1449e-02, -8.1711e-01],
[ 1.2292e-01, 5.0035e-01],
[-9.5889e-01, 2.3324e-01],
[-1.6617e-01, 5.9632e-01],
[ 5.0793e-01, -7.3014e-01],
[ 4.7622e-02, 4.1166e-01],
[ 5.6872e-01, -3.1311e-01],
[ 5.6085e-01, -4.6075e-01],
[-6.9620e-02, 1.1523e-01],
[-6.7561e-01, -4.8201e-01],
[ 1.6385e-01, 3.6668e-01],
[ 2.1029e-01, 2.7996e-01],
[ 8.1117e-01, -8.1412e-01],
[-2.6087e-01, -5.8505e-01],
[-8.2214e-01, -5.5413e-01],
[ 1.0091e-01, -6.4847e-01],
[-5.4335e-01, 6.1284e-01],
[ 1.8223e-01, -7.3013e-01],
[ 8.8920e-01, -1.4935e-01],
[ 1.6763e-01, -6.4419e-01],
[ 4.4452e-01, 1.1852e-01],
[ 7.3987e-01, 7.1536e-02],
[ 2.5596e-01, 4.1835e-01],
[-9.6403e-01, 1.5448e-01],
[ 6.3057e-01, -7.9943e-01],
[-4.3272e-01, 4.4192e-01],
[ 6.1031e-01, -6.7628e-01],
[ 7.8401e-01, 6.9054e-02],
[ 8.1149e-01, 3.6830e-03],
[ 3.5122e-01, 2.7184e-01],
[ 4.1859e-01, 2.2061e-01],
[ 6.1381e-01, -5.6305e-01],
[ 1.6185e-01, -6.5975e-01],
[ 5.3869e-01, -1.0166e-01],
[-6.5426e-01, -3.1142e-01],
[ 7.1002e-01, 6.1435e-02],
[ 2.4002e-01, -6.5376e-01],
[-3.1981e-01, -9.1899e-01],
[ 1.4153e-01, -6.4662e-01],
[-3.0789e-01, -5.5596e-01],
[-5.3471e-01, 6.0174e-01],
[ 9.4405e-01, -3.8412e-01],
[-3.9303e-02, -1.5598e-01],
[-6.0758e-01, 1.7649e-01],
[-1.5562e-01, -9.2917e-01],
[ 6.8870e-02, 4.4360e-01],
[ 3.4789e-01, 3.0404e-01],
[-5.9847e-01, 2.9386e-01],
[ 1.7426e-01, 3.5957e-01],
[-7.7926e-01, 6.0112e-01],
[ 5.6627e-01, -2.2685e-01],
[-4.0806e-01, 3.2683e-01],
[-6.1778e-01, -2.7157e-01],
[-2.9336e-01, 4.6965e-01],
[-9.1791e-01, 4.2473e-01],
[-6.1300e-01, -1.7112e-01],
[-9.0806e-01, 4.3110e-01],
[-6.9916e-01, -2.6045e-02],
[-8.7655e-01, -5.0848e-01],
[-9.6973e-01, 1.6627e-02],
[ 6.7525e-01, -3.5808e-01],
[ 3.9278e-01, -5.8830e-01],
[ 5.8689e-01, -3.6265e-01],
[ 9.1865e-01, -2.7594e-01],
[-3.3084e-01, -5.0889e-01],
[-5.7668e-01, -6.7955e-01],
[-4.3657e-01, 4.4005e-01],
[-6.7762e-01, 6.6165e-01],
[-6.8883e-01, 3.0077e-01],
[-6.2191e-01, 2.6004e-01],
[ 9.6481e-02, 3.8998e-01],
[-2.4110e-02, -1.3306e-01],
[ 9.3469e-01, -5.8318e-01],
[-5.1758e-01, 3.8991e-01],
[-8.8389e-01, 5.1722e-01],
[-5.8712e-01, 3.2249e-01],
[-6.2805e-01, 2.2029e-01],
[ 1.9273e-02, 4.2134e-01],
[-6.2000e-01, -8.0570e-01],
[ 5.7576e-01, 1.1533e-01],
[-3.0782e-01, 4.2206e-01],
[-1.1492e-01, -1.3049e-01],
[ 2.0825e-01, -6.5768e-01],
[ 7.0103e-01, -3.3153e-01],
[ 5.5039e-01, -1.7839e-01],
[ 5.8695e-01, -2.4742e-01],
[ 1.6704e-01, -9.5915e-01],
[-5.6155e-01, 3.0939e-01],
[ 4.7356e-01, -9.5397e-01],
[ 5.2163e-01, -5.2162e-01],
[ 2.8600e-01, -6.4604e-01],
[ 4.8966e-01, 4.1140e-02],
[ 5.2059e-01, 2.5238e-01],
[ 6.7368e-01, -4.1768e-01],
[ 1.3215e-02, 4.2344e-01],
[ 5.2750e-01, -8.3227e-02],
[-6.3877e-01, 1.7302e-01],
[-3.2818e-01, 4.6731e-01],
[-5.1544e-02, -1.6405e-01],
[ 6.8391e-01, 1.4485e-01],
[-3.7518e-01, -5.1995e-01],
[ 5.2136e-01, -8.1631e-02],
[-1.3430e-01, 6.4343e-01],
[ 9.0494e-01, -5.7514e-01],
[-6.5605e-01, 4.0178e-01],
[-1.5349e-01, 6.4620e-01],
[-1.6063e-01, -2.0841e-02],
[-9.6131e-01, -4.0573e-02],
[-9.6411e-01, 1.2663e-01],
[ 5.8162e-01, -2.4927e-01],
[ 2.3146e-02, -9.4542e-01],
[ 9.0846e-01, -6.7051e-01],
[ 5.2004e-01, 2.4024e-01],
[-9.0095e-01, -5.5061e-02],
[-2.9964e-01, -4.6193e-01],
[ 9.2176e-01, -4.6568e-01],
[ 8.5012e-01, -2.9281e-01],
[-6.1496e-01, 9.8138e-02],
[ 3.7472e-01, -6.1952e-01],
[-8.5847e-01, 5.5058e-01],
[ 6.6163e-01, -4.3064e-01],
[ 2.4736e-01, -9.4699e-01],
[ 7.0158e-01, -8.8146e-01],
[-2.1489e-02, 3.6318e-01],
[-4.8272e-02, -1.6243e-01],
[ 5.7745e-02, 4.0628e-01],
[ 9.3572e-01, -3.1791e-01],
[-8.7965e-01, 5.2355e-01]], device='cuda:0', dtype=torch.float64)
###Markdown
Second network Network architecture
###Code
# SFC-HAE: one curve with nearest neighbour smoothing and compressing to 4 latent variables
print("compress to 4")
torch.manual_seed(42)
# Hyper-parameters
Latent_num = 4
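# hierarchical structure: this subnetwork learns 2 new latent variables and concatenates them with the 2 latent variables of the first subnetwork (loaded from file and detached, so not trained further), giving 4 in total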
EPOCH = 3001
BATCH_SIZE = 16
LR = 0.0001
k = nNodes # number of nodes - this has to match training_data.shape[0]
print(training_data.shape) # nTrain by number of nodes by 5
# Combining the input data and the latent variables (mode) from the first subnetwork
train_set = TensorDataset(torch.from_numpy(training_data), mode_1train)
# Data Loader for easy mini-batch return in training
train_loader = Data.DataLoader(dataset = train_set, batch_size =BATCH_SIZE , shuffle = True)
class CNN_2(nn.Module):
def __init__(self):
super(CNN_2, self).__init__()
self.encoder_h1 = nn.Sequential(
# input shape (16,4,20550) # The first 16 is the batch size
nn.Tanh(),
nn.Conv1d(4, 16, 32, 4, 16),
# output shape (16, 16, 5138)
nn.Tanh(),
nn.Conv1d(16, 16, 32, 4, 16),
# output shape (16, 16,1285)
nn.Tanh(),
nn.Conv1d(16, 16, 32, 4, 16),
# output shape (16,16,322)
nn.Tanh(),
nn.Conv1d(16, 16, 32, 4, 16),
# output shape (16,16,81)
nn.Tanh(),
)
self.fc1 = nn.Sequential(
nn.Linear(1296, 2),
nn.Tanh(),
)
self.fc2 = nn.Sequential(
nn.Linear(2*2, 16*81),
nn.Tanh(),
)
self.decoder_h1 = nn.Sequential(
# (b, 16, 81)
nn.Tanh(),
nn.ConvTranspose1d(16, 16, 32, 4, 15), # (16, 16, 322)
nn.Tanh(),
nn.ConvTranspose1d(16, 16, 32, 4, 15), # (16, 16, 1286)
nn.Tanh(),
nn.ConvTranspose1d(16, 16, 32, 4, 16), # (16, 16, 5140)
nn.Tanh(),
nn.ConvTranspose1d(16, 4, 32, 4, 19), # (16, 4, 20550)
nn.Tanh(),
)
# input sparse layers, initialize weight as 0.33, bias as 0
self.weight1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight1_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight1_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias1 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight11 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight11_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight11_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias11 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight2 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight2_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight2_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias2 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight22 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight22_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight22_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias22 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.weight3 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight3_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight3_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias3 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
self.weight33 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight33_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight33_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias33 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
self.weight4 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight4_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight4_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias4 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
self.weight44 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight44_0 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.weight44_1 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.ones(k)),requires_grad = True)
self.bias44 = torch.nn.Parameter(torch.FloatTensor(0.33 * torch.zeros(k)),requires_grad = True)
# output sparse layers, initialize weight as 0.083, bias as 0
self.weight_out1 = torch.nn.Parameter(torch.FloatTensor(0.083 *torch.ones(k)),requires_grad = True)
self.weight_out1_0 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out1_1 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out11 = torch.nn.Parameter(torch.FloatTensor(0.083 *torch.ones(k)),requires_grad = True)
self.weight_out11_0 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out11_1 = torch.nn.Parameter(torch.FloatTensor(0.083* torch.ones(k)),requires_grad = True)
self.weight_out2 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out2_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out2_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out22 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out22_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out22_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out3 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out3_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out3_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out33 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out33_0 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out33_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out4 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out4_0= torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out4_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out44 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out44_0= torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.weight_out44_1 = torch.nn.Parameter(torch.FloatTensor(0.083 * torch.ones(k)),requires_grad = True)
self.bias_out1 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
self.bias_out2 = torch.nn.Parameter(torch.FloatTensor(torch.zeros(k)),requires_grad = True)
def forward(self, x, mode):
# print("X_size",x.size())
# first curve
ToSFC1 = x[:, :, 0] # # The first column is the first SFC ordering
ToSFC1Up = torch.zeros_like(ToSFC1)
ToSFC1Down = torch.zeros_like(ToSFC1)
ToSFC1Up[:-1] = ToSFC1[1:]
ToSFC1Up[-1] = ToSFC1[-1]
ToSFC1Down[1:] = ToSFC1[:-1]
ToSFC1Down[0] = ToSFC1[0]
batch_num = ToSFC1.shape[0]
#print("ToSFC1",ToSFC1.shape) # (16, 20550)
x1 = x[:, :, 3:5] # The fourth column and fifth column are velocities u and v respectively
#print("x1", x1.shape) # # (16, 20550, 2)
x1_1d = torch.zeros((batch_num, 4, k)).to(device)
# first input sparse layer, then transform to sfc order1
for j in range(batch_num):
x1_1d[j, 0, :] = x1[j, :, 0][ToSFC1[j].long()] * self.weight1 + \
x1[j, :, 0][ToSFC1Up[j].long()] * self.weight1_0 + \
x1[j, :, 0][ToSFC1Down[j].long()] * self.weight1_1 + self.bias1
x1_1d[j, 1, :] = x1[j, :, 0][ToSFC1[j].long()] * self.weight11 + \
x1[j, :, 0][ToSFC1Up[j].long()] * self.weight11_0 + \
x1[j, :, 0][ToSFC1Down[j].long()] * self.weight11_1 + self.bias11
x1_1d[j, 2, :] = x1[j, :, 1][ToSFC1[j].long()] * self.weight2 + \
x1[j, :, 1][ToSFC1Up[j].long()] * self.weight2_0 + \
x1[j, :, 1][ToSFC1Down[j].long()] * self.weight2_1 + self.bias2
x1_1d[j, 3, :] = x1[j, :, 1][ToSFC1[j].long()] * self.weight22 + \
x1[j, :, 1][ToSFC1Up[j].long()] * self.weight22_0 + \
x1[j, :, 1][ToSFC1Down[j].long()] * self.weight22_1 + self.bias22
# first cnn encoder
encoded_1 = self.encoder_h1(x1_1d.view(-1, 4, k)) #(16,4,20550)
# print("encoded", encoded_1.shape)
# flatten and concatenate
encoded_3 = encoded_1.view(-1,16*81)
# print("Before FC", encoded_3.shape)
# fully connection
        encoded = self.fc1(encoded_3) # (b, 2)
        # print("After encoder FC,the output of encoder",encoded.shape)
        encoded = torch.cat((encoded, mode),axis = 1) # concatenate mode_1 (the first subnetwork's latent variables), giving (b, 4)
# print("encoded_combine",encoded.shape)
decoded_3 = self.decoder_h1(self.fc2(encoded).view(-1, 16, 81))
# print("The output of decoder: ", decoded_3.shape) # (16, 2, 20550)
BackSFC1 = torch.argsort(ToSFC1)
BackSFC1Up = torch.argsort(ToSFC1Up)
BackSFC1Down = torch.argsort(ToSFC1Down)
# k = 20550
# batch_num = ToSFC1.shape[0]
decoded_sp = torch.zeros((batch_num, k, 2)).to(device)
# output sparse layer, resort according to sfc transform
for j in range(batch_num):
decoded_sp[j, :, 0] = decoded_3[j, 0, :][BackSFC1[j].long()]* self.weight_out1 + \
decoded_3[j, 0, :][BackSFC1Up[j].long()] * self.weight_out1_0 + \
decoded_3[j, 0, :][BackSFC1Down[j].long()] * self.weight_out1_1 + \
decoded_3[j, 1, :][BackSFC1[j].long()]* self.weight_out11 + \
decoded_3[j, 1, :][BackSFC1Up[j].long()] * self.weight_out11_0 + \
decoded_3[j, 1, :][BackSFC1Down[j].long()] * self.weight_out11_1 + self.bias_out1
decoded_sp[j, :, 1] = decoded_3[j, 2, :][BackSFC1[j].long()] * self.weight_out3 + \
decoded_3[j, 2, :][BackSFC1Up[j].long()] * self.weight_out3_0 + \
decoded_3[j, 2, :][BackSFC1Down[j].long()] * self.weight_out3_1 + \
decoded_3[j, 3, :][BackSFC1[j].long()] * self.weight_out33 + \
decoded_3[j, 3, :][BackSFC1Up[j].long()] * self.weight_out33_0 + \
decoded_3[j, 3, :][BackSFC1Down[j].long()] * self.weight_out33_1 + self.bias_out2
# resort 1D to 2D
decoded = F.tanh(decoded_sp) # both are BATCH_SIZE by nNodes by 2
return encoded, decoded
###Output
_____no_output_____
###Markdown
Train
###Code
# train the autoencoder
t_train_0 = time.time()
autoencoder_2 = CNN_2().to(device)
optimizer = torch.optim.Adam(autoencoder_2.parameters(), lr=LR)
loss_func = nn.MSELoss()
loss_list = []
loss_valid = []
epoch_list=[]
for epoch in range(EPOCH):
for x, mode in train_loader:
# Use the detach to copy the value of mode but set requires_grad = false
detach_mode = mode.detach()
        b_y = x[:, :, 3:5].to(device) # target velocities (requires_grad is False)
        b_x = x.to(device) # network input (requires_grad is False)
        b_mode = detach_mode.to(device)
        # print("b_mode",b_mode.requires_grad)
        encoded, decoded = autoencoder_2(b_x.float(),b_mode.float()) # decoded carries gradients (requires_grad is True)
# decoded.detach_()
# decoded = decoded.detach()
loss = loss_func(decoded, b_y.float()) # Loss: True # mean square error
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
loss_list.append(loss)
encoded, decoded = autoencoder_2(torch.tensor(valid_data).to(device),mode_1valid.float().to(device))
error_autoencoder_2 = (decoded.detach() - torch.tensor(valid_data[:,:, 3:5]).to(device))
MSE_valid = (error_autoencoder_2**2).mean()
loss_valid.append(MSE_valid)
epoch_list.append(epoch)
print('Epoch: ', epoch, '| train loss: %.6f' % loss.cpu().data.numpy(), '| valid loss: %.6f' % MSE_valid)
#save the weights every 500 epochs
if (epoch%500 == 0):
torch.save(autoencoder_2, "./HAE/pkl/II_I_Eran"+str(epoch) +"_LV"+str(Latent_num)+ "_B"+str(BATCH_SIZE)+"_n"+ str(nTrain)+"_L"+str(LR)+".pkl")
pathcsv= "./HAE/csv/II_I_Eran"+str(epoch)+"_LV"+str(Latent_num) + "_B"+str(BATCH_SIZE)+"_n"+ str(nTrain)+"_L"+str(LR)+".csv"
saveCsv(pathcsv,epoch+1)
t_train_1 = time.time()
# torch.save(autoencoder_2, path)
print(t_train_1-t_train_0) # 3000 epoch
###Output
38702.73992228508
###Markdown
Save loss and plot
###Code
pathName = "./HAE/csv/II_I_Eran3000_LV4_B16_n1600_L0.0001.csv"
name = "SFC-HAE MSE loss of 4 compression variables"
PlotMSELoss(pathName,name)
autoencoder_2 = torch.load("./HAE/pkl/II_I_Eran3000_LV4_B16_n1600_L0.0001.pkl")
###Output
_____no_output_____
###Markdown
Get mode
###Code
# pass training, validation and test data through the autoencoder
t_predict_0 = time.time()
mode_2train, training_decoded_2 = autoencoder_2.to(device)(torch.tensor(training_data).to(device),mode_1train.float().to(device))
error_autoencoder = (training_decoded_2.cpu().detach().numpy() - training_data[:,:,3:5])
print("MSE_err of training data", (error_autoencoder**2).mean())
mode_2valid, valid_decoded_2 = autoencoder_2.to(device)(torch.tensor(valid_data).to(device),mode_1valid.float().to(device))
error_autoencoder = (valid_decoded_2.cpu().detach().numpy() - valid_data[:, :, 3:5])
print("Mse_err of validation data", (error_autoencoder**2).mean())
mode_2test, test_decoded_2 = autoencoder_2.to(device)(torch.tensor(test_data).to(device),mode_1test.float().to(device))
error_autoencoder = (test_decoded_2.cpu().detach().numpy() - test_data[:, :, 3:5])
print("Mse_err of test data", (error_autoencoder**2).mean())
t_predict_1 = time.time()
total_decoded_2 = getTotal_decoded(training_decoded_2,valid_decoded_2,test_decoded_2,train_index,valid_index,test_index)
error_autoencoder = (total_decoded_2 - total_data[:, :, 3:5])
print("Mse_err of total data", (error_autoencoder**2).mean())
print(mode_2train.shape)
print(mode_2valid.shape)
print(mode_2test.shape)
print('Predict time:',t_predict_1-t_predict_0)
###Output
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:1794: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
###Markdown
Convert csv to vtu
###Code
# Before converting the csv file to a vtu file, the data must be mapped back to its original range
training_decoded_2[:, :, 0] = (training_decoded_2[:, :, 0] - bu)/ku
valid_decoded_2[:, :, 0] = (valid_decoded_2[:, :, 0] - bu)/ku
test_decoded_2[:, :, 0] = (test_decoded_2[:, :, 0] - bu)/ku
total_decoded_2[:, :, 0] = (total_decoded_2[:, :, 0] - bu)/ku
training_decoded_2[:, :, 1] = (training_decoded_2[:, :, 1] - bv)/kv
valid_decoded_2[:, :, 1] = (valid_decoded_2[:, :, 1] - bv)/kv
test_decoded_2[:, :, 1] = (test_decoded_2[:, :, 1] - bv)/kv
total_decoded_2[:, :, 1] = (total_decoded_2[:, :, 1] - bv)/kv
training_data[:, :, 3] = (training_data[:, :, 3] - bu)/ku
valid_data[:, :, 3] = (valid_data[:, :, 3] - bu)/ku
test_data[:, :, 3] = (test_data[:, :, 3] - bu)/ku
total_data[:, :, 3] = (total_data[:, :, 3] - bu)/ku
training_data[:, :, 4] = (training_data[:, :, 4] - bv)/kv
valid_data[:, :, 4] = (valid_data[:, :, 4] - bv)/kv
test_data[:, :, 4] = (test_data[:, :, 4] - bv)/kv
total_data[:, :, 4] = (total_data[:, :, 4] - bv)/kv
# results = np.concatenate((training_decoded_2.cpu().data.numpy(), valid_decoded_2.cpu().data.numpy(), test_decoded_2.cpu().data.numpy()))
results = total_decoded_2
print('results shape', results.shape)
N = results.shape[1] * results.shape[2]
results = results.reshape((results.shape[0],N), order='F')
print('results shape', results.shape, type(results))
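# order='F' flattens column-major, so each row holds u at every node followed by v at every node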
# The path can be defined by user depending on the requirements
path = "./HAE/All_results/HII_I"+"_LV"+str(Latent_num) + "_B"+str(BATCH_SIZE)+'E_'+str(3000)+"_result.csv"
## write results to file
np.savetxt(path, results , delimiter=',')
###Output
results shape (2000, 20550, 2)
results shape (2000, 41100) <class 'numpy.ndarray'>
|
anomaly-detection-algorithm/code/model3-1.ipynb | ###Markdown
Import
###Code
name = "swinLp4img384_cv10_lr0002_batch16"
import warnings
warnings.filterwarnings('ignore')
from glob import glob
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
import cv2
import pickle
import os
import timm
import random
from efficientnet_pytorch import EfficientNet
import torch
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
import torchvision.transforms as transforms
from sklearn.metrics import f1_score, accuracy_score
import time
from sklearn.model_selection import StratifiedKFold
device = torch.device('cuda:2')
train_png = sorted(glob('../data/train/*.png'))
test_png = sorted(glob('../data/test/*.png'))
train_y = pd.read_csv("../data/train_df.csv")
train_labels = train_y["label"]
label_unique = sorted(np.unique(train_labels))
label_unique = {key:value for key,value in zip(label_unique, range(len(label_unique)))}
train_labels = [label_unique[k] for k in train_labels]
# def img_load(path):
# img = cv2.imread(path)[:,:,::-1]
# img = cv2.resize(img, (384-8, 384-8),interpolation = cv2.INTER_AREA)
# return img
# train_imgs = [img_load(m) for m in tqdm(train_png)]
# test_imgs = [img_load(n) for n in tqdm(test_png)]
# np.save('../data/train_imgs_384', np.array(train_imgs))
# np.save('../data/test_imgs_384', np.array(test_imgs))
train_imgs = np.load('../data/train_imgs_384.npy')
test_imgs = np.load('../data/test_imgs_384.npy')
# meanRGB = [np.mean(x, axis=(0,1)) for x in train_imgs]
# stdRGB = [np.std(x, axis=(0,1)) for x in train_imgs]
# meanR = np.mean([m[0] for m in meanRGB])/255
# meanG = np.mean([m[1] for m in meanRGB])/255
# meanB = np.mean([m[2] for m in meanRGB])/255
# stdR = np.mean([s[0] for s in stdRGB])/255
# stdG = np.mean([s[1] for s in stdRGB])/255
# stdB = np.mean([s[2] for s in stdRGB])/255
# print("train 평균",meanR, meanG, meanB)
# print("train 표준편차",stdR, stdG, stdB)
# meanRGB = [np.mean(x, axis=(0,1)) for x in test_imgs]
# stdRGB = [np.std(x, axis=(0,1)) for x in test_imgs]
# meanR = np.mean([m[0] for m in meanRGB])/255
# meanG = np.mean([m[1] for m in meanRGB])/255
# meanB = np.mean([m[2] for m in meanRGB])/255
# stdR = np.mean([s[0] for s in stdRGB])/255
# stdG = np.mean([s[1] for s in stdRGB])/255
# stdB = np.mean([s[2] for s in stdRGB])/255
# print("test 평균",meanR, meanG, meanB)
# print("test 표준편차",stdR, stdG, stdB)
class Custom_dataset(Dataset):
def __init__(self, img_paths, labels, mode='train'):
self.img_paths = img_paths
self.labels = labels
self.mode=mode
def __len__(self):
return len(self.img_paths)
def __getitem__(self, idx):
img = self.img_paths[idx]
if self.mode == 'train':
train_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean = [0.433038, 0.403458, 0.394151],
std = [0.181572, 0.174035, 0.163234]),
transforms.RandomAffine((-45, 45)),
])
img = train_transform(img)
if self.mode == 'test':
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean = [0.418256, 0.393101, 0.386632],
std = [0.195055, 0.190053, 0.185323])
])
img = test_transform(img)
label = self.labels[idx]
return img, label
class Network(nn.Module):
def __init__(self,mode = 'train'):
super(Network, self).__init__()
self.mode = mode
if self.mode == 'train':
self.model = timm.create_model('swin_large_patch4_window12_384', pretrained=True, num_classes=88, drop_path_rate = 0.2)
if self.mode == 'test':
self.model = timm.create_model('swin_large_patch4_window12_384', pretrained=True, num_classes=88, drop_path_rate = 0)
def forward(self, x):
x = self.model(x)
return x
def score_function(real, pred):
score = f1_score(real, pred, average="macro")
return score
def main(seed = 2022):
os.environ['PYTHONHASHSEED'] = str(seed)
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.benchmark = True
main(2022)
pred_train_dict = {}
pred_test_dict = {}
import gc
cv = StratifiedKFold(n_splits = 10, random_state = 2022, shuffle=True)
batch_size = 16
epochs = 80
pred_ensemble = []
for idx, (train_idx, val_idx) in enumerate(cv.split(train_imgs, np.array(train_labels))):
# print("----------fold_{} start!----------".format(idx))
t_imgs, val_imgs = train_imgs[train_idx], train_imgs[val_idx]
t_labels, val_labels = np.array(train_labels)[train_idx], np.array(train_labels)[val_idx]
# Train
train_dataset = Custom_dataset(np.array(t_imgs), np.array(t_labels), mode='train')
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
# Val
val_dataset = Custom_dataset(np.array(val_imgs), np.array(val_labels), mode='test')
val_loader = DataLoader(val_dataset, shuffle=True, batch_size=batch_size)
gc.collect()
torch.cuda.empty_cache()
best=0
model = Network().to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, weight_decay = 2e-2)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()
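    # mixed-precision training: autocast runs the forward pass in float16 where safe, and GradScaler scales the loss so small gradients do not underflow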
best_f1 = 0
early_stopping = 0
for epoch in range(epochs):
start=time.time()
train_loss = 0
train_pred=[]
train_y=[]
model.train()
for batch in (train_loader):
optimizer.zero_grad()
x = torch.tensor(batch[0], dtype=torch.float32, device=device)
y = torch.tensor(batch[1], dtype=torch.long, device=device)
with torch.cuda.amp.autocast():
pred = model(x)
loss = criterion(pred, y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
train_loss += loss.item()/len(train_loader)
train_pred += pred.argmax(1).detach().cpu().numpy().tolist()
train_y += y.detach().cpu().numpy().tolist()
train_f1 = score_function(train_y, train_pred)
state_dict= model.state_dict()
model.eval()
with torch.no_grad():
val_loss = 0
val_pred = []
val_y = []
for batch in (val_loader):
x_val = torch.tensor(batch[0], dtype = torch.float32, device = device)
y_val = torch.tensor(batch[1], dtype=torch.long, device=device)
with torch.cuda.amp.autocast():
pred_val = model(x_val)
loss_val = criterion(pred_val, y_val)
val_loss += loss_val.item()/len(val_loader)
val_pred += pred_val.argmax(1).detach().cpu().numpy().tolist()
val_y += y_val.detach().cpu().numpy().tolist()
val_f1 = score_function(val_y, val_pred)
print(f'fold{idx+1} epoch{epoch} score: {val_f1:.5f}')
if val_f1 > best_f1:
best_epoch = epoch
best_loss = val_loss
best_f1 = val_f1
early_stopping = 0
torch.save({'epoch':epoch,
'state_dict':state_dict,
'optimizer': optimizer.state_dict(),
'scaler': scaler.state_dict(),
}, f'../model/{name}_best_model_{idx+1}.pth')
# print('-----------------SAVE:{} epoch----------------'.format(best_epoch+1))
else:
early_stopping += 1
# Early Stopping
if early_stopping == 20:
TIME = time.time() - start
print(f'epoch : {epoch+1}/{epochs} time : {TIME:.0f}s/{TIME*(epochs-epoch-1):.0f}s')
print(f'TRAIN loss : {train_loss:.5f} f1 : {train_f1:.5f}')
print(f'Val loss : {val_loss:.5f} f1 : {val_f1:.5f}')
break
TIME = time.time() - start
print(f'epoch : {epoch+1}/{epochs} time : {TIME:.0f}s/{TIME*(epochs-epoch-1):.0f}s')
print(f'TRAIN loss : {train_loss:.5f} f1 : {train_f1:.5f}')
print(f'Val loss : {val_loss:.5f} f1 : {val_f1:.5f}')
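# Out-of-fold containers with 88 class scores per row: pred_train is filled on each fold's
# held-out indices, pred_test accumulates the average of the 10 fold models' test predictions.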
pred_train = np.zeros((len(train_imgs), 88))
pred_test = np.zeros((len(test_imgs), 88))
test_dataset = Custom_dataset(np.array(test_imgs), np.array(["tmp"]*len(test_imgs)), mode='test')
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)
for idx, (train_idx, val_idx) in enumerate(cv.split(train_imgs, np.array(train_labels))):
print("----------fold_{} predict start!----------".format(idx+1))
t_imgs, val_imgs = train_imgs[train_idx], train_imgs[val_idx]
t_labels, val_labels = np.array(train_labels)[train_idx], np.array(train_labels)[val_idx]
# Val
val_dataset = Custom_dataset(np.array(val_imgs), np.array(val_labels), mode='test')
val_loader = DataLoader(val_dataset, shuffle=False, batch_size=batch_size)
gc.collect()
torch.cuda.empty_cache()
model_test = Network(mode = 'test').to(device)
model_test.load_state_dict(torch.load((f'../model/{name}_best_model_{idx+1}.pth'))['state_dict'])
model_test.eval()
pred_train_list = []
with torch.no_grad():
for batch in (val_loader):
x = torch.tensor(batch[0], dtype = torch.float32, device = device)
with torch.cuda.amp.autocast():
pred_train_local = model_test(x)
pred_train_list.extend(pred_train_local.detach().cpu().numpy())
gc.collect()
torch.cuda.empty_cache()
model_test = Network(mode = 'test').to(device)
model_test.load_state_dict(torch.load((f'../model/{name}_best_model_{idx+1}.pth'))['state_dict'])
model_test.eval()
pred_test_list = []
with torch.no_grad():
for batch in (test_loader):
x = torch.tensor(batch[0], dtype = torch.float32, device = device)
with torch.cuda.amp.autocast():
pred_test_local = model_test(x)
pred_test_list.extend(pred_test_local.detach().cpu().numpy())
pred_train[val_idx, :] = pred_train_list
pred_test += np.array(pred_test_list) / 10
pred_train_dict[f'{name}_seed{str(2022)}'] = pred_train
pred_test_dict[f'{name}_seed{str(2022)}'] = pred_test
def sort_dict(model, pred_dict, pred_test_dict):
pred_dict_local = {}
for key, value in pred_dict.items():
if model in key:
pred_dict_local[key]=value
pred_test_dict_local = {}
for key, value in pred_test_dict.items():
if model in key:
pred_test_dict_local[key]=value
pred_dict_new_local = dict(sorted(
pred_dict_local.items(),
key=lambda x:score_function((train_labels), np.argmax(list(x[1]), axis=1)), reverse=False)[:5])
pred_test_dict_new_local = {}
for key, value in pred_dict_new_local.items():
pred_test_dict_new_local[key]=pred_test_dict_local[key]
return pred_dict_new_local, pred_test_dict_new_local
def save_dict(model, pred_dict, pred_test_dict):
with open('../pickle/pred_train_dict_'+model+'.pickle', 'wb') as fw:
pickle.dump(pred_dict, fw)
with open('../pickle/pred_test_dict_'+model+'.pickle', 'wb') as fw:
pickle.dump(pred_test_dict, fw)
pred_train_dict_global, pred_test_dict_global = sort_dict(name, pred_train_dict, pred_test_dict)
save_dict(name, pred_train_dict_global, pred_test_dict_global)
###Output
_____no_output_____ |
24_alpha_factor_library/01_sample_selection.ipynb | ###Markdown
Data Prep
###Code
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
idx = pd.IndexSlice
deciles = np.arange(.1, 1, .1).round(1)
###Output
_____no_output_____
###Markdown
Load Data
###Code
DATA_STORE = Path('..', 'data', 'assets.h5')
with pd.HDFStore(DATA_STORE) as store:
data = (store['quandl/wiki/prices']
.loc[idx['2007':'2016', :],
['adj_open', 'adj_high', 'adj_low', 'adj_close', 'adj_volume']]
.dropna()
.swaplevel()
.sort_index()
.rename(columns=lambda x: x.replace('adj_', '')))
metadata = store['us_equities/stocks'].loc[:, ['marketcap', 'sector']]
data.info(null_counts=True)
metadata.sector = pd.factorize(metadata.sector)[0]
metadata.info()
data = data.join(metadata).dropna(subset=['sector'])
data.info(null_counts=True)
print(f"# Tickers: {len(data.index.unique('ticker')):,.0f} | # Dates: {len(data.index.unique('date')):,.0f}")
###Output
# Tickers: 2,399 | # Dates: 2,547
###Markdown
Select 500 most-traded stocks
###Code
dv = data.close.mul(data.volume)
top500 = (dv.groupby(level='date')
.rank(ascending=False)
.unstack('ticker')
.dropna(thresh=8*252, axis=1)
.mean()
.nsmallest(500))
###Output
_____no_output_____
###Markdown
Visualize the 200 most liquid stocks
###Code
top200 = (data.close
.mul(data.volume)
.unstack('ticker')
.dropna(thresh=8*252, axis=1)
.mean()
.div(1e6)
.nlargest(200))
cutoffs = [0, 50, 100, 150, 200]
fig, axes = plt.subplots(ncols=4, figsize=(20, 10), sharex=True)
axes = axes.flatten()
for i, cutoff in enumerate(cutoffs[1:], 1):
top200.iloc[cutoffs[i-1]:cutoffs[i]
].sort_values().plot.barh(logx=True, ax=axes[i-1])
fig.tight_layout()
to_drop = data.index.unique('ticker').difference(top500.index)
len(to_drop)
data = data.drop(to_drop, level='ticker')
data.info(null_counts=True)
print(f"# Tickers: {len(data.index.unique('ticker')):,.0f} | # Dates: {len(data.index.unique('date')):,.0f}")
###Output
# Tickers: 500 | # Dates: 2,518
###Markdown
Remove outlier observations based on daily returns
###Code
before = len(data)
data['ret'] = data.groupby('ticker').close.pct_change()
data = data[data.ret.between(-1, 1)].drop('ret', axis=1)
print(f'Dropped {before-len(data):,.0f}')
tickers = data.index.unique('ticker')
print(f"# Tickers: {len(tickers):,.0f} | # Dates: {len(data.index.unique('date')):,.0f}")
###Output
# Tickers: 500 | # Dates: 2,517
###Markdown
Sample price data for illustration
###Code
ticker = 'AAPL'
# alternative
# ticker = np.random.choice(tickers)
price_sample = data.loc[idx[ticker, :], :].reset_index('ticker', drop=True)
price_sample.info()
price_sample.to_hdf('data.h5', 'data/sample')
###Output
_____no_output_____
###Markdown
Compute returns Group data by ticker
###Code
by_ticker = data.groupby(level='ticker')
###Output
_____no_output_____
###Markdown
Historical returns
###Code
T = [1, 2, 3, 4, 5, 10, 21, 42, 63, 126, 252]
for t in T:
data[f'ret_{t:02}'] = by_ticker.close.pct_change(t)
###Output
_____no_output_____
###Markdown
Forward returns
###Code
data['ret_fwd'] = by_ticker.ret_01.shift(-1)
data = data.dropna(subset=['ret_fwd'])
###Output
_____no_output_____
###Markdown
Persist results
###Code
data.info(null_counts=True)
data.to_hdf('data.h5', 'data/top500')
###Output
_____no_output_____ |
Olist.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
**Loading the Data** Orders table
###Code
df_orders = pd.read_csv('/content/drive/MyDrive/datasets/olist/olist_orders_dataset.csv', parse_dates=['order_approved_at'])
df_orders.head()
###Output
_____no_output_____
###Markdown
Order-items table
###Code
df_order_items = pd.read_csv('/content/drive/MyDrive/datasets/olist/olist_order_items_dataset.csv')
df_order_items.head()
###Output
_____no_output_____
###Markdown
Sellers registration table
###Code
df_sellers = pd.read_csv('/content/drive/MyDrive/datasets/olist/olist_sellers_dataset.csv')
df_sellers.head()
###Output
_____no_output_____
###Markdown
Creating the Classification ABT (analytical base table)
###Code
# build the history for the training ABT
df_historico_abt_train = (
df_order_items
.merge(df_orders, on='order_id', how='left')
.query('order_status == "delivered"')
.query('order_approved_at >= "2017-01-01" & order_approved_at < "2018-07-01"')
.merge(df_sellers, on='seller_id', how='left')
)
df_historico_abt_train.head()
# create the features
df_features_train = (
df_historico_abt_train
.query('order_approved_at < "2018-01-01"')
.groupby('seller_id')
.agg(uf = ('seller_state', 'first'),
tot_orders_12m = ('order_id', 'nunique'),
tot_items_12m = ('product_id', 'count'),
tot_items_dist_12m = ('product_id', 'nunique'),
receita_12m = ('price', 'sum'),
data_ult_vnd = ('order_approved_at', 'max'))
.reset_index()
.assign(data_ref = pd.to_datetime('2018-01-01 00:00:00'))
.assign(recencia = lambda df: (df['data_ref'] - df['data_ult_vnd']).dt.days)
)
df_features_train.head()
# df_orders = pd.read_csv('/content/drive/MyDrive/datasets/olist/olist_orders_dataset.csv', parse_dates=['order_approved_at'])
# Declare it again so the day-difference subtraction works. Remember to re-run everything once this is done.
df_features_train.sort_values('recencia', ascending=False).head()
# create the target
(
df_historico_abt_train
.query('order_approved_at >= "2018-01-01" & order_approved_at < "2018-07-01"')
.filter(['seller_id'])
.drop_duplicates()
)
# create the target
(
df_historico_abt_train
.query('order_approved_at >= "2018-01-01" & order_approved_at < "2018-07-01"')
.filter(['seller_id'])
.drop_duplicates()
.query('seller_id == "c9a06ece156bb057372c68718ec8909b"')
)
# create the target
df_target_train = (
df_historico_abt_train
.query('order_approved_at >= "2018-01-01" & order_approved_at < "2018-07-01"')
.filter(['seller_id'])
.drop_duplicates()
)
df_target_train.head()
# create the final ABT
(
df_features_train
.merge(df_target_train, on='seller_id', how='left', indicator=True)
).head()
# create the final ABT
(
df_features_train
.merge(df_target_train, on='seller_id', how='left', indicator=True)
.assign(churn_6m = lambda df: np.where(df['_merge'] == "left_only", 1, 0))
).head()
# create the final ABT
df_abt_churn_train = (
df_features_train
.merge(df_target_train, on='seller_id', how='left', indicator=True)
# left_only = churn (1), both = not churn (0)
.assign(churn_6m = lambda df: np.where(df['_merge'] == "left_only", 1, 0))
.filter(['data_ref',
'seller_id',
'uf',
'tot_orders_12m',
'tot_items_12m',
'tot_items_dist_12m',
'receita_12m',
'recencia',
'churn_6m'])
)
df_abt_churn_train.head()
###Output
_____no_output_____
###Markdown
Creating the Out-of-Time ABT (Validation or Test)
###Code
# build the history for the out-of-time ABT
df_historico_abt_oot = (
df_order_items
.merge(df_orders, on='order_id', how='left')
.query('order_status == "delivered"')
.query('order_approved_at >= "2017-02-01" & order_approved_at < "2018-08-01"')
.merge(df_sellers, on='seller_id', how='left')
)
df_historico_abt_oot.head()
df_historico_abt_oot.agg({'order_approved_at': ['min', 'max']})
# create the features
df_features_oot = (
df_historico_abt_oot
.query('order_approved_at < "2018-02-01"')
.groupby('seller_id')
.agg(uf = ('seller_state', 'first'),
tot_orders_12m = ('order_id', 'nunique'),
tot_items_12m = ('product_id', 'count'),
tot_items_dist_12m = ('product_id', 'nunique'),
receita_12m = ('price', 'sum'),
data_ult_vnd = ('order_approved_at', 'max'))
.reset_index()
.assign(data_ref = pd.to_datetime('2018-02-01 00:00:00'))
.assign(recencia = lambda df: (df['data_ref'] - df['data_ult_vnd']).dt.days)
)
df_features_oot.head()
# create the target
df_target_oot = (
df_historico_abt_oot
.query('order_approved_at >= "2018-02-01" & order_approved_at < "2018-08-01"')
.filter(['seller_id'])
.drop_duplicates()
)
df_target_oot.head()
# create the out-of-time ABT
df_abt_churn_oot = (
df_features_oot
.merge(df_target_oot, on='seller_id', how='left', indicator=True)
# left_only = churn (1), both = not churn (0)
.assign(churn_6m = lambda df: np.where(df['_merge'] == "left_only", 1, 0))
.filter(['data_ref',
'seller_id',
'uf',
'tot_orders_12m',
'tot_items_12m',
'tot_items_dist_12m',
'receita_12m',
'recencia',
'churn_6m'])
)
df_abt_churn_oot.head()
###Output
_____no_output_____
###Markdown
Checking the Distributions of the Target Variable
###Code
df_abt_churn_train['churn_6m'].value_counts(normalize=True)
df_abt_churn_oot['churn_6m'].value_counts(normalize=True)
df_abt_churn_oot['churn_6m'].value_counts()
df_abt_churn_train['churn_6m'].value_counts()
###Output
_____no_output_____
###Markdown
Saving the ABTs
###Code
import joblib
joblib.dump(df_abt_churn_train, '/content/drive/MyDrive/datasets/olist/abt_classificacao_churn_train.pkl')
joblib.load('/content/drive/MyDrive/datasets/olist/abt_classificacao_churn_train.pkl')
df_abt_churn_train.to_csv('/content/drive/MyDrive/datasets/olist/abt_classificacao_churn_train.csv', index=False)
df_abt_churn_oot.to_csv('/content/drive/MyDrive/datasets/olist/abt_classificacao_churn_oot.csv', index=False)
###Output
_____no_output_____ |
2020-Tencent-Advertisement-Algorithm-Competition-Rank19/3_LSTM_v10_win30_300size_10folds.ipynb | ###Markdown
Load the data
###Code
df = pd.read_pickle(os.path.join(data_path, 'processed_data_numerical.pkl'))
df['age'] = df['age'] - 1
df['gender'] = df['gender'] - 1
df.head(1)
###Output
_____no_output_____
###Markdown
Load the pre-trained word embeddings
###Code
os.listdir(embedding_path)
embedding = np.load(os.path.join(embedding_path, 'embedding_w2v_sg1_hs0_win30_size300.npz'))
creative = embedding['creative_w2v']
ad= embedding['ad_w2v']
advertiser = embedding['advertiser_w2v']
product = embedding['product_w2v']
industry = embedding['industry_w2v']
product_cate = embedding['product_cate_w2v']
del embedding
gc.collect()
###Output
_____no_output_____
###Markdown
Embedding features to use and the corresponding ID sequences
###Code
# 这里将需要使用到的特征列直接拼接成一个向量,后面直接split即可
data_seq = df[['creative_id', 'ad_id', 'advertiser_id', 'product_id', 'industry', 'click_times']].progress_apply(lambda s: np.hstack(s.values), axis=1).values
# embedding_list = [creative_embed, ad_embed, advertiser_embed, product_embed]
# embedding_list = [creative_glove, ad_glove, advertiser_glove, product_glove]
embedding_list = [creative, ad, advertiser, product, industry]
###Output
100%|██████████| 4000000/4000000 [08:10<00:00, 8150.43it/s]
###Markdown
Build the PyTorch Dataset and DataLoader
###Code
class CustomDataset(Dataset):
def __init__(self, seqs, labels, input_num, shuffle=False):
self.seqs = seqs
self.labels = labels
self.input_num = input_num
self.shuffle = shuffle
def __len__(self):
return len(self.seqs)
def __getitem__(self, idx):
length = int(self.seqs[idx].shape[0]/self.input_num)
seq_list = list(torch.LongTensor(self.seqs[idx]).split(length, dim=0))
label = torch.LongTensor(self.labels[idx])
# randomly shuffle the order of the sequence
if self.shuffle and torch.rand(1) < 0.4:
random_pos = torch.randperm(length)
for i in range(len(seq_list)):
seq_list[i] = seq_list[i][random_pos]
return seq_list + [length, label]
def pad_truncate(Batch):
*seqs, lengths, labels = list(zip(*Batch))
# truncate to the 99th-percentile length; this shortens the padded length and saves a lot of GPU memory
trun_len = torch.topk(torch.tensor(lengths), max(int(0.01*len(lengths)), 1))[0][-1]
# as a safeguard, also set a hard maximum length
max_len = min(trun_len, 150)
seq_list = list(pad_sequence(seq, batch_first=True)[:, :max_len] for seq in seqs)
return seq_list, torch.tensor(lengths).clamp_max(max_len), torch.stack(labels)
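# input_num = 6 concatenated sequences per user: creative_id, ad_id, advertiser_id, product_id, industry, click_times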
input_num = 6
BATCH_SIZE_TRAIN = 1024
BATCH_SIZE_VAL = 2048
BATCH_SIZE_TEST = 2048
kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
data_folds = []
valid_indexs = [] # used later so each fold's validation predictions are saved back in the original training-row order
for idx, (train_index, valid_index) in enumerate(kf.split(X=df.iloc[:3000000], y=df.iloc[:3000000]['age'])):
valid_indexs.append(valid_index)
X_train, X_val, X_test = data_seq[train_index], data_seq[valid_index], data_seq[3000000:]
y_train, y_val = np.array(df.iloc[train_index, -2:]), np.array(df.iloc[valid_index, -2:])
y_test = np.random.rand(X_test.shape[0], 2)
train_dataset = CustomDataset(X_train, y_train, input_num, shuffle=True)
val_dataset = CustomDataset(X_val, y_val, input_num, shuffle=False)
test_dataset = CustomDataset(X_test, y_test, input_num, shuffle=False)
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE_TRAIN, shuffle=True, collate_fn=pad_truncate, num_workers=0, worker_init_fn=worker_init_fn)
valid_dataloader = DataLoader(val_dataset, batch_size=BATCH_SIZE_VAL, sampler=SequentialSampler(val_dataset), shuffle=False, collate_fn=pad_truncate, num_workers=0, worker_init_fn=worker_init_fn)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE_TEST, sampler=SequentialSampler(test_dataset), shuffle=False, collate_fn=pad_truncate, num_workers=0, worker_init_fn=worker_init_fn)
data_folds.append((train_dataloader, valid_dataloader, test_dataloader))
del data_seq, creative, ad, advertiser, product, industry, product_cate
gc.collect()
###Output
_____no_output_____
###Markdown
Build the model
###Code
class BiLSTM(nn.Module):
def __init__(self, embedding_list, embedding_freeze, lstm_size, fc1, fc2, num_layers=1, rnn_dropout=0.2, embedding_dropout=0.2, fc_dropout=0.2):
super().__init__()
self.embedding_layers = nn.ModuleList([nn.Embedding.from_pretrained(torch.HalfTensor(embedding).cuda(), freeze=freeze) for embedding, freeze in zip(embedding_list, embedding_freeze)])
self.input_dim = int(np.sum([embedding.shape[1] for embedding in embedding_list]))
self.lstm = nn.LSTM(input_size = self.input_dim,
hidden_size = lstm_size,
num_layers = num_layers,
bidirectional = True,
batch_first = True,
dropout = rnn_dropout)
self.fc1 = nn.Linear(2*lstm_size, fc1)
self.fc2 = nn.Linear(fc1, fc2)
self.fc3 = nn.Linear(fc2, 12)
self.rnn_dropout = nn.Dropout(rnn_dropout)
self.embedding_dropout = nn.Dropout(embedding_dropout)
self.fc_dropout = nn.Dropout(fc_dropout)
def forward(self, seq_list, lengths):
batch_size, total_length= seq_list[0].size()
lstm_outputs = []
click_time = seq_list[-1]
embeddings = []
for idx, seq in enumerate(seq_list[:-1]):
embedding = self.embedding_layers[idx](seq).to(torch.float32)
embedding = self.embedding_dropout(embedding)
embeddings.append(embedding)
packed = pack_padded_sequence(torch.cat(embeddings, dim=-1), lengths, batch_first=True, enforce_sorted=False)
packed_output, (h_n, c_n) = self.lstm(packed)
lstm_output, _ = pad_packed_sequence(packed_output, batch_first=True, total_length=total_length, padding_value=-float('inf'))
lstm_output = self.rnn_dropout(lstm_output)
# lstm_output shape: (batchsize, total_length, 2*lstm_size)
max_output = F.max_pool2d(lstm_output, (total_length, 1), stride=(1, 1)).squeeze()
# output shape: (batchsize, 2*lstm_size)
fc_out = F.relu(self.fc1(max_output))
fc_out = self.fc_dropout(fc_out)
fc_out = F.relu(self.fc2(fc_out))
pred = self.fc3(fc_out)
age_pred = pred[:, :10]
gender_pred = pred[:, -2:]
return age_pred, gender_pred
###Output
_____no_output_____
###Markdown
Train the model
###Code
def validate(model, val_dataloader, criterion, history, n_iters):
model.eval()
global best_acc, best_model, validate_history
costs = []
age_accs = []
gender_accs = []
with torch.no_grad():
for idx, batch in enumerate(val_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
labels = labels.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
loss = criterion(age_output, gender_output, labels)
costs.append(loss.item())
_, age_preds = torch.max(age_output, 1)
_, gender_preds = torch.max(gender_output, 1)
age_accs.append((age_preds == labels[:, 0]).float().mean().item())
gender_accs.append((gender_preds == labels[:, 1]).float().mean().item())
torch.cuda.empty_cache()
mean_accs = np.mean(age_accs) + np.mean(gender_accs)
mean_costs = np.mean(costs)
writer.add_scalar('gender/validate_accuracy', np.mean(gender_accs), n_iters)
writer.add_scalar('gender/validate_loss', mean_costs, n_iters)
writer.add_scalar('age/validate_accuracy',np.mean(age_accs), n_iters)
writer.add_scalar('age/validate_loss', mean_costs, n_iters)
if mean_accs > history['best_model'][0][0]:
save_dict = copy.deepcopy(model.state_dict())
embedding_keys = []
for key in save_dict.keys():
if key.startswith('embedding'):
embedding_keys.append(key)
for key in embedding_keys:
save_dict.pop(key)
heapq.heapify(history['best_model'])
checkpoint_pth = history['best_model'][0][1]
heapq.heappushpop(history['best_model'], (mean_accs, checkpoint_pth))
torch.save(save_dict, checkpoint_pth)
del save_dict
gc.collect()
torch.cuda.empty_cache()
return mean_costs, mean_accs
def train(model, train_dataloader, val_dataloader, criterion, optimizer, epoch, history, validate_points, scheduler, step=True):
model.train()
costs = []
age_accs = []
gender_accs = []
val_loss, val_acc = 0, 0
with tqdm(total=len(train_dataloader.dataset), desc='Epoch{}'.format(epoch)) as pbar:
for idx, batch in enumerate(train_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
labels = labels.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
loss = criterion(age_output, gender_output, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step:
scheduler.step()
with torch.no_grad():
costs.append(loss.item())
_, age_preds = torch.max(age_output, 1)
_, gender_preds = torch.max(gender_output, 1)
age_accs.append((age_preds == labels[:, 0]).float().mean().item())
gender_accs.append((gender_preds == labels[:, 1]).float().mean().item())
pbar.update(labels.size(0))
n_iters = idx + len(train_dataloader)*(epoch-1)
if idx in validate_points:
val_loss, val_acc = validate(model, val_dataloader, criterion, history, n_iters)
model.train()
writer.add_scalar('gender/train_accuracy', gender_accs[-1], n_iters)
writer.add_scalar('gender/train_loss', costs[-1], n_iters)
writer.add_scalar('age/train_accuracy', age_accs[-1], n_iters)
writer.add_scalar('age/train_loss', costs[-1], n_iters)
writer.add_scalar('age/learning_rate', scheduler.get_lr()[0], n_iters)
pbar.set_postfix_str('loss:{:.4f}, acc:{:.4f}, val-loss:{:.4f}, val-acc:{:.4f}'.format(np.mean(costs[-10:]), np.mean(age_accs[-10:])+np.mean(gender_accs[-10:]), val_loss, val_acc))
torch.cuda.empty_cache()
def test(oof_train_test, model, test_dataloader, val_dataloader, valid_index, weight=1):
# at inference time we also score the validation fold, so the out-of-fold outputs can later be used for ensembling and weight search
model.eval()
y_val = []
age_pred = []
gender_pred = []
age_pred_val = []
gender_pred_val = []
with torch.no_grad():
for idx, batch in enumerate(test_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
age_pred.append(age_output.cpu())
gender_pred.append(gender_output.cpu())
torch.cuda.empty_cache()
for idx, batch in enumerate(val_dataloader):
seq_list, lengths, labels = batch
seq_list_device = [seq.cuda() for seq in seq_list]
lengths_device = lengths.cuda()
age_output, gender_output = model(seq_list_device, lengths_device)
age_pred_val.append(age_output.cpu())
gender_pred_val.append(gender_output.cpu())
y_val.append(labels)
torch.cuda.empty_cache()
# columns 0-9 store the age probability distribution, columns 10-11 the gender probability distribution, and columns 12-13 the true age and gender labels
oof_train_test[valid_index, :10] += F.softmax(torch.cat(age_pred_val)).numpy() * weight
oof_train_test[valid_index, 10:12] += F.softmax(torch.cat(gender_pred_val)).numpy() * weight
oof_train_test[valid_index, 12:] = torch.cat(y_val).numpy()
oof_train_test[3000000:, :10] += F.softmax(torch.cat(age_pred)).numpy() * (1/5) * weight
oof_train_test[3000000:, 10:12] += F.softmax(torch.cat(gender_pred)).numpy() * (1/5) * weight
# define the joint loss function
def criterion(age_output, gender_output, labels):
age_loss = nn.CrossEntropyLoss()(age_output, labels[:, 0])
gender_loss = nn.CrossEntropyLoss()(gender_output, labels[:, 1])
return age_loss*0.6 + gender_loss*0.4
# columns 0-9 store the age probability distribution, columns 10-11 the gender probability distribution, and columns 12-13 the true age and gender labels
oof_train_test = np.zeros((4000000, 14))
# oof_train_test = np.load(os.path.join(model_save, "lstm_v10_300size_win30_fold_1.npy"))
acc_folds = []
model_name = 'lstm_v10_300size_win30'
best_checkpoint_num = 3
for idx, (train_dataloader, val_dataloader, test_dataloader) in enumerate(data_folds):
# if idx in [0, 1]:
# continue
history = {'best_model': []}
for i in range(best_checkpoint_num):
history['best_model'].append((0, os.path.join(model_save, '{}_checkpoint_{}.pth'.format(model_name, i))))
# order corresponds to: creative_w2v, ad_w2v, advertiser_w2v, product_w2v, industry_w2v
embedding_freeze = [True, True, True, True, True]
validate_points = list(np.linspace(0, len(train_dataloader)-1, 2).astype(int))[1:]
model = BiLSTM(embedding_list, embedding_freeze, lstm_size=1500, fc1=1500, fc2=800, num_layers=2, rnn_dropout=0.0, fc_dropout=0.0, embedding_dropout=0.0)
model = model.cuda()
model = nn.parallel.DistributedDataParallel(model, find_unused_parameters=True)
optimizer = torch.optim.Adam(model.parameters(), betas=(0.9, 0.999), lr=1e-3)
epochs = 5
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.7)
scheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-5, max_lr=2e-3, step_size_up=int(len(train_dataloader)/2), cycle_momentum=False, mode='triangular')
# scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=3e-3, epochs=epochs, steps_per_epoch=len(train_dataloader), pct_start=0.2, anneal_strategy='linear', div_factor=30, final_div_factor=1e4)
for epoch in range(1, epochs+1):
writer = SummaryWriter(log_dir='./runs/{}_fold_{}'.format(model_name, idx))
train(model, train_dataloader, val_dataloader, criterion, optimizer, epoch, history, validate_points, scheduler, step=True)
# scheduler.step()
gc.collect()
for (acc, checkpoint_pth), weight in zip(sorted(history['best_model'], reverse=True), [0.5, 0.3, 0.2]):
model.load_state_dict(torch.load(checkpoint_pth, map_location=torch.device('cpu')), strict=False)
test(oof_train_test, model, test_dataloader, val_dataloader, valid_indexs[idx], weight=weight)
acc_folds.append(sorted(history['best_model'], reverse=True)[0][0])
np.save(os.path.join(model_save, "{}_fold_{}.npy".format(model_name, idx)), oof_train_test)
del model, history
gc.collect()
torch.cuda.empty_cache()
acc_folds
np.save(os.path.join(res_path, "{}_10folds_{:.4f}.npy".format(model_name, np.mean(acc_folds))), oof_train_test)
y_pred_age = (oof_train_test[3000000:, :10]).argmax(axis=1)
y_pred_gender = oof_train_test[3000000:, 10:12].argmax(axis=1)
df_submit = df.iloc[3000000:, -2:].rename({'age': 'predicted_age', 'gender':'predicted_gender'}, axis=1)
df_submit['predicted_age'] = y_pred_age + 1
df_submit['predicted_gender'] = y_pred_gender + 1
df_submit.to_csv(os.path.join(res_path, "submission.csv"))
###Output
_____no_output_____ |
Example Notebooks/.ipynb_checkpoints/K-means Clustering-checkpoint.ipynb | ###Markdown
K-means Clustering in GenePattern Notebook. Cluster genes and/or samples into a specified number of clusters. The result is k clusters, each centered on a centroid that is initialized from a randomly selected data point. Before you begin* Sign in to GenePattern by entering your username and password into the form below. If you are seeing a block of code instead of the login form, go to the menu above and select Cell > Run All.* Gene expression data must be in a [GCT or RES file](https://genepattern.broadinstitute.org/gp/pages/protocols/GctResFiles.html). * Example file: [all_aml_test.gct](https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.gct).* Learn more by reading about [file formats](http://www.broadinstitute.org/cancer/software/genepattern/file-formats-guideGCT).
###Code
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.GPAuthWidget(genepattern.register_session("https://genepattern.broadinstitute.org/gp", "", ""))
###Output
_____no_output_____
###Markdown
Step 1: PreprocessDataset. Preprocess gene expression data to remove platform noise and genes that have little variation. Although researchers generally preprocess data before clustering, if doing so removes relevant biological information, skip this step. Considerations:* PreprocessDataset can preprocess the data in one or more ways (in this order): 1. Set threshold and ceiling values. Any value lower/higher than the threshold/ceiling value is reset to the threshold/ceiling value. 2. Convert each expression value to the log base 2 of the value. 3. Remove genes (rows) if a given number of its sample values are less than a given threshold. 4. Remove genes (rows) that do not have a minimum fold change or expression variation. 5. Discretize or normalize the data.* When using ratios to compare gene expression between samples, convert values to log base 2 of the value to bring up- and down-regulated genes to the same scale. For example, ratios of 2 and .5 indicating two-fold changes for up- and down-regulated expression, respectively, are converted to +1 and -1. * If you did not generate the expression data, check whether preprocessing steps have already been taken before running the PreprocessDataset module. * Learn more by reading about the [PreprocessDataset](https://genepattern.broadinstitute.org/gp/getTaskDoc.jsp?name=PreprocessDataset) module.
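For intuition only, the first two preprocessing steps (floor/ceiling thresholding, then log2) can be sketched outside GenePattern with pandas; the toy values, column names, and thresholds below are illustrative assumptions and not part of the module call that follows.
```python
import numpy as np
import pandas as pd

# toy genes-by-samples expression matrix (made-up values)
expr = pd.DataFrame({'sample_1': [5, 250, 30000], 'sample_2': [18, 4000, 90]})
clipped = expr.clip(lower=20, upper=20000)  # threshold (floor) 20, ceiling 20000
logged = np.log2(clipped)                   # optional log2 transform
print(logged)
```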
###Code
preprocessdataset_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00020')
preprocessdataset_job_spec = preprocessdataset_task.make_job_spec()
preprocessdataset_job_spec.set_parameter("input.filename", "https://software.broadinstitute.org/cancer/software/genepattern/data/all_aml/all_aml_test.gct")
preprocessdataset_job_spec.set_parameter("threshold.and.filter", "1")
preprocessdataset_job_spec.set_parameter("floor", "20")
preprocessdataset_job_spec.set_parameter("ceiling", "20000")
preprocessdataset_job_spec.set_parameter("min.fold.change", "3")
preprocessdataset_job_spec.set_parameter("min.delta", "100")
preprocessdataset_job_spec.set_parameter("num.outliers.to.exclude", "0")
preprocessdataset_job_spec.set_parameter("row.normalization", "0")
preprocessdataset_job_spec.set_parameter("row.sampling.rate", "1")
preprocessdataset_job_spec.set_parameter("threshold.for.removing.rows", "")
preprocessdataset_job_spec.set_parameter("number.of.columns.above.threshold", "")
preprocessdataset_job_spec.set_parameter("log2.transform", "0")
preprocessdataset_job_spec.set_parameter("output.file.format", "3")
preprocessdataset_job_spec.set_parameter("output.file", "<input.filename_basename>.preprocessed")
genepattern.GPTaskWidget(preprocessdataset_task)
###Output
_____no_output_____
###Markdown
Step 2: KMeansClustering. Run k-means clustering on genes (rows) or samples (columns). The module creates a GCT file for each cluster and a GCT file that organizes all of the expression data by cluster.
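As a rough illustration of what the module computes (not a replacement for it), k-means on a genes-by-samples matrix can be sketched locally with scikit-learn; the random data, `n_clusters`, and variable names here are assumptions for demonstration only.
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
expression = rng.rand(100, 38)                      # 100 genes x 38 samples (made-up data)
km = KMeans(n_clusters=2, n_init=10, random_state=12345)
gene_clusters = km.fit_predict(expression)          # cluster the genes (rows) into k=2 groups
print(np.bincount(gene_clusters))                   # cluster sizes
```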
###Code
kmeansclustering_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00081')
kmeansclustering_job_spec = kmeansclustering_task.make_job_spec()
kmeansclustering_job_spec.set_parameter("input.filename", "https://software.broadinstitute.org/cancer/software/genepattern/data/protocols/all_aml_test.preprocessed.gct")
kmeansclustering_job_spec.set_parameter("output.base.name", "<input.filename_basename>_KMcluster_output")
kmeansclustering_job_spec.set_parameter("number.of.clusters", "2")
kmeansclustering_job_spec.set_parameter("seed.value", "12345")
kmeansclustering_job_spec.set_parameter("cluster.by", "0")
kmeansclustering_job_spec.set_parameter("distance.metric", "0")
genepattern.GPTaskWidget(kmeansclustering_task)
###Output
_____no_output_____
###Markdown
Step 3: HeatMapViewer. For an overview of the results, use a heatmap to display the expression data organized by cluster. Considerations:* The HeatMapViewer displays gene expression data as a heat map, which makes it easier to see patterns in the numeric data. Gene names are row labels and sample names are column labels. * Learn more by reading about the [HeatMapViewer](https://genepattern.broadinstitute.org/gp/getTaskDoc.jsp?name=HeatMapViewer) module.
###Code
heatmapviewer_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.visualizer:00010')
heatmapviewer_job_spec = heatmapviewer_task.make_job_spec()
heatmapviewer_job_spec.set_parameter("dataset", "")
genepattern.GPTaskWidget(heatmapviewer_task)
###Output
_____no_output_____ |
notebooks/04-more-tokens-and-context.ipynb | ###Markdown
The previous two notebooks might have gotten your attention, but usually we get the response: "But what about BERT-embeddings?" Let's explain how to get there, but first ... we should explain languages.
###Code
%load_ext autoreload
%autoreload 2
from whatlies import Embedding, EmbeddingSet
import spacy
import matplotlib.pylab as plt
###Output
_____no_output_____
###Markdown
Multi-Token Embeddings. We can also have embeddings that represent more than one token. If we'd do this via spaCy, we'd have an average of all the word embeddings.
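A minimal sanity check of that claim (assuming the same `en_core_web_sm` model used below): the document vector should match the mean of its token vectors.
```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("happy happy joy joy")
manual_mean = np.mean([tok.vector for tok in doc], axis=0)
print(np.allclose(doc.vector, manual_mean))  # expected: True
```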
###Code
from whatlies.language import SpacyLanguage
from whatlies.transformers import Pca
lang = SpacyLanguage("en_core_web_sm")
contexts = ["i am super duper happy",
"happy happy joy joy",
"programming is super fun!",
"i am going crazy i hate it",
"boo and hiss",]
emb = lang[contexts]
emb.transform(Pca(2)).plot_interactive('pca_0', 'pca_1').properties(width=400, height=400)
nlp = spacy.load("en_core_web_sm")
contexts = ("this snake is a python",
"i like to program in python",
"programming is super fun!",
"i go to the supermarket",
"i like to code",
"i love animals")
emb = EmbeddingSet({k: Embedding(k, nlp(k).vector) for k in contexts})
x_str, y_str = "python is for programming", "snakes are slimy creatures"
x_axis = Embedding(x_str, nlp(x_str).vector)
y_axis = Embedding(y_str, nlp(y_str).vector)
emb.plot_interactive(x_axis=x_axis, y_axis=y_axis)
###Output
_____no_output_____
###Markdown
Embeddings of Tokens with Context. But maybe we'd like to have BERT-style models. These models work differently. Luckily ... spaCy also supports this these days. Note that you'll need to download and install this model first. You can do that by running:
```
pip install spacy-transformers
python -m spacy download en_trf_robertabase_lg
```
###Code
nlp = spacy.load("en_trf_robertabase_lg")
contexts = ("this snake is a python",
"i like to program in python",
"programming is super fun!",
"i go to the supermarket",
"i like to code",
"i love animals")
t = EmbeddingSet({k: Embedding(k, nlp(k).vector) for k in contexts})
x_str, y_str = "python is for programming", "dogs are cool"
x_axis = Embedding(x_str, nlp(x_str).vector)
y_axis = Embedding(y_str, nlp(y_str).vector)
t.plot_interactive(x_axis=x_axis, y_axis=y_axis)
###Output
_____no_output_____
###Markdown
We can go a step further too. If we have the sentence `this snake is a python`, then an algorithm like BERT will not apply separate word embeddings to each token. Rather, the entire document will first learn its representation before assigning it to the separate tokens. If you are interested in a BERT representation of a word given the context that it is in ... you can get it with a special syntax.
###Code
contexts = ("i put my money on the [bank]",
"i put my money on the bank",
"the water flows on the river [bank]",
"the water flows on the river bank",
"i really like [to swim] in water",
"i want to be so rich that i am [drowning] in money",
"i have plenty of [cash] on me",
"money is important to my [cash] flow",
"a beach is next to the ocean",
"google gives me a wealth of information",
"that banker person is very wealthy",
"i like cats and dogs")
###Output
_____no_output_____
###Markdown
But to make use of this syntax we need a new object: the `Language` object. This is a tool for `whatlies` to grab the appropriate word embeddings on your behalf. It will handle the context but can also be seen as a lazy `EmbeddingSet`.
###Code
import numpy as np
from whatlies.language import SpacyLanguage
lang = SpacyLanguage("en_trf_robertabase_lg")
lang['red'].vector[:10]
###Output
_____no_output_____
###Markdown
Note that these embeddings are kind of special: they depend on the context around the token of interest!
###Code
np.array_equal(lang['Going to the [store]'].vector,
lang['[store] this in the drawer please.'].vector)
###Output
_____no_output_____
###Markdown
But we can also use the `EmbeddingSet` again.
###Code
from whatlies.transformers import Umap
t = EmbeddingSet({k: lang[k] for k in contexts}).transform(Umap(2))
p1 = t.plot_interactive("i like cats and dogs", "i put my money on the [bank]")
p2 = t.plot_interactive("i like cats and dogs", "i put my money on the bank")
p1 | p2
###Output
_____no_output_____ |
Week-06/3_Challenge_Stock_Prediction.ipynb | ###Markdown
Challenge - Stock Prediction ![](https://miro.medium.com/max/9216/1*NG0bzk0wtQcBdMYAnXKeBQ.jpeg) Background Information Get data
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import datetime
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
data = pd.read_csv('TSLA.csv')
# explore data
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# data visualization
# standardization
###Output
_____no_output_____
###Markdown
Splitting Data
###Code
train,test =
###Output
_____no_output_____
###Markdown
Model
###Code
from tensorflow.keras.layers import LSTM
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout
import time
# transform price array to (X,y) dataset
def create_dataset(dataset, look_back=1, forward_days=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1-forward_days):
a = dataset.iloc[i:(i+look_back)]
dataX.append(a)
dataY.append(dataset.iloc[i + look_back:i + look_back + forward_days])
return np.array(dataX), np.array(dataY)
look_back = 40
forward_days = 10
trainX, trainY = create_dataset()
testX, testY = create_dataset()
# The LSTM network expects the input data to be provided with a specific array structure in the form of: [samples, time steps, features].
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
#build model
model = Sequential()
model.add()
###Output
_____no_output_____
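One possible way to fill in the stubbed model above, shown only as a hedged sketch: the layer sizes, dropout rate, and loss are assumptions (not the official solution), and it presumes `train`/`test` hold a single scaled price series so that `trainX` has shape `(samples, 1, look_back)` and `trainY` has shape `(samples, forward_days)`.
```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

model = Sequential()
model.add(LSTM(units=64, input_shape=(1, look_back)))  # one time step of look_back features
model.add(Dropout(0.2))
model.add(Dense(forward_days))                         # predict the next forward_days prices
model.compile(loss='mean_squared_error', optimizer='adam')
```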
###Markdown
Training and Predicting
###Code
history = model.fit()
results = model.predict(testX[:4])
###Output
_____no_output_____ |
homework-10.30/01-What-is-Polynomial-Regression.ipynb | ###Markdown
What is Polynomial Regression
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0, 1, 100)
plt.scatter(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Linear regression?
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
y_predict = lin_reg.predict(X)
plt.scatter(x, y)
plt.plot(x, y_predict, color='r')
plt.show()
###Output
_____no_output_____
###Markdown
Solution: add a feature
###Code
X2 = np.hstack([X, X**2])
X2.shape
lin_reg2 = LinearRegression()
lin_reg2.fit(X2, y)
y_predict2 = lin_reg2.predict(X2)
plt.scatter(x, y)
plt.plot(np.sort(x), y_predict2[np.argsort(x)], color='r')
plt.show()
lin_reg2.coef_
lin_reg2.intercept_
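# As a cross-check (sketch): scikit-learn's PolynomialFeatures builds the same expanded
# design matrix as the manual np.hstack([X, X**2]) above; poly and X2_sk are illustrative names.
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2, include_bias=False)
X2_sk = poly.fit_transform(X)
print(np.allclose(X2_sk, X2))  # expected: True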
###Output
_____no_output_____ |
scripts/reproducibility/figures/Manuscript-Figure AP n20.ipynb | ###Markdown
DSB2018 n20: AP scores on validation data
###Code
alpha0_5_n20 = read_Noise2Seg_results('alpha0.5', 'dsb_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 100.0], score_type = '')
baseline_dsb_n20 = read_Noise2Seg_results('fin', 'dsb_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 100.0], score_type = '')
sequential_dsb_n20 = read_Noise2Seg_results('finSeq', 'dsb_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 100.0], score_type = '')
plt.rc('font', family = 'serif', size = 16)
fig = plt.figure(figsize=cm2inch(12.2/2,3)) # 12.2cm is the text-width of the MICCAI template
plt.rcParams['axes.axisbelow'] = True
plt.plot(fraction_to_abs(alpha0_5_n20[:, 0], max_num_imgs = 3800),
alpha0_5_n20[:, 1],
color = '#8F89B4', alpha = 1, linewidth=2, label = r'\textsc{DenoiSeg} ($\alpha = 0.5$)')
plt.fill_between(fraction_to_abs(alpha0_5_n20[:, 0], max_num_imgs = 3800),
y1 = alpha0_5_n20[:, 1] + alpha0_5_n20[:, 2],
y2 = alpha0_5_n20[:, 1] - alpha0_5_n20[:, 2],
color = '#8F89B4', alpha = 0.5)
plt.plot(fraction_to_abs(sequential_dsb_n20[:, 0], max_num_imgs = 3800),
sequential_dsb_n20[:, 1],
color = '#526B34', alpha = 1, linewidth=2, label = r'Sequential Baseline')
plt.fill_between(fraction_to_abs(sequential_dsb_n20[:, 0], max_num_imgs = 3800),
y1 = sequential_dsb_n20[:, 1] + sequential_dsb_n20[:, 2],
y2 = sequential_dsb_n20[:, 1] - sequential_dsb_n20[:, 2],
color = '#526B34', alpha = 0.5)
plt.plot(fraction_to_abs(baseline_dsb_n20[:, 0], max_num_imgs = 3800),
baseline_dsb_n20[:, 1],
color = '#6D3B2B', alpha = 1, linewidth=2, label = r'Baseline')
plt.fill_between(fraction_to_abs(baseline_dsb_n20[:, 0], max_num_imgs = 3800),
y1 = baseline_dsb_n20[:, 1] + baseline_dsb_n20[:, 2],
y2 = baseline_dsb_n20[:, 1] - baseline_dsb_n20[:, 2],
color = '#6D3B2B', alpha = 0.25)
plt.semilogx()
leg = plt.legend(loc = 'lower right')
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.ylabel(r'\textbf{AP}')
plt.xlabel(r'\textbf{Number of Annotated Training Images}')
plt.grid(axis='y')
plt.xticks(ticks=fraction_to_abs(baseline_dsb_n20[:, 0], max_num_imgs = 3800),
labels=fraction_to_abs(baseline_dsb_n20[:, 0], max_num_imgs = 3800).astype(np.int),
rotation=45)
plt.minorticks_off()
plt.yticks(rotation=45)
plt.xlim([8.5, 4500])
plt.tight_layout();
plt.savefig('AP_n20_area.pdf', pad_inches=0.0);
plt.savefig('AP_n20_area.svg', pad_inches=0.0);
###Output
_____no_output_____ |
01_Churn_Modelling_ANN/ann.ipynb | ###Markdown
Importing the database
###Code
file = glob.iglob('*.csv')
dataset = pd.read_csv(*file)
dataset.head(10)
print(f'The length of the Dataset is - {len(dataset)}')
###Output
The length of the Dataset is - 10000
###Markdown
Splitting the dataset into Independent and Dependent variable
###Code
X = dataset.iloc[:, 3:-1].values
y = dataset.iloc[:, -1].values
X
y
###Output
_____no_output_____
###Markdown
Label - Encoding the Male/Female
###Code
le = LabelEncoder()
X[:, 2] = le.fit_transform(X[:, 2])
print(X)
###Output
[[619 'France' 0 ... 1 1 101348.88]
[608 'Spain' 0 ... 0 1 112542.58]
[502 'France' 0 ... 1 0 113931.57]
...
[709 'France' 0 ... 0 1 42085.58]
[772 'Germany' 1 ... 1 0 92888.52]
[792 'France' 0 ... 1 0 38190.78]]
###Markdown
OneHotEncoding the Country Name
###Code
ct = ColumnTransformer(transformers=[('encoder',
OneHotEncoder(),
[1])],
remainder='passthrough')
X = np.array(ct.fit_transform(X))
print(X)
###Output
[[1.0 0.0 0.0 ... 1 1 101348.88]
[0.0 0.0 1.0 ... 0 1 112542.58]
[1.0 0.0 0.0 ... 1 0 113931.57]
...
[1.0 0.0 0.0 ... 0 1 42085.58]
[0.0 1.0 0.0 ... 1 0 92888.52]
[1.0 0.0 0.0 ... 1 0 38190.78]]
###Markdown
Splitting The dataset into Training and Test Set
###Code
X_train, X_test, Y_train, Y_test = train_test_split(X,
y,
test_size = 0.2,
random_state = 0)
print(f"The Dimenstion of X_train - {X_train.shape}")
print(f"The Dimenstion of X_test - {X_test.shape}")
print(f"The Dimenstion of Y_train - {Y_train.shape}")
print(f"The Dimenstion of Y_test - {Y_test.shape}")
###Output
The Dimension of X_train - (8000, 12)
The Dimension of X_test - (2000, 12)
The Dimension of Y_train - (8000,)
The Dimension of Y_test - (2000,)
###Markdown
Feature Scaling the Dataset
###Code
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print("Training Set\n",X_train)
print("-------------------")
print("Testing Set\n",X_test)
###Output
Training Set
[[-1.01460667 -0.5698444 1.74309049 ... 0.64259497 -1.03227043
1.10643166]
[-1.01460667 1.75486502 -0.57369368 ... 0.64259497 0.9687384
-0.74866447]
[ 0.98560362 -0.5698444 -0.57369368 ... 0.64259497 -1.03227043
1.48533467]
...
[ 0.98560362 -0.5698444 -0.57369368 ... 0.64259497 -1.03227043
1.41231994]
[-1.01460667 -0.5698444 1.74309049 ... 0.64259497 0.9687384
0.84432121]
[-1.01460667 1.75486502 -0.57369368 ... 0.64259497 -1.03227043
0.32472465]]
-------------------
Testing Set
[[-1.01460667 1.75486502 -0.57369368 ... 0.64259497 0.9687384
1.61085707]
[ 0.98560362 -0.5698444 -0.57369368 ... 0.64259497 -1.03227043
0.49587037]
[-1.01460667 -0.5698444 1.74309049 ... 0.64259497 0.9687384
-0.42478674]
...
[-1.01460667 -0.5698444 1.74309049 ... 0.64259497 -1.03227043
0.71888467]
[-1.01460667 1.75486502 -0.57369368 ... 0.64259497 0.9687384
-1.54507805]
[-1.01460667 1.75486502 -0.57369368 ... 0.64259497 -1.03227043
1.61255917]]
###Markdown
Building the ANN
Steps:
1. Initializing the ANN
2. Adding the input layer and the first hidden layer
3. Adding the second hidden layer
4. Adding the output layer
###Code
#1
ann = tf.keras.models.Sequential()
#2 - Shallow Neural Network (If only one hidden layer)
ann.add(tf.keras.layers.Dense(units = 6,
activation = 'relu'))
#3.
ann.add(tf.keras.layers.Dense(units = 6,
activation = 'relu'))
#4.
ann.add(tf.keras.layers.Dense(units = 1,
activation = 'sigmoid'))
###Output
_____no_output_____
###Markdown
Compiling the ANN
1. optimizer
2. loss function
3. metrics
###Code
# For non-binary (multi-class) classification
# we would use categorical_crossentropy and a softmax output activation instead
ann.compile(optimizer = 'adam',
loss = 'binary_crossentropy',
metrics = ['accuracy']
)
###Output
_____no_output_____
###Markdown
Training the ANN
###Code
start = time.time()
ann.fit(X_train, Y_train,
batch_size = 32,
epochs = 100
)
end = time.time()
print(f"Total Time Taken - {end-start}")
print(ann.summary())
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (32, 6) 78
_________________________________________________________________
dense_1 (Dense) (32, 6) 42
_________________________________________________________________
dense_2 (Dense) (32, 1) 7
=================================================================
Total params: 127
Trainable params: 127
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Predicting the Results
###Code
res = ann.predict(sc.transform([[1, 0, 0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])) > 0.5
if res == False:
print("Not Gonna Leave")
else:
print("Gonna Leave")
###Output
Not Gonna Leave
###Markdown
Vector of Predictions
###Code
y_pred = ann.predict(X_test)
y_pred = (y_pred > 0.5)
print(np.concatenate((y_pred.reshape(len(y_pred),
1),
Y_test.reshape(len(Y_test), 1)
), 1)
)
###Output
[[0 0]
[0 1]
[0 0]
...
[0 0]
[0 0]
[0 0]]
###Markdown
Confusion Matrix
###Code
cm = confusion_matrix(Y_test, y_pred)
print(cm)
skplot.metrics.plot_confusion_matrix(Y_test, y_pred)
plt.show()
print(f"The accuracy of the Model is - {accuracy_score(Y_test, y_pred)*100}%")
###Output
The accuracy of the Model is - 86.05000000000001%
|
4.1 MNIST-Datenbank.ipynb | ###Markdown
Categorizing handwriting works very well as a classification task for our neural network. It is sufficiently fuzzy and not too difficult. It also lets us show that the network can cope with large numbers of nodes and large amounts of data. The example above shows that it can occasionally be hard even for humans to assign the right class: is that a 4 or a 9? For this we now use a database of images of handwritten digits that is widely used for exactly this purpose: the MNIST database by Yann LeCun. For our purposes we have put together a training set with 60,000 labelled examples and a test set with 10,000 examples (with labels). It is worth opening the file mnist_test.csv in Excel once. What it contains: 1. the value of the label, i.e. which digit the handwritten character is supposed to represent; 2. the pixel values: a digit image consists of 28x28 pixels, so 784 values follow the label. In our case, let us first load a somewhat smaller dataset.
###Code
data_file = open("data/mnist_dataset/mnist_train_100.csv", 'r')
data_list = data_file.readlines()
data_file.close()
#Show the length of the list
len(data_list)
#Print the content of the first list entry
data_list[0]
#Import various libraries, among other things to display the content graphically
import numpy
import matplotlib.pyplot
%matplotlib inline
all_values = data_list[0].split(',')
image_array = numpy.asfarray(all_values[1:]).reshape((28,28))
matplotlib.pyplot.imshow(image_array, cmap='Greys', interpolation='None')
###Output
_____no_output_____
###Markdown
Above, the first record (a 5) is now displayed graphically based on its pixel values. Feel free to try this for the 66th record as well.
###Code
all_values = data_list[65].split(',')
image_array = numpy.asfarray(all_values[1:]).reshape((28,28))
matplotlib.pyplot.imshow(image_array, cmap='Greys', interpolation='None')
###Output
_____no_output_____
###Markdown
To get the input values of the dataset into our usual range (matching the weights between -1 and 1), we first have to scale the pixel values accordingly. This is an important step, and it has to be adapted for future networks as well. The following steps apply when scaling the content (the output is a list of scaled values between 0.01 and 1.0):
1. The feature values lie between 0 and 255, so all values are divided by 255; they are now in the range 0 to 1.
2. To bring these values to 0.0 to 0.99, they are multiplied by 0.99.
3. Finally, 0.01 is added so that we do not get pure zero values.
Two terms to note:
1. all_values[1:] means that all values after the first column are affected; the first column holds the reference value (the label) for each record.
2. numpy.asfarray returns an array in which the values can be floating-point numbers.
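A quick worked example of the scaling formula (raw / 255) * 0.99 + 0.01 on a few raw pixel values:
```python
for raw in (0, 128, 255):
    print(raw, '->', round(raw / 255.0 * 0.99 + 0.01, 3))
# 0 -> 0.01, 128 -> 0.507, 255 -> 1.0
```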
###Code
scaled_input = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
print(scaled_input)
###Output
[0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.03329412 0.15364706 0.15364706
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.04105882 0.208 0.43705882
0.63117647 0.81364706 0.99223529 0.99223529 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.21576471
0.538 0.83305882 0.99223529 0.99611765 0.99223529 0.99223529
0.99223529 0.73988235 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.25458824
0.32058824 0.76705882 1. 0.99611765 0.99611765 0.87188235
0.71270588 0.71658824 0.71270588 0.53411765 0.21188235 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.07211765 0.87576471 0.98058824 0.99223529 0.99223529
0.99611765 0.71658824 0.07988235 0.04882353 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.02164706 0.52635294 0.89517647
0.99223529 0.96894118 0.84858824 0.59623529 0.27788235 0.07211765
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.23517647 0.99223529 0.99611765 0.99223529 0.46035294
0.02552941 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.15752941
0.89129412 0.99611765 0.99223529 0.89129412 0.34776471 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.13811765 0.71658824
0.97670588 0.99611765 0.79811765 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.27011765 0.99223529
0.84470588 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.274 0.99223529 0.61952941 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.53411765 0.36717647 0.01 0.01 0.01 0.01
0.72435294 0.99223529 0.49529412 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.99611765 0.85635294
0.18858824 0.01 0.01 0.11482353 0.94952941 0.99223529
0.21964706 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.67 0.99611765 0.99611765 0.84470588
0.89517647 1. 0.99611765 0.52635294 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.13035294 0.63117647 0.80976471 0.99223529 0.84082353 0.55352941
0.42929412 0.07211765 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.03329412 0.14976471 0.05270588 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01
0.01 0.01 0.01 0.01 ]
###Markdown
What should our output nodes look like? Do we need 784 of them? No, we need 10, one for each digit from 0 to 9, since there are only 10 digits. The corresponding node should fire when the correct label has been hit. In the following example, the last column indicates the digit 9 with about 86% confidence, which is what we want.
###Code
#output nodes is 10 (example)
onodes = 10
targets = numpy.zeros(onodes) + 0.01
targets[int(all_values[0])] = 0.99
###Output
_____no_output_____
###Markdown
What happens in the code cell above:
1. The number of output nodes is set to 10.
2. numpy.zeros() creates an array filled with zeros; the size and shape of the array, with length "onodes", is passed as a parameter. 0.01 is added to avoid pure zero values.
3. The label of the MNIST record is taken, converted to an integer and used as the array index; a 9 then corresponds to targets[9].
###Code
print(targets)
###Output
[0.01 0.01 0.01 0.01 0.01 0.99 0.01 0.01 0.01 0.01]
###Markdown
This gives us, for now, the following code:
###Code
#######################################################################################
##################### 1. Import der benötigten Bibs ###################################
import numpy
# scipy.special for the sigmoid function expit()
import scipy.special
# library for plotting arrays
import matplotlib.pyplot
# ensure the plots are inside this notebook, not an external window
%matplotlib inline
#######################################################################################
##################### 2. Set the variables ############################################
# number of input, hidden and output nodes
input_nodes = 784
hidden_nodes = 100
output_nodes = 10
# learning rate
learning_rate = 0.3
# epochs is the number of times the training data set is used for training
epochs = 3
# load the mnist training data CSV file into a list
training_data_file = open("data/mnist_dataset/mnist_train_100.csv", 'r')
training_data_list = training_data_file.readlines()
training_data_file.close()
# load the mnist test data CSV file into a list
test_data_file = open("data/mnist_dataset/mnist_test_10.csv", 'r')
test_data_list = test_data_file.readlines()
test_data_file.close()
#######################################################################################
##################### 3. Neural network class #########################################
# neural network class definition
class neuralNetwork:
# initialise the neural network
def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
# set number of nodes in each input, hidden, output layer
self.inodes = inputnodes
self.hnodes = hiddennodes
self.onodes = outputnodes
# link weight matrices, wih and who
# weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
# w11 w21
# w12 w22 etc
self.wih = numpy.random.normal(0.0, pow(self.inodes, -0.5), (self.hnodes, self.inodes))
self.who = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.onodes, self.hnodes))
# learning rate
self.lr = learningrate
# activation function is the sigmoid function
self.activation_function = lambda x: scipy.special.expit(x)
pass
# train the neural network
def train(self, inputs_list, targets_list):
# convert inputs list to 2d array
inputs = numpy.array(inputs_list, ndmin=2).T
targets = numpy.array(targets_list, ndmin=2).T
# calculate signals into hidden layer
hidden_inputs = numpy.dot(self.wih, inputs)
# calculate the signals emerging from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# calculate signals into final output layer
final_inputs = numpy.dot(self.who, hidden_outputs)
# calculate the signals emerging from final output layer
final_outputs = self.activation_function(final_inputs)
# output layer error is the (target - actual)
output_errors = targets - final_outputs
# hidden layer error is the output_errors, split by weights, recombined at hidden nodes
hidden_errors = numpy.dot(self.who.T, output_errors)
# update the weights for the links between the hidden and output layers
self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
# update the weights for the links between the input and hidden layers
self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
pass
# query the neural network
def query(self, inputs_list):
# convert inputs list to 2d array
inputs = numpy.array(inputs_list, ndmin=2).T
# calculate signals into hidden layer
hidden_inputs = numpy.dot(self.wih, inputs)
# calculate the signals emerging from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# calculate signals into final output layer
final_inputs = numpy.dot(self.who, hidden_outputs)
# calculate the signals emerging from final output layer
final_outputs = self.activation_function(final_inputs)
return final_outputs
#######################################################################################
##################### 4. Create an instance of the class above #######################
# create instance of neural network
n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)
#######################################################################################
##################### 5. Train the network over the epochs ###########################
# train the neural network
for e in range(epochs):
# go through all records in the training data set
for record in training_data_list:
# split the record by the ',' commas
all_values = record.split(',')
# scale and shift the inputs
inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
# create the target output values (all 0.01, except the desired label which is 0.99)
targets = numpy.zeros(output_nodes) + 0.01
# all_values[0] is the target label for this record
targets[int(all_values[0])] = 0.99
n.train(inputs, targets)
pass
pass
#######################################################################################
##################### 6. Test the network on the test data ###########################
# test the neural network
# scorecard for how well the network performs, initially empty
scorecard = []
# go through all the records in the test data set
for record in test_data_list:
# split the record by the ',' commas
all_values = record.split(',')
# correct answer is first value
correct_label = int(all_values[0])
# scale and shift the inputs
inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
# query the network
outputs = n.query(inputs)
# the index of the highest value corresponds to the label
label = numpy.argmax(outputs)
# append correct or incorrect to list
if (label == correct_label):
# network's answer matches correct answer, add 1 to scorecard
scorecard.append(1)
else:
# network's answer doesn't match correct answer, add 0 to scorecard
scorecard.append(0)
pass
pass
#######################################################################################
##################### 7. Print the network's accuracy (performance) ##################
# calculate the performance score, the fraction of correct answers
scorecard_array = numpy.asarray(scorecard)
print ("performance = ", scorecard_array.sum() / scorecard_array.size)
#######################################################################################
#######################################################################################
###Output
performance = 0.6
###Markdown
1. 784 input nodes correspond to the 784 pixel values. 2. 100 hidden-layer nodes act as a working default; the idea is that a neural network should find features or patterns that can be expressed more compactly than the number of input values. 3. 10 output nodes correspond to the 10 digits. NOTE: There is no perfect method for determining the number of nodes mathematically in advance; experimentation is the way to go here. Besides the training data we have also loaded the test data. We now want to see how good our training was, so we take the first record from the test set and evaluate it both graphically and with the neural network.
###Code
# Read the first record
all_values=test_data_list[0].split(',')
# Print the label
print(all_values[0])
all_values = test_data_list[0].split(',')
image_array = numpy.asfarray(all_values[1:]).reshape((28,28))
matplotlib.pyplot.imshow(image_array, cmap='Greys', interpolation='None')
n.query((numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01)
###Output
_____no_output_____
###Markdown
What happened here: 1. In cell 16 we read the label (the identifier of the first record in the test set). 2. In cell 17 we display the pixel image of this record. 3. In cell 18 we print the output node values; here too the seventh node is rated highest, at 85%. Finally, the whole thing is tested for all records with a for loop; the corresponding source code is shown in cell 19. 1. scorecard: this is an initially empty hit list that is updated after each record and is used later to measure performance.
###Code
print(scorecard)
# calculate the performance score, the fraction of correct answers
scorecard_array = numpy.asarray(scorecard)
print ("performance = ", scorecard_array.sum() / scorecard_array.size)
###Output
performance = 0.6
###Markdown
If the network's answer matches the label, a 1 is recorded for that record, otherwise a 0. In cell 19 we still have 4 misses, which also shows up in the performance measurement in cell 20. With a growing amount of data this becomes more accurate, so the larger dataset can now be read in here as well. 4.1.1 Optimization options **Learning rate**: The learning rate may need to be adjusted at a later point; it is needed to keep the step in the gradient descent from becoming too small or too large. **Training repetitions**: With additional epochs the same training data set is run through again, which improves the gradient descent. **More data**: The larger the amount of training data, the more accurate the results become. **Change the network structure**: Adjust the number of nodes in the hidden layer. (A small sketch of such a hyperparameter experiment follows the code cell below.)
###Code
#######################################################################################
##################### 1. Import the required libraries ###############################
import numpy
# scipy.special for the sigmoid function expit()
import scipy.special
# library for plotting arrays
import matplotlib.pyplot
# ensure the plots are inside this notebook, not an external window
%matplotlib inline
#######################################################################################
##################### 2. Configure the variables #####################################
# number of input, hidden and output nodes
input_nodes = 784
hidden_nodes = 100
output_nodes = 10
# learning rate
learning_rate = 0.01
# epochs is the number of times the training data set is used for training
epochs = 10
# load the mnist training data CSV file into a list
training_data_file = open("data/mnist_dataset/mnist_train_100.csv", 'r')
training_data_list = training_data_file.readlines()
training_data_file.close()
# load the mnist test data CSV file into a list
test_data_file = open("data/mnist_dataset/mnist_test_10.csv", 'r')
test_data_list = test_data_file.readlines()
test_data_file.close()
#######################################################################################
##################### 3. Neural network class ########################################
# neural network class definition
class neuralNetwork:
# initialise the neural network
def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
# set number of nodes in each input, hidden, output layer
self.inodes = inputnodes
self.hnodes = hiddennodes
self.onodes = outputnodes
# link weight matrices, wih and who
# weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
# w11 w21
# w12 w22 etc
self.wih = numpy.random.normal(0.0, pow(self.inodes, -0.5), (self.hnodes, self.inodes))
self.who = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.onodes, self.hnodes))
# learning rate
self.lr = learningrate
# activation function is the sigmoid function
self.activation_function = lambda x: scipy.special.expit(x)
pass
# train the neural network
def train(self, inputs_list, targets_list):
# convert inputs list to 2d array
inputs = numpy.array(inputs_list, ndmin=2).T
targets = numpy.array(targets_list, ndmin=2).T
# calculate signals into hidden layer
hidden_inputs = numpy.dot(self.wih, inputs)
# calculate the signals emerging from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# calculate signals into final output layer
final_inputs = numpy.dot(self.who, hidden_outputs)
# calculate the signals emerging from final output layer
final_outputs = self.activation_function(final_inputs)
# output layer error is the (target - actual)
output_errors = targets - final_outputs
# hidden layer error is the output_errors, split by weights, recombined at hidden nodes
hidden_errors = numpy.dot(self.who.T, output_errors)
# update the weights for the links between the hidden and output layers
self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
# update the weights for the links between the input and hidden layers
self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
pass
# query the neural network
def query(self, inputs_list):
# convert inputs list to 2d array
inputs = numpy.array(inputs_list, ndmin=2).T
# calculate signals into hidden layer
hidden_inputs = numpy.dot(self.wih, inputs)
# calculate the signals emerging from hidden layer
hidden_outputs = self.activation_function(hidden_inputs)
# calculate signals into final output layer
final_inputs = numpy.dot(self.who, hidden_outputs)
# calculate the signals emerging from final output layer
final_outputs = self.activation_function(final_inputs)
return final_outputs
#######################################################################################
##################### 4. Create an instance of the class above #######################
# create instance of neural network
n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)
#######################################################################################
##################### 5. Train the network over the epochs ###########################
# train the neural network
for e in range(epochs):
# go through all records in the training data set
for record in training_data_list:
# split the record by the ',' commas
all_values = record.split(',')
# scale and shift the inputs
inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
# create the target output values (all 0.01, except the desired label which is 0.99)
targets = numpy.zeros(output_nodes) + 0.01
# all_values[0] is the target label for this record
targets[int(all_values[0])] = 0.99
n.train(inputs, targets)
pass
pass
#######################################################################################
##################### 6. Test the network on the test data ###########################
# test the neural network
# scorecard for how well the network performs, initially empty
scorecard = []
# go through all the records in the test data set
for record in test_data_list:
# split the record by the ',' commas
all_values = record.split(',')
# correct answer is first value
correct_label = int(all_values[0])
# scale and shift the inputs
inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
# query the network
outputs = n.query(inputs)
# the index of the highest value corresponds to the label
label = numpy.argmax(outputs)
# append correct or incorrect to list
if (label == correct_label):
# network's answer matches correct answer, add 1 to scorecard
scorecard.append(1)
else:
# network's answer doesn't match correct answer, add 0 to scorecard
scorecard.append(0)
pass
pass
#######################################################################################
##################### 7. Print the network's accuracy (performance) ##################
# calculate the performance score, the fraction of correct answers
scorecard_array = numpy.asarray(scorecard)
print ("performance = ", scorecard_array.sum() / scorecard_array.size)
#######################################################################################
#######################################################################################
###Output
performance = 0.6
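###Markdown
The options listed above can also be explored systematically. The cell below is a minimal, illustrative sketch (not part of the original text) of how one might loop over a few learning rates and hidden-layer sizes, reusing the `neuralNetwork` class and the data lists defined above; the specific values tried here are assumptions, not tuned recommendations.
###Code
# Hypothetical hyperparameter sweep (illustrative sketch).
# Assumes neuralNetwork, training_data_list, test_data_list, input_nodes,
# output_nodes and epochs are already defined as in the cells above.
for lr in [0.01, 0.1, 0.3]:
    for hidden in [50, 100, 200]:
        net = neuralNetwork(input_nodes, hidden, output_nodes, lr)
        # train on the full training list for the configured number of epochs
        for e in range(epochs):
            for record in training_data_list:
                all_values = record.split(',')
                inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
                targets = numpy.zeros(output_nodes) + 0.01
                targets[int(all_values[0])] = 0.99
                net.train(inputs, targets)
        # evaluate on the test list and report the fraction of correct answers
        scorecard = []
        for record in test_data_list:
            all_values = record.split(',')
            inputs = (numpy.asfarray(all_values[1:]) / 255.0 * 0.99) + 0.01
            label = numpy.argmax(net.query(inputs))
            scorecard.append(1 if label == int(all_values[0]) else 0)
        print("lr =", lr, "hidden =", hidden,
              "performance =", sum(scorecard) / len(scorecard))
###Output
_____no_output_____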
|
site/en/guide/eager.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
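###Markdown
As a side note, a single tape can be reused for several gradient computations if it is created with `persistent=True`; the following is a minimal sketch of that option (not part of the original guide):
###Code
# Sketch: a persistent tape can be queried more than once, then freed explicitly.
x = tf.Variable(3.0)
with tf.GradientTape(persistent=True) as tape:
  y = x * x      # y = x^2
  z = y * y      # z = x^4
print(tape.gradient(y, x).numpy())  # dy/dx = 2x   -> 6.0
print(tape.gradient(z, x).numpy())  # dz/dx = 4x^3 -> 108.0
del tape  # drop the reference so the tape's resources are released
###Output
_____no_output_____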
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution (a sketch of the equivalent `fit` call follows the training-loop cell below):
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
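###Markdown
For comparison, here is a minimal sketch (an illustration added here, not part of the original guide) of what the built-in training loop mentioned above would look like for the same model and dataset:
###Code
# Sketch: the built-in loop requires compile() before fit(); the optimizer and
# loss mirror the custom loop above. Assumes mnist_model and dataset from above.
mnist_model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
mnist_model.fit(dataset, epochs=3)
###Output
_____no_output_____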
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed duringtraining to make automatic differentiation easier. The parameters of a model canbe encapsulated in classes as variables.Better encapsulate model parameters by using `tf.Variable` with`tf.GradientTape`. For example, the automatic differentiation example abovecan be rewritten:
###Code
class CustomLayer(tf.keras.layers.Layer):
def __init__(self):
super(CustomLayer, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = CustomLayer()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With TF 1.x graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, at which point they are deleted.
###Code
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb). `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics`tf.keras.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tf.keras.metrics.result` method,for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool forunderstanding, debugging and optimizing the model training process. It usessummary events that are written while executing the program.You can use `tf.summary` to record summaries of variable in eager execution.For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(1000):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
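###Markdown
To actually inspect these summaries, TensorBoard can be pointed at the log directory; a minimal sketch, assuming the TensorBoard notebook extension is available in the environment:
###Code
# Sketch: load the notebook extension and start TensorBoard on the log directory.
# Outside a notebook, the equivalent is running `tensorboard --logdir tb/` from a shell.
%load_ext tensorboard
%tensorboard --logdir tb/
###Output
_____no_output_____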
###Markdown
Advanced automatic differentiation topics Dynamic models`tf.GradientTape` can also be used in dynamic models. This example for a[backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search)algorithm looks like normal NumPy code, except there are gradients and isdifferentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
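###Markdown
A quick way to see the effect of `clip_gradient_by_norm` is to run a toy backward pass through it. The sketch below is an added illustration that assumes the function defined in the cell above; the expected values in the comment are hand-computed:
###Code
# Sketch: the forward pass is the identity, but the backward pass clips the
# incoming gradient to the requested norm.
x = tf.constant([3.0, 4.0])
with tf.GradientTape() as tape:
  tape.watch(x)  # x is a constant, so watch it explicitly
  y = clip_gradient_by_norm(x, tf.constant(1.0))
  loss = tf.reduce_sum(y * y)
grad = tape.gradient(loss, x)
print(grad.numpy())  # unclipped gradient 2*x = [6, 8]; clipped to norm 1 -> [0.6, 0.8]
###Output
_____no_output_____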
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. The TensorFlow`tf.math` operations convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor`-like values accessed duringtraining to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`.For example, the automatic differentiation example abovecan be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Define the derivatives of a loss function with respect to model parameters. 3. Choose a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, at which point the variable is deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method that allows you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics`tf.keras.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tf.keras.metrics.result` method,for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool forunderstanding, debugging and optimizing the model training process. It usessummary events that are written while executing the program.You can use `tf.summary` to record summaries of variable in eager execution.For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models`tf.GradientTape` can also be used in dynamic models. This example for a[backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search)algorithm looks like normal NumPy code, except there are gradients and isdifferentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor`-like values accessed duringtraining to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](../keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`.For example, the automatic differentiation example abovecan be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next, define: 1. The model. 2. The derivatives of a loss function with respect to the model parameters. 3. A strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
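###Markdown
The snippet above does not actually track a global step. A sketch of one way to do that with a step variable and `tf.train.CheckpointManager` (the names `step`, `ckpt`, and `manager` are just illustrative):
###Code
step = tf.Variable(1, dtype=tf.int64)
ckpt = tf.train.Checkpoint(step=step, optimizer=optimizer, model=model)
manager = tf.train.CheckpointManager(ckpt, checkpoint_dir, max_to_keep=3)
if manager.latest_checkpoint:
  ckpt.restore(manager.latest_checkpoint)
step.assign_add(1)          # Advance the step after each training iteration.
save_path = manager.save()  # Writes a numbered checkpoint under checkpoint_dir.
print("Saved checkpoint: {}".format(save_path))
###Output
_____no_output_____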
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
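###Markdown
As a sketch of how such a metric might be used with the MNIST objects defined earlier, the average loss over a few batches can be accumulated and then reset (`epoch_loss` is an illustrative name):
###Code
epoch_loss = tf.keras.metrics.Mean("epoch_loss")
for images, labels in dataset.take(5):
  logits = mnist_model(images, training=False)
  epoch_loss(loss_object(labels, logits))  # Accumulate the batch loss.
print("Mean loss over 5 batches: {:.3f}".format(epoch_loss.result().numpy()))
epoch_loss.reset_states()  # Clear the accumulated state, e.g. between epochs.
###Output
_____no_output_____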
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(1000):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
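###Markdown
Inside a notebook, the written events can then be viewed with the TensorBoard magics, assuming the `tensorboard` notebook extension is installed:
###Code
%load_ext tensorboard
%tensorboard --logdir tb
###Output
_____no_output_____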
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
    # But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager essentials TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
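###Markdown
Equivalently, a Keras optimizer can tape the loss itself via `Optimizer.minimize`, which accepts a loss callable and the variables to update. A sketch using a second instance of the same class (`model2` and `opt` are illustrative names):
###Code
model2 = Model()
opt = tf.keras.optimizers.SGD(learning_rate=0.01)
for i in range(300):
  # minimize() records the callable under its own tape and applies the gradients.
  opt.minimize(lambda: loss(model2, training_inputs, training_outputs),
               var_list=[model2.W, model2.B])
print("W = {}, B = {}".format(model2.W.numpy(), model2.B.numpy()))
###Output
_____no_output_____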
###Markdown
Use objects for state during eager execution With TF 1.x graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, and are then deleted.
###Code
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving This section is an abbreviated version of the [guide to training checkpoints](./checkpoints.ipynb). `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoints.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
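###Markdown
As a quick check, the step can be applied to a toy quadratic whose minimum is at 3 (a sketch; `fn` and `x0` are illustrative names):
###Code
fn = lambda x: tf.reduce_sum(tf.square(x - 3.0))  # Minimum at x = [3, 3]
x0 = tf.constant([0.0, 0.0])
x1, value = line_search_step(fn, x0)
print(x1.numpy(), value.numpy())  # => [3. 3.] 0.0
###Output
_____no_output_____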
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
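###Markdown
For comparison, TensorFlow ships a numerically stable implementation of the same function, `tf.nn.softplus`, whose gradient is also well behaved at large inputs (a small sketch):
###Code
x = tf.constant(100.)
with tf.GradientTape() as tape:
  tape.watch(x)
  value = tf.nn.softplus(x)  # log(1 + exp(x)), computed stably
print(tape.gradient(value, x).numpy())  # => 1.0
###Output
_____no_output_____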
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With TF 1.x graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, and are then deleted.
###Code
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb). `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
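###Markdown
A small sketch of the effect: squaring a large value after passing it through `clip_gradient_by_norm` caps the gradient that reaches the input (the numbers are illustrative):
###Code
x = tf.constant(100.)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = clip_gradient_by_norm(x, 1.0) ** 2
# Without clipping the gradient would be 2 * x = 200; the custom
# gradient clips the incoming gradient to norm 1.0.
print(tape.gradient(y, x).numpy())  # => 1.0
###Output
_____no_output_____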
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
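###Markdown
The `.gpu()`/`.cpu()` helpers may not be available in every TensorFlow 2.x release; an alternative sketch is to copy a tensor by running `tf.identity` under an explicit device scope:
###Code
x = tf.random.normal([10, 10])
with tf.device("/cpu:0"):
  x_cpu = tf.identity(x)        # Copy of x placed on the CPU
  _ = tf.matmul(x_cpu, x_cpu)   # Runs on CPU
if tf.config.experimental.list_physical_devices("GPU"):
  with tf.device("/gpu:0"):
    x_gpu = tf.identity(x)      # Copy of x placed on GPU:0
    _ = tf.matmul(x_gpu, x_gpu) # Runs on GPU:0
###Output
_____no_output_____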
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
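###Markdown
A sketch of evaluating the freshly trained model on the held-out MNIST test split using an object-oriented metric (variable names are illustrative):
###Code
(_, _), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
test_dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(test_images[..., tf.newaxis] / 255, tf.float32),
     tf.cast(test_labels, tf.int64))).batch(32)
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
for images, labels in test_dataset:
  accuracy(labels, mnist_model(images, training=False))  # Accumulate per batch.
print("Test accuracy: {:.3f}".format(accuracy.result().numpy()))
###Output
_____no_output_____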
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](../keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next, define: 1. The model. 2. The derivatives of a loss function with respect to the model parameters. 3. A strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
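###Markdown
Restoration matches by the structure of the tracked Python objects rather than by variable name, so the same checkpoint can be restored into a freshly created variable; a small sketch:
###Code
y = tf.Variable(0.)                  # A new variable, initially 0.
restored = tf.train.Checkpoint(x=y)  # Same attribute name 'x' as when saving.
restored.restore(tf.train.latest_checkpoint(checkpoint_path))
print(y)  # => 2.0, the value that was saved above
###Output
_____no_output_____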
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager execution. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
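###Markdown
A minimal sketch of the "one gradient per tape" behavior described above (the variable name `w2` is ours): a second `gradient` call on a non-persistent tape raises a `RuntimeError`, while `persistent=True` allows repeated calls.
###Code
w2 = tf.Variable([[3.0]])
with tf.GradientTape() as tape:
  loss = w2 * w2
print(tape.gradient(loss, w2))  # first call works
try:
  tape.gradient(loss, w2)       # second call on a non-persistent tape
except RuntimeError as e:
  print("RuntimeError:", e)
# A persistent tape can be queried multiple times; delete it when done.
with tf.GradientTape(persistent=True) as tape:
  loss = w2 * w2
print(tape.gradient(loss, w2))
print(tape.gradient(loss, w2))  # fine with persistent=True
del tape
###Output
_____no_output_____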
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization; for comparison, a minimal `fit`-based sketch appears after the custom loop below. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
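###Markdown
For comparison, the built-in loop mentioned above trains the same model without writing the steps by hand. A minimal sketch (not from the original guide), reusing `mnist_model`, `dataset`, and `loss_object` defined earlier:
###Code
mnist_model.compile(optimizer=tf.keras.optimizers.Adam(),
                    loss=loss_object,
                    metrics=['accuracy'])
# Keras handles batching, gradient computation, and the update step internally.
mnist_model.fit(dataset, epochs=1)
###Output
_____no_output_____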
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](../keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Apply a strategy for updating the variables based on those derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, at which point the variable is deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
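###Markdown
Restorations are matched by object structure and can also happen lazily, as the note below describes. A minimal self-contained sketch of that deferred behavior (the names `src`, `dst`, and the `./deferred_ckpt/` path are ours, not from the guide):
###Code
src = tf.train.Checkpoint(v=tf.Variable(3.0))
save_path = src.save('./deferred_ckpt/ckpt')
dst = tf.train.Checkpoint()   # `v` does not exist on this object yet
status = dst.restore(save_path)
dst.v = tf.Variable(0.0)      # created after restore(); its value is filled in immediately
print(dst.v.numpy())          # => 3.0
status.assert_existing_objects_matched()  # verifies everything created so far was restored
###Output
_____no_output_____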
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(1000):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager execution. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
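###Markdown
Tapes can also be nested to compute higher-order derivatives; a short sketch with our own example values (not from the guide):
###Code
x2 = tf.Variable(3.0)
with tf.GradientTape() as outer_tape:
  with tf.GradientTape() as inner_tape:
    y = x2 * x2 * x2
  dy_dx = inner_tape.gradient(y, x2)        # 3 * x**2 => 27.0
d2y_dx2 = outer_tape.gradient(dy_dx, x2)    # 6 * x    => 18.0
print(dy_dx.numpy(), d2y_dx2.numpy())
###Output
_____no_output_____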
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
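###Markdown
After training, class predictions can be read off the logits with an argmax. A small sketch (not from the original guide) reusing `mnist_model` and `dataset` from above:
###Code
for images, labels in dataset.take(1):
  predictions = tf.argmax(mnist_model(images, training=False), axis=1)
  print("Predicted:", predictions[:10].numpy())
  print("Actual:   ", labels[:10].numpy())
###Output
_____no_output_____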
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Apply a strategy for updating the variables based on those derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, at which point the variable is deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. For a collection of examples running in eager execution, see: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph execution environments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Build a modelMany machine learning models are represented by composing layers. Whenusing TensorFlow with eager execution you can either write your own layers oruse a layer provided in the `tf.keras.layers` package.While you can use any Python object to represent a layer,TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit fromit to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
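###Markdown
A quick usage check of the layer above (the shapes are our own example, not from the guide): calling the layer builds its kernel from the input shape.
###Code
layer = MySimpleLayer(output_units=4)
x_in = tf.random_normal([2, 3])   # batch of 2 examples with 3 features
print(layer(x_in).shape)          # => (2, 4)
###Output
_____no_output_____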
###Markdown
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has a superset of its functionality (it can also add a bias). When composing layers into models you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`. This is a container for layers that is a layer itself, allowing `tf.keras.Model` objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class since the parameters are set the first time input is passed to the layer. `tf.keras.layers` classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. `tf.GradientTape` is an opt-in feature to provide maximal performance when not tracing. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 80 == 0:
print()
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, at which point the variable is deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics `tfe.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tfe.metrics.result` method, for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. `tf.contrib.summary` is compatible with both eager and graph execution environments. Summary operations, such as `tf.contrib.summary.scalar`, are inserted during model construction. For example, to record summaries once every 100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients `tf.GradientTape` is a powerful interface for computing gradients, but there is another [Autograd](https://github.com/HIPS/autograd)-style API available for automatic differentiation. These functions are useful if writing math code with only tensors and gradient functions, and without `tf.Variable`s: * `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters. * `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments. In the following example, `tfe.gradients_function` takes the `square` function as an argument and returns a function that computes the partial derivatives of `square` with respect to its inputs. To calculate the derivative of `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients in eager and graph execution. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) training on a GPU, eager execution performance is comparable to graph execution. But this gap grows larger for models with less computation and there is work to be done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive, TensorFlow graph execution has advantages for distributed training, performance optimizations, and production deployment. However, writing graph code can feel different than writing regular Python code and more difficult to debug. For building and training graph-constructed models, the Python program first builds a graph representing the computation, then invokes `Session.run` to send the graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system). Deploying code written for eager execution is more difficult: either generate a graph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled. Most TensorFlow operations work during eager execution, but there are some things to keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution. It's best to write code for both eager execution *and* graph execution. This gives you eager's interactive experimentation and debuggability with the distributed performance benefits of graph execution. Write, debug, and iterate in eager execution, then import the model graph for production deployment. Use `tf.train.Checkpoint` to save and restore model variables; this allows movement between eager and graph execution environments (a minimal sketch of such eager/graph-compatible code follows the `tfe.py_func` example below). See the examples in: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using `tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not* been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
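###Markdown
To complement the compatibility notes above, here is a minimal sketch of eager/graph-compatible code; the names `compat_model` and `forward` are illustrative and not from the guide:
###Code
# Keras layers own the variables and nothing below calls .numpy() or branches
# on tensor values, so the same code runs eagerly or builds a graph.
compat_model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(784,))
])
def forward(batch):
  return compat_model(batch)
logits = forward(tf.zeros([32, 784]))
# Under eager execution `logits` holds concrete values; without eager execution
# it is a symbolic tensor to be evaluated in a tf.Session.
###Output
_____no_output_____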
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. For a collection of examples running in eager execution, see: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph execution environments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
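###Markdown
For instance, `tfe` exposes small helpers used later in this guide, such as `tfe.num_gpus`:
###Code
# Returns the number of GPUs available to the runtime.
print("GPUs available:", tfe.num_gpus())
###Output
_____no_output_____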
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Build a modelMany machine learning models are represented by composing layers. When using TensorFlow with eager execution you can either write your own layers or use a layer provided in the `tf.keras.layers` package. While you can use any Python object to represent a layer, TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit from it to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
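###Markdown
As a quick check (a sketch, not in the original guide), the layer above can be called like any other Keras layer; `build()` runs on first use and sizes the kernel from the input:
###Code
layer = MySimpleLayer(output_units=4)
y = layer(tf.ones([2, 3]))    # build() creates a [3, 4] kernel on this first call
print(y.shape)                # => (2, 4)
print(layer.kernel.shape)     # => (3, 4)
###Output
_____no_output_____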
###Markdown
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has a superset of its functionality (it can also add a bias). When composing layers into models, you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`. This is a container for layers that is a layer itself, allowing `tf.keras.Model` objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
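###Markdown
A small sketch (not from the guide): the model's variables are created lazily on the first call, so pass a dummy batch before inspecting them:
###Code
dummy = tf.zeros([1, 784])
out = model(dummy)
print(out.shape)             # => (1, 10)
print(len(model.variables))  # two Dense layers => 4 variables (kernels and biases)
###Output
_____no_output_____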
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class since the parameters are set the first time input is passed to the layer. `tf.keras.layers` classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. `tf.GradientTape` is an opt-in feature to provide maximal performance when not tracing. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 80 == 0:
print()
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.variables)
optimizer.apply_gradients(zip(grads, mnist_model.variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
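###Markdown
As a small follow-up sketch (not in the guide), accuracy on a single batch can be estimated directly from the trained model's logits:
###Code
for images, labels in dataset.take(1):
  preds = tf.argmax(mnist_model(images), axis=1, output_type=tf.int64)
  batch_accuracy = tf.reduce_mean(tf.cast(tf.equal(preds, labels), tf.float32))
  print("Batch accuracy: {:.2f}".format(batch_accuracy.numpy()))
###Output
_____no_output_____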
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager executionWith graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objectsDuring eager execution, variables persist until the last reference to the object is removed, and the variable is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving`tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics`tfe.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
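###Markdown
Assuming `tfe.metrics` also provides an `Accuracy` metric (an assumption, not shown in the guide), the same object-style update applies to classification results:
###Code
# Assumed API: Accuracy is called with (labels, predictions) and accumulates
# the fraction of matches across calls.
acc = tfe.metrics.Accuracy()
acc([0, 1, 1], [0, 1, 0])  # two of three predictions match the labels
acc.result()               # => ~0.667
###Output
_____no_output_____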
###Markdown
Summaries and TensorBoard[TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. `tf.contrib.summary` is compatible with both eager and graph execution environments. Summary operations, such as `tf.contrib.summary.scalar`, are inserted during model construction. For example, to record summaries once every 100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models`tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
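###Markdown
A usage sketch for the line-search step above (the function `fn` and starting point are illustrative, not from the guide): minimizing f(x) = x*x from x = 2 halves the rate until the sufficient-decrease condition holds:
###Code
fn = lambda x: x * x
x, value = line_search_step(fn, tf.constant(2.0))
print(x.numpy(), value.numpy())  # the accepted step lands at the minimum x = 0
###Output
_____no_output_____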
###Markdown
Additional functions to compute gradients`tf.GradientTape` is a powerful interface for computing gradients, but there is another [Autograd](https://github.com/HIPS/autograd)-style API available for automatic differentiation. These functions are useful if writing math code with only tensors and gradient functions, and without `tf.Variable`s:* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments.In the following example, `tfe.gradients_function` takes the `square` function as an argument and returns a function that computes the partial derivatives of `square` with respect to its inputs. To calculate the derivative of `square` at `3`, `grad(3.)` returns `6.0`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients in eager and graph execution. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) training on a GPU, eager execution performance is comparable to graph execution. But this gap grows larger for models with less computation and there is work to be done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive, TensorFlow graph execution has advantages for distributed training, performance optimizations, and production deployment. However, writing graph code can feel different than writing regular Python code and more difficult to debug. For building and training graph-constructed models, the Python program first builds a graph representing the computation, then invokes `Session.run` to send the graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system). Deploying code written for eager execution is more difficult: either generate a graph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled. Most TensorFlow operations work during eager execution, but there are some things to keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution. It's best to write code for both eager execution *and* graph execution. This gives you eager's interactive experimentation and debuggability with the distributed performance benefits of graph execution. Write, debug, and iterate in eager execution, then import the model graph for production deployment. Use `tf.train.Checkpoint` to save and restore model variables; this allows movement between eager and graph execution environments. See the examples in: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using `tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not* been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager essentials TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Note: in Colab, the `%tensorflow_version 2.x` magic selects TensorFlow 2.x.
# Outside Colab, the compat module below provides the 2.x-style API on a 1.x install.
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import cProfile
###Output
_____no_output_____
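###Markdown
`cProfile` is imported above for profiling; as a quick aside (a sketch, not from the guide), it can time eager TensorFlow code like any other Python code:
###Code
# Profile a single eager op; the report lists time spent per Python call.
profiler = cProfile.Profile()
profiler.enable()
_ = tf.matmul(tf.ones([100, 100]), tf.ones([100, 100]))
profiler.disable()
profiler.print_stats(sort="cumulative")
###Output
_____no_output_____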
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager executionWith TF 1.x graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objectsDuring eager execution, variables persist until the last reference to the object is removed, and the variable is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based savingThis section is an abbreviated version of the [guide to training checkpoints](./checkpoints.ipynb). `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoints.ipynb) for details. Object-oriented metrics`tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
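###Markdown
A small sketch (not in the guide): metrics accumulate state across calls, and `reset_states()` clears that state so the object can be reused:
###Code
m.reset_states()
m(2)
m.result()  # => 2.0
###Output
_____no_output_____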
###Markdown
Advanced automatic differentiation topics Dynamic models`tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.test.is_gpu_available():
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.test.is_gpu_available():
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. For a collection of examples running in eager execution, see: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usageUpgrade to the latest version of TensorFlow:
###Code
!pip install --upgrade tensorflow==1.11
###Output
_____no_output_____
###Markdown
To start eager execution, add `tf.enable_eager_execution()` to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph execution environments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Build a modelMany machine learning models are represented by composing layers. Whenusing TensorFlow with eager execution you can either write your own layers oruse a layer provided in the `tf.keras.layers` package.While you can use any Python object to represent a layer,TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit fromit to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
###Markdown
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has a superset of its functionality (it can also add a bias). When composing layers into models, you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`.This is a container for layers that is a layer itself, allowing `tf.keras.Model`objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class sincethe parameters are set the first time input is passed to the layer.`tf.keras.layers` classes create and contain their own model variables thatare tied to the lifetime of their layer objects. To share layer variables, sharetheir objects. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.`tf.GradientTape` is an opt-in feature to provide maximal performance whennot tracing. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 80 == 0:
print()
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.variables)
optimizer.apply_gradients(zip(grads, mnist_model.variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
An alternative to the `tf.keras.datasets` loading above is the [dataset.py module](https://github.com/tensorflow/models/blob/master/official/mnist/dataset.py) from the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist); download this file to your local directory and use it to download the MNIST data files to your working directory and prepare a `tf.data.Dataset` for training. Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, at which point the variable is deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving`tf.train.Checkpoint` can save and restore `tf.Variable`s to and fromcheckpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = '/path/to/model_dir'
os.makedirs(checkpoint_dir, exist_ok=True)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics`tfe.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tfe.metrics.result` method,for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard[TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool forunderstanding, debugging and optimizing the model training process. It usessummary events that are written while executing the program.`tf.contrib.summary` is compatible with both eager and graph executionenvironments. Summary operations, such as `tf.contrib.summary.scalar`, areinserted during model construction. For example, to record summaries once every100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients`tf.GradientTape` is a powerful interface for computing gradients, but thereis another [Autograd](https://github.com/HIPS/autograd)-style API available forautomatic differentiation. These functions are useful if writing math code withonly tensors and gradient functions, and without `tf.Variables`:* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments.In the following example, `tfe.gradients_function` takes the `square`function as an argument and returns a function that computes the partialderivatives of `square` with respect to its inputs. To calculate the derivativeof `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
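###Markdown
The second bullet above, `tfe.value_and_gradients_function`, is not exercised in this guide. Here is a minimal sketch of how it might be called, assuming the `tfe` alias used above and that the returned callable yields the value of the wrapped function together with a list of per-argument gradients:
###Code
def square(x):
  return tf.multiply(x, x)
# Wrap square so each call returns (value, [gradients]).
val_and_grad = tfe.value_and_gradients_function(square)
value, grads = val_and_grad(3.)
print(value.numpy()) # => 9.0
print(grads[0].numpy()) # => 6.0
###Output
_____no_output_____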
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients in eager and graphexecution. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
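###Markdown
`clip_gradient_by_norm` is defined above but never invoked. A quick sketch of applying it under a tape to confirm that the backward pass is clipped (the values here are illustrative):
###Code
x = tf.constant(100.)
with tf.GradientTape() as tape:
  tape.watch(x)
  # Clip the gradient flowing back into x to an L2 norm of 1.0.
  y = tf.square(clip_gradient_by_norm(x, 1.0))
# Without clipping this would be dy/dx = 2 * x = 200.0.
print(tape.gradient(y, x).numpy()) # => 1.0
###Output
_____no_output_____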
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)training on a GPU, eager execution performance is comparable to graph execution.But this gap grows larger for models with less computation and there is work tobe done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive,TensorFlow graph execution has advantages for distributed training, performanceoptimizations, and production deployment. However, writing graph code can feeldifferent than writing regular Python code and more difficult to debug.For building and training graph-constructed models, the Python program firstbuilds a graph representing the computation, then invokes `Session.run` to sendthe graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system).Deploying code written for eager execution is more difficult: either generate agraph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graphexecution. Do this by simply running the same code in a new Python session whereeager execution is not enabled.Most TensorFlow operations work during eager execution, but there are some thingsto keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution.It's best to write code for both eager execution *and* graph execution. Thisgives you eager's interactive experimentation and debuggability with thedistributed performance benefits of graph execution.Write, debug, and iterate in eager execution, then import the model graph forproduction deployment. Use `tf.train.Checkpoint` to save and restore modelvariables, this allows movement between eager and graph execution environments.See the examples in:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using`tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not*been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
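###Markdown
Returning to the compatible-code advice above: one simple pattern is to branch on `tf.executing_eagerly()` so that the same function can be reused in eager programs and while building graphs. A minimal sketch:
###Code
def dense_forward(x, w):
  """Matrix product that works under both eager and graph execution."""
  y = tf.matmul(x, w)
  if tf.executing_eagerly():
    # Eager: concrete values are available immediately for debugging.
    print("y:", y.numpy())
  return y
###Output
_____no_output_____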
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution View on TensorFlow.org Run in Google Colab View source on GitHub TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. For acollection of examples running in eager execution, see:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning ofthe program or console session. Do not add this operation to other modules thatthe program calls.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph executionenvironments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Build a modelMany machine learning models are represented by composing layers. Whenusing TensorFlow with eager execution you can either write your own layers oruse a layer provided in the `tf.keras.layers` package.While you can use any Python object to represent a layer,TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit fromit to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
###Markdown
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has a superset of its functionality (it can also add a bias). When composing layers into models, you can use `tf.keras.Sequential` to represent models that are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`.This is a container for layers that is a layer itself, allowing `tf.keras.Model`objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class sincethe parameters are set the first time input is passed to the layer.`tf.keras.layers` classes create and contain their own model variables thatare tied to the lifetime of their layer objects. To share layer variables, sharetheir objects. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.`tf.GradientTape` is an opt-in feature to provide maximal performance whennot tracing. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
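###Markdown
If you need more than one gradient from the same forward pass, the tape can be created with `persistent=True`; a brief sketch (delete the tape once you are done so its resources are released):
###Code
x = tf.Variable(3.0)
with tf.GradientTape(persistent=True) as tape:
  y = x * x
  z = y * y
# A persistent tape allows multiple gradient calls.
print(tape.gradient(z, x).numpy()) # => 108.0 (4 * x**3)
print(tape.gradient(y, x).numpy()) # => 6.0 (2 * x)
del tape
###Output
_____no_output_____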
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed duringtraining to make automatic differentiation easier. The parameters of a model canbe encapsulated in classes as variables.Better encapsulate model parameters by using `tf.Variable` with`tf.GradientTape`. For example, the automatic differentiation example abovecan be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, at which point the variable is deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving`tf.train.Checkpoint` can save and restore `tf.Variable`s to and fromcheckpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics`tfe.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tfe.metrics.result` method,for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard[TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool forunderstanding, debugging and optimizing the model training process. It usessummary events that are written while executing the program.`tf.contrib.summary` is compatible with both eager and graph executionenvironments. Summary operations, such as `tf.contrib.summary.scalar`, areinserted during model construction. For example, to record summaries once every100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients`tf.GradientTape` is a powerful interface for computing gradients, but thereis another [Autograd](https://github.com/HIPS/autograd)-style API available forautomatic differentiation. These functions are useful if writing math code withonly tensors and gradient functions, and without `tf.variables`:* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments.In the following example, `tfe.gradients_function` takes the `square`function as an argument and returns a function that computes the partialderivatives of `square` with respect to its inputs. To calculate the derivativeof `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients in eager and graphexecution. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)training on a GPU, eager execution performance is comparable to graph execution.But this gap grows larger for models with less computation and there is work tobe done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive,TensorFlow graph execution has advantages for distributed training, performanceoptimizations, and production deployment. However, writing graph code can feeldifferent than writing regular Python code and more difficult to debug.For building and training graph-constructed models, the Python program firstbuilds a graph representing the computation, then invokes `Session.run` to sendthe graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system).Deploying code written for eager execution is more difficult: either generate agraph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graphexecution. Do this by simply running the same code in a new Python session whereeager execution is not enabled.Most TensorFlow operations work during eager execution, but there are some thingsto keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution.It's best to write code for both eager execution *and* graph execution. Thisgives you eager's interactive experimentation and debuggability with thedistributed performance benefits of graph execution.Write, debug, and iterate in eager execution, then import the model graph forproduction deployment. Use `tf.train.Checkpoint` to save and restore modelvariables, this allows movement between eager and graph execution environments.See the examples in:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using`tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not*been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
import os
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. The TensorFlow`tf.math` operations convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
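###Markdown
For comparison, the built-in training loop mentioned above can train the same model without an explicit tape. A short sketch, reusing `dataset` and `loss_object` from the cells above:
###Code
# Equivalent built-in loop: compile with the same loss, then fit on the tf.data.Dataset.
mnist_model.compile(optimizer=tf.keras.optimizers.Adam(), loss=loss_object)
mnist_model.fit(dataset, epochs=1, verbose=0)
###Output
_____no_output_____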
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor`-like values accessed duringtraining to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`.For example, the automatic differentiation example abovecan be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Apply a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
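###Markdown
Because `Linear` is a `tf.keras.Model`, the built-in `Model.fit` mentioned above is also available. A brief sketch on the same toy data (the name `linear2` is introduced here just for illustration):
###Code
linear2 = Linear()
linear2.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
# Fit on the toy regression data defined earlier.
linear2.fit(training_inputs, training_outputs, epochs=5, batch_size=32, verbose=0)
print("W = {}, B = {}".format(linear2.W.numpy(), linear2.B.numpy()))
###Output
_____no_output_____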
###Markdown
Note: Variables persist until the last reference to the Python object is removed, at which point the variable is deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process.This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
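###Markdown
The `restore` call returns a status object whose assertion helpers can verify the load; a small sketch reusing `checkpoint_dir` from above:
###Code
status = root.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Raises if an already-built object tracked by `root` has no match in the checkpoint.
status.assert_existing_objects_matched()
###Output
_____no_output_____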
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics`tf.keras.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tf.keras.metrics.result` method,for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
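###Markdown
Metric objects accumulate state across calls; when reusing one across epochs or evaluations, clear it first. A quick sketch:
###Code
m.reset_states() # Clear the accumulated mean.
m(2)
m.result() # => 2.0
###Output
_____no_output_____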
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
import os
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. The TensorFlow`tf.math` operations convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager execution. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
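###Markdown
Gradients are recorded for `tf.Variable` objects automatically. To differentiate with respect to a plain tensor instead, watch it explicitly on the tape. A minimal sketch (the values here are illustrative):
###Code
x = tf.constant(3.0)
with tf.GradientTape() as tape:
  tape.watch(x)          # constants are not tracked unless watched
  y = x * x + 2.0 * x
# dy/dx = 2x + 2, so the gradient at x = 3 is 8
print(tape.gradient(y, x))
###Output
_____no_output_____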
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
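###Markdown
As a quick check after training, a Keras metric object can accumulate accuracy over a few batches. A minimal sketch (the batch count of 10 is arbitrary):
###Code
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
for images, labels in dataset.take(10):
  logits = mnist_model(images, training=False)
  accuracy.update_state(labels, logits)   # works with logits: argmax is unchanged
print("Accuracy over 10 batches: {:.3f}".format(accuracy.result().numpy()))
###Output
_____no_output_____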
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. Collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Apply a strategy for updating the variables based on those derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
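###Markdown
Because `Linear` subclasses `tf.keras.Model`, the built-in training loop is also available. A minimal sketch using `compile` and `fit` on the same toy data (mean squared error is assumed here to match the custom `loss` above; `keras_model` is just an illustrative name):
###Code
keras_model = Linear()
keras_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
keras_model.fit(training_inputs, training_outputs, epochs=5, batch_size=64, verbose=0)
# The fitted parameters should land near the true values W = 3, B = 2.
print("W = {}, B = {}".format(keras_model.W.numpy(), keras_model.B.numpy()))
###Output
_____no_output_____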
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method, allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
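###Markdown
`restore` returns a status object that can be used to verify how the checkpoint matched the objects being restored. A minimal sketch (assuming the standard status assertions available on `tf.train.Checkpoint` restore results):
###Code
status = root.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Raises if a variable that already exists on the tracked objects was not
# found in the checkpoint; deferred (not-yet-created) variables are allowed.
status.assert_existing_objects_matched()
###Output
_____no_output_____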
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
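###Markdown
Metric objects accumulate state across calls, so when the same metric is reused across epochs its state should be cleared in between. A minimal sketch with placeholder loss values (`reset_states` is the method on `tf.keras.metrics` objects):
###Code
epoch_loss = tf.keras.metrics.Mean("epoch_loss")
for epoch in range(2):
  for batch_loss in [0.9, 0.7, 0.5]:  # placeholder per-batch losses
    epoch_loss(batch_loss)
  print("epoch {}: mean loss {:.2f}".format(epoch, epoch_loss.result().numpy()))
  epoch_loss.reset_states()           # clear the running mean before the next epoch
###Output
_____no_output_____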
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
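###Markdown
The function above is defined but not called in this guide; a small hypothetical usage on a simple convex objective shows the backtracking behaviour:
###Code
fn = lambda x: tf.square(x - 2.0)              # minimum at x = 2
x, value = line_search_step(fn, tf.constant(10.0))
print(x.numpy(), value.numpy())                # expect roughly 2.0 and 0.0
###Output
_____no_output_____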
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
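###Markdown
A quick sketch of the clipping op in use: the gradient arriving at `clip_gradient_by_norm` on the backward pass is capped at the given norm (the numbers here are illustrative):
###Code
x = tf.constant(3.0)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = 100.0 * clip_gradient_by_norm(x, tf.constant(1.0))
# Without clipping the gradient would be 100; the custom gradient caps it at 1.
print(tape.gradient(y, x))
###Output
_____no_output_____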
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution View on TensorFlow.org Run in Google Colab View source on GitHub TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. For acollection of examples running in eager execution, see:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning ofthe program or console session. Do not add this operation to other modules thatthe program calls.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph executionenvironments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Build a modelMany machine learning models are represented by composing layers. Whenusing TensorFlow with eager execution you can either write your own layers oruse a layer provided in the `tf.keras.layers` package.While you can use any Python object to represent a layer,TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit fromit to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
###Markdown
Use `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above as it hasa superset of its functionality (it can also add a bias).When composing layers into models you can use `tf.keras.Sequential` to representmodels which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`.This is a container for layers that is a layer itself, allowing `tf.keras.Model`objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class sincethe parameters are set the first time input is passed to the layer.`tf.keras.layers` classes create and contain their own model variables thatare tied to the lifetime of their layer objects. To share layer variables, sharetheir objects. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.`tf.GradientTape` is an opt-in feature to provide maximal performance whennot tracing. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed duringtraining to make automatic differentiation easier. The parameters of a model canbe encapsulated in classes as variables.Better encapsulate model parameters by using `tf.Variable` with`tf.GradientTape`. For example, the automatic differentiation example abovecan be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, and the variable is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving`tf.train.Checkpoint` can save and restore `tf.Variable`s to and fromcheckpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics `tfe.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard[TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool forunderstanding, debugging and optimizing the model training process. It usessummary events that are written while executing the program.`tf.contrib.summary` is compatible with both eager and graph executionenvironments. Summary operations, such as `tf.contrib.summary.scalar`, areinserted during model construction. For example, to record summaries once every100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients`tf.GradientTape` is a powerful interface for computing gradients, but thereis another [Autograd](https://github.com/HIPS/autograd)-style API available forautomatic differentiation. These functions are useful if writing math code withonly tensors and gradient functions, and without `tf.variables`:* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments.In the following example, `tfe.gradients_function` takes the `square`function as an argument and returns a function that computes the partialderivatives of `square` with respect to its inputs. To calculate the derivativeof `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
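###Markdown
For completeness, a hedged sketch of `tfe.value_and_gradients_function`, which (per the description above) returns the function's value together with the list of per-argument derivatives; the function used here is only illustrative:
###Code
def f(x, y):
  return x * x + y
val_and_grads = tfe.value_and_gradients_function(f)
value, grads = val_and_grads(3., 2.)
print(value.numpy())     # 3*3 + 2 = 11
print(grads[0].numpy())  # df/dx at (3, 2) => 6
print(grads[1].numpy())  # df/dy at (3, 2) => 1
###Output
_____no_output_____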
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients in eager and graphexecution. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)training on a GPU, eager execution performance is comparable to graph execution.But this gap grows larger for models with less computation and there is work tobe done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive,TensorFlow graph execution has advantages for distributed training, performanceoptimizations, and production deployment. However, writing graph code can feeldifferent than writing regular Python code and more difficult to debug.For building and training graph-constructed models, the Python program firstbuilds a graph representing the computation, then invokes `Session.run` to sendthe graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system).Deploying code written for eager execution is more difficult: either generate agraph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graphexecution. Do this by simply running the same code in a new Python session whereeager execution is not enabled.Most TensorFlow operations work during eager execution, but there are some thingsto keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution.It's best to write code for both eager execution *and* graph execution. Thisgives you eager's interactive experimentation and debuggability with thedistributed performance benefits of graph execution.Write, debug, and iterate in eager execution, then import the model graph forproduction deployment. Use `tf.train.Checkpoint` to save and restore modelvariables, this allows movement between eager and graph execution environments.See the examples in:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using`tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not*been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager essentials View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed duringtraining to make automatic differentiation easier. The parameters of a model canbe encapsulated in classes as variables.Better encapsulate model parameters by using `tf.Variable` with`tf.GradientTape`. For example, the automatic differentiation example abovecan be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With TF 1.x graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, and the variable is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based savingThis section is an abbreviated version of the [guide to training checkpoints](./checkpoints.ipynb).`tf.train.Checkpoint` can save and restore `tf.Variable`s to and fromcheckpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoints.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the metric's `result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.test.is_gpu_available():
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.test.is_gpu_available():
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
import os
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients.

Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
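###Markdown
Going the other way is just as direct; a small illustrative sketch (the array below is an arbitrary example) converts a NumPy `ndarray` into a `tf.Tensor` explicitly:
###Code
# Sketch: explicit conversion from a NumPy ndarray to a tf.Tensor.
nd = np.ones((2, 2), dtype=np.float32)
t = tf.convert_to_tensor(nd)
print(t.dtype, t.shape)  # => <dtype: 'float32'> (2, 2)
###Output
_____no_output_____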
###Markdown
Dynamic control flow

A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime.

Eager training

Computing gradients

[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later.

You can use `tf.GradientTape` to train and/or compute gradients in eager execution. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
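###Markdown
As a small added sketch (not part of the original example): if you do need several gradients from the same computation, a tape created with `persistent=True` can be queried more than once; delete it when you are done so its resources are released. The variable and values below are arbitrary illustrations:
###Code
# Sketch: a persistent tape allows multiple gradient() calls.
x = tf.Variable(3.0)
with tf.GradientTape(persistent=True) as tape:
    y = x * x  # y = x^2
    z = y * y  # z = x^4
print(tape.gradient(z, x).numpy())  # 4 * x^3 = 108.0
print(tape.gradient(y, x).numpy())  # 2 * x   = 6.0
del tape  # Drop the reference so the tape's resources are released.
###Output
_____no_output_____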
###Markdown
Train a model

The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
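###Markdown
As a rough follow-up check (a sketch, not part of the original guide), the trained model can be scored with a Keras metric over one more pass of the same dataset:
###Code
# Sketch: measure training-set accuracy of the model trained above.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
for images, labels in dataset:
    accuracy(labels, mnist_model(images, training=False))
print("Accuracy: {:.3f}".format(accuracy.result().numpy()))
###Output
_____no_output_____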
###Markdown
Variables and optimizers

`tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`.

For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next, define:

1. A model.
2. Derivatives of a loss function with respect to model parameters.
3. A strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
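###Markdown
Because `Linear` subclasses `tf.keras.Model`, the same toy problem can also be solved with the built-in training loop mentioned above. A minimal sketch, assuming the toy data defined earlier (the epoch count and optimizer settings are arbitrary choices):
###Code
# Sketch: fit the same subclassed model with Model.compile / Model.fit.
keras_linear = Linear()
keras_linear.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='mse')
keras_linear.fit(training_inputs, training_outputs, epochs=5, verbose=0)
print("W = {}, B = {}".format(keras_linear.W.numpy(), keras_linear.B.numpy()))
###Output
_____no_output_____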
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted.

Object-based saving

A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save(checkpoint_path)
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details.

Object-oriented metrics

`tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
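###Markdown
A small follow-on sketch: the accumulated state can be cleared, for example at the start of a new epoch. (The method is `reset_states` in the TF 2.x releases this guide targets; newer releases also expose it as `reset_state`.)
###Code
# Sketch: reset the metric's running total and count.
m.reset_states()
print(m.result().numpy())  # => 0.0 after resetting
###Output
_____no_output_____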
###Markdown
Summaries and TensorBoard

[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program.

You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
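###Markdown
Summary events are buffered, so the listing above may occasionally appear empty right after writing. A minimal sketch of forcing the buffered events to disk (using the writer created above):
###Code
# Sketch: flush pending summary events for this writer to the log directory.
tf.summary.flush(writer)
###Output
_____no_output_____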
###Markdown
Advanced automatic differentiation topics

Dynamic models

`tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except there are gradients and it is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
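###Markdown
A quick usage sketch for the routine above, on a simple convex function (the function and starting point are arbitrary illustrations):
###Code
# Sketch: one backtracking line-search step on f(x) = sum(x^2) from x = [2, 2].
fn = lambda t: tf.reduce_sum(t * t)
x0 = tf.constant([2.0, 2.0])
x_new, f_new = line_search_step(fn, x0)
print(x_new.numpy(), f_new.numpy())  # moves toward [0. 0.] with f = 0.0
###Output
_____no_output_____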
###Markdown
Custom gradients

Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance

Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution

TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter.

Eager execution is a flexible machine learning platform for research and experimentation, providing:

* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.
* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.
* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.

Eager execution supports most TensorFlow operations and GPU acceleration.

Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks.

Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients.

Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow

A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime.

Eager training

Computing gradients

[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later.

You can use `tf.GradientTape` to train and/or compute gradients in eager execution. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model

The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in both eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers

`tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables.

Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With TF 1.x graph execution, program state (such as the variables) is stored in global collections and its lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, a variable persists until the last reference to the object is removed, and it is then deleted.
###Code
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb). `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
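###Markdown
The `restore` call above returns a status object. As a sketch (assuming the status-assertion methods described in the checkpoint guide, such as `assert_existing_objects_matched` and `expect_partial`), you can use it to verify how much of the checkpoint was matched:
###Code
# Sketch: check what the restore actually matched.
status = root.restore(tf.train.latest_checkpoint(checkpoint_dir))
status.assert_existing_objects_matched()  # Every object built so far was found.
# If you deliberately restore only part of a checkpoint, silence the warnings:
# status.expect_partial()
###Output
_____no_output_____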
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(1000):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
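###Markdown
To inspect the written events, point TensorBoard at the log directory. Inside a notebook the TensorBoard magics can be used (a sketch; outside a notebook, run the `tensorboard` command-line tool instead):
###Code
# Sketch: open TensorBoard on the log directory from within the notebook.
# Outside a notebook: run `tensorboard --logdir tb` in a shell.
%load_ext tensorboard
%tensorboard --logdir tb
###Output
_____no_output_____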
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that it computes gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
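###Markdown
As a quick check, the sketch below applies `line_search_step` to a simple quadratic; the function and starting point are made up purely for illustration:
###Code
# Sketch: one backtracking line-search step on f(x) = (x - 3)^2.
fn = lambda x: (x - 3.0) ** 2     # Minimum at x = 3.
init_x = tf.constant(10.0)
x, value = line_search_step(fn, init_x)
print(x.numpy(), value.numpy())   # This call backtracks to x = 3.0, value = 0.0.
###Output
_____no_output_____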
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
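###Markdown
A quick sketch of the clipping in action (the values here are just for illustration): the forward pass is an identity, while the backward pass clips the norm of the incoming gradient.
###Code
# Sketch: the gradient flowing back through clip_gradient_by_norm is clipped.
x = tf.constant([[10.0, -10.0]])
with tf.GradientTape() as tape:
  tape.watch(x)
  y = clip_gradient_by_norm(x, 1.0)  # Identity forward, clipped backward.
  loss = tf.reduce_sum(3.0 * y)
g = tape.gradient(loss, x)
print(g.numpy(), tf.norm(g).numpy())  # The gradient norm is at most 1.0.
###Output
_____no_output_____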
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass, making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs, you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing:
* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.
* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.
* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.

Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients eagerly; it is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next:
1. Create the model.
2. Compute the derivatives of the loss function with respect to the model parameters.
3. Apply a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
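###Markdown
Because `Linear` subclasses `tf.keras.Model`, the built-in methods mentioned above are also available. A minimal sketch of training the same kind of model with `compile`/`fit` instead of the manual loop (the `keras_linear` name is just for illustration):
###Code
# Sketch: train a fresh Linear model with the built-in Keras loop.
keras_linear = Linear()
keras_linear.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                     loss='mse')
keras_linear.fit(training_inputs, training_outputs, epochs=5, batch_size=32, verbose=0)
print("W = {}, B = {}".format(keras_linear.W.numpy(), keras_linear.B.numpy()))
###Output
_____no_output_____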
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except that it computes gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution View on TensorFlow.org Run in Google Colab View source on GitHub TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. For acollection of examples running in eager execution, see:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning ofthe program or console session. Do not add this operation to other modules thatthe program calls.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph execution environments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Build a model Many machine learning models are represented by composing layers. When using TensorFlow with eager execution you can either write your own layers or use a layer provided in the `tf.keras.layers` package. While you can use any Python object to represent a layer, TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit from it to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
###Markdown
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above, as it has a superset of its functionality (it can also add a bias). When composing layers into models, you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`. This is a container for layers that is a layer itself, allowing `tf.keras.Model` objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
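###Markdown
Calling the model once creates its variables; a small sketch (the dummy batch below is made up for illustration):
###Code
# Sketch: parameters are created on the first call with real input.
dummy_batch = tf.random_normal([1, 784])
print(len(model.variables))   # => 0 before the first call.
result = model(dummy_batch)
print(result.shape)           # => (1, 10)
print(len(model.variables))   # => 4: kernel and bias for each Dense layer.
###Output
_____no_output_____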
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class since the parameters are set the first time input is passed to the layer. `tf.keras.layers` classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. `tf.GradientTape` is an opt-in feature to provide maximal performance when not tracing. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers`tf.Variable` objects store mutable `tf.Tensor` values accessed duringtraining to make automatic differentiation easier. The parameters of a model canbe encapsulated in classes as variables.Better encapsulate model parameters by using `tf.Variable` with`tf.GradientTape`. For example, the automatic differentiation example abovecan be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and its lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, a variable persists until the last reference to the object is removed, and it is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics `tfe.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tfe.metrics.result` method, for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. `tf.contrib.summary` is compatible with both eager and graph execution environments. Summary operations, such as `tf.contrib.summary.scalar`, are inserted during model construction. For example, to record summaries once every 100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models`tf.GradientTape` can also be used in dynamic models. This example for a[backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search)algorithm looks like normal NumPy code, except there are gradients and isdifferentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients `tf.GradientTape` is a powerful interface for computing gradients, but there is another [Autograd](https://github.com/HIPS/autograd)-style API available for automatic differentiation. These functions are useful if writing math code with only tensors and gradient functions, and without `tf.Variable`s:
* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.
* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments.

In the following example, `tfe.gradients_function` takes the `square` function as an argument and returns a function that computes the partial derivatives of `square` with respect to its inputs. To calculate the derivative of `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
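###Markdown
`tfe.value_and_gradients_function`, described above, returns both pieces at once. A short sketch (assuming the value-plus-gradient-list return described in the bullet above):
###Code
# Sketch: get the value and the gradients of `square` in a single call.
value_and_grad = tfe.value_and_gradients_function(square)
value, grads = value_and_grad(3.)
print(value.numpy())     # => 9.0
print(grads[0].numpy())  # => 6.0
###Output
_____no_output_____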
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients in eager and graphexecution. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
Benchmarks For compute-heavy models, such as [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) training on a GPU, eager execution performance is comparable to graph execution. But this gap grows larger for models with less computation, and there is work to be done on optimizing hot code paths for models with lots of small operations. Work with graphs While eager execution makes development and debugging more interactive, TensorFlow graph execution has advantages for distributed training, performance optimizations, and production deployment. However, writing graph code can feel different than writing regular Python code and can be more difficult to debug. For building and training graph-constructed models, the Python program first builds a graph representing the computation, then invokes `Session.run` to send the graph for execution on the C++-based runtime. This provides:
* Automatic differentiation using static autodiff.
* Simple deployment to a platform-independent server.
* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).
* Compilation and kernel fusion.
* Automatic distribution and replication (placing nodes on the distributed system).

Deploying code written for eager execution is more difficult: either generate a graph from the model, or run the Python runtime and code directly on the server. Write compatible code The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled. Most TensorFlow operations work during eager execution, but there are some things to keep in mind:
* Use `tf.data` for input processing instead of queues. It's faster and easier.
* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.
* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)
* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution.

It's best to write code for both eager execution *and* graph execution. This gives you eager's interactive experimentation and debuggability with the distributed performance benefits of graph execution. Write, debug, and iterate in eager execution, then import the model graph for production deployment. Use `tf.train.Checkpoint` to save and restore model variables; this allows movement between eager and graph execution environments. See the examples in: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environment Selectively enable eager execution in a TensorFlow graph environment using `tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not* been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration.Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
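###Markdown
Because the input shape was declared, the model is already built and can be inspected before training; a small sketch:
###Code
# Sketch: inspect the layer stack and parameter counts.
mnist_model.summary()
print(len(mnist_model.trainable_variables))  # => 6: kernel and bias per layer.
###Output
_____no_output_____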
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
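###Markdown
As a hedged follow-up sketch (not in the original guide), the same eager-style loop can be reused to evaluate the trained model with a `tf.keras.metrics` object; the `accuracy` name below is an illustrative choice:
###Code
# Measure accuracy on the (training) dataset with a forward pass only.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
for images, labels in dataset:
  logits = mnist_model(images, training=False)
  accuracy.update_state(labels, logits)
print("Training-set accuracy: {:.3f}".format(accuracy.result().numpy()))
###Output
_____no_output_____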
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Apply a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
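###Markdown
A small additional sketch (assuming the same `tf.keras.metrics` API): metric objects accumulate state across calls, so reset them between epochs or evaluation runs:
###Code
m = tf.keras.metrics.Mean("loss")
m(2)
m(4)
print(m.result().numpy())  # => 3.0
m.reset_states()           # clear the accumulated state
m(10)
print(m.result().numpy())  # => 10.0
###Output
_____no_output_____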
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
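###Markdown
For illustration only (a usage sketch, not from the original guide, passing the norm as a tensor so that every input has a gradient entry or `None`), calling `clip_gradient_by_norm` leaves the forward value unchanged while the gradient flowing back through it is clipped:
###Code
v = tf.constant([3.0, 4.0])
with tf.GradientTape() as tape:
  tape.watch(v)
  out = clip_gradient_by_norm(v, tf.constant(1.0))  # identity in the forward pass
  loss = tf.reduce_sum(out * out)
# Without clipping the gradient would be 2*v = [6, 8]; here its norm is clipped to 1.
print(tape.gradient(loss, v).numpy())  # => [0.6 0.8]
###Output
_____no_output_____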
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Eager Execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. For a collection of examples running in eager execution, see: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage Upgrade to the latest version of TensorFlow:
###Code
!pip install --upgrade tensorflow
###Output
_____no_output_____
###Markdown
To start eager execution, add `tf.enable_eager_execution()` to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph execution environments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Build a model Many machine learning models are represented by composing layers. When using TensorFlow with eager execution you can either write your own layers or use a layer provided in the `tf.keras.layers` package. While you can use any Python object to represent a layer, TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit from it to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
###Markdown
Use the `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above as it has a superset of its functionality (it can also add a bias). When composing layers into models you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`. This is a container for layers that is a layer itself, allowing `tf.keras.Model` objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
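###Markdown
As a quick usage sketch (with a hypothetical all-zeros batch), calling the subclassed model builds its variables on first use and returns the final layer's output:
###Code
batch = tf.zeros([1, 1, 784])  # hypothetical input batch
result = model(batch)
print(result.shape)  # => (1, 1, 10)
###Output
_____no_output_____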
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class since the parameters are set the first time input is passed to the layer. `tf.keras.layers` classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. `tf.GradientTape` is an opt-in feature to provide maximal performance when not tracing. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 80 == 0:
print()
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.variables)
optimizer.apply_gradients(zip(grads, mnist_model.variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
This example uses the [dataset.py module](https://github.com/tensorflow/models/blob/master/official/mnist/dataset.py) from the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist); download this file to your local directory. Run the following to download the MNIST data files to your working directory and prepare a `tf.data.Dataset` for training. Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, and are then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
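###Markdown
A short sketch of in-place variable updates in eager execution (illustrative, not from the original guide):
###Code
v = tf.Variable(1.0)
v.assign(2.0)       # overwrite the value in place
v.assign_add(0.5)   # increment in place
print(v.numpy())    # => 2.5
###Output
_____no_output_____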
###Markdown
Object-based saving `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = '/path/to/model_dir'
os.makedirs(checkpoint_dir, exist_ok=True)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics `tfe.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tfe.metrics.result` method, for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. `tf.contrib.summary` is compatible with both eager and graph execution environments. Summary operations, such as `tf.contrib.summary.scalar`, are inserted during model construction. For example, to record summaries once every 100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients `tf.GradientTape` is a powerful interface for computing gradients, but there is another [Autograd](https://github.com/HIPS/autograd)-style API available for automatic differentiation. These functions are useful if writing math code with only tensors and gradient functions, and without `tf.Variables`: * `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters. * `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments. In the following example, `tfe.gradients_function` takes the `square` function as an argument and returns a function that computes the partial derivatives of `square` with respect to its inputs. To calculate the derivative of `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients in eager and graph execution. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
Benchmarks For compute-heavy models, such as [ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50) training on a GPU, eager execution performance is comparable to graph execution. But this gap grows larger for models with less computation and there is work to be done for optimizing hot code paths for models with lots of small operations. Work with graphs While eager execution makes development and debugging more interactive, TensorFlow graph execution has advantages for distributed training, performance optimizations, and production deployment. However, writing graph code can feel different than writing regular Python code and more difficult to debug. For building and training graph-constructed models, the Python program first builds a graph representing the computation, then invokes `Session.run` to send the graph for execution on the C++-based runtime. This provides: * Automatic differentiation using static autodiff. * Simple deployment to a platform independent server. * Graph-based optimizations (common subexpression elimination, constant-folding, etc.). * Compilation and kernel fusion. * Automatic distribution and replication (placing nodes on the distributed system). Deploying code written for eager execution is more difficult: either generate a graph from the model, or run the Python runtime and code directly on the server. Write compatible code The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled. Most TensorFlow operations work during eager execution, but there are some things to keep in mind: * Use `tf.data` for input processing instead of queues. It's faster and easier. * Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables. * Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.) * Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution. It's best to write code for both eager execution *and* graph execution. This gives you eager's interactive experimentation and debuggability with the distributed performance benefits of graph execution. Write, debug, and iterate in eager execution, then import the model graph for production deployment. Use `tf.train.Checkpoint` to save and restore model variables; this allows movement between eager and graph execution environments. See the examples in: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environment Selectively enable eager execution in a TensorFlow graph environment using `tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not* been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Apply a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
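###Markdown
As a brief, hedged sketch related to the note that follows (it is not part of the original guide), the status object returned by `restore` exposes assertion helpers such as `assert_existing_objects_matched`:
###Code
status = root.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Raises if objects already built in the program could not be matched
# against the checkpoint; variables created later are restored lazily.
status.assert_existing_objects_matched()
###Output
_____no_output_____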
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics `tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
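###Markdown
If the TensorBoard notebook extension is installed (an assumption, it is not part of this guide), the logs written above can be viewed inline:
###Code
%load_ext tensorboard
%tensorboard --logdir tb/
###Output
_____no_output_____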
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
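###Markdown
A hypothetical usage sketch: minimizing `f(x) = (x - 2)^2` from a starting point of `0` reaches the minimum after a few halvings of the rate:
###Code
fn = lambda x: tf.reduce_sum((x - 2.0) ** 2)
x, value = line_search_step(fn, tf.constant([0.0]))
print(x.numpy(), value.numpy())  # => [2.] 0.0
###Output
_____no_output_____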
###Markdown
Custom gradients Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing: * *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In TensorFlow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
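###Markdown
A small complementary sketch (assuming NumPy as imported above): the conversions also work explicitly in both directions:
###Code
arr = np.array([[5, 6], [7, 8]])
t = tf.convert_to_tensor(arr)  # NumPy ndarray -> tf.Tensor
back = t.numpy()               # tf.Tensor -> NumPy ndarray
print(type(t).__name__, type(back).__name__)
###Output
_____no_output_____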
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard it. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
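###Markdown
By default a tape can compute only one gradient. As a minimal sketch (not part of the original guide), passing `persistent=True` lets you compute several gradients from the same recorded operations; delete the tape when you are done with it:
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape(persistent=True) as tape:
  y = w * w
  z = y * y
# Both gradients come from the same persistent tape.
print(tape.gradient(y, w))  # => [[2.]], since dy/dw = 2 * w
print(tape.gradient(z, w))  # => [[4.]], since z = w**4 and dz/dw = 4 * w**3
del tape  # Release the resources held by the persistent tape.
###Output
_____no_output_____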
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
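###Markdown
For comparison, a minimal sketch (not part of the original guide) of the built-in alternative: the same model, optimizer, and loss can be wired together with `Model.compile` and trained with `Model.fit`:
###Code
# Sketch of the built-in training loop equivalent to the custom loop above.
mnist_model.compile(optimizer=optimizer, loss=loss_object)
mnist_model.fit(dataset, epochs=3)
###Output
_____no_output_____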
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Choose a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and then the variable is deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process.This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
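###Markdown
`restore` returns a status object. As a sketch (assuming the standard status assertions on `tf.train.Checkpoint`), you can check that the restore matched the objects you expected:
###Code
status = root.restore(tf.train.latest_checkpoint(checkpoint_dir))
# Raises if variables that exist in both the checkpoint and the object graph
# were not matched; call status.expect_partial() instead to silence warnings
# about checkpoint values that are intentionally left unused.
status.assert_existing_objects_matched()
###Output
_____no_output_____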
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics`tf.keras.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tf.keras.metrics.result` method,for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
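###Markdown
A small usage sketch (not part of the original guide): the forward value is unchanged, but the gradient flowing back through the op is clipped to the requested norm:
###Code
x = tf.constant(3.0)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = clip_gradient_by_norm(x, 1.0)
  z = y * y
# Without clipping, dz/dx would be 2 * x = 6; the custom gradient clips the
# incoming gradient (2 * y = 6) to norm 1 before passing it back to x.
print(tape.gradient(z, x))  # => tf.Tensor(1.0, ...)
###Output
_____no_output_____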
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. For a collection of examples running in eager execution, see: [tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning of the program or console session. Do not add this operation to other modules that the program calls.
###Code
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph executionenvironments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Build a modelMany machine learning models are represented by composing layers. Whenusing TensorFlow with eager execution you can either write your own layers oruse a layer provided in the `tf.keras.layers` package.While you can use any Python object to represent a layer,TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit fromit to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
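###Markdown
A small usage sketch (not part of the original guide): the kernel is created on the first call, with its first dimension inferred from the input:
###Code
layer = MySimpleLayer(output_units=4)
# build() runs on the first call and creates a [3, 4] kernel, so the output
# for a [2, 3] input has shape [2, 4].
print(layer(tf.zeros([2, 3])).shape)
###Output
_____no_output_____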
###Markdown
Use `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above as it hasa superset of its functionality (it can also add a bias).When composing layers into models you can use `tf.keras.Sequential` to representmodels which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`.This is a container for layers that is a layer itself, allowing `tf.keras.Model`objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class sincethe parameters are set the first time input is passed to the layer.`tf.keras.layers` classes create and contain their own model variables thatare tied to the lifetime of their layer objects. To share layer variables, sharetheir objects. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.`tf.GradientTape` is an opt-in feature to provide maximal performance whennot tracing. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 80 == 0:
print()
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.variables)
optimizer.apply_gradients(zip(grads, mnist_model.variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Model parameters are better encapsulated by using `tf.Variable` together with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager executionWith graph execution, program state (such as the variables) is stored in globalcollections and their lifetime is managed by the `tf.Session` object. Incontrast, during eager execution the lifetime of state objects is determined bythe lifetime of their corresponding Python object. Variables are objectsDuring eager execution, variables persist until the last reference to the objectis removed, and is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving`tf.train.Checkpoint` can save and restore `tf.Variable`s to and fromcheckpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = '/path/to/model_dir'
os.makedirs(checkpoint_dir, exist_ok=True)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics`tfe.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tfe.metrics.result` method,for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard[TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool forunderstanding, debugging and optimizing the model training process. It usessummary events that are written while executing the program.`tf.contrib.summary` is compatible with both eager and graph executionenvironments. Summary operations, such as `tf.contrib.summary.scalar`, areinserted during model construction. For example, to record summaries once every100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients`tf.GradientTape` is a powerful interface for computing gradients, but thereis another [Autograd](https://github.com/HIPS/autograd)-style API available forautomatic differentiation. These functions are useful if writing math code withonly tensors and gradient functions, and without `tf.Variables`:* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments.In the following example, `tfe.gradients_function` takes the `square`function as an argument and returns a function that computes the partialderivatives of `square` with respect to its inputs. To calculate the derivativeof `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
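###Markdown
Based on the description above, `tfe.value_and_gradients_function` returns the function value together with the list of derivatives; a minimal sketch assuming the `tf.contrib.eager` API described here:
###Code
value_and_grad = tfe.value_and_gradients_function(square)
value, grads = value_and_grad(3.)
print(value.numpy())     # => 9.0
print(grads[0].numpy())  # => 6.0
###Output
_____no_output_____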
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients in eager and graphexecution. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)training on a GPU, eager execution performance is comparable to graph execution.But this gap grows larger for models with less computation and there is work tobe done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive,TensorFlow graph execution has advantages for distributed training, performanceoptimizations, and production deployment. However, writing graph code can feeldifferent than writing regular Python code and more difficult to debug.For building and training graph-constructed models, the Python program firstbuilds a graph representing the computation, then invokes `Session.run` to sendthe graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system).Deploying code written for eager execution is more difficult: either generate agraph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graphexecution. Do this by simply running the same code in a new Python session whereeager execution is not enabled.Most TensorFlow operations work during eager execution, but there are some thingsto keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution.It's best to write code for both eager execution *and* graph execution. Thisgives you eager's interactive experimentation and debuggability with thedistributed performance benefits of graph execution.Write, debug, and iterate in eager execution, then import the model graph forproduction deployment. Use `tf.train.Checkpoint` to save and restore modelvariables, this allows movement between eager and graph execution environments.See the examples in:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using`tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not*been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
import os
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. The TensorFlow`tf.math` operations convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flowA major benefit of eager execution is that all the functionality of the hostlanguage is available while your model is executing. So, for example,it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these valuesat runtime. Eager training Computing gradients[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation)is useful for implementing machine learning algorithms such as[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for trainingneural networks. During eager execution, use `tf.GradientTape` to traceoperations for computing gradients later.You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, allforward-pass operations get recorded to a "tape". To compute the gradient, playthe tape backwards and then discard. A particular `tf.GradientTape` can onlycompute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a modelThe following example creates a multi-layer model that classifies the standardMNIST handwritten digits. It demonstrates the optimizer and layer APIs to buildtrainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check whether a condition holds. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`. For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next: 1. Create the model. 2. Compute the derivatives of the loss function with respect to the model parameters. 3. Choose a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and then the variable is deleted. Object-based saving A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process.This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects,without requiring hidden variables. To record the state of a `model`,an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details. Object-oriented metrics`tf.keras.metrics` are stored as objects. Update a metric by passing the new data tothe callable, and retrieve the result using the `tf.keras.metrics.result` method,for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging, and optimizing the model training process. It uses summary events that are written while executing the program. You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example of a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
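###Markdown
A quick usage sketch (not part of the original guide): minimizing f(x) = x * x from x = 3, the backtracking loop halves the rate until the sufficient-decrease condition holds:
###Code
x, value = line_search_step(lambda t: t * t, tf.constant(3.0))
print(x.numpy(), value.numpy())  # => 0.0 0.0 for this quadratic example
###Output
_____no_output_____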
###Markdown
Custom gradientsCustom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to theinputs, outputs, or intermediate results. For example, here's an easy way to clipthe norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for asequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a customgradient. The implementation below reuses the value for `tf.exp(x)` that iscomputed during the forward pass—making it more efficient by eliminatingredundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
PerformanceComputation is automatically offloaded to GPUs during eager execution. If youwant control over where a computation runs you can enclose it in a`tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute itsoperations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager essentials TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter. Eager execution is a flexible machine learning platform for research and experimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data. * *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting. * *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models. Eager execution supports most TensorFlow operations and GPU acceleration. Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks. Setup and basic usage
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x #gpu
except Exception:
pass
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now theyimmediately evaluate and return their values to Python. `tf.Tensor` objectsreference concrete values instead of symbolic handles to nodes in a computationalgraph. Since there isn't a computational graph to build and run later in asession, it's easy to inspect results using `print()` or a debugger. Evaluating,printing, and checking tensor values does not break the flow for computinggradients.Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPyoperations accept `tf.Tensor` arguments. TensorFlow[math operations](https://www.tensorflow.org/api_guides/python/math_ops) convertPython objects and NumPy arrays to `tf.Tensor` objects. The`tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow

A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime.

Eager training

Computing gradients

[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later.

You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
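###Markdown
The note above says a non-persistent tape can compute only one gradient. If you need several gradients from the same recorded computation, TensorFlow's `tf.GradientTape(persistent=True)` can be used; the cell below is a small added sketch (not part of the original guide) with illustrative values.
###Code
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
  tape.watch(x)   # constants must be watched explicitly
  y = x * x
  z = y * y
print(tape.gradient(z, x))  # 108.0 (= 4 * x**3 at x = 3)
print(tape.gradient(y, x))  # 6.0   (= 2 * x at x = 3)
del tape  # drop the reference so the tape's resources are released
###Output
_____no_output_____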
###Markdown
Train a model

The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train():
for epoch in range(3):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train()
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers

`tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables.

Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution

With TF 1.x graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object.

Variables are objects

During eager execution, variables persist until the last reference to the object is removed, and the variable is then deleted.
###Code
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("gpu:0"):
print("GPU enabled")
v = tf.Variable(tf.random.normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving

This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb). `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details.

Object-oriented metrics

`tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics

Dynamic models

`tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients

Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
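###Markdown
A quick usage sketch of `clip_gradient_by_norm` (added for illustration; the numbers are arbitrary): scaling the clipped intermediate by 10 makes the incoming gradient exceed the norm of 2, so it is clipped before flowing back through `v * v`.
###Code
v = tf.Variable(4.0)
with tf.GradientTape() as tape:
  y = 10.0 * clip_gradient_by_norm(v * v, 2.0)
# Upstream gradient 10.0 is clipped to 2.0, then multiplied by d(v*v)/dv = 8.0
print(tape.gradient(y, v))  # => 16.0
###Output
_____no_output_____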
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance

Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager execution

TensorFlow's eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well. To follow along with this guide, run the code samples below in an interactive `python` interpreter.

Eager execution is a flexible machine learning platform for research and experimentation, providing:

* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.
* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.
* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.

Eager execution supports most TensorFlow operations and GPU acceleration.

Note: Some models may experience increased overhead with eager execution enabled. Performance improvements are ongoing, but please [file a bug](https://github.com/tensorflow/tensorflow/issues) if you find a problem and share your benchmarks.

Setup and basic usage
###Code
import os
import tensorflow as tf
import cProfile
###Output
_____no_output_____
###Markdown
In Tensorflow 2.0, eager execution is enabled by default.
###Code
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave—now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients.

Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. The TensorFlow `tf.math` operations convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
Dynamic control flow

A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime.

Eager training

Computing gradients

[Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later.

You can use `tf.GradientTape` to train and/or compute gradients in eager. It is especially useful for complicated training loops. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model

The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu',
input_shape=(None, None, 1)),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While Keras models have a built-in training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager execution:
###Code
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss_history = []
###Output
_____no_output_____
###Markdown
Note: Use the assert functions in `tf.debugging` to check if a condition holds up. This works in eager and graph execution.
###Code
def train_step(images, labels):
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
# Add asserts to check the shape of the output.
tf.debugging.assert_equal(logits.shape, (32, 10))
loss_value = loss_object(labels, logits)
loss_history.append(loss_value.numpy().mean())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables))
def train(epochs):
for epoch in range(epochs):
for (batch, (images, labels)) in enumerate(dataset):
train_step(images, labels)
print ('Epoch {} finished'.format(epoch))
train(epochs = 3)
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers

`tf.Variable` objects store mutable `tf.Tensor`-like values accessed during training to make automatic differentiation easier. The collections of variables can be encapsulated into layers or models, along with methods that operate on them. See [Custom Keras layers and models](./keras/custom_layers_and_models.ipynb) for details. The main difference between layers and models is that models add methods like `Model.fit`, `Model.evaluate`, and `Model.save`.

For example, the automatic differentiation example above can be rewritten:
###Code
class Linear(tf.keras.Model):
def __init__(self):
super(Linear, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random.normal([NUM_EXAMPLES])
noise = tf.random.normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
###Output
_____no_output_____
###Markdown
Next:
1. Create the model.
2. Compute the derivatives of the loss function with respect to the model parameters.
3. Apply a strategy for updating the variables based on the derivatives.
###Code
model = Linear()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
steps = 300
for i in range(steps):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]))
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Note: Variables persist until the last reference to the Python object is removed, and the variable is then deleted.

Object-based saving

A `tf.keras.Model` includes a convenient `save_weights` method allowing you to easily create a checkpoint:
###Code
model.save_weights('weights')
status = model.load_weights('weights')
###Output
_____no_output_____
###Markdown
Using `tf.train.Checkpoint` you can take full control over this process. This section is an abbreviated version of the [guide to training checkpoints](./checkpoint.ipynb).
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
checkpoint_dir = 'path/to/model_dir'
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model)
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Note: In many training loops, variables are created after `tf.train.Checkpoint.restore` is called. These variables will be restored as soon as they are created, and assertions are available to ensure that a checkpoint has been fully loaded. See the [guide to training checkpoints](./checkpoint.ipynb) for details.

Object-oriented metrics

`tf.keras.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tf.keras.metrics.result` method, for example:
###Code
m = tf.keras.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
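###Markdown
If you reuse the same metric object, for example across epochs, you can clear its accumulated state with `reset_states`; this small example is an addition to the original guide.
###Code
m.reset_states()
m.result()  # => 0.0 after the accumulated values are cleared
###Output
_____no_output_____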
###Markdown
Summaries and TensorBoard

[TensorBoard](https://tensorflow.org/tensorboard) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program.

You can use `tf.summary` to record summaries of variables in eager execution. For example, to record summaries of `loss` once every 100 training steps:
###Code
logdir = "./tb/"
writer = tf.summary.create_file_writer(logdir)
steps = 1000
with writer.as_default(): # or call writer.set_as_default() before the loop.
for i in range(steps):
step = i + 1
# Calculate loss with your real train function.
loss = 1 - 0.001 * step
if step % 100 == 0:
tf.summary.scalar('loss', loss, step=step)
!ls tb/
###Output
_____no_output_____
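###Markdown
To view the recorded summaries inside the notebook, you can use the TensorBoard notebook extension, assuming it is installed in your environment; this cell is an added example and not part of the original guide.
###Code
# Load the TensorBoard notebook extension and point it at the log directory above
%load_ext tensorboard
%tensorboard --logdir tb/
###Output
_____no_output_____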
###Markdown
Advanced automatic differentiation topics

Dynamic models

`tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically tracked.
# But to calculate a gradient from a tensor, you must `watch` it.
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Custom gradients

Custom gradients are an easy way to override gradients. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.math.log(1 + tf.exp(x))
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# The gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass—making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.math.log(1 + e), grad
def grad_log1pexp(x):
with tf.GradientTape() as tape:
tape.watch(x)
value = log1pexp(x)
return tape.gradient(value, x)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(tf.constant(0.)).numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(tf.constant(100.)).numpy()
###Output
_____no_output_____
###Markdown
Performance

Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random.normal(shape), steps)))
# Run on GPU, if available:
if tf.config.experimental.list_physical_devices("GPU"):
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random.normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.config.experimental.list_physical_devices("GPU"):
x = tf.random.normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
###Output
_____no_output_____ |
assignments/05/10215075/10215075_Ahmad_Nawwaaf_Work_of_Friction.ipynb | ###Markdown
Work of the Friction Force

Ahmad Nawwaaf1, Tim Pendukung2

Undergraduate Program in Physics, Institut Teknologi Bandung, Jalan Gensha 10, Bandung 40132, Indonesia

[email protected], https://github.com/anawwaaf; [email protected], https://github.com/timpendukung

The work done by the friction force is an unwanted form of work, because the energy expended, usually in the form of heat or sound released to the surroundings, can no longer be used by the system, so the energy of the system decreases.

Motion of a body on a rough horizontal floor

The system considered is a body moving on a rough horizontal floor. The body is given a certain initial velocity and decelerates until it stops because of the kinetic friction force between the body and the rough floor.

Parameters

Some of the parameters used are listed in the following table.

Table 1. Symbols with their units and meanings.

|Symbol | Unit | Meaning|
|:- | :- | :-|
|$t$ | s | time|
|$v_{0}$ | m/s | initial velocity|
|$x_{0}$ | m | initial position|
|$v$ | m/s | velocity at time $t$|
|$x$ | m | position at time $t$|
|$a$ | m/s$^2$ | acceleration|
|$\mu_{k}$ | - | coefficient of kinetic friction|
|$f_{k}$ | N | kinetic friction force|
|$m$ | kg | mass of the body|
|$F$ | N | total force acting on the body|
|$N$ | N | normal force|
|$w$ | N | gravitational force|

The symbols in Table [1](tab1) will be assigned values later, when they are implemented in the program.

Equations

The equations to be used are listed in this section.

Kinematics

The relation between the velocity $v$, the initial velocity $v_{0}$, the acceleration $a$, and the time $t$ is given by
\begin{equation}\label{eqn:kinematics-v-a-t}\tag{1}v = v_{0} + at\end{equation}

The position of the body $x$ depends on the initial position $x_{0}$, the initial velocity $v_{0}$, the acceleration $a$, and the time $t$ through
\begin{equation}\label{eqn:kinematics-x-v-a-t}\tag{2}x = x_{0} + v_{0} t + \tfrac12 at^{2}\end{equation}

Besides the two previous equations, there is also the following equation
\begin{equation}\label{eqn:kinematics-v-x-a}\tag{3}v^2 = v_{0}^{2} + 2a(x - x_{0})\end{equation}
which relates the velocity $v$ to the initial velocity $v_{0}$, the acceleration $a$, and the distance traveled $x - x_{0}$.

Dynamics

Newton's first law states that a body initially at rest remains at rest, and a body initially moving with constant velocity keeps moving with constant velocity, if no force acts on the body or the sum of the acting forces is zero
\begin{equation}\label{eqn:newtons-law-1}\tag{4}\sum F = 0\end{equation}

If a force acts on a body of mass $m$, or the sum of the forces is not zero,
\begin{equation}\label{eqn:newtons-law-2}\tag{5}\sum F = ma\end{equation}
then the state of motion of the body changes through the acceleration $a$, with $m > 0$ and $a \ne 0$.

Work

The work done by a force $F$ between the initial position $x_{0}$ and the final position $x$ can be obtained through
\begin{equation}\label{eqn:work-1}\tag{6}W = \int_{x_{0}}^{x} F dx\end{equation}
or through
\begin{equation}\label{eqn:work-2}\tag{7}W = \Delta K\end{equation}
where $K$ is the kinetic energy. Equation ([7](eqn7)) gives the work done by all forces. Therefore, if $F$ is the only force acting on the body, this equation reduces to Equation ([6](eqn6)).

System

An illustration of the system needs to be given so that it can be visualized and the problem becomes easier to solve. A diagram of the forces acting on the body also needs to be presented.

Illustration

The system of a body of mass $m$ moving on a rough floor can be illustrated as follows.
###Code
%%html
<svg
width="320"
height="140"
viewBox="0 0 320 140.00001"
id="svg2"
version="1.1"
inkscape:version="1.1.2 (b8e25be833, 2022-02-05)"
sodipodi:docname="mass-horizontal-rough-surface.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<defs
id="defs4">
<marker
style="overflow:visible"
id="TriangleOutM"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479" />
</marker>
<marker
style="overflow:visible"
id="marker11604"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11602" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Mend"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11361" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-1" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-35"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-0" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-0"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-4" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-37"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-9" />
</marker>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1.5"
inkscape:cx="173"
inkscape:cy="97.333333"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:snap-bbox="false"
inkscape:snap-global="false"
units="px"
showborder="true"
inkscape:showpageshadow="true"
borderlayer="false"
inkscape:window-width="1366"
inkscape:window-height="705"
inkscape:window-x="-8"
inkscape:window-y="-8"
inkscape:window-maximized="1"
inkscape:pagecheckerboard="0">
<inkscape:grid
type="xygrid"
id="grid970" />
</sodipodi:namedview>
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(0,-732.36216)">
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="120.0725"
y="759.6109"
id="text2711-6-2-9"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2"
x="120.0725"
y="759.6109"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923">v</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668">0</tspan></tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM)"
d="m 84.656156,757.55169 25.738704,1.3e-4"
id="path11252" />
<rect
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-opacity:1"
id="rect1007"
width="59"
height="59"
x="56.5"
y="772.86218"
rx="0"
ry="0" />
<path
style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
d="m 20,832.86218 280,-2e-5"
id="path1386" />
<rect
style="fill:#ffffff;fill-opacity:1;stroke:#c8c8c8;stroke-width:0.5;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:2, 2;stroke-dashoffset:0;stroke-opacity:1"
id="rect1007-2"
width="59"
height="59"
x="225.16667"
y="772.86218"
rx="0"
ry="0" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#c8c8c8;fill-opacity:1;stroke:none"
x="236.05922"
y="759.6109"
id="text2711-6-2-9-9"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-8"
x="236.05922"
y="759.6109"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, ';fill:#c8c8c8;fill-opacity:1"><tspan
style="font-style:italic;fill:#c8c8c8;fill-opacity:1"
id="tspan9923-8">v</tspan> = 0</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="149.18359"
y="824.54877"
id="text2711-6-2-9-96"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-6"
x="149.18359"
y="824.54877"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan3028">μ<tspan
style="font-size:65%;baseline-shift:sub"
id="tspan3074">k</tspan></tspan> > 0</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="79.505844"
y="806.37714"
id="text2711-6-2-9-2"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-84"
x="79.505844"
y="806.37714"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">m</tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-37)"
d="m 33.785239,770.82609 -1.3e-4,25.7387"
id="path11252-5" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="29.173132"
y="759.45776"
id="text2711-6-2-9-8"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-2"
x="29.173132"
y="759.45776"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">g</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="79.368446"
y="849.21539"
id="text2711-6-2-9-23"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-3"
x="79.368446"
y="849.21539"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923-0">x</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668-9">0</tspan></tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="250.91145"
y="849.21539"
id="text2711-6-2-9-23-0"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-3-0"
x="250.91145"
y="849.21539"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923-0-3">x</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668-9-3" /></tspan></text>
</g>
</svg>
<br/>
Figure <a name='fig1'>1</a>. A body of mass $m$ moving on a rough horizontal
floor with kinetic friction coefficient $\mu_{k}$.
###Output
_____no_output_____
###Markdown
The final state of the body, i.e., when the velocity is $v = 0$, is shown in gray on the right-hand side of Figure [1](fig1).

Force diagram

The diagram of the forces acting on the body needs to be constructed from the information in Figure [1](fig1) and Table [1](tab1); it is given below.
###Code
%%html
<svg
width="320"
height="200"
viewBox="0 0 320 200.00001"
id="svg2"
version="1.1"
inkscape:version="1.1.2 (b8e25be833, 2022-02-05)"
sodipodi:docname="mass-horizontal-rough-surface-fbd.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<defs
id="defs4">
<marker
style="overflow:visible"
id="TriangleOutM"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479" />
</marker>
<marker
style="overflow:visible"
id="marker11604"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11602" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Mend"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11361" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-1" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-35"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-0" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-0"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-4" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-37"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-9" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-9"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-8" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-9-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-8-3" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-37-5"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-9-9" />
</marker>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1.2079428"
inkscape:cx="159.36185"
inkscape:cy="35.597712"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:snap-bbox="false"
inkscape:snap-global="false"
units="px"
showborder="true"
inkscape:showpageshadow="true"
borderlayer="false"
inkscape:window-width="1366"
inkscape:window-height="705"
inkscape:window-x="-8"
inkscape:window-y="-8"
inkscape:window-maximized="1"
inkscape:pagecheckerboard="0">
<inkscape:grid
type="xygrid"
id="grid970" />
</sodipodi:namedview>
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(0,-732.36216)">
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="148.01953"
y="766.72156"
id="text2711-6-2-9-23"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-3"
x="148.01953"
y="766.72156"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">N</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="251.40584"
y="806.94421"
id="text2711-6-2-9"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2"
x="251.40584"
y="806.94421"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923">v</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668" /></tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM)"
d="m 215.98949,804.88502 25.7387,1.3e-4"
id="path11252" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="153.68098"
y="915.71051"
id="text2711-6-2-9-2"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-84"
x="153.68098"
y="915.71051"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">w</tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-37)"
d="m 31.113403,791.97918 -1.3e-4,25.7387"
id="path11252-5" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="26.501303"
y="780.6109"
id="text2711-6-2-9-8"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-2"
x="26.501303"
y="780.6109"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">g</tspan></text>
<rect
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-opacity:1"
id="rect1007"
width="59"
height="59"
x="130.5"
y="792.86218"
rx="0"
ry="0" />
<g
id="g1363"
transform="translate(-6,20)">
<path
style="fill:none;stroke:#ff0000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-9)"
d="m 161.00001,831.69534 -45.73871,1.3e-4"
id="path11252-4" />
<path
style="fill:none;stroke:#0000ff;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-9-3)"
d="m 160.79738,832.36215 -1.3e-4,-75.7387"
id="path11252-4-6" />
</g>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-37-5)"
d="m 159.99967,822.02879 3.4e-4,75.73871"
id="path11252-5-0" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="85.624084"
y="854.51099"
id="text2711-6-2-9-8-4"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-2-1"
x="85.624084"
y="854.51099"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">f<tspan
style="font-size:65%;baseline-shift:sub"
id="tspan2197">k</tspan></tspan></text>
</g>
</svg>
<br>
Figure <a name='fig2'>2</a>. Diagram of the forces acting on the body
of mass $m$.
###Output
_____no_output_____
###Markdown
It can be seen that in the $y$ direction there are the normal force $N$ and the gravitational force $w$, while in the $x$ direction there is only the kinetic friction force $f_{k}$, which opposes the direction of motion of the body. The direction of motion is given by the direction of the velocity $v$.

Numerical method

The integral of a function $f(x)$ of the form
\begin{equation}\label{eqn:integral-1}\tag{8}A = \int_a^b f(x) dx\end{equation}
can be approximated by
\begin{equation}\label{eqn:integral-2}\tag{9}A \approx \sum_{i = 0}^{N-1} f\left[ \tfrac12(x_i + x_{i+1}) \right] \Delta x\end{equation}
which is known as the midpoint rectangle rule, where
\begin{equation}\label{eqn:integral-3}\tag{10}\Delta x = \frac{b - a}{N}\end{equation}
with $N$ the number of partitions. The variable $x_{i}$ in Equation ([9](eqn9)) is given by
\begin{equation}\label{eqn:integral-4}\tag{11}x_{i} = a + i\Delta x\end{equation}
with $i = 0, \dots, N$.

Solution

Applying Equations ([1](eqn1)), ([2](eqn2)), ([3](eqn3)), ([4](eqn4)), and ([5](eqn5)) to Figure [2](fig2) gives
\begin{equation}\label{eqn:friction}\tag{12}f_k = \mu_k mg\end{equation}
and the work it does is
\begin{equation}\label{eqn:friction-work}\tag{13}\begin{array}{rcl}W & = & \displaystyle \int_{x_0}^x f_k dx \newline& = & \displaystyle \int_{x_0}^x \mu_k m g dx \newline& = & \displaystyle m g \int_{x_0}^x \mu_k dx\end{array}\end{equation}
where the kinetic friction coefficient can be a function of position, $\mu_k = \mu_k(x)$.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.ion()
# Illustrative parameter values (assumptions for this sketch; they are not
# specified in the text above):
m = 1.0    # mass (kg)
g = 9.8    # gravitational acceleration (m/s^2)

def mu_k(x):
    # kinetic friction coefficient as a function of position (assumed form)
    return 0.2 + 0.1 * x

# set integral lower and upper bounds (x0 = a, x = b)
a = 0.0
b = 1.0

# midpoint-rule integration of W = m g * int_{x0}^{x} mu_k(x') dx'  (Eqs. 9-11)
N = 100
dx = (b - a) / N
x = []   # distance traveled, x - x0
y = []   # accumulated work W of the friction force
W = 0.0
for i in range(N):
    xm = a + (i + 0.5) * dx      # midpoint of partition i
    W += m * g * mu_k(xm) * dx   # contribution of partition i to Eq. (9)
    x.append((i + 1) * dx)
    y.append(W)

## plot results
fig, ax = plt.subplots()
ax.scatter(x, y)
ax.set_xlabel("$x - x_0$")
ax.set_ylabel("W")
from IPython import display
from IPython.core.display import HTML
HTML('''
<div>
Figure <a name='fig3'>3</a>. Curve of the work $W$ versus the distance traveled $x - x_0$.
</div>
''')
###Output
_____no_output_____ |
Chapter02/2.10 MNIST digits classification in TensorFlow 2.0.ipynb | ###Markdown
MNIST digit classification in TensorFlow 2.0

Now, we will see how we can perform MNIST handwritten digit classification using TensorFlow 2.0. It takes hardly a few lines of code compared to TensorFlow 1.x. As we learned, TensorFlow 2.0 uses Keras as its high-level API, so we just need to add `tf.keras` to the Keras code.

Import the libraries:
###Code
import warnings
warnings.filterwarnings('ignore')
import tensorflow as tf
print(tf.__version__)
###Output
2.0.0-alpha0
###Markdown
Load the dataset:
###Code
mnist = tf.keras.datasets.mnist
###Output
_____no_output_____
###Markdown
Create a train and test set:
###Code
(x_train,y_train), (x_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Normalize the x values by dividing by the maximum value of x, which is 255, and convert them to float:
###Code
x_train, x_test = tf.cast(x_train/255.0, tf.float32), tf.cast(x_test/255.0, tf.float32)
###Output
_____no_output_____
###Markdown
convert y values to int:
###Code
y_train, y_test = tf.cast(y_train,tf.int64),tf.cast(y_test,tf.int64)
###Output
_____no_output_____
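###Markdown
As an added sanity check (not part of the original recipe), confirm the shapes and dtypes before defining the model:
###Code
# MNIST should give 60000 training images of 28x28 pixels
print(x_train.shape, x_train.dtype)
print(y_train.shape, y_train.dtype)
###Output
_____no_output_____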
###Markdown
Define the sequential model:
###Code
model = tf.keras.models.Sequential()
###Output
_____no_output_____
###Markdown
Add the layers - We use a three-layered network. We apply ReLU activation at the first two layers and in the final output layer we apply softmax function:
###Code
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256, activation="relu"))
model.add(tf.keras.layers.Dense(128, activation="relu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Compile the model with Stochastic Gradient Descent, that is 'sgd' (we will learn about this in the next chapter) as optimizer and sparse_categorical_crossentropy as loss function and with accuracy as a metric:
###Code
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model for 10 epochs with batch_size as 32:
###Code
model.fit(x_train, y_train, batch_size=32, epochs=10)
###Output
Epoch 1/10
60000/60000 [==============================] - 6s 95us/sample - loss: 1.7537 - accuracy: 0.5562
Epoch 2/10
60000/60000 [==============================] - 5s 85us/sample - loss: 0.8721 - accuracy: 0.8102
Epoch 3/10
60000/60000 [==============================] - 6s 94us/sample - loss: 0.5765 - accuracy: 0.8612
Epoch 4/10
60000/60000 [==============================] - 5s 85us/sample - loss: 0.4684 - accuracy: 0.8796
Epoch 5/10
60000/60000 [==============================] - 5s 91us/sample - loss: 0.4136 - accuracy: 0.8905
Epoch 6/10
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3800 - accuracy: 0.8971
Epoch 7/10
60000/60000 [==============================] - 5s 90us/sample - loss: 0.3566 - accuracy: 0.9018
Epoch 8/10
60000/60000 [==============================] - 4s 71us/sample - loss: 0.3389 - accuracy: 0.9060
Epoch 9/10
60000/60000 [==============================] - 6s 92us/sample - loss: 0.3247 - accuracy: 0.9097
Epoch 10/10
60000/60000 [==============================] - 5s 88us/sample - loss: 0.3129 - accuracy: 0.9120
###Markdown
Evaluate the model on the test set:
###Code
model.evaluate(x_test, y_test)
###Output
10000/10000 [==============================] - 0s 43us/sample - loss: 0.2937 - accuracy: 0.9195
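###Markdown
 As a quick follow-up (a minimal sketch, not part of the original chapter), we can also use the trained model to predict the digit in a single test image: model.predict returns the softmax probabilities and tf.argmax picks the most likely class.
###Code
# Minimal usage sketch using the variables defined above
sample = tf.expand_dims(x_test[0], axis=0)      # add a batch dimension: shape (1, 28, 28)
probabilities = model.predict(sample)           # softmax scores for the 10 digits
predicted_digit = tf.argmax(probabilities, axis=1).numpy()[0]
print(predicted_digit, y_test[0].numpy())       # predicted label vs. true label
###Output
_____no_output_____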
|
NLP_LAB4.ipynb | ###Markdown
![TITLE.JPG](attachment:TITLE.JPG)
###Code
import nltk
import pandas as pd
###Output
_____no_output_____
###Markdown
**Importing Data**
###Code
para_object = open('lab4_text.txt')
input_str = para_object.read()
input_str = input_str.lower() # Making all letters lowercase in text.
input_str
###Output
_____no_output_____
###Markdown
**Tokenization** **Word Tokenization**
###Code
nltk.download('punkt')
from nltk.tokenize import word_tokenize
word_token_reslt = word_tokenize(input_str)
print(word_token_reslt)
###Output
['i', 'must', 'be', 'honest', 'with', 'you', '.', 'i', 'was', 'about', 'to', 'go', 'to', 'sleep', 'before', 'i', 'opened', 'my', 'mail', 'and', 'read', 'your', 'letter', '.', 'now', 'i', 'am', 'wide', 'awake', ',', 'sitting', 'upright', ',', 'because', 'you', 'made', 'me', 'question', 'my', 'preferences', 'between', 'shoes', 'and', 'slippers', '.', 'be', 'patient', 'with', 'me', '.', 'i', 'need', 'proper', 'peaceful', 'sleep', 'and', 'let', '’', 's', 'sort', 'this', 'out', '.', 'i', 'am', 'a', 'shoe', 'person', '.', 'correction', ':', 'i', 'am', 'a', 'person', 'obsessed', 'with', 'dry', ',', 'dust-free', 'feet', '.', 'whatever', 'helps', 'me', 'keep', 'a', 'grip', 'on', 'my', 'walk', 'and', 'posture', 'in', 'spite', 'of', 'my', 'profusely', 'sweaty', 'feet', ',', 'i', 'am', 'that', 'person', '.', 'whatever', 'keeps', 'the', 'grainy', ',', 'pokey', 'sensation', 'of', 'soil', 'or', 'sand', 'in', 'parts', 'of', 'my', 'skin', 'away', ',', 'i', 'am', 'that', 'person', '.', 'i', 'guess', ',', 'i', 'am', 'a', 'shoe', 'person', 'because', 'it', 'is', 'loyal', 'to', 'my', 'whole', 'feet', '.', 'not', 'making', 'parts', 'of', 'it', 'lying', 'exposed', 'and', 'parts', 'of', 'it', 'covered', ',', 'without', 'any', 'symmetry', '.', 'i', 'am', 'also', 'cotton', 'sock', 'person', ',', 'preferably', 'red', '.', 'sometimes', ',', 'i', 'am', 'a', 'clean', 'matte', '(', 'anti-slippery', ')', 'floor', 'person', '.', 'basically', ',', 'i', 'am', 'a', 'person', 'with', 'issues', '.', 'i', 'really', 'liked', 'your', 'host-guest', 'theory', 'of', 'travelling', ',', 'how', 'we', 'do', 'not', 'just', 'get', 'out', 'of', 'home', 'when', 'we', 'travel', ',', 'the', 'home', 'expands', 'when', 'we', 'step', 'foot', 'at', 'a', 'place', 'engulfing', 'it', 'into', 'our', 'comfort', 'zone', '.', 'but', 'the', 'thing', 'is', 'i', 'wear', 'socks', 'all', 'the', 'time', 'in', 'my', 'house', ',', 'and', 'open', 'them', 'when', 'i', 'go', 'to', 'bed', ',', 'replacing', 'the', 'sock', 'with', 'a', 'comforter', '.', 'i', 'guess', 'i', 'need', 'that', 'comfort', 'zone', 'close', 'and', 'maybe', ',', 'mine', 'is', 'restricted', 'to', 'my', 'own', 'skin', '.', 'so', 'much', 'so', ',', 'that', 'it', 'doesn', '’', 't', 'even', 'include', 'my', 'own', 'home', '.', 'when', 'i', 'travel', ',', 'i', 'like', 'wearing', 'shoes', ',', 'with', 'a', 'firm', 'grip', 'on', 'myself', ',', 'dust', 'free', ',', 'protected', '.', 'when', 'i', 'face', 'situations', 'where', 'i', 'have', 'to', 'open', 'them', 'and', 'i', 'don', '’', 't', 'want', 'to', ',', 'i', 'usually', 'wrinkle', 'my', 'nose', '.', 'it', 'doesn', '’', 't', 'get', 'better', 'that', 'i', 'can', '’', 't', 'wear', 'those', 'shoes', 'again', 'with', 'a', 'dirty', 'feet', '.', 'i', 'take', 'a', 'handkerchief', '.', 'imagine', 'my', 'situation', 'at', 'the', 'beach', 'or', 'in', 'a', 'hill', 'stream', ',', 'when', 'the', 'water', 'feels', 'soothing', 'to', 'my', 'sweaty', 'feet', 'and', 'removes', 'the', 'dust', ',', 'only', 'to', 'attract', 'more', 'of', 'it', 'when', 'i', 'walk', 'on', 'the', 'sand', '.', 'i', 'let', 'it', 'irritate', 'me', 'those', 'times', ',', 'because', 'i', 'always', 'have', 'the', 'ocean', 'and', 'the', 'river', 'on', 'my', 'side', '.', 'i', 'like', 'my', 'bare', 'feet', 'drowning', 'with', 'no', 'air', 'bubbles', 'left', '.', 'i', 'don', '’', 't', 'like', 'it', 'when', 'droplets', 'of', 'water', 'make', 'only', 'a', 'part', 'of', 'my', 'feet', 'wet', ',', 'only', 'to', 'let', 'me', 'realise', 'and', 'miss', 'the', 'comfort', 'of', 'dry', 'skin', 'and', 'scowl', 
'at', 'the', 'linger', 'of', 'irritation', 'due', 'to', 'a', 'little', 'dampness', '.', 'i', 'don', '’', 't', 'like', 'it', 'when', 'i', 'have', 'to', 'keep', 'on', 'brushing', 'my', 'left', 'leg', 'against', 'my', 'right', 'to', 'flatten', 'the', 'spheres', 'of', 'water', '.', 'and', 'repeat', 'it', 'with', 'the', 'right', '.', 'i', 'just', 'realised', 'why', 'i', 'don', '’', 't', 'like', 'drizzling', 'rains', '.', 'eventually', ',', 'when', 'i', 'have', 'to', 'get', 'out', 'of', 'the', 'water', ',', 'i', 'give', 'up', 'my', 'fight', 'with', 'the', 'dirt', 'and', 'wear', 'those', 'damn', 'shoes', 'with', 'muddy', 'feet', '.', 'those', 'are', 'the', 'times', 'i', 'miss', 'my', 'slippery', 'slippers', '.', 'don', '’', 't', 'let', 'my', 'socks', 'or', 'shoes', 'hear', 'me', 'say', 'that', '.', 'they', 'do', 'not', 'know', 'that', 'sometimes', 'i', 'keep', 'a', 'back-up', 'pair', 'of', 'red', 'flip-flops', 'for', 'such', 'scenarios', '.', 'does', 'this', 'make', 'me', 'a', 'slipper', 'person', '?', 'i', 'guess', 'i', 'am', 'a', 'slipper', 'person', 'when', 'my', 'feet', 'are', 'ankle', 'deep', 'in', 'mud', ',', 'so', 'that', 'there', 'is', 'no', 'place', 'for', 'me', 'to', 'complain', '.', 'no', 'alternative', '.', 'no', 'uneasy', 'half-done', 'feeling', '.', 'otherwise', ',', 'not', 'much', 'of', 'a', 'slipper', 'person', '.', 'i', 'am', 'a', 'person', 'who', 'likes', 'symmetry', '.', 'all', 'in', 'or', 'none', 'at', 'all', '.', 'it', '’', 's', 'difficult', 'being', 'me', 'in', 'this', 'world', 'with', 'its', 'shades', 'of', 'grey', '.', 'but', 'then', ',', 'my', 'favourite', 'colour', 'is', 'red', 'and', 'thankfully', 'it', 'covers', 'the', 'entire', 'spectrum', '.', 'all', 'i', 'need', 'is', 'an', 'emotion', ',', 'and', 'the', 'rest', 'is', 'taken', 'care', 'of', '.', 'thank', 'you', 'for', 'being', 'the', 'listener', 'you', 'are', '.', 'this', 'has', 'been', 'a', 'selfish', 'post', '.', 'i', 'used', 'your', 'letter', 'for', 'my', 'sense', 'of', 'clarity', '.', 'you', 'always', 'make', 'me', 'reflect', 'and', 'dig', 'deep', '.', 'i', 'can', 'finally', 'sleep', 'now', '.', 'goodnight', '.']
###Markdown
**Sentence Tokenization**
###Code
from nltk.tokenize import sent_tokenize
sent_token_reslt = sent_tokenize(input_str)
print(sent_token_reslt)
###Output
['i must be honest with you.', 'i was about to go to sleep before i opened my mail and read your letter.', 'now i am wide awake, sitting upright, because you made me question my preferences between shoes and slippers.', 'be patient with me.', 'i need proper peaceful sleep and let’s sort this out.', 'i am a shoe person.', 'correction: i am a person obsessed with dry, dust-free feet.', 'whatever helps me keep a grip on my walk and posture in spite of my profusely sweaty feet, i am that person.', 'whatever keeps the grainy, pokey sensation of soil or sand in parts of my skin away, i am that person.', 'i guess, i am a shoe person because it is loyal to my whole feet.', 'not making parts of it lying exposed and parts of it covered, without any symmetry.', 'i am also cotton sock person, preferably red.', 'sometimes, i am a clean matte (anti-slippery) floor person.', 'basically, i am a person with issues.', 'i really liked your host-guest theory of travelling, how we do not just get out of home when we travel, the home expands when we step foot at a place engulfing it into our comfort zone.', 'but the thing is i wear socks all the time in my house, and open them when i go to bed, replacing the sock with a comforter.', 'i guess i need that comfort zone close and maybe, mine is restricted to my own skin.', 'so much so, that it doesn’t even include my own home.', 'when i travel, i like wearing shoes, with a firm grip on myself, dust free, protected.', 'when i face situations where i have to open them and i don’t want to, i usually wrinkle my nose.', 'it doesn’t get better that i can’t wear those shoes again with a dirty feet.', 'i take a handkerchief.', 'imagine my situation at the beach or in a hill stream, when the water feels soothing to my sweaty feet and removes the dust, only to attract more of it when i walk on the sand.', 'i let it irritate me those times, because i always have the ocean and the river on my side.', 'i like my bare feet drowning with no air bubbles left.', 'i don’t like it when droplets of water make only a part of my feet wet, only to let me realise and miss the comfort of dry skin and scowl at the linger of irritation due to a little dampness.', 'i don’t like it when i have to keep on brushing my left leg against my right to flatten the spheres of water.', 'and repeat it with the right.', 'i just realised why i don’t like drizzling rains.', 'eventually, when i have to get out of the water, i give up my fight with the dirt and wear those damn shoes with muddy feet.', 'those are the times i miss my slippery slippers.', 'don’t let my socks or shoes hear me say that.', 'they do not know that sometimes i keep a back-up pair of red flip-flops for such scenarios.', 'does this make me a slipper person?', 'i guess i am a slipper person when my feet are ankle deep in mud, so that there is no place for me to complain.', 'no alternative.', 'no uneasy half-done feeling.', 'otherwise, not much of a slipper person.', 'i am a person who likes symmetry.', 'all in or none at all.', 'it’s difficult being me in this world with its shades of grey.', 'but then, my favourite colour is red and thankfully it covers the entire spectrum.', 'all i need is an emotion, and the rest is taken care of.', 'thank you for being the listener you are.', 'this has been a selfish post.', 'i used your letter for my sense of clarity.', 'you always make me reflect and dig deep.', 'i can finally sleep now.', 'goodnight.']
###Markdown
**Tokenize using Regular Expression**
###Code
from nltk.tokenize import RegexpTokenizer
t = RegexpTokenizer(r'\w+')
tokens = t.tokenize(input_str)
print(tokens)
###Output
['i', 'must', 'be', 'honest', 'with', 'you', 'i', 'was', 'about', 'to', 'go', 'to', 'sleep', 'before', 'i', 'opened', 'my', 'mail', 'and', 'read', 'your', 'letter', 'now', 'i', 'am', 'wide', 'awake', 'sitting', 'upright', 'because', 'you', 'made', 'me', 'question', 'my', 'preferences', 'between', 'shoes', 'and', 'slippers', 'be', 'patient', 'with', 'me', 'i', 'need', 'proper', 'peaceful', 'sleep', 'and', 'let', 's', 'sort', 'this', 'out', 'i', 'am', 'a', 'shoe', 'person', 'correction', 'i', 'am', 'a', 'person', 'obsessed', 'with', 'dry', 'dust', 'free', 'feet', 'whatever', 'helps', 'me', 'keep', 'a', 'grip', 'on', 'my', 'walk', 'and', 'posture', 'in', 'spite', 'of', 'my', 'profusely', 'sweaty', 'feet', 'i', 'am', 'that', 'person', 'whatever', 'keeps', 'the', 'grainy', 'pokey', 'sensation', 'of', 'soil', 'or', 'sand', 'in', 'parts', 'of', 'my', 'skin', 'away', 'i', 'am', 'that', 'person', 'i', 'guess', 'i', 'am', 'a', 'shoe', 'person', 'because', 'it', 'is', 'loyal', 'to', 'my', 'whole', 'feet', 'not', 'making', 'parts', 'of', 'it', 'lying', 'exposed', 'and', 'parts', 'of', 'it', 'covered', 'without', 'any', 'symmetry', 'i', 'am', 'also', 'cotton', 'sock', 'person', 'preferably', 'red', 'sometimes', 'i', 'am', 'a', 'clean', 'matte', 'anti', 'slippery', 'floor', 'person', 'basically', 'i', 'am', 'a', 'person', 'with', 'issues', 'i', 'really', 'liked', 'your', 'host', 'guest', 'theory', 'of', 'travelling', 'how', 'we', 'do', 'not', 'just', 'get', 'out', 'of', 'home', 'when', 'we', 'travel', 'the', 'home', 'expands', 'when', 'we', 'step', 'foot', 'at', 'a', 'place', 'engulfing', 'it', 'into', 'our', 'comfort', 'zone', 'but', 'the', 'thing', 'is', 'i', 'wear', 'socks', 'all', 'the', 'time', 'in', 'my', 'house', 'and', 'open', 'them', 'when', 'i', 'go', 'to', 'bed', 'replacing', 'the', 'sock', 'with', 'a', 'comforter', 'i', 'guess', 'i', 'need', 'that', 'comfort', 'zone', 'close', 'and', 'maybe', 'mine', 'is', 'restricted', 'to', 'my', 'own', 'skin', 'so', 'much', 'so', 'that', 'it', 'doesn', 't', 'even', 'include', 'my', 'own', 'home', 'when', 'i', 'travel', 'i', 'like', 'wearing', 'shoes', 'with', 'a', 'firm', 'grip', 'on', 'myself', 'dust', 'free', 'protected', 'when', 'i', 'face', 'situations', 'where', 'i', 'have', 'to', 'open', 'them', 'and', 'i', 'don', 't', 'want', 'to', 'i', 'usually', 'wrinkle', 'my', 'nose', 'it', 'doesn', 't', 'get', 'better', 'that', 'i', 'can', 't', 'wear', 'those', 'shoes', 'again', 'with', 'a', 'dirty', 'feet', 'i', 'take', 'a', 'handkerchief', 'imagine', 'my', 'situation', 'at', 'the', 'beach', 'or', 'in', 'a', 'hill', 'stream', 'when', 'the', 'water', 'feels', 'soothing', 'to', 'my', 'sweaty', 'feet', 'and', 'removes', 'the', 'dust', 'only', 'to', 'attract', 'more', 'of', 'it', 'when', 'i', 'walk', 'on', 'the', 'sand', 'i', 'let', 'it', 'irritate', 'me', 'those', 'times', 'because', 'i', 'always', 'have', 'the', 'ocean', 'and', 'the', 'river', 'on', 'my', 'side', 'i', 'like', 'my', 'bare', 'feet', 'drowning', 'with', 'no', 'air', 'bubbles', 'left', 'i', 'don', 't', 'like', 'it', 'when', 'droplets', 'of', 'water', 'make', 'only', 'a', 'part', 'of', 'my', 'feet', 'wet', 'only', 'to', 'let', 'me', 'realise', 'and', 'miss', 'the', 'comfort', 'of', 'dry', 'skin', 'and', 'scowl', 'at', 'the', 'linger', 'of', 'irritation', 'due', 'to', 'a', 'little', 'dampness', 'i', 'don', 't', 'like', 'it', 'when', 'i', 'have', 'to', 'keep', 'on', 'brushing', 'my', 'left', 'leg', 'against', 'my', 'right', 'to', 'flatten', 'the', 'spheres', 'of', 'water', 'and', 'repeat', 'it', 
'with', 'the', 'right', 'i', 'just', 'realised', 'why', 'i', 'don', 't', 'like', 'drizzling', 'rains', 'eventually', 'when', 'i', 'have', 'to', 'get', 'out', 'of', 'the', 'water', 'i', 'give', 'up', 'my', 'fight', 'with', 'the', 'dirt', 'and', 'wear', 'those', 'damn', 'shoes', 'with', 'muddy', 'feet', 'those', 'are', 'the', 'times', 'i', 'miss', 'my', 'slippery', 'slippers', 'don', 't', 'let', 'my', 'socks', 'or', 'shoes', 'hear', 'me', 'say', 'that', 'they', 'do', 'not', 'know', 'that', 'sometimes', 'i', 'keep', 'a', 'back', 'up', 'pair', 'of', 'red', 'flip', 'flops', 'for', 'such', 'scenarios', 'does', 'this', 'make', 'me', 'a', 'slipper', 'person', 'i', 'guess', 'i', 'am', 'a', 'slipper', 'person', 'when', 'my', 'feet', 'are', 'ankle', 'deep', 'in', 'mud', 'so', 'that', 'there', 'is', 'no', 'place', 'for', 'me', 'to', 'complain', 'no', 'alternative', 'no', 'uneasy', 'half', 'done', 'feeling', 'otherwise', 'not', 'much', 'of', 'a', 'slipper', 'person', 'i', 'am', 'a', 'person', 'who', 'likes', 'symmetry', 'all', 'in', 'or', 'none', 'at', 'all', 'it', 's', 'difficult', 'being', 'me', 'in', 'this', 'world', 'with', 'its', 'shades', 'of', 'grey', 'but', 'then', 'my', 'favourite', 'colour', 'is', 'red', 'and', 'thankfully', 'it', 'covers', 'the', 'entire', 'spectrum', 'all', 'i', 'need', 'is', 'an', 'emotion', 'and', 'the', 'rest', 'is', 'taken', 'care', 'of', 'thank', 'you', 'for', 'being', 'the', 'listener', 'you', 'are', 'this', 'has', 'been', 'a', 'selfish', 'post', 'i', 'used', 'your', 'letter', 'for', 'my', 'sense', 'of', 'clarity', 'you', 'always', 'make', 'me', 'reflect', 'and', 'dig', 'deep', 'i', 'can', 'finally', 'sleep', 'now', 'goodnight']
###Markdown
**Importing StopWords**
###Code
nltk.download('stopwords')
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
###Output
_____no_output_____
###Markdown
**Remove StopWords from given tokens**
###Code
for i in list(tokens):  # iterate over a copy so that removing items does not skip the next token
    if i in stop_words:
        tokens.remove(i)
print("After removing Stop Words: ",tokens)
###Output
After removing Stop Words: ['must', 'honest', 'was', 'go', 'sleep', 'opened', 'mail', 'read', 'letter', 'wide', 'awake', 'sitting', 'upright', 'made', 'question', 'preferences', 'shoes', 'slippers', 'patient', 'need', 'proper', 'peaceful', 'sleep', 'let', 'sort', 'am', 'shoe', 'person', 'correction', 'am', 'person', 'obsessed', 'dry', 'dust', 'free', 'feet', 'whatever', 'helps', 'keep', 'grip', 'walk', 'posture', 'spite', 'profusely', 'sweaty', 'feet', 'am', 'person', 'whatever', 'keeps', 'grainy', 'pokey', 'sensation', 'soil', 'sand', 'parts', 'skin', 'away', 'am', 'person', 'guess', 'am', 'shoe', 'person', 'loyal', 'whole', 'feet', 'making', 'parts', 'lying', 'exposed', 'parts', 'covered', 'without', 'symmetry', 'am', 'also', 'cotton', 'sock', 'person', 'preferably', 'red', 'sometimes', 'am', 'clean', 'matte', 'anti', 'slippery', 'floor', 'person', 'basically', 'am', 'person', 'issues', 'really', 'liked', 'host', 'guest', 'theory', 'travelling', 'we', 'get', 'home', 'we', 'travel', 'home', 'expands', 'we', 'step', 'foot', 'place', 'engulfing', 'into', 'comfort', 'zone', 'thing', 'wear', 'socks', 'time', 'house', 'open', 'go', 'bed', 'replacing', 'sock', 'comforter', 'guess', 'need', 'comfort', 'zone', 'close', 'maybe', 'mine', 'restricted', 'skin', 'much', 'doesn', 'even', 'include', 'own', 'home', 'travel', 'like', 'wearing', 'shoes', 'firm', 'grip', 'myself', 'dust', 'free', 'protected', 'face', 'situations', 'open', 'want', 'usually', 'wrinkle', 'nose', 'doesn', 'get', 'better', 'wear', 'shoes', 'dirty', 'feet', 'take', 'handkerchief', 'imagine', 'my', 'situation', 'beach', 'hill', 'stream', 'water', 'feels', 'soothing', 'my', 'sweaty', 'feet', 'removes', 'dust', 'attract', 'walk', 'sand', 'let', 'irritate', 'times', 'always', 'ocean', 'the', 'river', 'my', 'side', 'like', 'my', 'bare', 'feet', 'drowning', 'air', 'bubbles', 'left', 'like', 'when', 'droplets', 'water', 'make', 'a', 'part', 'my', 'feet', 'wet', 'let', 'realise', 'miss', 'the', 'comfort', 'dry', 'skin', 'scowl', 'the', 'linger', 'irritation', 'due', 'to', 'a', 'little', 'dampness', 'don', 't', 'like', 'when', 'to', 'keep', 'brushing', 'my', 'left', 'leg', 'my', 'right', 'to', 'flatten', 'the', 'spheres', 'water', 'repeat', 'it', 'the', 'right', 'i', 'just', 'realised', 'i', 'don', 't', 'like', 'drizzling', 'rains', 'eventually', 'when', 'i', 'have', 'to', 'get', 'out', 'the', 'water', 'i', 'give', 'my', 'fight', 'the', 'dirt', 'wear', 'damn', 'shoes', 'with', 'muddy', 'feet', 'those', 'the', 'times', 'i', 'miss', 'my', 'slippery', 'slippers', 'don', 't', 'let', 'my', 'socks', 'shoes', 'hear', 'say', 'they', 'not', 'know', 'that', 'sometimes', 'i', 'keep', 'a', 'back', 'pair', 'red', 'flip', 'flops', 'such', 'scenarios', 'make', 'a', 'slipper', 'person', 'i', 'guess', 'i', 'am', 'a', 'slipper', 'person', 'when', 'my', 'feet', 'are', 'ankle', 'deep', 'mud', 'that', 'place', 'me', 'to', 'complain', 'alternative', 'no', 'uneasy', 'half', 'done', 'feeling', 'otherwise', 'not', 'much', 'a', 'slipper', 'person', 'i', 'am', 'a', 'person', 'likes', 'symmetry', 'in', 'none', 'it', 's', 'difficult', 'me', 'in', 'this', 'world', 'with', 'its', 'shades', 'of', 'grey', 'then', 'my', 'favourite', 'colour', 'red', 'thankfully', 'it', 'covers', 'the', 'entire', 'spectrum', 'all', 'i', 'need', 'an', 'emotion', 'the', 'rest', 'is', 'taken', 'care', 'of', 'thank', 'the', 'listener', 'you', 'are', 'this', 'has', 'a', 'selfish', 'post', 'i', 'used', 'letter', 'for', 'my', 'sense', 'of', 'clarity', 'you', 'always', 'make', 'me', 'reflect', 
'and', 'dig', 'deep', 'i', 'can', 'finally', 'sleep', 'goodnight']
###Markdown
**Count word frequency**
###Code
f = nltk.FreqDist(tokens)
for key,val in f.items():
print (str(key) + ':' + str(val))
###Output
must:1
honest:1
was:1
go:2
sleep:3
opened:1
mail:1
read:1
letter:2
wide:1
awake:1
sitting:1
upright:1
made:1
question:1
preferences:1
shoes:5
slippers:2
patient:1
need:3
proper:1
peaceful:1
let:4
sort:1
am:10
shoe:2
person:12
correction:1
obsessed:1
dry:2
dust:3
free:2
feet:9
whatever:2
helps:1
keep:3
grip:2
walk:2
posture:1
spite:1
profusely:1
sweaty:2
keeps:1
grainy:1
pokey:1
sensation:1
soil:1
sand:2
parts:3
skin:3
away:1
guess:3
loyal:1
whole:1
making:1
lying:1
exposed:1
covered:1
without:1
symmetry:2
also:1
cotton:1
sock:2
preferably:1
red:3
sometimes:2
clean:1
matte:1
anti:1
slippery:2
floor:1
basically:1
issues:1
really:1
liked:1
host:1
guest:1
theory:1
travelling:1
we:3
get:3
home:3
travel:2
expands:1
step:1
foot:1
place:2
engulfing:1
into:1
comfort:3
zone:2
thing:1
wear:3
socks:2
time:1
house:1
open:2
bed:1
replacing:1
comforter:1
close:1
maybe:1
mine:1
restricted:1
much:2
doesn:2
even:1
include:1
own:1
like:5
wearing:1
firm:1
myself:1
protected:1
face:1
situations:1
want:1
usually:1
wrinkle:1
nose:1
better:1
dirty:1
take:1
handkerchief:1
imagine:1
my:13
situation:1
beach:1
hill:1
stream:1
water:4
feels:1
soothing:1
removes:1
attract:1
irritate:1
times:2
always:2
ocean:1
the:11
river:1
side:1
bare:1
drowning:1
air:1
bubbles:1
left:2
when:4
droplets:1
make:3
a:8
part:1
wet:1
realise:1
miss:2
scowl:1
linger:1
irritation:1
due:1
to:5
little:1
dampness:1
don:3
t:3
brushing:1
leg:1
right:2
flatten:1
spheres:1
repeat:1
it:3
i:12
just:1
realised:1
drizzling:1
rains:1
eventually:1
have:1
out:1
give:1
fight:1
dirt:1
damn:1
with:2
muddy:1
those:1
hear:1
say:1
they:1
not:2
know:1
that:2
back:1
pair:1
flip:1
flops:1
such:1
scenarios:1
slipper:3
are:2
ankle:1
deep:2
mud:1
me:3
complain:1
alternative:1
no:1
uneasy:1
half:1
done:1
feeling:1
otherwise:1
likes:1
in:2
none:1
s:1
difficult:1
this:2
world:1
its:1
shades:1
of:3
grey:1
then:1
favourite:1
colour:1
thankfully:1
covers:1
entire:1
spectrum:1
all:1
an:1
emotion:1
rest:1
is:1
taken:1
care:1
thank:1
listener:1
you:2
has:1
selfish:1
post:1
used:1
for:1
sense:1
clarity:1
reflect:1
and:1
dig:1
can:1
finally:1
goodnight:1
###Markdown
**Stemming**
###Code
from nltk.stem import PorterStemmer
p = PorterStemmer()
for i in tokens:
print( i , " : ",p.stem(i))
###Output
must : must
honest : honest
was : wa
go : go
sleep : sleep
opened : open
mail : mail
read : read
letter : letter
wide : wide
awake : awak
sitting : sit
upright : upright
made : made
question : question
preferences : prefer
shoes : shoe
slippers : slipper
patient : patient
need : need
proper : proper
peaceful : peac
sleep : sleep
let : let
sort : sort
am : am
shoe : shoe
person : person
correction : correct
am : am
person : person
obsessed : obsess
dry : dri
dust : dust
free : free
feet : feet
whatever : whatev
helps : help
keep : keep
grip : grip
walk : walk
posture : postur
spite : spite
profusely : profus
sweaty : sweati
feet : feet
am : am
person : person
whatever : whatev
keeps : keep
grainy : graini
pokey : pokey
sensation : sensat
soil : soil
sand : sand
parts : part
skin : skin
away : away
am : am
person : person
guess : guess
am : am
shoe : shoe
person : person
loyal : loyal
whole : whole
feet : feet
making : make
parts : part
lying : lie
exposed : expos
parts : part
covered : cover
without : without
symmetry : symmetri
am : am
also : also
cotton : cotton
sock : sock
person : person
preferably : prefer
red : red
sometimes : sometim
am : am
clean : clean
matte : matt
anti : anti
slippery : slipperi
floor : floor
person : person
basically : basic
am : am
person : person
issues : issu
really : realli
liked : like
host : host
guest : guest
theory : theori
travelling : travel
we : we
get : get
home : home
we : we
travel : travel
home : home
expands : expand
we : we
step : step
foot : foot
place : place
engulfing : engulf
into : into
comfort : comfort
zone : zone
thing : thing
wear : wear
socks : sock
time : time
house : hous
open : open
go : go
bed : bed
replacing : replac
sock : sock
comforter : comfort
guess : guess
need : need
comfort : comfort
zone : zone
close : close
maybe : mayb
mine : mine
restricted : restrict
skin : skin
much : much
doesn : doesn
even : even
include : includ
own : own
home : home
travel : travel
like : like
wearing : wear
shoes : shoe
firm : firm
grip : grip
myself : myself
dust : dust
free : free
protected : protect
face : face
situations : situat
open : open
want : want
usually : usual
wrinkle : wrinkl
nose : nose
doesn : doesn
get : get
better : better
wear : wear
shoes : shoe
dirty : dirti
feet : feet
take : take
handkerchief : handkerchief
imagine : imagin
my : my
situation : situat
beach : beach
hill : hill
stream : stream
water : water
feels : feel
soothing : sooth
my : my
sweaty : sweati
feet : feet
removes : remov
dust : dust
attract : attract
walk : walk
sand : sand
let : let
irritate : irrit
times : time
always : alway
ocean : ocean
the : the
river : river
my : my
side : side
like : like
my : my
bare : bare
feet : feet
drowning : drown
air : air
bubbles : bubbl
left : left
like : like
when : when
droplets : droplet
water : water
make : make
a : a
part : part
my : my
feet : feet
wet : wet
let : let
realise : realis
miss : miss
the : the
comfort : comfort
dry : dri
skin : skin
scowl : scowl
the : the
linger : linger
irritation : irrit
due : due
to : to
a : a
little : littl
dampness : damp
don : don
t : t
like : like
when : when
to : to
keep : keep
brushing : brush
my : my
left : left
leg : leg
my : my
right : right
to : to
flatten : flatten
the : the
spheres : sphere
water : water
repeat : repeat
it : it
the : the
right : right
i : i
just : just
realised : realis
i : i
don : don
t : t
like : like
drizzling : drizzl
rains : rain
eventually : eventu
when : when
i : i
have : have
to : to
get : get
out : out
the : the
water : water
i : i
give : give
my : my
fight : fight
the : the
dirt : dirt
wear : wear
damn : damn
shoes : shoe
with : with
muddy : muddi
feet : feet
those : those
the : the
times : time
i : i
miss : miss
my : my
slippery : slipperi
slippers : slipper
don : don
t : t
let : let
my : my
socks : sock
shoes : shoe
hear : hear
say : say
they : they
not : not
know : know
that : that
sometimes : sometim
i : i
keep : keep
a : a
back : back
pair : pair
red : red
flip : flip
flops : flop
such : such
scenarios : scenario
make : make
a : a
slipper : slipper
person : person
i : i
guess : guess
i : i
am : am
a : a
slipper : slipper
person : person
when : when
my : my
feet : feet
are : are
ankle : ankl
deep : deep
mud : mud
that : that
place : place
me : me
to : to
complain : complain
alternative : altern
no : no
uneasy : uneasi
half : half
done : done
feeling : feel
otherwise : otherwis
not : not
much : much
a : a
slipper : slipper
person : person
i : i
am : am
a : a
person : person
likes : like
symmetry : symmetri
in : in
none : none
it : it
s : s
difficult : difficult
me : me
in : in
this : thi
world : world
with : with
its : it
shades : shade
of : of
grey : grey
then : then
my : my
favourite : favourit
colour : colour
red : red
thankfully : thank
it : it
covers : cover
the : the
entire : entir
spectrum : spectrum
all : all
i : i
need : need
an : an
emotion : emot
the : the
rest : rest
is : is
taken : taken
care : care
of : of
thank : thank
the : the
listener : listen
you : you
are : are
this : thi
has : ha
a : a
selfish : selfish
post : post
i : i
used : use
letter : letter
for : for
my : my
sense : sens
of : of
clarity : clariti
you : you
always : alway
make : make
me : me
reflect : reflect
and : and
dig : dig
deep : deep
i : i
can : can
finally : final
sleep : sleep
goodnight : goodnight
###Markdown
**Lemmatization**
###Code
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
wordnet_lemma = WordNetLemmatizer()
for i in tokens:
print("Lemma for {} is {}".format(i,wordnet_lemma.lemmatize(i)))
###Output
Lemma for must is must
Lemma for honest is honest
Lemma for was is wa
Lemma for go is go
Lemma for sleep is sleep
Lemma for opened is opened
Lemma for mail is mail
Lemma for read is read
Lemma for letter is letter
Lemma for wide is wide
Lemma for awake is awake
Lemma for sitting is sitting
Lemma for upright is upright
Lemma for made is made
Lemma for question is question
Lemma for preferences is preference
Lemma for shoes is shoe
Lemma for slippers is slipper
Lemma for patient is patient
Lemma for need is need
Lemma for proper is proper
Lemma for peaceful is peaceful
Lemma for sleep is sleep
Lemma for let is let
Lemma for sort is sort
Lemma for am is am
Lemma for shoe is shoe
Lemma for person is person
Lemma for correction is correction
Lemma for am is am
Lemma for person is person
Lemma for obsessed is obsessed
Lemma for dry is dry
Lemma for dust is dust
Lemma for free is free
Lemma for feet is foot
Lemma for whatever is whatever
Lemma for helps is help
Lemma for keep is keep
Lemma for grip is grip
Lemma for walk is walk
Lemma for posture is posture
Lemma for spite is spite
Lemma for profusely is profusely
Lemma for sweaty is sweaty
Lemma for feet is foot
Lemma for am is am
Lemma for person is person
Lemma for whatever is whatever
Lemma for keeps is keep
Lemma for grainy is grainy
Lemma for pokey is pokey
Lemma for sensation is sensation
Lemma for soil is soil
Lemma for sand is sand
Lemma for parts is part
Lemma for skin is skin
Lemma for away is away
Lemma for am is am
Lemma for person is person
Lemma for guess is guess
Lemma for am is am
Lemma for shoe is shoe
Lemma for person is person
Lemma for loyal is loyal
Lemma for whole is whole
Lemma for feet is foot
Lemma for making is making
Lemma for parts is part
Lemma for lying is lying
Lemma for exposed is exposed
Lemma for parts is part
Lemma for covered is covered
Lemma for without is without
Lemma for symmetry is symmetry
Lemma for am is am
Lemma for also is also
Lemma for cotton is cotton
Lemma for sock is sock
Lemma for person is person
Lemma for preferably is preferably
Lemma for red is red
Lemma for sometimes is sometimes
Lemma for am is am
Lemma for clean is clean
Lemma for matte is matte
Lemma for anti is anti
Lemma for slippery is slippery
Lemma for floor is floor
Lemma for person is person
Lemma for basically is basically
Lemma for am is am
Lemma for person is person
Lemma for issues is issue
Lemma for really is really
Lemma for liked is liked
Lemma for host is host
Lemma for guest is guest
Lemma for theory is theory
Lemma for travelling is travelling
Lemma for we is we
Lemma for get is get
Lemma for home is home
Lemma for we is we
Lemma for travel is travel
Lemma for home is home
Lemma for expands is expands
Lemma for we is we
Lemma for step is step
Lemma for foot is foot
Lemma for place is place
Lemma for engulfing is engulfing
Lemma for into is into
Lemma for comfort is comfort
Lemma for zone is zone
Lemma for thing is thing
Lemma for wear is wear
Lemma for socks is sock
Lemma for time is time
Lemma for house is house
Lemma for open is open
Lemma for go is go
Lemma for bed is bed
Lemma for replacing is replacing
Lemma for sock is sock
Lemma for comforter is comforter
Lemma for guess is guess
Lemma for need is need
Lemma for comfort is comfort
Lemma for zone is zone
Lemma for close is close
Lemma for maybe is maybe
Lemma for mine is mine
Lemma for restricted is restricted
Lemma for skin is skin
Lemma for much is much
Lemma for doesn is doesn
Lemma for even is even
Lemma for include is include
Lemma for own is own
Lemma for home is home
Lemma for travel is travel
Lemma for like is like
Lemma for wearing is wearing
Lemma for shoes is shoe
Lemma for firm is firm
Lemma for grip is grip
Lemma for myself is myself
Lemma for dust is dust
Lemma for free is free
Lemma for protected is protected
Lemma for face is face
Lemma for situations is situation
Lemma for open is open
Lemma for want is want
Lemma for usually is usually
Lemma for wrinkle is wrinkle
Lemma for nose is nose
Lemma for doesn is doesn
Lemma for get is get
Lemma for better is better
Lemma for wear is wear
Lemma for shoes is shoe
Lemma for dirty is dirty
Lemma for feet is foot
Lemma for take is take
Lemma for handkerchief is handkerchief
Lemma for imagine is imagine
Lemma for my is my
Lemma for situation is situation
Lemma for beach is beach
Lemma for hill is hill
Lemma for stream is stream
Lemma for water is water
Lemma for feels is feel
Lemma for soothing is soothing
Lemma for my is my
Lemma for sweaty is sweaty
Lemma for feet is foot
Lemma for removes is remove
Lemma for dust is dust
Lemma for attract is attract
Lemma for walk is walk
Lemma for sand is sand
Lemma for let is let
Lemma for irritate is irritate
Lemma for times is time
Lemma for always is always
Lemma for ocean is ocean
Lemma for the is the
Lemma for river is river
Lemma for my is my
Lemma for side is side
Lemma for like is like
Lemma for my is my
Lemma for bare is bare
Lemma for feet is foot
Lemma for drowning is drowning
Lemma for air is air
Lemma for bubbles is bubble
Lemma for left is left
Lemma for like is like
Lemma for when is when
Lemma for droplets is droplet
Lemma for water is water
Lemma for make is make
Lemma for a is a
Lemma for part is part
Lemma for my is my
Lemma for feet is foot
Lemma for wet is wet
Lemma for let is let
Lemma for realise is realise
Lemma for miss is miss
Lemma for the is the
Lemma for comfort is comfort
Lemma for dry is dry
Lemma for skin is skin
Lemma for scowl is scowl
Lemma for the is the
Lemma for linger is linger
Lemma for irritation is irritation
Lemma for due is due
Lemma for to is to
Lemma for a is a
Lemma for little is little
Lemma for dampness is dampness
Lemma for don is don
Lemma for t is t
Lemma for like is like
Lemma for when is when
Lemma for to is to
Lemma for keep is keep
Lemma for brushing is brushing
Lemma for my is my
Lemma for left is left
Lemma for leg is leg
Lemma for my is my
Lemma for right is right
Lemma for to is to
Lemma for flatten is flatten
Lemma for the is the
Lemma for spheres is sphere
Lemma for water is water
Lemma for repeat is repeat
Lemma for it is it
Lemma for the is the
Lemma for right is right
Lemma for i is i
Lemma for just is just
Lemma for realised is realised
Lemma for i is i
Lemma for don is don
Lemma for t is t
Lemma for like is like
Lemma for drizzling is drizzling
Lemma for rains is rain
Lemma for eventually is eventually
Lemma for when is when
Lemma for i is i
Lemma for have is have
Lemma for to is to
Lemma for get is get
Lemma for out is out
Lemma for the is the
Lemma for water is water
Lemma for i is i
Lemma for give is give
Lemma for my is my
Lemma for fight is fight
Lemma for the is the
Lemma for dirt is dirt
Lemma for wear is wear
Lemma for damn is damn
Lemma for shoes is shoe
Lemma for with is with
Lemma for muddy is muddy
Lemma for feet is foot
Lemma for those is those
Lemma for the is the
Lemma for times is time
Lemma for i is i
Lemma for miss is miss
Lemma for my is my
Lemma for slippery is slippery
Lemma for slippers is slipper
Lemma for don is don
Lemma for t is t
Lemma for let is let
Lemma for my is my
Lemma for socks is sock
Lemma for shoes is shoe
Lemma for hear is hear
Lemma for say is say
Lemma for they is they
Lemma for not is not
Lemma for know is know
Lemma for that is that
Lemma for sometimes is sometimes
Lemma for i is i
Lemma for keep is keep
Lemma for a is a
Lemma for back is back
Lemma for pair is pair
Lemma for red is red
Lemma for flip is flip
Lemma for flops is flop
Lemma for such is such
Lemma for scenarios is scenario
Lemma for make is make
Lemma for a is a
Lemma for slipper is slipper
Lemma for person is person
Lemma for i is i
Lemma for guess is guess
Lemma for i is i
Lemma for am is am
Lemma for a is a
Lemma for slipper is slipper
Lemma for person is person
Lemma for when is when
Lemma for my is my
Lemma for feet is foot
Lemma for are is are
Lemma for ankle is ankle
Lemma for deep is deep
Lemma for mud is mud
Lemma for that is that
Lemma for place is place
Lemma for me is me
Lemma for to is to
Lemma for complain is complain
Lemma for alternative is alternative
Lemma for no is no
Lemma for uneasy is uneasy
Lemma for half is half
Lemma for done is done
Lemma for feeling is feeling
Lemma for otherwise is otherwise
Lemma for not is not
Lemma for much is much
Lemma for a is a
Lemma for slipper is slipper
Lemma for person is person
Lemma for i is i
Lemma for am is am
Lemma for a is a
Lemma for person is person
Lemma for likes is like
Lemma for symmetry is symmetry
Lemma for in is in
Lemma for none is none
Lemma for it is it
Lemma for s is s
Lemma for difficult is difficult
Lemma for me is me
Lemma for in is in
Lemma for this is this
Lemma for world is world
Lemma for with is with
Lemma for its is it
Lemma for shades is shade
Lemma for of is of
Lemma for grey is grey
Lemma for then is then
Lemma for my is my
Lemma for favourite is favourite
Lemma for colour is colour
Lemma for red is red
Lemma for thankfully is thankfully
Lemma for it is it
Lemma for covers is cover
Lemma for the is the
Lemma for entire is entire
Lemma for spectrum is spectrum
Lemma for all is all
Lemma for i is i
Lemma for need is need
Lemma for an is an
Lemma for emotion is emotion
Lemma for the is the
Lemma for rest is rest
Lemma for is is is
Lemma for taken is taken
Lemma for care is care
Lemma for of is of
Lemma for thank is thank
Lemma for the is the
Lemma for listener is listener
Lemma for you is you
Lemma for are is are
Lemma for this is this
Lemma for has is ha
Lemma for a is a
Lemma for selfish is selfish
Lemma for post is post
Lemma for i is i
Lemma for used is used
Lemma for letter is letter
Lemma for for is for
Lemma for my is my
Lemma for sense is sense
Lemma for of is of
Lemma for clarity is clarity
Lemma for you is you
Lemma for always is always
Lemma for make is make
Lemma for me is me
Lemma for reflect is reflect
Lemma for and is and
Lemma for dig is dig
Lemma for deep is deep
Lemma for i is i
Lemma for can is can
Lemma for finally is finally
Lemma for sleep is sleep
Lemma for goodnight is goodnight
###Markdown
**Implementation of a Sentence Segmentation Algorithm**
###Code
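# look_sentences: generator yielding the index of every occurrence of substring `sub` in `a_str`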
def look_sentences(a_str, sub):
start = 0
while True:
start = a_str.find(sub, start)
if start == -1:
return
yield start
start += len(sub)
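# sentence_end: return the position just after the right-most sentence terminator in `para` that is
# followed by a space (the terminator that closes the paragraph itself is ignored); -1 if none is found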
def sentence_end(para):
poss_end = []
sentence_enders = sentence_enders_list + [z + w for w in sentence_containers_list for z in sentence_enders_list]
for p in sentence_enders:
e_Indexs = list(look_sentences(para, p))
poss_end.extend(([] if not len(e_Indexs) else [[i, len(p)] for i in e_Indexs]))
if len(para) in [pe[0] + pe[1] for pe in poss_end]:
max_end_start = max([pe[0] for pe in poss_end])
poss_end = [pe for pe in poss_end if pe[0] != max_end_start]
poss_end = [pe[0] + pe[1] for pe in poss_end if sum(pe) > len(para) or (sum(pe) < len(para) and para[sum(pe)] == ' ')]
end = (-1 if not len(poss_end) else max(poss_end))
return end
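# sentences: repeatedly split the last sentence off the end of the paragraph, then reverse the
# collected pieces to restore the original order and drop empty strings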
def sentences(para):
end = True
sentences_list = []
while end > -1:
end = sentence_end(para)
if end > -1:
sentences_list.append(para[end:].strip())
para = para[:end]
sentences_list.append(para)
sentences_list.reverse()
while('' in sentences_list):
sentences_list.remove("")
return sentences_list
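# sentence terminators, plus closing brackets/quotes that may directly follow a terminator
# (handled as ender + container pairs inside sentence_end)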
sentence_enders_list = ['.', '.\n', '!', '!\n', '?', '?\n','\n']
sentence_containers_list = ['}', ')', '"', ']', "'"]
sentences(input_str)
###Output
_____no_output_____ |
01_wisconsin/02_SMOTE_log_regression.ipynb | ###Markdown
Creation of synthetic data for the Wisconsin Breast Cancer data set using SMOTE. Tested using a logistic regression model. Aim To test a statistical method (SMOTE, the Synthetic Minority Over-sampling Technique) for synthesising data that can be used to train a logistic regression machine learning model. Data Raw data is available at: https://www.kaggle.com/uciml/breast-cancer-wisconsin-data Basic methods description * Create synthetic data based on SMOTE * Train a logistic regression model on the synthetic data and test it against held-back raw data. Lemaitre, G., Nogueira, F. and Aridas, C. (2016), Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning. arXiv:1609.06570 [cs] Code & results
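###Markdown
 For orientation, the sketch below shows the core SMOTE call from the imbalanced-learn package cited above. It is an illustrative, self-contained example on random placeholder data (X_raw and y_raw are assumed names), not the notebook's own pipeline.
###Code
# Illustrative sketch only: oversample a small imbalanced toy set with SMOTE
import numpy as np
from imblearn.over_sampling import SMOTE
X_raw = np.random.rand(100, 5)                      # placeholder features
y_raw = np.array([0] * 80 + [1] * 20)               # imbalanced placeholder labels
X_res, y_res = SMOTE().fit_resample(X_raw, y_raw)   # interpolates new minority-class samples
print(X_res.shape, np.bincount(y_res))              # classes are now balanced
###Output
_____no_output_____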
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
# Turn warnings off for notebook publication
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Import Data
###Code
def load_data():
""""
Load Wisconsin Breast Cancer Data Set
Inputs
------
None
Returns
-------
    data: pandas DataFrame of the full data set
    X: NumPy array of X
    y: NumPy array of y
    X_col_names: column names for X
"""
# Load data and drop 'id' column
data = pd.read_csv('./wisconsin.csv')
data.drop('id', axis=1, inplace=True)
# Change 'diagnosis' column to 'label', and put in last column place
data['label'] = data['diagnosis'] == 'M'
data.drop('diagnosis', axis=1, inplace=True)
# Split data in X and y
X = data.drop(['label'], axis=1)
y = data['label'] * 1.0 # convert to 0/1
# Get col names and convert to NumPy arrays
X_col_names = list(X)
X = X.values
y = y.values
return data, X, y, X_col_names
###Output
_____no_output_____
###Markdown
Data processing Split X and y into training and test sets
###Code
def split_into_train_test(X, y, test_proportion=0.25):
""""
Randomly split X and y numpy arrays into training and test data sets
Inputs
------
X and y NumPy arrays
Returns
-------
X_test, X_train, y_test, y_train Numpy arrays
"""
X_train, X_test, y_train, y_test = \
train_test_split(X, y, shuffle=True, test_size=test_proportion)
return X_train, X_test, y_train, y_test
###Output
_____no_output_____
###Markdown
Standardise data
###Code
def standardise_data(X_train, X_test):
""""
    Standardise training and test data sets according to the mean and standard
    deviation of the training set
Inputs
------
X_train, X_test NumPy arrays
Returns
-------
X_train_std, X_test_std
"""
mu = X_train.mean(axis=0)
std = X_train.std(axis=0)
X_train_std = (X_train - mu) / std
X_test_std = (X_test - mu) /std
return X_train_std, X_test_std
###Output
_____no_output_____
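###Markdown
As a quick sanity check (using made-up numbers, purely illustrative), the standardised training set should have column means close to 0 and standard deviations close to 1.
###Code
# Illustrative check of standardise_data on small example arrays
demo_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
demo_test = np.array([[2.0, 25.0]])
demo_train_std, demo_test_std = standardise_data(demo_train, demo_test)
print(demo_train_std.mean(axis=0), demo_train_std.std(axis=0))
###Output
_____no_output_____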
###Markdown
Calculate accuracy measures
###Code
def calculate_diagnostic_performance(actual, predicted):
""" Calculate sensitivty and specificty.
Inputs
------
    actual, predicted numpy arrays (1 = +ve, 0 = -ve)
Returns
-------
A dictionary of results:
1) accuracy: proportion of test results that are correct
2) sensitivity: proportion of true +ve identified
3) specificity: proportion of true -ve identified
4) positive likelihood: increased probability of true +ve if test +ve
5) negative likelihood: reduced probability of true +ve if test -ve
6) false positive rate: proportion of false +ves in true -ve patients
7) false negative rate: proportion of false -ves in true +ve patients
8) positive predictive value: chance of true +ve if test +ve
9) negative predictive value: chance of true -ve if test -ve
10) actual positive rate: proportion of actual values that are +ve
    11) predicted positive rate: proportion of predicted values that are +ve
12) recall: same as sensitivity
13) precision: the proportion of predicted +ve that are true +ve
14) f1 = 2 * ((precision * recall) / (precision + recall))
    * false positive rate is the percentage of healthy individuals who
    incorrectly receive a positive test result
    * false negative rate is the percentage of diseased individuals who
    incorrectly receive a negative test result
"""
# Calculate results
actual_positives = actual == 1
actual_negatives = actual == 0
test_positives = predicted == 1
test_negatives = predicted == 0
test_correct = actual == predicted
accuracy = test_correct.mean()
true_positives = actual_positives & test_positives
false_positives = actual_negatives & test_positives
true_negatives = actual_negatives & test_negatives
sensitivity = true_positives.sum() / actual_positives.sum()
specificity = np.sum(true_negatives) / np.sum(actual_negatives)
positive_likelihood = sensitivity / (1 - specificity)
negative_likelihood = (1 - sensitivity) / specificity
false_postive_rate = 1 - specificity
false_negative_rate = 1 - sensitivity
positive_predictive_value = true_positives.sum() / test_positives.sum()
negative_predicitive_value = true_negatives.sum() / test_negatives.sum()
actual_positive_rate = actual.mean()
predicted_positive_rate = predicted.mean()
recall = sensitivity
precision = \
true_positives.sum() / (true_positives.sum() + false_positives.sum())
f1 = 2 * ((precision * recall) / (precision + recall))
# Add results to dictionary
results = dict()
results['accuracy'] = accuracy
results['sensitivity'] = sensitivity
results['specificity'] = specificity
results['positive_likelihood'] = positive_likelihood
results['negative_likelihood'] = negative_likelihood
results['false_postive_rate'] = false_postive_rate
results['false_negative_rate'] = false_negative_rate
results['positive_predictive_value'] = positive_predictive_value
results['negative_predicitive_value'] = negative_predicitive_value
results['actual_positive_rate'] = actual_positive_rate
results['predicted_positive_rate'] = predicted_positive_rate
results['recall'] = recall
results['precision'] = precision
results['f1'] = f1
return results
###Output
_____no_output_____
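###Markdown
A small worked example (with made-up labels, illustrative only) shows the kind of dictionary the function returns.
###Code
# Illustrative call with arbitrary example labels
demo_actual = np.array([1, 1, 1, 1, 0, 0, 0, 0])
demo_predicted = np.array([1, 1, 1, 0, 0, 0, 1, 0])
demo_performance = calculate_diagnostic_performance(demo_actual, demo_predicted)
print(demo_performance['accuracy'], demo_performance['sensitivity'], demo_performance['specificity'])
###Output
_____no_output_____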
###Markdown
Logistic Regression Model
###Code
def fit_and_test_logistic_regression_model(X_train, X_test, y_train, y_test):
""""
Fit and test logistic regression model.
Return a dictionary of accuracy measures.
Calls on `calculate_diagnostic_performance` to calculate results
Inputs
------
X_train, X_test NumPy arrays
Returns
-------
A dictionary of accuracy results.
"""
# Fit logistic regression model
lr = LogisticRegression(C=0.1)
lr.fit(X_train,y_train)
    # Predict test set labels
    y_pred = lr.predict(X_test)
# Get accuracy results
accuracy_results = calculate_diagnostic_performance(y_test, y_pred)
return accuracy_results
###Output
_____no_output_____
###Markdown
Synthetic Data Method - SMOTE
###Code
def make_synthetic_data_smote(X, y, number_of_samples=1000):
"""
Synthetic data generation.
Inputs
------
original_data: X, y numpy arrays
number_of_samples: number of synthetic samples to generate
Returns
-------
X_synthetic: NumPy array
y_synthetic: NumPy array
"""
from imblearn.over_sampling import SMOTE
count_label_0 = np.sum(y==0)
count_label_1 = np.sum(y==1)
n_class_0 = number_of_samples + count_label_0
n_class_1 = number_of_samples + count_label_1
X_resampled, y_resampled = SMOTE(
sampling_strategy = {0:n_class_0, 1:n_class_1}).fit_resample(X, y)
X_synthetic = X_resampled[len(X):]
y_synthetic = y_resampled[len(y):]
return X_synthetic, y_synthetic
###Output
_____no_output_____
###Markdown
Main code
###Code
# Load data
original_data, X, y, X_col_names = load_data()
# Set up results DataFrame
results = pd.DataFrame()
###Output
_____no_output_____
###Markdown
Fitting classification model to raw data
###Code
# Set number of replicate runs
number_of_runs = 5
# Set up lists for results
accuracy_measure_names = []
accuracy_measure_data = []
for run in range(number_of_runs):
# Print progress
print (run + 1, end=' ')
# Split training and test set
X_train, X_test, y_train, y_test = split_into_train_test(X, y)
# Standardise data
X_train_std, X_test_std = standardise_data(X_train, X_test)
# Get accuracy of fitted model
accuracy = fit_and_test_logistic_regression_model(
X_train_std, X_test_std, y_train, y_test)
# Get accuracy measure names if not previously done
if len(accuracy_measure_names) == 0:
for key, value in accuracy.items():
accuracy_measure_names.append(key)
# Get accuracy values
run_accuracy_results = []
for key, value in accuracy.items():
run_accuracy_results.append(value)
# Add results to results list
accuracy_measure_data.append(run_accuracy_results)
# Store mean and sem in results DataFrame
accuracy_array = np.array(accuracy_measure_data)
results['raw_mean'] = accuracy_array.mean(axis=0)
results['raw_sem'] = accuracy_array.std(axis=0)/np.sqrt(number_of_runs)
results.index = accuracy_measure_names
###Output
1 2 3 4 5
###Markdown
Fitting classification model to synthetic data
###Code
# Set number of replicate runs
number_of_runs = 5
# Set up lists for results
accuracy_measure_names = []
accuracy_measure_data = []
synthetic_data = []
for run in range(number_of_runs):
# Get synthetic data
X_synthetic, y_synthetic = make_synthetic_data_smote(
X, y, number_of_samples=1000)
# Print progress
print (run + 1, end=' ')
# Split training and test set
X_train, X_test, y_train, y_test = split_into_train_test(X, y)
# Standardise data (using synthetic data)
X_train_std, X_test_std = standardise_data(X_synthetic, X_test)
# Get accuracy of fitted model
accuracy = fit_and_test_logistic_regression_model(
X_train_std, X_test_std, y_synthetic, y_test)
# Get accuracy measure names if not previously done
if len(accuracy_measure_names) == 0:
for key, value in accuracy.items():
accuracy_measure_names.append(key)
# Get accuracy values
run_accuracy_results = []
for key, value in accuracy.items():
run_accuracy_results.append(value)
# Add results to results list
accuracy_measure_data.append(run_accuracy_results)
# Save synthetic data set
# -----------------------
# Create a data frame with id
synth_df = pd.DataFrame()
# Transfer X values to DataFrame
synth_df=pd.concat([synth_df,
pd.DataFrame(X_synthetic, columns=X_col_names)],
axis=1)
# Add a label
y_list = list(y_synthetic)
synth_df['label'] = y_list
# Shuffle data
synth_df = synth_df.sample(frac=1.0)
# Add to synthetic data results list
synthetic_data.append(synth_df)
# Store mean and sem in results DataFrame
accuracy_array = np.array(accuracy_measure_data)
results['smote_mean'] = accuracy_array.mean(axis=0)
results['smote_sem'] = accuracy_array.std(axis=0)/np.sqrt(number_of_runs)
###Output
1 2 3 4 5
###Markdown
Show results
###Code
results
###Output
_____no_output_____
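###Markdown
A quick visual comparison (an optional sketch, not in the original notebook) of the two models can be drawn straight from the results table.
###Code
# Bar chart comparing a few headline measures for raw vs SMOTE-trained models
results.loc[['accuracy', 'sensitivity', 'specificity'],
            ['raw_mean', 'smote_mean']].plot.bar(figsize=(8, 5))
plt.ylabel('Score')
plt.show()
###Output
_____no_output_____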
###Markdown
Compare raw and synthetic data means and standard deviations
###Code
descriptive_stats_all_runs = []
for run in range(number_of_runs):
synth_df = synthetic_data[run]
descriptive_stats = pd.DataFrame()
descriptive_stats['Original pos_label mean'] = \
original_data[original_data['label'] == 1].mean()
descriptive_stats['Synthetic pos_label mean'] = \
synth_df[synth_df['label'] == 1].mean()
descriptive_stats['Original neg_label mean'] = \
original_data[original_data['label'] == 0].mean()
descriptive_stats['Synthetic neg_label mean'] = \
synth_df[synth_df['label'] == 0].mean()
descriptive_stats['Original pos_label std'] = \
original_data[original_data['label'] == 1].std()
descriptive_stats['Synthetic pos_label std'] = \
synth_df[synth_df['label'] == 1].std()
descriptive_stats['Original neg_label std'] = \
original_data[original_data['label'] == 0].std()
descriptive_stats['Synthetic neg_label std'] = \
synth_df[synth_df['label'] == 0].std()
descriptive_stats_all_runs.append(descriptive_stats)
colours = ['k', 'b', 'g', 'r', 'y', 'c', 'm']
fig = plt.figure(figsize=(10,10))
# Negative label mean
ax1 = fig.add_subplot(221)
for run in range(number_of_runs):
x = descriptive_stats_all_runs[0]['Original neg_label mean'].copy()
y = descriptive_stats_all_runs[run]['Synthetic neg_label mean'].copy()
x.drop(labels ='label', inplace=True)
y.drop(labels ='label', inplace=True)
colour = colours[run % 7] # Cycle through 7 colours
ax1.scatter(x,y, color=colour, alpha=0.5)
ax1.set_xlabel('Original data')
ax1.set_ylabel('Synthetic data')
ax1.set_title('Negative label samples mean')
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.grid()
# Positive label mean
ax2 = fig.add_subplot(222)
for run in range(number_of_runs):
x = descriptive_stats_all_runs[0]['Original pos_label mean'].copy()
y = descriptive_stats_all_runs[run]['Synthetic pos_label mean'].copy()
x.drop(labels ='label', inplace=True)
y.drop(labels ='label', inplace=True)
colour = colours[run % 7] # Cycle through 7 colours
ax2.scatter(x,y, color=colour, alpha=0.5)
ax2.set_xlabel('Original data')
ax2.set_ylabel('Synthetic data')
ax2.set_title('Positive label samples mean')
ax2.set_xscale('log')
ax2.set_yscale('log')
ax2.grid()
# Negative label standard deviation
ax3 = fig.add_subplot(223)
for run in range(number_of_runs):
x = descriptive_stats_all_runs[0]['Original neg_label std'].copy()
y = descriptive_stats_all_runs[run]['Synthetic neg_label std'].copy()
x.drop(labels ='label', inplace=True)
y.drop(labels ='label', inplace=True)
colour = colours[run % 7] # Cycle through 7 colours
ax3.scatter(x,y, color=colour, alpha=0.5)
ax3.set_xlabel('Original data')
ax3.set_ylabel('Synthetic data')
ax3.set_title('Negative label standard deviation')
ax3.set_xscale('log')
ax3.set_yscale('log')
ax3.grid()
# Positive label standard deviation
ax4 = fig.add_subplot(224)
for run in range(number_of_runs):
x = descriptive_stats_all_runs[0]['Original pos_label std'].copy()
y = descriptive_stats_all_runs[run]['Synthetic pos_label std'].copy()
x.drop(labels ='label', inplace=True)
y.drop(labels ='label', inplace=True)
colour = colours[run % 7] # Cycle through 7 colours
ax4.scatter(x,y, color=colour, alpha=0.5)
ax4.set_xlabel('Original data')
ax4.set_ylabel('Synthetic data')
ax4.set_title('Positive label standard deviation')
ax4.set_xscale('log')
ax4.set_yscale('log')
ax4.grid()
plt.tight_layout(pad=2)
plt.savefig('Output/smote_correls.png', facecolor='w', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Calculate correlations between means and standard deviations for negative and positive classes.
###Code
correl_mean_neg = []
correl_std_neg = []
correl_mean_pos = []
correl_std_pos = []
for run in range(number_of_runs):
# Get correlation of means
x = descriptive_stats_all_runs[run]['Original neg_label mean']
y = descriptive_stats_all_runs[run]['Synthetic neg_label mean']
correl_mean_neg.append(np.corrcoef(x,y)[0,1])
x = descriptive_stats_all_runs[run]['Original pos_label mean']
y = descriptive_stats_all_runs[run]['Synthetic pos_label mean']
correl_mean_pos.append(np.corrcoef(x,y)[0,1])
# Get correlation of standard deviations
x = descriptive_stats_all_runs[run]['Original neg_label std']
y = descriptive_stats_all_runs[run]['Synthetic neg_label std']
correl_std_neg.append(np.corrcoef(x,y)[0,1])
x = descriptive_stats_all_runs[run]['Original pos_label std']
y = descriptive_stats_all_runs[run]['Synthetic pos_label std']
correl_std_pos.append(np.corrcoef(x,y)[0,1])
# Get correlation of means
mean_r_square_mean_neg = np.mean(np.square(correl_mean_neg))
mean_r_square_mean_pos = np.mean(np.square(correl_mean_pos))
sem_square_mean_neg = np.std(np.square(correl_mean_neg))/np.sqrt(number_of_runs)
sem_square_mean_pos = np.std(np.square(correl_mean_pos))/np.sqrt(number_of_runs)
print ('R-square of means (negative), mean (sem): ', end='')
print (f'{mean_r_square_mean_neg:0.3f} ({sem_square_mean_neg:0.3f})')
print ('R-square of means (positive), mean (sem): ', end='')
print (f'{mean_r_square_mean_pos:0.3f} ({sem_square_mean_pos:0.3f})')
# Get correlation of standard deviations
mean_r_square_sd_neg = np.mean(np.square(correl_std_neg))
mean_r_square_sd_pos = np.mean(np.square(correl_std_pos))
sem_square_sd_neg = np.std(np.square(correl_std_neg))/np.sqrt(number_of_runs)
sem_square_sd_pos = np.std(np.square(correl_std_pos))/np.sqrt(number_of_runs)
print ('R-square of standard deviations (negative), mean (sem): ', end='')
print (f'{mean_r_square_sd_neg:0.3f} ({sem_square_sd_neg:0.3f})')
print ('R-square of standard deviations (positive), mean (sem): ', end='')
print (f'{mean_r_square_sd_pos:0.3f} ({sem_square_sd_pos:0.3f})')
###Output
R-square of means (negative), mean (sem): 1.000 (0.000)
R-square of means (positive), mean (sem): 1.000 (0.000)
R-square of standard deviations (negative), mean (sem): 1.000 (0.000)
R-square of standard deviations (positive), mean (sem): 1.000 (0.000)
###Markdown
Single run example
###Code
descriptive_stats_all_runs[0]
###Output
_____no_output_____
###Markdown
Correlation between featuresHere we calculate a correlation matric between all features for original and synthetic data.
###Code
neg_correlation_original = []
neg_correlation_synthetic = []
pos_correlation_original = []
pos_correlation_synthetic = []
correl_coeff_neg = []
correl_coeff_pos= []
# Original data
mask = original_data['label'] == 0
neg_o = original_data[mask].copy()
neg_o.drop('label', axis=1, inplace=True)
neg_correlation_original = neg_o.corr().values.flatten()
mask = original_data['label'] == 1
pos_o = original_data[mask].copy()
pos_o.drop('label', axis=1, inplace=True)
pos_correlation_original = pos_o.corr().values.flatten()
# Synthetic data
for i in range (number_of_runs):
data_s = synthetic_data[i]
mask = data_s['label'] == 0
neg_s = data_s[mask].copy()
neg_s.drop('label', axis=1, inplace=True)
corr_neg_s = neg_s.corr().values.flatten()
neg_correlation_synthetic.append(corr_neg_s)
mask = data_s['label'] == 1
pos_s = data_s[mask].copy()
pos_s.drop('label', axis=1, inplace=True)
corr_pos_s = pos_s.corr().values.flatten()
pos_correlation_synthetic.append(corr_pos_s)
# Get correlation coefficients
correl_coeff_neg.append(np.corrcoef(
neg_correlation_original, corr_neg_s)[0,1])
correl_coeff_pos.append(np.corrcoef(
pos_correlation_original, corr_pos_s)[0,1])
colours = ['k', 'b', 'g', 'r', 'y', 'c', 'm']
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
for run in range(number_of_runs):
colour = colours[run % 7] # Cycle through 7 colours
ax1.scatter(
neg_correlation_original,
neg_correlation_synthetic[run],
color=colour,
alpha=0.25)
ax1.grid()
ax1.set_xlabel('Original data correlation')
ax1.set_ylabel('Synthetic data correlation')
ax1.set_title('Negative label samples correlation of features')
ax2 = fig.add_subplot(122)
for run in range(number_of_runs):
colour = colours[run % 7] # Cycle through 7 colours
ax2.scatter(
pos_correlation_original,
pos_correlation_synthetic[run],
color=colour,
alpha=0.25)
ax2.grid()
ax2.set_xlabel('Original data correlation')
ax2.set_ylabel('Synthetic data correlation')
ax2.set_title('Positive label samples correlation of features')
plt.tight_layout(pad=2)
plt.savefig('Output/smote_cov.png', facecolor='w', dpi=300)
plt.show()
r_square_neg_mean = np.mean(np.square(correl_coeff_neg))
r_square_pos_mean = np.mean(np.square(correl_coeff_pos))
r_square_neg_sem = np.std(np.square(correl_coeff_neg))/np.sqrt(number_of_runs)
r_square_pos_sem = np.std(np.square(correl_coeff_pos))/np.sqrt(number_of_runs)
print ('Correlation of correlations (negative), mean (sem): ', end='')
print (f'{r_square_neg_mean:0.3f} ({r_square_neg_sem:0.3f})')
print ('Correlation of correlations (positive), mean (sem): ', end = '')
print (f'{r_square_pos_mean:0.3f} ({r_square_pos_sem:0.3f})')
###Output
Correlation of correlations (negative), mean (sem): 0.984 (0.002)
Correlation of correlations (positive), mean (sem): 0.981 (0.002)
|
doc/source/visualizing/Volume_Rendering_Tutorial.ipynb | ###Markdown
Volume Rendering Tutorial This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. The tutorial proceeds in the following steps: 1. [Creating the Scene](1.-Creating-the-Scene)2. [Displaying the Scene](2.-Displaying-the-Scene)3. [Adjusting Transfer Functions](3.-Adjusting-Transfer-Functions)4. [Saving an Image](4.-Saving-an-Image)5. [Adding Annotations](5.-Adding-Annotations) 1. Creating the Scene To begin, we load up a dataset and use the `yt.create_scene` method to set up a basic Scene. We store the Scene in a variable called `sc` and render the default `('gas', 'density')` field.
###Code
import yt
import numpy as np
from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper
from yt.visualization.volume_rendering.api import Scene, VolumeSource
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
###Output
_____no_output_____
###Markdown
Note that to render a different field, we would pass the field name to `yt.create_scene` using the `field` argument. Now we can look at some information about the Scene we just created using the python print keyword:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
###Code
print (sc.get_source())
###Output
_____no_output_____
###Markdown
2. Displaying the Scene We can see that the `yt.create_source` method has created a `VolumeSource` with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling `sc.show()`.
###Code
sc.show()
###Output
_____no_output_____
###Markdown
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
###Code
sc.camera.zoom(3.0)
###Output
_____no_output_____
###Markdown
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call `sc.show()` here - we can just have Ipython evaluate the Scene and that will display it automatically.
###Code
sc.render()
sc
###Output
_____no_output_____
###Markdown
That's better! The image looks a little washed-out though, so we use the `sigma_clip` argument to `sc.show()` to improve the contrast:
###Code
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Applying different values of `sigma_clip` with `sc.show()` is a relatively fast process because `sc.show()` will pull the most recently rendered image and apply the contrast adjustment without rendering the scene again. While this is useful for quickly testing the effect of different values of `sigma_clip`, it can lead to confusion if we don't remember to render after making changes to the camera. For example, if we zoom in again and simply call `sc.show()`, then we get the same image as before:
###Code
sc.camera.zoom(3.0)
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
For the change to the camera to take effect, we have to explicitly render again:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
As a general rule, any changes to the scene itself such as adjusting the camera or changing transfer functions requires rendering again. Before moving on, let's undo the last zoom:
###Code
sc.camera.zoom(1./3.0)
###Output
_____no_output_____
###Markdown
3. Adjusting Transfer FunctionsNext, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the `gist_rainbow` colormap, and then re-create the image as follows:
###Code
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field('density')
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap='gist_rainbow')
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source()
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
###Code
cam = sc.add_camera(ds, lens_type='perspective')
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')
normal_vector = [1., 0., 0.]
north_vector = [0., 0., 1.]
cam.switch_orientation(normal_vector=normal_vector,
north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print (sc.camera)
###Output
_____no_output_____
###Markdown
The resulting image looks like:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
4. Saving an ImageTo save a volume rendering to an image file at any point, we can use `sc.save` as follows:
###Code
sc.save('volume_render.png',render=False)
###Output
_____no_output_____
###Markdown
Including the keyword argument `render=False` indicates that the most recently rendered image will be saved (otherwise, `sc.save()` will trigger a call to `sc.render()`). This behavior differs from `sc.show()`, which always uses the most recently rendered image. An additional caveat is that if we used `sigma_clip` in our call to `sc.show()`, then we must **also** pass it to `sc.save()` as sigma clipping is applied on top of a rendered image array. In that case, we would do the following:
###Code
sc.save('volume_render_clip4.png',sigma_clip=4.0,render=False)
###Output
_____no_output_____
###Markdown
5. Adding AnnotationsFinally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
###Code
# set the lens type back to plane-parallel
sc.camera.set_lens('plane-parallel')
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. To begin, we load up a dataset and use the yt.create_scene method to set up a basic Scene. We store the Scene in a variable called 'sc' and render the default ('gas', 'density') field.
###Code
import yt
import numpy as np
from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper
from yt.visualization.volume_rendering.api import Scene, VolumeSource
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
###Output
_____no_output_____
###Markdown
Now we can look at some information about the Scene we just created using the python print keyword:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
###Code
print (sc.get_source(0))
###Output
_____no_output_____
###Markdown
We can see that the yt.create_source has created a VolumeSource with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling sc.show().
###Code
sc.show()
###Output
_____no_output_____
###Markdown
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
###Code
sc.camera.zoom(3.0)
###Output
_____no_output_____
###Markdown
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call sc.show() here - we can just have Ipython evaluate the Scene and that will display it automatically.
###Code
sc.render()
sc
###Output
_____no_output_____
###Markdown
That's better! The image looks a little washed-out though, so we use the sigma_clip argument to sc.show() to improve the contrast:
###Code
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Next, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the "gist_rainbow" colormap, and then re-create the image as follows:
###Code
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field('density')
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap='gist_rainbow')
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source(0)
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
###Code
cam = sc.add_camera(ds, lens_type='perspective')
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')
normal_vector = [1., 0., 0.]
north_vector = [0., 0., 1.]
cam.switch_orientation(normal_vector=normal_vector,
north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print (sc.camera)
###Output
_____no_output_____
###Markdown
The resulting image looks like:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Finally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
###Code
# set the lens type back to plane-parallel
sc.camera.set_lens('plane-parallel')
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Volume Rendering Tutorial This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. The tutorial proceeds in the following steps: 1. [Creating the Scene](1.-Creating-the-Scene)2. [Displaying the Scene](2.-Displaying-the-Scene)3. [Adjusting Transfer Functions](3.-Adjusting-Transfer-Functions)4. [Saving an Image](4.-Saving-an-Image)5. [Adding Annotations](5.-Adding-Annotations) 1. Creating the Scene To begin, we load up a dataset and use the `yt.create_scene` method to set up a basic Scene. We store the Scene in a variable called `sc` and render the default `('gas', 'density')` field.
###Code
import yt
import numpy as np
from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
###Output
_____no_output_____
###Markdown
Note that to render a different field, we would pass the field name to `yt.create_scene` using the `field` argument. Now we can look at some information about the Scene we just created using the python print keyword:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
###Code
print (sc.get_source())
###Output
_____no_output_____
###Markdown
2. Displaying the Scene We can see that the `yt.create_source` method has created a `VolumeSource` with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling `sc.show()`.
###Code
sc.show()
###Output
_____no_output_____
###Markdown
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
###Code
sc.camera.zoom(3.0)
###Output
_____no_output_____
###Markdown
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call `sc.show()` here - we can just have Ipython evaluate the Scene and that will display it automatically.
###Code
sc.render()
sc
###Output
_____no_output_____
###Markdown
That's better! The image looks a little washed-out though, so we use the `sigma_clip` argument to `sc.show()` to improve the contrast:
###Code
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Applying different values of `sigma_clip` with `sc.show()` is a relatively fast process because `sc.show()` will pull the most recently rendered image and apply the contrast adjustment without rendering the scene again. While this is useful for quickly testing the effect of different values of `sigma_clip`, it can lead to confusion if we don't remember to render after making changes to the camera. For example, if we zoom in again and simply call `sc.show()`, then we get the same image as before:
###Code
sc.camera.zoom(3.0)
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
For the change to the camera to take effect, we have to explicitly render again:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
As a general rule, any changes to the scene itself such as adjusting the camera or changing transfer functions requires rendering again. Before moving on, let's undo the last zoom:
###Code
sc.camera.zoom(1./3.0)
###Output
_____no_output_____
###Markdown
3. Adjusting Transfer FunctionsNext, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the `gist_rainbow` colormap, and then re-create the image as follows:
###Code
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field('density')
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap='gist_rainbow')
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source()
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
###Code
cam = sc.add_camera(ds, lens_type='perspective')
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')
normal_vector = [1., 0., 0.]
north_vector = [0., 0., 1.]
cam.switch_orientation(normal_vector=normal_vector,
north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print (sc.camera)
###Output
_____no_output_____
###Markdown
The resulting image looks like:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
4. Saving an ImageTo save a volume rendering to an image file at any point, we can use `sc.save` as follows:
###Code
sc.save('volume_render.png',render=False)
###Output
_____no_output_____
###Markdown
Including the keyword argument `render=False` indicates that the most recently rendered image will be saved (otherwise, `sc.save()` will trigger a call to `sc.render()`). This behavior differs from `sc.show()`, which always uses the most recently rendered image. An additional caveat is that if we used `sigma_clip` in our call to `sc.show()`, then we must **also** pass it to `sc.save()` as sigma clipping is applied on top of a rendered image array. In that case, we would do the following:
###Code
sc.save('volume_render_clip4.png',sigma_clip=4.0,render=False)
###Output
_____no_output_____
###Markdown
5. Adding AnnotationsFinally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
###Code
# set the lens type back to plane-parallel
sc.camera.set_lens('plane-parallel')
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. To begin, we load up a dataset and use the yt.create_scene method to set up a basic Scene. We store the Scene in a variable called 'sc' and render the default ('gas', 'density') field.
###Code
import yt
import numpy as np
from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper
from yt.visualization.volume_rendering.api import Scene, VolumeSource
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
###Output
_____no_output_____
###Markdown
Now we can look at some information about the Scene we just created using the python print keyword:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
###Code
print (sc.get_source())
###Output
_____no_output_____
###Markdown
We can see that the yt.create_source has created a VolumeSource with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling sc.show().
###Code
sc.show()
###Output
_____no_output_____
###Markdown
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
###Code
sc.camera.zoom(3.0)
###Output
_____no_output_____
###Markdown
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call sc.show() here - we can just have Ipython evaluate the Scene and that will display it automatically.
###Code
sc.render()
sc
###Output
_____no_output_____
###Markdown
That's better! The image looks a little washed-out though, so we use the sigma_clip argument to sc.show() to improve the contrast:
###Code
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Next, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the "gist_rainbow" colormap, and then re-create the image as follows:
###Code
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field('density')
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap='gist_rainbow')
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source()
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
###Code
cam = sc.add_camera(ds, lens_type='perspective')
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')
normal_vector = [1., 0., 0.]
north_vector = [0., 0., 1.]
cam.switch_orientation(normal_vector=normal_vector,
north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print (sc.camera)
###Output
_____no_output_____
###Markdown
The resulting image looks like:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Finally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
###Code
# set the lens type back to plane-parallel
sc.camera.set_lens('plane-parallel')
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Volume Rendering Tutorial This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. The tutorial proceeds in the following steps: 1. [Creating the Scene](1.-Creating-the-Scene)2. [Displaying the Scene](2.-Displaying-the-Scene)3. [Adjusting Transfer Functions](3.-Adjusting-Transfer-Functions)4. [Saving an Image](4.-Saving-an-Image)5. [Adding Annotations](5.-Adding-Annotations) 1. Creating the Scene To begin, we load up a dataset and use the `yt.create_scene` method to set up a basic Scene. We store the Scene in a variable called `sc` and render the default `('gas', 'density')` field.
###Code
import yt
import numpy as np
from yt.visualization.volume_rendering.transfer_function_helper import TransferFunctionHelper
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
###Output
_____no_output_____
###Markdown
Note that to render a different field, we would pass the field name to `yt.create_scene` using the `field` argument. Now we can look at some information about the Scene we just created using the python print keyword:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
###Code
print (sc.get_source())
###Output
_____no_output_____
###Markdown
2. Displaying the Scene We can see that the `yt.create_source` method has created a `VolumeSource` with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling `sc.show()`.
###Code
sc.show()
###Output
_____no_output_____
###Markdown
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
###Code
sc.camera.zoom(3.0)
###Output
_____no_output_____
###Markdown
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
###Code
print (sc)
###Output
_____no_output_____
###Markdown
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call `sc.show()` here - we can just have Ipython evaluate the Scene and that will display it automatically.
###Code
sc.render()
sc
###Output
_____no_output_____
###Markdown
That's better! The image looks a little washed-out though, so we use the `sigma_clip` argument to `sc.show()` to improve the contrast:
###Code
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Applying different values of `sigma_clip` with `sc.show()` is a relatively fast process because `sc.show()` will pull the most recently rendered image and apply the contrast adjustment without rendering the scene again. While this is useful for quickly testing the effect of different values of `sigma_clip`, it can lead to confusion if we don't remember to render after making changes to the camera. For example, if we zoom in again and simply call `sc.show()`, then we get the same image as before:
###Code
sc.camera.zoom(3.0)
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
For the change to the camera to take effect, we have to explicitly render again:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
As a general rule, any changes to the scene itself such as adjusting the camera or changing transfer functions requires rendering again. Before moving on, let's undo the last zoom:
###Code
sc.camera.zoom(1./3.0)
###Output
_____no_output_____
###Markdown
3. Adjusting Transfer FunctionsNext, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the `gist_rainbow` colormap, and then re-create the image as follows:
###Code
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field('density')
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap='gist_rainbow')
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source()
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
###Code
cam = sc.add_camera(ds, lens_type='perspective')
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], 'code_length')
normal_vector = [1., 0., 0.]
north_vector = [0., 0., 1.]
cam.switch_orientation(normal_vector=normal_vector,
north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print (sc.camera)
###Output
_____no_output_____
###Markdown
The resulting image looks like:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
4. Saving an ImageTo save a volume rendering to an image file at any point, we can use `sc.save` as follows:
###Code
sc.save('volume_render.png',render=False)
###Output
_____no_output_____
###Markdown
Including the keyword argument `render=False` indicates that the most recently rendered image will be saved (otherwise, `sc.save()` will trigger a call to `sc.render()`). This behavior differs from `sc.show()`, which always uses the most recently rendered image. An additional caveat is that if we used `sigma_clip` in our call to `sc.show()`, then we must **also** pass it to `sc.save()` as sigma clipping is applied on top of a rendered image array. In that case, we would do the following:
###Code
sc.save('volume_render_clip4.png',sigma_clip=4.0,render=False)
###Output
_____no_output_____
###Markdown
5. Adding AnnotationsFinally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
###Code
# set the lens type back to plane-parallel
sc.camera.set_lens('plane-parallel')
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Volume Rendering Tutorial This notebook shows how to use the new (in version 3.3) Scene interface to create custom volume renderings. The tutorial proceeds in the following steps: 1. [Creating the Scene](1.-Creating-the-Scene)2. [Displaying the Scene](2.-Displaying-the-Scene)3. [Adjusting Transfer Functions](3.-Adjusting-Transfer-Functions)4. [Saving an Image](4.-Saving-an-Image)5. [Adding Annotations](5.-Adding-Annotations) 1. Creating the Scene To begin, we load up a dataset and use the `yt.create_scene` method to set up a basic Scene. We store the Scene in a variable called `sc` and render the default `('gas', 'density')` field.
###Code
import yt
from yt.visualization.volume_rendering.transfer_function_helper import (
TransferFunctionHelper,
)
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
sc = yt.create_scene(ds)
###Output
_____no_output_____
###Markdown
Note that to render a different field, we would pass the field name to `yt.create_scene` using the `field` argument. Now we can look at some information about the Scene we just created using the python print keyword:
###Code
print(sc)
###Output
_____no_output_____
###Markdown
This prints out information about the Sources, Camera, and Lens associated with this Scene. Each of these can also be printed individually. For example, to print only the information about the first (and currently, only) Source, we can do:
###Code
print(sc.get_source())
###Output
_____no_output_____
###Markdown
2. Displaying the Scene We can see that the `yt.create_source` method has created a `VolumeSource` with default values for the center, bounds, and transfer function. Now, let's see what this Scene looks like. In the notebook, we can do this by calling `sc.show()`.
###Code
sc.show()
###Output
_____no_output_____
###Markdown
That looks okay, but it's a little too zoomed-out. To fix this, let's modify the Camera associated with our Scene. This next bit of code will zoom in the camera (i.e. decrease the width of the view) by a factor of 3.
###Code
sc.camera.zoom(3.0)
###Output
_____no_output_____
###Markdown
Now when we print the Scene, we see that the Camera width has decreased by a factor of 3:
###Code
print(sc)
###Output
_____no_output_____
###Markdown
To see what this looks like, we re-render the image and display the scene again. Note that we don't actually have to call `sc.show()` here - we can just have Ipython evaluate the Scene and that will display it automatically.
###Code
sc.render()
sc
###Output
_____no_output_____
###Markdown
That's better! The image looks a little washed-out though, so we use the `sigma_clip` argument to `sc.show()` to improve the contrast:
###Code
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
Applying different values of `sigma_clip` with `sc.show()` is a relatively fast process because `sc.show()` will pull the most recently rendered image and apply the contrast adjustment without rendering the scene again. While this is useful for quickly testing the effect of different values of `sigma_clip`, it can lead to confusion if we don't remember to render after making changes to the camera. For example, if we zoom in again and simply call `sc.show()`, then we get the same image as before:
###Code
sc.camera.zoom(3.0)
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
For the change to the camera to take effect, we have to explicitly render again:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
As a general rule, any changes to the scene itself such as adjusting the camera or changing transfer functions requires rendering again. Before moving on, let's undo the last zoom:
###Code
sc.camera.zoom(1.0 / 3.0)
###Output
_____no_output_____
###Markdown
3. Adjusting Transfer FunctionsNext, we demonstrate how to change the mapping between the field values and the colors in the image. We use the TransferFunctionHelper to create a new transfer function using the `gist_rainbow` colormap, and then re-create the image as follows:
###Code
# Set up a custom transfer function using the TransferFunctionHelper.
# We use 10 Gaussians evenly spaced logarithmically between the min and max
# field values.
tfh = TransferFunctionHelper(ds)
tfh.set_field("density")
tfh.set_log(True)
tfh.set_bounds()
tfh.build_transfer_function()
tfh.tf.add_layers(10, colormap="gist_rainbow")
# Grab the first render source and set it to use the new transfer function
render_source = sc.get_source()
render_source.transfer_function = tfh.tf
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
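###Markdown
As an aside (not part of the original tutorial), the helper can also plot the transfer function itself; assuming your yt version provides `TransferFunctionHelper.plot`, something like the following shows the Gaussian layers against a density profile.
###Code
# Sketch only: visualize the transfer function built above
tfh.plot(profile_field="density")
###Output
_____no_output_____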
###Markdown
Now, let's try using a different lens type. We can give a sense of depth to the image by using the perspective lens. To do so, we create a new Camera below. We also demonstrate how to switch the camera to a new position and orientation.
###Code
cam = sc.add_camera(ds, lens_type="perspective")
# Standing at (x=0.05, y=0.5, z=0.5), we look at the area of x>0.05 (with some open angle
# specified by camera width) along the positive x direction.
cam.position = ds.arr([0.05, 0.5, 0.5], "code_length")
normal_vector = [1.0, 0.0, 0.0]
north_vector = [0.0, 0.0, 1.0]
cam.switch_orientation(normal_vector=normal_vector, north_vector=north_vector)
# The width determines the opening angle
cam.set_width(ds.domain_width * 0.5)
print(sc.camera)
###Output
_____no_output_____
###Markdown
The resulting image looks like:
###Code
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
4. Saving an ImageTo save a volume rendering to an image file at any point, we can use `sc.save` as follows:
###Code
sc.save("volume_render.png", render=False)
###Output
_____no_output_____
###Markdown
Including the keyword argument `render=False` indicates that the most recently rendered image will be saved (otherwise, `sc.save()` will trigger a call to `sc.render()`). This behavior differs from `sc.show()`, which always uses the most recently rendered image. An additional caveat is that if we used `sigma_clip` in our call to `sc.show()`, then we must **also** pass it to `sc.save()` as sigma clipping is applied on top of a rendered image array. In that case, we would do the following:
###Code
sc.save("volume_render_clip4.png", sigma_clip=4.0, render=False)
###Output
_____no_output_____
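###Markdown
If it is unclear which contrast level works best, one option (a sketch, not part of the original tutorial; the filenames are arbitrary) is to save the same rendered image at several `sigma_clip` values and compare them.
###Code
# Reuse the existing rendering and save it with a few different clip values
for clip in [2.0, 4.0, 6.0]:
    sc.save(f"volume_render_clip{clip}.png", sigma_clip=clip, render=False)
###Output
_____no_output_____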
###Markdown
5. Adding AnnotationsFinally, the next cell restores the lens and the transfer function to the defaults, moves the camera, and adds an opaque source that shows the axes of the simulation coordinate system.
###Code
# set the lens type back to plane-parallel
sc.camera.set_lens("plane-parallel")
# move the camera to the left edge of the domain
sc.camera.set_position(ds.domain_left_edge)
sc.camera.switch_orientation()
# add an opaque source to the scene
sc.annotate_axes()
sc.render()
sc.show(sigma_clip=4.0)
###Output
_____no_output_____ |
RecSys-Content-Based-movies-py-v1.ipynb | ###Markdown
CONTENT-BASED FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous, and can be commonly seen in online stores, movie databases and job finders. In this notebook, we will explore Content-based recommendation systems and implement a simple version of one using Python and the Pandas library. Table of contents Acquiring the Data Preprocessing Content-Based Filtering Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
print('unziping ...')
!unzip -o -j moviedataset.zip
###Output
--2019-05-19 16:24:24-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.193
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.193|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 160301210 (153M) [application/zip]
Saving to: ‘moviedataset.zip’
moviedataset.zip 100%[===================>] 152.88M 28.8MB/s in 5.0s
2019-05-19 16:24:30 (30.5 MB/s) - ‘moviedataset.zip’ saved [160301210/160301210]
unziping ...
Archive: moviedataset.zip
inflating: links.csv
inflating: movies.csv
inflating: ratings.csv
inflating: README.txt
inflating: tags.csv
###Markdown
Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
#Dataframe manipulation library
import pandas as pd
#Math functions, we'll only need the sqrt function so let's import only that
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
#Storing the movie information into a pandas dataframe
movies_df = pd.read_csv('movies.csv')
#Storing the user information into a pandas dataframe
ratings_df = pd.read_csv('ratings.csv')
#Head is a function that gets the first N rows of a dataframe. N's default is 5.
movies_df.head()
###Output
_____no_output_____
###Markdown
Let's also remove the year from the __title__ column by using pandas' replace function and store it in a new __year__ column.
###Code
#Using regular expressions to find a year stored between parentheses
#We specify the parentheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract(r'(\(\d\d\d\d\))', expand=False)
#Removing the parentheses
movies_df['year'] = movies_df.year.str.extract(r'(\d\d\d\d)', expand=False)
#Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace(r'(\(\d\d\d\d\))', '')
#Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
movies_df.head()
###Output
_____no_output_____
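###Markdown
(Aside: a minimal sketch of the two-step year extraction above, applied to a single hypothetical title so the regex behaviour is explicit. `sample`, `with_parens` and `year_only` are throwaway names introduced only for this illustration.)
###Code
sample = pd.Series(['Toy Story (1995)'])
with_parens = sample.str.extract(r'(\(\d\d\d\d\))', expand=False)   # -> '(1995)'
year_only = with_parens.str.extract(r'(\d\d\d\d)', expand=False)    # -> '1995'
print(with_parens[0], year_only[0])
###Output
_____no_output_____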
###Markdown
With that, let's also split the values in the __Genres__ column into a __list of Genres__ to simplify future use. This can be achieved by applying Python's split string function on the correct column.
###Code
#Every genre is separated by a | so we simply have to call the split function on |
movies_df['genres'] = movies_df.genres.str.split('|')
movies_df.head()
###Output
_____no_output_____
###Markdown
Since keeping genres in a list format isn't optimal for the content-based recommendation system technique, we will use the One Hot Encoding technique to convert the list of genres to a vector where each column corresponds to one possible value of the feature. This encoding is needed for feeding categorical data. In this case, we store every different genre in columns that contain either 1 or 0. 1 shows that a movie has that genre and 0 shows that it doesn't. Let's also store this dataframe in another variable since genres won't be important for our first recommendation system.
###Code
#Copying the movie dataframe into a new one since we won't need to use the genre information in our first case.
moviesWithGenres_df = movies_df.copy()
#For every row in the dataframe, iterate through the list of genres and place a 1 into the corresponding column
for index, row in movies_df.iterrows():
for genre in row['genres']:
moviesWithGenres_df.at[index, genre] = 1
#Filling in the NaN values with 0 to show that a movie doesn't have that column's genre
moviesWithGenres_df = moviesWithGenres_df.fillna(0)
moviesWithGenres_df.head()
###Output
_____no_output_____
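###Markdown
(Aside: a minimal alternative sketch of the same one-hot encoding using pandas string helpers instead of an explicit loop. It assumes `movies_df['genres']` is the list-valued column created above; `genre_dummies` and `moviesWithGenres_alt` are names introduced here only for illustration.)
###Code
#Join each genre list back into a '|'-separated string, then expand it into 0/1 indicator columns
genre_dummies = movies_df['genres'].str.join('|').str.get_dummies(sep='|')
moviesWithGenres_alt = movies_df.join(genre_dummies)
moviesWithGenres_alt.head()
###Output
_____no_output_____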
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
#Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
ratings_df.head()
###Output
_____no_output_____
###Markdown
Content-Based recommendation system Now, let's take a look at how to implement __Content-Based__ or __Item-Item recommendation systems__. This technique attempts to figure out what a user's favourite aspects of an item is, and then recommends items that present those aspects. In our case, we're going to try to figure out the input's favorite genres from the movies and ratings given.Let's begin by creating an input user to recommend movies to:Notice: To add more movies, simply increase the amount of elements in the __userInput__. Feel free to add more in! Just be sure to write it in with capital letters and if a movie starts with a "The", like "The Matrix" then write it in like this: 'Matrix, The' .
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input userWith the input complete, let's extract the input movie's ID's from the movies dataframe and add them into it.We can achieve this by first filtering out the rows that contain the input movie's title and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
#Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
#Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
#Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('genres', 1).drop('year', 1)
#Final input dataframe
#If a movie you added in above isn't here, then it might not be in the original
#dataframe or it might be spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
We're going to start by learning the input's preferences, so let's get the subset of movies that the input has watched from the Dataframe containing genres defined with binary values.
###Code
#Filtering out the movies from the input
userMovies = moviesWithGenres_df[moviesWithGenres_df['movieId'].isin(inputMovies['movieId'].tolist())]
userMovies
###Output
_____no_output_____
###Markdown
We'll only need the actual genre table, so let's clean this up a bit by resetting the index and dropping the movieId, title, genres and year columns.
###Code
#Resetting the index to avoid future issues
userMovies = userMovies.reset_index(drop=True)
#Dropping unnecessary columns to save memory and to avoid issues
userGenreTable = userMovies.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
userGenreTable
###Output
_____no_output_____
###Markdown
Now we're ready to start learning the input's preferences!To do this, we're going to turn each genre into weights. We can do this by using the input's reviews and multiplying them into the input's genre table and then summing up the resulting table by column. This operation is actually a dot product between a matrix and a vector, so we can simply accomplish it by calling Pandas's "dot" function.
###Code
inputMovies['rating']
#Dot product to get weights
userProfile = userGenreTable.transpose().dot(inputMovies['rating'])
#The user profile
userProfile
###Output
_____no_output_____
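###Markdown
(Aside: a minimal toy check of the dot-product weighting described above, using a hypothetical two-genre, three-movie table. The names `toy_genres`, `toy_ratings` and `toy_profile` are introduced only for this illustration.)
###Code
toy_genres = pd.DataFrame({'Comedy': [1, 0, 1], 'Drama': [0, 1, 1]})
toy_ratings = pd.Series([5.0, 3.0, 4.0])
#Comedy weight = 5*1 + 3*0 + 4*1 = 9.0, Drama weight = 5*0 + 3*1 + 4*1 = 7.0
toy_profile = toy_genres.transpose().dot(toy_ratings)
toy_profile
###Output
_____no_output_____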
###Markdown
Now, we have the weights for each of the user's preferences. This is known as the User Profile. Using this, we can recommend movies that satisfy the user's preferences. Let's start by extracting the genre table from the original dataframe:
###Code
#Now let's get the genres of every movie in our original dataframe
genreTable = moviesWithGenres_df.set_index(moviesWithGenres_df['movieId'])
#And drop the unnecessary information
genreTable = genreTable.drop('movieId', 1).drop('title', 1).drop('genres', 1).drop('year', 1)
genreTable.head()
genreTable.shape
###Output
_____no_output_____
###Markdown
With the input's profile and the complete list of movies and their genres in hand, we're going to take the weighted average of every movie based on the input profile and recommend the top twenty movies that most satisfy it.
###Code
#Multiply the genres by the weights and then take the weighted average
recommendationTable_df = ((genreTable*userProfile).sum(axis=1))/(userProfile.sum())
recommendationTable_df.head()
#Sort our recommendations in descending order
recommendationTable_df = recommendationTable_df.sort_values(ascending=False)
#Just a peek at the values
recommendationTable_df.head()
###Output
_____no_output_____
###Markdown
Now here's the recommendation table!
###Code
#The final recommendation table
movies_df.loc[movies_df['movieId'].isin(recommendationTable_df.head(20).keys())]
###Output
_____no_output_____ |
Notebooks/.ipynb_checkpoints/spotifyGenreReport-checkpoint.ipynb | ###Markdown
1. Normalizing the Data - During EDA we did not find any particularly notable relationships between the different track features. - For the best results in model construction it may be best to simplify our data and reduce the dimensionality. - First we must standardize our data on a standard normal scale
###Code
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler()
spotify_features = spotify_df[["acousticness","danceability","duration_ms","energy","instrumentalness","key","liveness","loudness","mode","speechiness","tempo","time_signature","valence","popularity"]]
spotify_lables = spotify_df['category_id'].values
std_scaler.fit(spotify_features)
spotify_features_scaled = std_scaler.fit_transform(spotify_features)
from sklearn.decomposition import PCA
pca = PCA().fit(spotify_features_scaled)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.axhline(y=0.9, linestyle='--')
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
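# (Aside, a sketch under the assumption that we keep the 90% variance threshold
#  drawn above: the component count can also be chosen programmatically instead
#  of being read off the plot. `n_auto` is a hypothetical name used only here.)
n_auto = int(np.argmax(np.cumsum(pca.explained_variance_ratio_) >= 0.9)) + 1
print("components needed for >= 90% explained variance:", n_auto)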
n = 10
pca = PCA(n, random_state=10)
pca.fit(spotify_features_scaled)
spotify_df_pca = pca.transform(spotify_features_scaled)
seed = 58143581
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spotify_df_pca, spotify_lables, test_size = 0.2, random_state = seed)
from sklearn.linear_model import LogisticRegression
# Fit the Logistic regression model
logreg = LogisticRegression(penalty = 'none', random_state=seed,max_iter=3000)
logreg.fit(X_train, y_train)
logreg.score(X_test,y_test)
## use train-test split
seed = 58143581
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spotify_features_scaled,spotify_lables, test_size = 0.2, random_state = seed)
from sklearn.linear_model import LogisticRegression
# Fit the Logistic regression model
logreg = LogisticRegression(penalty = 'none', random_state=seed,max_iter=3000)
logreg.fit(X_train, y_train)
logreg.score(X_test,y_test)
train_set.shape
test_set.shape
## Standardization
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler()
#Define Features
track_feature_list = ["acousticness","danceability","duration_ms","energy","instrumentalness","key","liveness","loudness","mode","speechiness","tempo","time_signature","valence","popularity"]
traing_features = train_set[track_feature_list].values
testing_features = test_set[track_feature_list].values
traing_target = train_set['category_id'].values
testing_target = test_set['category_id'].values
##Standard scale (mean = 0, variance = 1)
std_scaler.fit(traing_features)
scaled_traing_features = std_scaler.transform(traing_features)
scaled_testing_features = std_scaler.transform(testing_features)
#Fitting a baseline logistic regression on the scaled features
from sklearn.linear_model import LogisticRegression
# Fit the Logistic regression model
logreg = LogisticRegression(penalty = 'none', random_state=seed,max_iter=3000)
logreg.fit(scaled_traing_features, traing_target)
logreg_prediction = logreg.predict(scaled_testing_features)
logreg.score(scaled_testing_features,testing_target)
from sklearn.metrics import plot_confusion_matrix
plt.figure(figsize=(15,8))
cm = plot_confusion_matrix(logreg, scaled_testing_features, testing_target)
fig, ax = plt.subplots(figsize=(15, 15))
plot_confusion_matrix(logreg, scaled_testing_features, testing_target, cmap=plt.cm.Blues, ax=ax)
ax.set_title("LogisticRegression Confusion Matrix")
#Support Vector Machine - Linear
from sklearn.svm import SVC
svm = SVC(kernel='linear')
svm.fit(scaled_traing_features, traing_target)
svm.score(scaled_testing_features, testing_target)
#Decision Tree
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier()
tree.fit(scaled_traing_features, traing_target)
tree.score(scaled_testing_features, testing_target)
#Neural Network
from sklearn.neural_network import MLPClassifier
nn = MLPClassifier(500)
nn.fit(scaled_traing_features, traing_target)
nn.score(scaled_testing_features, testing_target)
###Output
/home/seanr/anaconda3/lib/python3.8/site-packages/sklearn/neural_network/_multilayer_perceptron.py:582: ConvergenceWarning: Stochastic Optimizer: Maximum iterations (200) reached and the optimization hasn't converged yet.
warnings.warn(
|
notebooks/Classification_Models.ipynb | ###Markdown
Classification Models
###Code
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import pickle
import seaborn as sns
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegressionCV
from sklearn.linear_model import LogisticRegression
import sklearn.metrics as metrics
from sklearn.tree import DecisionTreeClassifier
from matplotlib.ticker import FuncFormatter
from sklearn.model_selection import cross_val_score
import collections
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
import sklearn.tree as tree
import collections
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Imputation and Dummy Creation
###Code
'''Read in our cleaned, aggregated data'''
plt.style.use('seaborn')
with open('aggregate_data.p', 'rb') as f:
data = pickle.load(f)
#Impute missings with mean by category. Missings occur because the Million Songs Database only goes up until 2010.
category_means = data.groupby('category').mean()
tmp = data.join(category_means, rsuffix = '_category_mean', on = 'category')
means = tmp[[x+'_category_mean' for x in data.columns if x not in ['category', 'featured','num_tracks','num_followers']]]
means.columns = [x.split('_category_mean')[0] for x in means.columns]
#fill with mean by category and replace with overall mean if no mean by category
data = data.fillna(means)
data = data.fillna(data.mean())
# create dummies for category variable
def create_categorical_vars(data):
categorical = ['category']
dummies = {}
for var in categorical:
dummies[var] = pd.get_dummies(data[var], prefix = var)
cols_to_keep = dummies[var].columns[0:len(dummies[var].columns)-1]
data = data.join(dummies[var][cols_to_keep])
data = data.drop(var, 1)
return data
raw_data = data
data = create_categorical_vars(data)
#split into train / test to avoid cheating
np.random.seed(1234)
train_pct = .5
msk = np.random.uniform(0,1,len(data)) < train_pct
train = data.loc[msk, :]
test = data.loc[~msk, :]
'''We take a peek at our new training dataset'''
train.head()
###Output
_____no_output_____
###Markdown
Create Classification Quintiles
###Code
'''We split our dependent var into quantiles for classification. We chose to use quintiles.'''
data['num_followers_quantile'] = pd.qcut(data['num_followers'], 5, labels=False)
quantiles = data['num_followers_quantile']
y_train = train['num_followers'].astype(float)
y_test = test['num_followers'].astype(float)
y_train_class = pd.concat([y_train, quantiles], axis=1, join_axes=[y_train.index]).drop('num_followers', axis = 1).values.ravel()
y_test_class = pd.concat([y_test, quantiles], axis=1, join_axes=[y_test.index]).drop('num_followers', axis = 1).values.ravel()
y_train_class_by_cat = raw_data.groupby('category')['num_followers'].apply(lambda x: pd.qcut(x, 3, labels = False)).loc[msk]
y_test_class_by_cat = raw_data.groupby('category')['num_followers'].apply(lambda x: pd.qcut(x, 3, labels = False)).loc[~msk]
###Output
_____no_output_____
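###Markdown
(Aside: a minimal sketch of what `pd.qcut` with `labels=False` does here, on a hypothetical follower-count series — it assigns each value the integer index of its quantile bin, so each bin ends up with roughly the same number of observations. `toy_followers` is a throwaway name.)
###Code
toy_followers = pd.Series([10, 50, 200, 1000, 5000, 20000])
pd.qcut(toy_followers, 3, labels=False)  # expected bins: 0, 0, 1, 1, 2, 2
###Output
_____no_output_____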
###Markdown
Standardize
###Code
'''Standardize numeric features'''
to_x_train = train[[x for x in train.columns if x != 'num_followers']]
to_x_test = test[[x for x in test.columns if x != 'num_followers']]
#Define continuous vars
continuous_variables = [x for x in to_x_train.columns if 'category' not in x and x != 'available_markets_max' and x != 'featured']
non_continuous_variables = [x for x in to_x_train.columns if 'category' in x]
#standardize data
def standardize_data(data, train):
return (data - train.mean()) / train.std()
x_train_cont = standardize_data(to_x_train[continuous_variables], to_x_train[continuous_variables])
x_test_cont = standardize_data(to_x_test[continuous_variables], to_x_train[continuous_variables])
#merge back on non-continuous variables
x_train_std = x_train_cont.join(to_x_train[non_continuous_variables])
x_test_std = x_test_cont.join(to_x_test[non_continuous_variables])
x_train_std2 = x_train_std.join(to_x_train['available_markets_max'])
x_test_std2 = x_test_std.join(to_x_test['available_markets_max'])
x_train_std3 = x_train_std2.join(to_x_train['featured'])
x_test_std3 = x_test_std2.join(to_x_test['featured'])
x_train_class = sm.tools.add_constant(x_train_std3, has_constant = 'add')
x_test_class = sm.tools.add_constant(x_test_std3, has_constant = 'add')
'''calculate classification accuracy'''
def calculate_cr(classifications, y):
correct = classifications == y
cr = correct.sum()/len(correct)
return cr
###Output
_____no_output_____
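###Markdown
(Aside: a minimal check, on hypothetical numbers, that `standardize_data` gives the training data mean 0 and standard deviation 1; test data is scaled with the *training* mean and std, so its own mean/std need not come out exactly 0/1. `toy_train` and `toy_std` are throwaway names.)
###Code
toy_train = pd.Series([1.0, 2.0, 3.0, 4.0])
toy_std = standardize_data(toy_train, toy_train)
print(toy_std.mean(), toy_std.std())  # expected: ~0.0 and 1.0
###Output
_____no_output_____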
###Markdown
Baseline Model
###Code
'''Begin with logistic models as baseline
Multinomial Logistic'''
logistic_regression_mn = LogisticRegressionCV(Cs=10, multi_class='multinomial').fit(x_train_class, y_train_class)
logistic_classifications_train_mn = logistic_regression_mn.predict(x_train_class)
logistic_classifications_test_mn = logistic_regression_mn.predict(x_test_class)
print("Multinomial Logistic Regression")
print("\tTrain CR:", str(calculate_cr(logistic_classifications_train_mn, y_train_class)))
print("\tTest CR:", str(calculate_cr(logistic_classifications_test_mn, y_test_class)))
#OvR Logistic Reg
logistic_regression_ovr = LogisticRegressionCV(Cs=10, multi_class='ovr').fit(x_train_class, y_train_class)
logistic_classifications_train_ovr = logistic_regression_ovr.predict(x_train_class)
logistic_classifications_test_ovr = logistic_regression_ovr.predict(x_test_class)
print("OvR Logistic Regression")
print("\tTrain CR:", str(calculate_cr(logistic_classifications_train_ovr, y_train_class)))
print("\tTest CR:", str(calculate_cr(logistic_classifications_test_ovr, y_test_class)))
###Output
Multinomial Logistic Regression
Train CR: 0.706586826347
Test CR: 0.355555555556
OvR Logistic Regression
Train CR: 0.494011976048
Test CR: 0.363888888889
###Markdown
Additional Models - Across Categories Decision Tree
###Code
'''Decision Tree with CV to pick max depth'''
param_grid = {'max_depth' : range(1,30)}
clf = GridSearchCV(DecisionTreeClassifier(), param_grid = param_grid, cv = 5, refit = True)
clf.fit(x_train_class, y_train_class)
print('Cross-Validated Max Depth: {x}'.format(x = clf.best_params_['max_depth']))
print('Avg Cross-Validation Accuracy at Max: {x}%'.format(x = str(clf.best_score_*100)[0:5]))
print('Test Accuracy: {x}%'.format(x = str(clf.score(x_test_class,y_test_class)*100)[0:5]))
###Output
Cross-Validated Max Depth: 4
Avg Cross-Validation Accuracy at Max: 33.53%
Test Accuracy: 34.72%
###Markdown
Random Forest
###Code
'''Random Forest with CV to pick max depth and Number of Trees'''
param_grid = {'n_estimators' : [2**i for i in [1,2,3,4,5,6,7,8, 9, 10]],
'max_depth' : [1,2,3,4,5,6,7,8]}
clf = GridSearchCV(RandomForestClassifier(), param_grid = param_grid, cv = 5, refit = True, n_jobs = 4)
clf.fit(x_train_class, y_train_class)
print('Cross-Validated Max Depth: {x}'.format(x = clf.best_params_['max_depth']))
print('Cross-Validated Num Trees: {x}'.format(x = clf.best_params_['n_estimators']))
print('Avg Cross-Validation Accuracy at Max: {x}%'.format(x = str(clf.best_score_*100)[0:5]))
print('Test Accuracy: {x}%'.format(x = str(clf.score(x_test_class,y_test_class)*100)[0:5]))
''' Top 10 most important features based on RF (as expected, popularity is important)'''
feature_importance = pd.Series(clf.best_estimator_.feature_importances_, index = x_train_class.columns)
feature_importance.sort_values(ascending = False).head(10).sort_values(ascending = True).plot('barh')
###Output
_____no_output_____
###Markdown
AdaBoosted Decision Trees
###Code
'''AdaBoost with CV to pick max depth and number of trees'''
learning_rate = .05
param_grid = {'n_estimators' : [2**i for i in [1,2,3,4,5,6,7,8]],
'base_estimator__max_depth' : [1,2,3,4,5,6,7,8, 9, 10, 11, 12]}
clf = GridSearchCV(AdaBoostClassifier(DecisionTreeClassifier(), learning_rate = learning_rate), param_grid = param_grid, cv = 5, refit = True, n_jobs = 4)
clf.fit(x_train_class, y_train_class)
print('Cross-Validated Max Depth: {x}'.format(x = clf.best_params_['base_estimator__max_depth']))
print('Cross-Validated Num Trees: {x}'.format(x = clf.best_params_['n_estimators']))
print('Avg Cross-Validation Accuracy at Max: {x}%'.format(x = str(clf.best_score_*100)[0:5]))
print('Test Accuracy: {x}%'.format(x = str(clf.score(x_test_class,y_test_class)*100)[0:5]))
'''This shows similar results, with popularity again being important'''
feature_importance = pd.Series(clf.best_estimator_.feature_importances_, index = x_train_class.columns)
feature_importance.sort_values(ascending = False).head(10).sort_values(ascending = True).plot('barh')
###Output
_____no_output_____
###Markdown
Classification Within Categories Decision Tree
###Code
'''Decision Tree with CV to pick max depth'''
param_grid = {'max_depth' : range(1,30)}
clf = GridSearchCV(DecisionTreeClassifier(), param_grid = param_grid, cv = 5, refit = True)
clf.fit(x_train_class, y_train_class_by_cat)
print('Cross-Validated Max Depth: {x}'.format(x = clf.best_params_['max_depth']))
print('Avg Cross-Validation Accuracy at Max: {x}%'.format(x = str(clf.best_score_*100)[0:5]))
print('Test Accuracy: {x}%'.format(x = str(clf.score(x_test_class,y_test_class_by_cat)*100)[0:5]))
###Output
Cross-Validated Max Depth: 2
Avg Cross-Validation Accuracy at Max: 46.10%
Test Accuracy: 46.94%
###Markdown
Random Forest
###Code
'''Random Forest with CV to pick max depth and number of trees'''
param_grid = {'n_estimators' : [2**i for i in [1,2,3,4,5,6,7,8, 9, 10]],
'max_depth' : [1,2,3,4,5,6,7,8]}
clf = GridSearchCV(RandomForestClassifier(), param_grid = param_grid, cv = 5, refit = True, n_jobs = 4)
clf.fit(x_train_class, y_train_class_by_cat)
print('Cross-Validated Max Depth: {x}'.format(x = clf.best_params_['max_depth']))
print('Cross-Validated Num Trees: {x}'.format(x = clf.best_params_['n_estimators']))
print('Avg Cross-Validation Accuracy at Max: {x}%'.format(x = str(clf.best_score_*100)[0:5]))
print('Test Accuracy: {x}%'.format(x = str(clf.score(x_test_class,y_test_class_by_cat)*100)[0:5]))
###Output
Cross-Validated Max Depth: 4
Cross-Validated Num Trees: 32
Avg Cross-Validation Accuracy at Max: 48.20%
Test Accuracy: 45.27%
###Markdown
AdaBoosted Decision Tree
###Code
'''AdaBoost with CV to pick max depth and number of trees'''
learning_rate = .05
param_grid = {'n_estimators' : [2**i for i in [1,2,3,4,5,6,7,8]],
'base_estimator__max_depth' : [1,2,3,4,5,6,7,8, 9, 10, 11, 12]}
clf = GridSearchCV(AdaBoostClassifier(DecisionTreeClassifier(), learning_rate = learning_rate), param_grid = param_grid, cv = 5, refit = True, n_jobs = 4)
clf.fit(x_train_class, y_train_class_by_cat)
print('Cross-Validated Max Depth: {x}'.format(x = clf.best_params_['base_estimator__max_depth']))
print('Cross-Validated Num Trees: {x}'.format(x = clf.best_params_['n_estimators']))
print('Avg Cross-Validation Accuracy at Max: {x}%'.format(x = str(clf.best_score_*100)[0:5]))
print('Test Accuracy: {x}%'.format(x = str(clf.score(x_test_class,y_test_class_by_cat)*100)[0:5]))
###Output
Cross-Validated Max Depth: 1
Cross-Validated Num Trees: 64
Avg Cross-Validation Accuracy at Max: 48.20%
Test Accuracy: 47.77%
|
content/lessons/04/Class-Coding-Lab/CCL-Conditionals.ipynb | ###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number=input("Enter a number: ")
if int(number)>0:
print("That's a positive number ")
elif int(number)==0:
print("So uncreative, that's a 0 ")
else:
print("yo why art thou pesstimistic, that's a negative number ")
###Output
Enter a number: 0
So uncreative, that's a 0
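###Markdown
(Aside: a minimal sketch of the two operators discussed in the odd/even example above — `%` gives the remainder of a division and `==` tests equality, producing a Boolean value.)
###Code
print(7 % 2)        # remainder of 7 divided by 2 -> 1
print(7 % 2 == 0)   # False, so 7 is odd
print(8 % 2 == 0)   # True, so 8 is even
###Output
_____no_output_____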
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you=="rock" or you== "paper" or you=="scissors"):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: pizza
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the gameWith the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, its time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code handle it.At this point you might be wondering should I make a separate `if` statement or should I chain the conditions off the current if with `elif` ? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors' it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper' You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("Haha! Paper covers rock!")
elif (you == 'paper' and computer == 'scissors'):
print("Lolololol you got cut")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
Haha! Paper covers rock!
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors' and should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("Yeeters you covered rock")
elif (you == 'paper' and computer == 'scissors'):
print("Boohoo you got cut")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose scissors
It's a tie!
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 2
2 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input("Enter an integer: "))
if number>=0:
print("%d is positive" % (number))
else:
print("%d is negative" % (number))
###Output
Enter an integer: 8
8 is positive
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you == 'rock' or you == 'paper' or you == 'scissors'):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose scissors
###Markdown
Playing the gameWith the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ") #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You Get Nothing You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You Get Nothing You Lose! scissor beats paper")
elif ( you == 'paper' and computer == 'rock'):
print("You Win! paper beats rock")
elif (you == 'scissors' and computer == 'rock'):
print("You Get Nothing You Lose! rock beats scissors")
elif (you == 'scissors' and computer == 'paper'):
print("You Win! scissors beats paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
It's a tie!
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, its time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code handle it.At this point you might be wondering should I make a separate `if` statement or should I chain the conditions off the current if with `elif` ? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors' it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper' You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You Win! paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You Lose! scissors cuts paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
You Lose! scissors cuts paper
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors' and should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! scissors cuts paper")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose scissors
You lose! scissors cuts paper
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input("Enter an integer: "))
if number >= 0:
print("%d is Zero or Positive" % (number))
else:
print("%d is Negative" % (number))
###Output
Enter an integer: -20
-20 is Negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you == "rock"): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose scissors
###Markdown
Playing the gameWith the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, its time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code handle it.At this point you might be wondering should I make a separate `if` statement or should I chain the conditions off the current if with `elif` ? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors' it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper' You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer == 'paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors' and should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# now read the user's real choice
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
    print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper.")
elif (you == 'scissors' and computer =='rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 32
32 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
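If the combination of `%` and `==` still feels abstract, here is a minimal sketch (separate from the exercise below, with hard-coded numbers instead of `input()`) that prints each piece of the Boolean expression on its own:
###Code
# a quick look at the remainder (%) and equality (==) operators on their own
number = 7
print(number % 2)         # 1 -- the remainder when 7 is divided by 2
print(number % 2 == 0)    # False -- 7 is not even
print(8 % 2 == 0)         # True -- 8 is even
###Output
_____no_output_____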
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number > 0:
print("%d is Positive" % (number))
elif number == 0:
print("%d is Zero" % (number))
else:
print("%d is Negative" % (number))
###Output
Enter an integer: -2
-2 is Negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices: # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: pizza
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
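If you want to see why the chained ladder matters, here is a small sketch (with hard-coded choices rather than the lab's `input()` and `random.choice()`) contrasting two separate `if` statements with one `if...elif...else` ladder; the separate statements each run their own `else`, so a tie gets reported twice:
###Code
you, computer = 'rock', 'rock'   # hard-coded for illustration only

# Two separate if statements: both run, so the tie is reported twice.
if you == 'rock' and computer == 'scissors':
    print("You win! Rock smashes scissors.")
else:
    print("It's a tie!")
if you == 'rock' and computer == 'paper':
    print("You lose! Paper covers rock.")
else:
    print("It's a tie!")

# One if...elif...else ladder: exactly one branch runs.
if you == 'rock' and computer == 'scissors':
    print("You win! Rock smashes scissors.")
elif you == 'rock' and computer == 'paper':
    print("You lose! Paper covers rock.")
else:
    print("It's a tie!")
###Output
_____no_output_____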
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# now read the user's real choice
you = input("Enter your choice: rock, paper, or scissors: ") #case sensitive
if (you in choices):
    print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose scissors
You win! Rock smashes scissors.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave that part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# now read the user's real choice
you = input("Enter your choice: rock, paper, or scissors: ") #case sensitive
if (you in choices):
    print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
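###Markdown
A side note, not part of the lab: once the full `if...elif` ladder works, you may notice it spells out the same pattern six times. One possible alternative sketch stores each winning pair in a dictionary and looks it up; the `beats` dictionary and the hard-coded `you` below are assumptions made purely for illustration.
###Code
import random
choices = ['rock','paper','scissors']
# beats[x] is the choice that x defeats, plus the verb for the message
beats = {'rock': ('scissors', 'smashes'),
         'paper': ('rock', 'covers'),
         'scissors': ('paper', 'cuts')}
computer = random.choice(choices)
you = 'rock'  # hard-coded for illustration; the lab version uses input()
print("You chose %s and the computer chose %s" % (you, computer))
if you == computer:
    print("It's a tie!")
elif beats[you][0] == computer:
    print("You win! %s %s %s." % (you.capitalize(), beats[you][1], computer))
else:
    print("You lose! %s %s %s." % (computer.capitalize(), beats[computer][1], you))
###Output
_____no_output_____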
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 2
2 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
num = int(input("Enter an integer: "))
if num > 0:
print("Positive number")
elif num == 0:
print("Zero")
else:
print("Negative number")
###Output
Enter an integer: -2
Negative number
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
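###Markdown
A small aside: the `not in` operator is the mirror image of `in`, and some people find the guard reads more directly with it. This is just an optional variation with a hard-coded `you`, not the form the lab asks for.
###Code
choices = ['rock','paper','scissors']
you = 'pizza'   # hard-coded for illustration
if you not in choices:
    print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
else:
    print("Good choice: %s" % (you))
###Output
_____no_output_____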
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
print("You chose %s and the computer chose %s" %(you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose scissors
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'paper'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose paper
It's a tie!
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave that part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# now read the user's real choice
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
    print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cuts paper.")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 7
7 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input("Enter an interger: "))
if number>=0:
print("%d is positive or zero" %(number))
else:
print ("%d is negative" %(number))
###Output
Enter an interger: 8
8 is positive or zero
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
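###Markdown
One practical wrinkle: `input()` returns exactly what was typed, so `Rock` or ` rock ` would not match `'rock'` later on. A minimal sketch of normalizing the text first (an optional refinement, shown here with a hard-coded string in place of `input()`):
###Code
choices = ['rock','paper','scissors']
raw = '  ROCK '             # pretend this came from input()
you = raw.strip().lower()   # drop surrounding spaces, force lowercase
print(you in choices)       # True, even though the typing was messy
###Output
_____no_output_____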
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices : # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'paper'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave that part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# now read the user's real choice
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
    print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cuts paper.")
    elif (you == 'scissors' and computer == 'rock'):
        print("You lose! Rock smashes scissors.")
    elif (you == 'scissors' and computer == 'paper'):
        print("You win! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
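###Markdown
The goals above also mention the try / except statement. The odd/even example will crash with a `ValueError` if you type something that isn't an integer, because `int()` cannot convert it. A minimal sketch of guarding against that (using a hard-coded bad value in place of `input()`):
###Code
text = "seven"   # pretend this came from input()
try:
    number = int(text)        # raises ValueError when text isn't an integer
    if number % 2 == 0:
        print("%d is even" % (number))
    else:
        print("%d is odd" % (number))
except ValueError:
    print("That wasn't an integer!")
###Output
_____no_output_____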
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input("Enter an integer: "))
if number >= 0:
print('Zero or Positive')
else:
print('Negative')
###Output
Enter an integer: -6
Negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices: # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'paper'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers rock.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave that part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# now read the user's real choice
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
    print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
    elif (you == 'scissors' and computer == 'paper'):
        print('You win! Scissors cut paper.')
    elif (you == 'scissors' and computer == 'rock'):
        print('You lose! Rock smashes scissors.')
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
It's a tie!
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
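As a quick aside before you write it: relational operators such as `>=` evaluate straight to `True` or `False`, which is exactly what the `if` statement needs. A tiny illustrative sketch, separate from the exercise itself:
###Code
# relational operators produce Boolean values directly
print(5 >= 0)    # True
print(-3 >= 0)   # False
print(0 >= 0)    # True -- zero counts as "Zero or Positive"
###Output
_____no_output_____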
###Code
num = int(input("Enter an integer: "))
if num > 0:
print("Positive number")
elif num == 0:
print("Zero")
else:
print("Negative number")
###Output
Enter an integer: 0
Zero
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'paper'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
    else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
TODO
It's a tie!
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave that part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose rock
You lose! Rock smashes scissors.
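###Markdown
A design note, offered only as a sketch: with all nine outcomes written out, the `if...elif` ladder gets repetitive. One common alternative is to record which choice beats which in a dictionary and compare against that table. The `beats` name here is just an illustration, not part of the lab.
###Code
# Sketch: table-driven winner logic (illustrative alternative to the elif ladder).
import random
choices = ['rock','paper','scissors']
beats = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}  # key beats its value
computer = random.choice(choices)
you = 'rock'   # fixed here so the sketch runs without input()
if you not in choices:
    print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
elif you == computer:
    print("It's a tie!")
elif beats[you] == computer:
    print("You win! %s beats %s." % (you, computer))
else:
    print("You lose! %s beats %s." % (computer, you))
###Output
_____no_output_____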
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
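Before writing that expression, it may help to see the `%` and `==` operators on their own. A quick sketch (the values are arbitrary):
###Code
# The % (remainder) and == (equality) operators by themselves.
print(7 % 2)          # remainder of 7 divided by 2 -> 1
print(8 % 2)          # remainder of 8 divided by 2 -> 0
print(8 % 2 == 0)     # the full Boolean expression -> True
print(7 % 2 == 0)     # -> False
###Output
_____no_output_____
###Markdown
Here is one attempt at the exercise: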
###Code
num = int(input("Enter an integer:"))
if num > 0:
print("Positive number")
elif num == 0:
print("Zero")
else:
print("Negative number")
###Output
_____no_output_____
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
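###Markdown
One related aside (a sketch, not a lab requirement): the `in` check is case-sensitive, so typing `Rock` would be rejected. A common way to forgive capitalization and stray spaces is to normalize the input with the string methods `.strip()` and `.lower()`:
###Code
# Sketch: normalizing user input before the membership test.
choices = ['rock','paper','scissors']
you = '  Rock '           # example of untidy input (hard-coded for the sketch)
you = you.strip().lower() # remove surrounding spaces, then lowercase
print(you in choices)     # -> True
###Output
_____no_output_____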
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cut paper.")
    else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cuts paper.")
    else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cut paper.")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
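###Markdown
A short aside on the compound conditions used throughout the game above (a sketch with fixed values): `and` only produces `True` when both comparisons are `True`.
###Code
# How `and` combines two comparisons (values fixed for illustration).
you = 'rock'
computer = 'paper'
print(you == 'rock')                             # True
print(computer == 'scissors')                    # False
print(you == 'rock' and computer == 'scissors')  # False: both sides must be True
print(you == 'rock' and computer == 'paper')     # True
###Output
_____no_output_____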
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input("Enter an integer: "))
if number >= 0:
print("%d is Zero or Positive" % (number))
else:
print("%d is Negative" % (number))
###Output
Enter an integer: 7
7 is Zero or Positive
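###Markdown
The lab goals also list the try/except statement. As a sketch (not required for the exercise above), here is one way it could guard the `int()` conversion so that non-numeric input produces a message instead of a crash; the value is hard-coded so the sketch runs without typing anything.
###Code
# Sketch: guarding int() with try/except (illustrative only).
text = "seven"            # pretend this came from input()
try:
    number = int(text)    # raises ValueError if text is not a whole number
    print("You entered %d" % (number))
except ValueError:
    print("That wasn't an integer!")
###Output
_____no_output_____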
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose paper
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose scissors
You win! Rock smashes scissors.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers rock.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer== 'paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cuts paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
You lose! Paper covers rock.
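###Markdown
One last aside (a sketch only; it uses a `for` loop, which is beyond this lab): instead of re-running the cell and hoping the random picks eventually cover every case, you can exercise all nine combinations directly to check the messages. The `beats` table below is illustrative, not part of the lab.
###Code
# Sketch: run the winner logic for every combination of choices.
choices = ['rock','paper','scissors']
beats = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}  # key beats its value
for you in choices:
    for computer in choices:
        if you == computer:
            outcome = "tie"
        elif beats[you] == computer:
            outcome = "you win"
        else:
            outcome = "you lose"
        print("%s vs %s -> %s" % (you, computer, outcome))
###Output
_____no_output_____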
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 77
77 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer:"))
if number >=0:
print("%d is zero or positive" % (number))
else:
print("%d is negative" % (number))
###Output
Enter an integer:300
300 is zero or positive
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer == 'rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose rock
You win! Paper covers rock.
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
_____no_output_____
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number > 0:
print("%d is positive" % (number))
elif number == 0:
print("%d is zero" % (number))
else:
print("%d is negative" % (number))
###Output
Enter an integer: -6
-6 is negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper'
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer =='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock..")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
You lose! Scissors cuts paper
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock..")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cuts paper")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissorsr")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose paper
You win! Scissors cuts paper
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
100000 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Please enter an integer: "))
if number >= 0:
print("%d is Zero or Positive" % (number))
else:
print("%d is negative" % (number))
###Output
13 is Zero or Positive
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose scissors
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
You lose! Scissors cut paper.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose scissors
You win! Rock smashes scissors.
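Aside (not part of the required solution): once every branch of the ladder above works, the same win/lose decision can be expressed more compactly with a dictionary that maps each choice to the choice it beats. A sketch of that alternative, assuming the same three choices and messages:

```python
import random

choices = ['rock', 'paper', 'scissors']
beats = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}   # key beats value
verbs = {'rock': 'smashes', 'paper': 'covers', 'scissors': 'cut'}    # "rock smashes", "paper covers", "scissors cut"

computer = random.choice(choices)
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
    print("You chose %s and the computer chose %s" % (you, computer))
    if beats[you] == computer:
        print("You win! %s %s %s." % (you.capitalize(), verbs[you], computer))
    elif beats[computer] == you:
        print("You lose! %s %s %s." % (computer.capitalize(), verbs[computer], you))
    else:
        print("It's a tie!")
else:
    print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```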
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 22
22 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
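If the combined expression is hard to read, it can help to evaluate the two operators on their own first. A quick illustration using literal numbers rather than `input()`:

```python
print(22 % 2)        # 0     -> remainder when 22 is divided by 2
print(35 % 2)        # 1     -> remainder when 35 is divided by 2
print(22 % 2 == 0)   # True  -> remainder is zero, so 22 is even
print(35 % 2 == 0)   # False -> remainder is not zero, so 35 is odd
```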
###Code
# TODO write your program here:
number = int(input("Enter Integer "))
if number >= 0:
print('Zero or Positive')
else:
print('Negative')
###Output
Enter Integer 6
Zero or Positive
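The goals at the top of this lab also list the try/except statement. It isn't required for these exercises, but as an aside, here is a minimal sketch of how it could guard the `int()` conversion used above, which raises a `ValueError` when the input isn't numeric:

```python
try:
    number = int(input("Enter an integer: "))  # raises ValueError for input like "pizza"
    if number >= 0:
        print("Zero or Positive")
    else:
        print("Negative")
except ValueError:
    print("That wasn't an integer!")
```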
###Markdown
Rock, Paper Scissors: In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better. The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it. How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidity: With step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose scissors
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operator: The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
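For reference, the expression in the cell above evaluates to the tuple `(True, False)`: 'rock' is one of the valid choices and 'mike' is not. A standalone version you can run anywhere:

```python
choices = ['rock', 'paper', 'scissors']
print('rock' in choices)      # True  -> 'rock' is in the list
print('mike' in choices)      # False -> 'mike' is not
print('mike' not in choices)  # True  -> `not in` is the negation
```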
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose scissors
###Markdown
Playing the game: With the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules: - rock beats scissors (rock smashes scissors) - scissors beats paper (scissors cut paper) - paper beats rock (paper covers rock) So for example: - If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors. - If you both choose rock, it's a tie. It's too complicated! It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem. One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers rock
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'paper'):
print("You Win! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print('You lose! Rock crushes scissors.')
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose scissors
It's a tie!
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
###Output
_____no_output_____
###Markdown
Rock, Paper Scissors: In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better. The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it. How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidity: With step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operator: The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (TODO): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
###Markdown
Playing the game: With the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules: - rock beats scissors (rock smashes scissors) - scissors beats paper (scissors cut paper) - paper beats rock (paper covers rock) So for example: - If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors. - If you both choose rock, it's a tie. It's too complicated! It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem. One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("TODO")
    elif (you == 'paper' and computer == 'scissors'):
        print("TODO")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("TODO")
    elif (you == 'paper' and computer == 'scissors'):
        print("TODO")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
###Output
_____no_output_____
###Markdown
Rock, Paper Scissors: In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better. The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it. How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidity: With step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operator: The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (TODO): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
###Markdown
Playing the game: With the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules: - rock beats scissors (rock smashes scissors) - scissors beats paper (scissors cut paper) - paper beats rock (paper covers rock) So for example: - If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors. - If you both choose rock, it's a tie. It's too complicated! It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem. One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("TODO")
    elif (you == 'paper' and computer == 'scissors'):
        print("TODO")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("TODO")
    elif (you == 'paper' and computer == 'scissors'):
        print("TODO")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 0
0 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >= 0:
print ("%d is Zero or Positive" % (number))
else:
print ("%d is negative" % (number))
###Output
Enter an integer: -21
-21 is negative
###Markdown
Rock, Paper Scissors: In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better. The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it. How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidity: With step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operator: The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: 21
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the game: With the input figured out, it's time to work on the final step, playing the game. The game itself has some simple rules: - rock beats scissors (rock smashes scissors) - scissors beats paper (scissors cut paper) - paper beats rock (paper covers rock) So for example: - If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors. - If you both choose rock, it's a tie. It's too complicated! It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem. One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose rock
It's a tie!
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers rock.
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
    # 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("TODO - What should this say?")
elif (you == 'paper' and computer == 'scissors'):
print("TODO - What should this say?")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cuts paper.")
elif (you == 'scissors' and computer == 'rock'):
print ("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose scissors
You win! Rock smashes scissors.
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
value = int(input("enter value"))
if value > 0:
print("value %d is greater than zero therefore it is positive" % (value))
elif value == 0:
print("the value is zero")
else:
print("the value %d is less thatn zero therefore it is negative" % (value))
###Output
enter value-5
the value -5 is less thatn zero therefore it is negative
###Markdown
Rock, Paper Scissors: In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better. The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it. How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidity: With step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pie
You chose pie and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
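###Markdown
As a small editorial aside, `not in` gives the opposite Boolean, which can be handy when you want to reject bad input directly; `'pizza'` below is just a sample invalid value.
###Code
# 'in' tests membership in a list; 'not in' is its negation
choices = ['rock','paper','scissors']
print('rock' in choices)       # True
print('pizza' in choices)      # False
print('pizza' not in choices)  # True -- useful for "reject invalid input" checks
###Output
_____no_output_____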
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you == "rock" or you == "paper" or you == "scissors"):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: pizza
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step: playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
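To make the `if` versus `elif` point concrete, here is a tiny sketch (an editor's illustration with a made-up variable `score`): an `if...elif` ladder stops at the first branch that is `True`, while separate `if` statements are each evaluated on their own.
###Code
# One decision -> one if...elif ladder: only the first matching branch runs
score = 10
if score >= 10:
    print("gold")      # this runs...
elif score >= 5:
    print("silver")    # ...and this is skipped, even though score >= 5 is also True

# Separate decisions -> separate if statements: both are evaluated
if score >= 10:
    print("gold badge earned")
if score % 2 == 0:
    print("score is even")
###Output
_____no_output_____
###Markdown
Now, back to the paper logic: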
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("you win paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("you lose scissors cuts paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
you lose scissors cuts paper
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you == "rock" or you == "paper" or you == "scissors"):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("you win paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("you lose scissors cut paper")
elif (you == "scissors" and computer == "paper"):
print("you win scissors cut paper")
elif (you == "scissors" and computer == "rock"):
print("you lose rock breaks scissors")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose rock
you lose rock breaks scissors
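###Markdown
As an optional aside (an editor's sketch, not the lab's required solution): once every win/lose pair is spelled out, a pattern emerges, namely "X beats Y". A dictionary can capture that pattern and shrink the `elif` ladder. The `beats` dictionary and the generic messages below are illustrative assumptions rather than part of the lab.
###Code
# Alternative sketch: encode "what beats what" in a dictionary
import random

choices = ['rock', 'paper', 'scissors']
beats = {'rock': 'scissors',      # rock smashes scissors
         'paper': 'rock',         # paper covers rock
         'scissors': 'paper'}     # scissors cut paper

computer = random.choice(choices)
you = input("Enter your choice: rock, paper, or scissors: ")

if you in choices:
    print("You chose %s and the computer chose %s" % (you, computer))
    if you == computer:
        print("It's a tie!")
    elif beats[you] == computer:
        print("You win! %s beats %s." % (you, computer))
    else:
        print("You lose! %s beats %s." % (computer, you))
else:
    print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____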
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 5
5 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >=0:
print("%d is greater than or equal to zero" % (number))
else:
print("%d is not greater than or equal to zero" % (number))
###Output
Enter an integer: 5
5 is greater than or equal to zero
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step: playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose scissors
You win! Rock smashes scissors.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose paper
It's a tie!
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("enter an integer: "))
if number >0:
print("The Number is Positive")
elif number ==0:
print("The Number is 0")
else:
print("The Number is Negative")
###Output
enter an integer: -4
The Number is Negative
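###Markdown
One more aside, since the lab's goals list the Try / Except statement: `int()` raises a `ValueError` when the text it is given is not a number, and a `try...except` block can catch that instead of crashing. The sketch below is an editor's illustration of the idea applied to the exercise above.
###Code
# Guarding the int() conversion with try/except (illustrative sketch)
text = input("Enter an integer: ")
try:
    number = int(text)            # raises ValueError for input like "pizza"
    if number >= 0:
        print("Zero or Positive")
    else:
        print("Negative")
except ValueError:
    print("That wasn't an integer!")
###Output
_____no_output_____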
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose paper
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step: playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! scissors cut paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose paper
It's a tie!
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! scissors cut paper")
# TODO add logic for you == 'scissors' similar to the paper logic
elif (you=='scissors' and computer=='paper'):
print("You win! scissors cut paper")
elif (you=='scissors' and computer=='rock'):
print("You lose! rock smashes scissors")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose rock
You win! paper covers rock
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >= 0:
print("Zero or Positive")
else:
print("Negative")
###Output
Enter an integer: -4
Negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
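A brief editorial note before moving on: the `%` symbol plays two different roles in this lab. With numbers it is the remainder operator; with strings, as in the `print()` calls used throughout, it fills placeholders such as `%s` and `%d`. The names `name` and `count` in this sketch are made up for illustration.
###Code
# The % operator also formats strings: %s for strings, %d for integers
name = "rock"
count = 3
print("You chose %s" % (name))
print("You chose %s and won %d times" % (name, count))
###Output
_____no_output_____
###Markdown
Back to getting the player's input: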
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose scissors
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: thanos
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step: playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose rock
It's a tie!
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose: Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
You lose: Scissors cut paper.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win: Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
num = int(input("Enter an integer: "))
if num > 0:
print("Positive number")
elif num == 0:
print("Zero")
else:
print("negative number")
###Output
Enter an integer: 6
Positive number
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step: playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 34
34 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution.The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >= 0:
print ("%d is zero or positive" % number )
else:
print("%d is negative" % number)
###Output
Enter an integer: -3
-3 is negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before it's non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
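###Markdown
 A related aside (a sketch, not part of the exercises): `not in` is simply the opposite membership test, which reads naturally when you want to reject bad input early.
###Code
# Aside: 'not in' is the opposite of 'in'.
choices = ['rock','paper','scissors']
answer = 'pizza'
if answer not in choices:
    print("%s is not a valid choice" % answer)
###Output
_____no_output_____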
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices: # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise, if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose rock
It's a tie!
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers rock.
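###Markdown
 To make the if-versus-elif guidance above concrete, here is a tiny aside (a sketch, not one of the exercises): conditions that belong to the same decision go in one `if...elif` ladder, while a genuinely separate decision, such as a hypothetical "play again?" question, gets its own `if` statement.
###Code
# Aside: one decision -> one if/elif ladder; a separate decision -> its own if.
you = 'paper'
play_again = 'yes'  # hypothetical second question, a separate decision
if you == 'rock':   # first decision: what did you throw?
    print("you threw rock")
elif you == 'paper':
    print("you threw paper")
elif you == 'scissors':
    print("you threw scissors")
if play_again == 'yes':  # a different decision gets its own if statement
    print("let's play another round")
###Output
_____no_output_____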
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
    # logic for you == 'scissors', similar to the paper logic
    elif (you == 'scissors' and computer == 'paper'):
        print("You win! Scissors cut paper.")
    elif (you == 'scissors' and computer == 'rock'):
        print("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
It's a tie!
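###Markdown
 An optional refactoring aside (a sketch only, not the lab's approach): once the `if...elif` ladder works, the "X beats Y" rules can also be stored as data in a dictionary, so the winner check collapses to a single comparison.
###Code
# Optional aside: encode "X beats Y" as data instead of six elif branches.
import random
beats = {'rock':'scissors', 'paper':'rock', 'scissors':'paper'}
choices = list(beats.keys())
computer = random.choice(choices)
you = 'rock'  # stand-in for input(), as in the simplified versions above
if you not in choices:
    print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
elif you == computer:
    print("It's a tie!")
elif beats[you] == computer:
    print("You win! %s beats %s." % (you, computer))
else:
    print("You lose! %s beats %s." % (computer, you))
###Output
_____no_output_____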
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
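###Markdown
 The lab goals mention the try/except statement; here is a minimal aside (not required for the exercises) showing how it could guard the `int()` conversion above against non-numeric input. `ValueError` is what `int()` raises when the text cannot be converted.
###Code
# Aside: try/except catches the ValueError raised when int() cannot convert the text.
text = input("Enter an integer: ")
try:
    number = int(text)
    if number % 2 == 0:
        print("%d is even" % (number))
    else:
        print("%d is odd" % (number))
except ValueError:
    print("'%s' is not an integer!" % (text))
###Output
_____no_output_____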
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test both options. On line 2 you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number, when divided by 2, has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.The `if` statement evaluates this Boolean expression, and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input('Enter a number'))
if number >= 0:
    print(number,'is zero or positive')
else:
    print(number,'is negative')
###Output
Enter a number-5
-5 is negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before that programming is non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation where someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices: # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise, if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose rock
It's a tie!
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers rock.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose scissors
It's a tie!
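###Markdown
 A small aside on the logic above (a sketch, not an exercise): every win/lose branch relies on the `and` operator, which is `True` only when both of its Boolean operands are `True`.
###Code
# Aside: 'and' combines two Boolean expressions; both must be True.
you, computer = 'rock', 'scissors'
print(you == 'rock')                             # True
print(computer == 'scissors')                    # True
print(you == 'rock' and computer == 'scissors')  # True
print(you == 'rock' and computer == 'paper')     # False
###Output
_____no_output_____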
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test both options. On line 2 you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number, when divided by 2, has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.The `if` statement evaluates this Boolean expression, and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number=int(input("input a number"))
if number>=0:
print("the number is positive")
else:
print("the number is negative")
###Output
input a number-2
the number is negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before that programming is non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation where someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise, if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
    elif (you == 'paper' and computer =='rock'):
        print("You win! Paper covers rock.")
    elif (you == 'paper' and computer == 'scissors'):
        print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
import random
choices = ['rock','paper','scissors']
plchoices = ['rock', 'paper', 'scissors', 'gun']
computer = random.choice(choices)
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in plchoices):
print("You chose %s and the computer chose %s" % (you,computer))
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you== 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
elif (you=="scissors" and computer=="rock"):
print("You lose! Rock smashes scissors")
elif (you=="gun"):
print("You win! You shot the computer.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: gun
You chose gun and the computer chose paper
You win! You shot the computer.
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test both options. On line 2 you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number, when divided by 2, has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.The `if` statement evaluates this Boolean expression, and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number>0:
print("%d is Positive" % (number))
elif number==0:
print("%d is Zero" % (number))
else:
print("Invalid value.")
###Output
Enter an integer: 0
0 is Zero
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before that programming is non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation where someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose scissors
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise, if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers Rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts Paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose rock
You win! Paper covers Rock.
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers Rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts Paper.")
# TODO add logic for you == 'scissors' similar to the paper logic
elif (you == 'scissors' and computer =='paper'):
print("You win! Scissors cuts Paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes Scissors.")
elif (you == 'rock' and computer == 'paper'):
print("You lose! Paper covers Rock .")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 35
35 is odd
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test both options. On line 2 you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number, when divided by 2, has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.The `if` statement evaluates this Boolean expression, and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >0:
print("Positive")
elif number ==0:
print("Zero")
else:
print("Negative")
###Output
Enter an integer: -2
Negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before that programming is non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation where someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose paper
###Markdown
Playing the gameWith the input figured out, it's time to work on our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise, if you select rock and the computer chooses scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification**, where we solve an easier version of the problem; then, as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force ourselves to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose paper
You lose! Paper covers rock.
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex.With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do ItIn the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer == 'rock'):
print("You win! Paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
You lose! Scissors cuts paper
###Markdown
The final programWith the 'rock' and 'paper' cases out of the way, we only need to add the 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cuts paper")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 654
654 is even
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test both options. On line 2 you see `number % 2 == 0`; this is the Boolean expression at the center of the logic of this program. The expression says **number, when divided by 2, has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.The `if` statement evaluates this Boolean expression, and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try ItWrite a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number=float(input("enter an integer: "))
if number >0:
print("your number is positive")
elif number==0:
print("your number is equal to zero")
else:
print ("your number is negative")
###Output
enter an integer: -543
your number is negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before that programming is non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well, I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, it's time to move on to step 2: getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation where someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if you in choices:
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: meh
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the gameWith the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = str(input("Enter your choice: rock, paper, or scissors: ")) #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("you win!")
elif (you == 'paper' and computer == 'scissors'):
print("you lose! ")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose rock
you win!
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("YOU WIN YAY!")
elif (you == 'paper' and computer == 'scissors'):
print("oH NO u lose. good luck next time!")
elif (you=='scissors'and computer == 'rock'):
print('oh no u lose. good luck next time!')
elif(you=='scissors'and computer =='paper'):
print("yay u win!")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose paper
yay u win!
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
Enter an integer: 24
24 is even
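The goals listed at the top of this lab also mention the try / except statement; as an aside, a minimal sketch of how it could guard this same input against non-integer text looks like this:

```python
# Aside (illustrative): try/except can guard the integer input used above
try:
    number = int(input("Enter an integer: "))
    if number % 2 == 0:
        print("%d is even" % (number))
    else:
        print("%d is odd" % (number))
except ValueError:
    print("That was not an integer!")
```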
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and, when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
number = int(input("Enter an integer: "))
if number >= 0:
print("%d is positive" % (number))
else:
print("%d is negative" % (number))
###Output
Enter an integer: -6
-6 is negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: pizza
You chose pizza and the computer chose paper
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: papir
You didn't enter 'rock', 'paper' or 'scissors'!!!
###Markdown
Playing the gameWith the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose rock and the computer chose rock
It's a tie!
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("Paper covers rock, you win!")
elif (you == 'paper' and computer == 'scissors'):
print("Scissor cut paper, you lose!")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose scissors
Scissor cut paper, you lose!
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("Paper covers rock, you win!")
elif (you == 'paper' and computer == 'scissors'):
print("Scissors cut paper, you lose!")
elif (you == 'scissors' and computer =='paper'):
print("Scissors cut paper, you win!")
elif (you == 'scissors' and computer == 'rock'):
print("Rock crushes scissors, you lose!")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: paper
You chose paper and the computer chose rock
Paper covers rock, you win!
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
_____no_output_____
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options! On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and, when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
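If the `%` and `==` operators are new to you, a quick check like the following (purely illustrative) shows what each piece of that Boolean expression evaluates to:

```python
# Quick look at the operators used in the Boolean expression above
print(7 % 2)         # 1 -> remainder of 7 divided by 2
print(8 % 2)         # 0 -> 8 divides evenly by 2
print(8 % 2 == 0)    # True -> the kind of Boolean expression the if statement evaluates
```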
###Code
# TODO write your program here:
###Output
_____no_output_____
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
_____no_output_____
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (TODO): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Playing the gameWith the input figured out, it's time to work on the final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cut paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("TODO - What should this say?")
elif (you == 'paper' and computer == 'scissors'):
print("TODO - What should this say?")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("TODO - What should this say?")
elif (you == 'paper' and computer == 'scissors'):
print("TODO - What should this say?")
# TODO add logic for you == 'scissors' similar to the paper logic
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
In-Class Coding Lab: ConditionalsThe goals of this lab are to help you to understand:- Relational and Logical Operators - Boolean Expressions- The if statement- Try / Except statement- How to create a program from a complex idea. Understanding ConditionalsConditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
###Code
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
###Output
_____no_output_____
###Markdown
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options. On line 2, you see `number %2 == 0`; this is a Boolean expression at the center of the logic of this program. The expression says **number when divided by 2 has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding the basics, such as these, is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them together into a workable solution. The `if` statement evaluates this Boolean expression and, when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`. Now Try It: Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise. To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
###Code
# TODO write your program here:
number = int(input("Enter an integer: "))
if number>=0:
print("Zero or Positive")
else:
print("Negative")
###Output
Enter an integer: -9
Negative
###Markdown
Rock, Paper ScissorsIn this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissor](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.The objective of the lab is to teach you how to use conditionals but also get you thinking of how to solve problems with programming. We've said before its non-linear, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
###Code
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winnner... (not sure how to do this yet.)
###Output
_____no_output_____
###Markdown
Randomizing the Computer's Selection Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html) It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
###Code
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
###Output
_____no_output_____
###Markdown
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything! Getting input and guarding against stupidityWith step one out of the way, its time to move on to step 2. Getting input from the user.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem. We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors' This is where our first conditional comes in to play. In operatorThe `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
###Code
# TODO Try these:
'rock' in choices, 'mike' in choices
###Output
_____no_output_____
###Markdown
You Do It!Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: rock
You chose rock and the computer chose rock
###Markdown
Playing the gameWith the input figured out, it's time to work our final step, playing the game. The game itself has some simple rules:- rock beats scissors (rock smashes scissors)- scissors beats paper (scissors cuts paper)- paper beats rock (paper covers rock)So for example:- If you choose rock and the computer chooses paper, you lose because paper covers rock. - Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.- If you both choose rock, it's a tie. It's too complicated!It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.One common way we simplify a problem is to constrain our input. If we force us to always choose 'rock', the program becomes a little easier to write.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
_____no_output_____
###Markdown
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended. Paper: Making the program a bit more complex. With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it. At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder. You Do It: In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
You chose paper and the computer chose paper
It's a tie!
###Markdown
The final program: With the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise. Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
###Code
# 1. computer opponent select one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winnner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cut paper.")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cut paper.")
elif (you == 'scissors' and computer == 'rock'):
print("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
###Output
Enter your choice: rock, paper, or scissors: scissors
You chose scissors and the computer chose rock
You lose! Rock smashes scissors.
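For comparison, once the full `if...elif` ladder above makes sense, the same win/lose decision can be written more compactly by recording which choice each choice beats in a dictionary. This is only an illustrative variation, not a required part of the lab:

```python
# Illustrative variation: encode "what beats what" in a dict instead of spelling out every pair
import random

choices = ['rock', 'paper', 'scissors']
beats = {'rock': 'scissors', 'paper': 'rock', 'scissors': 'paper'}  # key beats value

computer = random.choice(choices)
you = input("Enter your choice: rock, paper, or scissors: ")

if you not in choices:
    print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
elif you == computer:
    print("It's a tie!")
elif beats[you] == computer:
    print("You win! %s beats %s." % (you.capitalize(), computer))
else:
    print("You lose! %s beats %s." % (computer.capitalize(), you))
```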
|
change_tenses.ipynb | ###Markdown
Here 'thought' was not changed. Let's check if it was labeled as a noun.
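A note on setup: the cells below rely on names (`text`, `parse`, `parsetree`, `conjugate`, `PRESENT`, `PAST`, `PLURAL`, `English`, `pairwise`, and friends) that are not defined in what is shown here. They would normally come from earlier setup roughly like the following sketch, which assumes the Python 2-era `pattern` library and spaCy 1.x; treat the exact imports and the abridged `text` passage as assumptions rather than the notebook's actual code.

```python
# Sketch of the setup this notebook assumes (not shown above); Python 2-era APIs
import string
from itertools import tee
from pprint import pprint

from pattern.en import parse, parsetree, conjugate, tenses, PRESENT, PAST, PLURAL
from spacy.en import English            # spaCy 1.x style import
from spacy.symbols import nsubj, VERB   # used further down for subject/verb matching

def pairwise(iterable):
    """Return consecutive (token, next_token) pairs: s -> (s0, s1), (s1, s2), ..."""
    a, b = tee(iterable)
    next(b, None)
    return list(zip(a, b))

# Running example: the (abridged) opening passage of Alice in Wonderland
text = ("Alice was beginning to get very tired of sitting by her sister on the bank, "
        "and of having nothing to do: ...")  # abridged; the notebook uses the full passage
```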
###Code
sentences = parse(text).split()
[x for x in sentences[0] if x[0] == 'thought']
###Output
_____no_output_____
###Markdown
Yup, it's labeled as a noun phrase (NP). Let's try the spaCy parser.
###Code
nlp = English()
doc=nlp(text)
[x for x in list(doc.sents)[0] if x.text == 'thought'][0].tag_
###Output
_____no_output_____
###Markdown
Well that's good, spaCy got it right! Let's build the same parser, but using spaCy instead of pattern.
###Code
def change_tense_spaCy(text, to_tense):
doc = nlp(unicode(text))
out = []
out.append(doc[0].text)
for word_pair in pairwise(doc):
if (word_pair[0].string == 'will' and word_pair[1].pos_ == u'VERB') \
or word_pair[1].tag_ == u'VBD' or word_pair[1].tag_ == u'VBP':
if to_tense == 'present':
out.append(conjugate(word_pair[1].text, PRESENT))
elif to_tense == 'past':
out.append(conjugate(word_pair[1].text, PAST))
elif to_tense == 'future':
out.append('will')
out.append(conjugate(word_pair[1].text, 'inf'))
elif word_pair[1].text == 'will' and word_pair[1].tag_ == 'MD':
pass
else:
out.append(word_pair[1].text)
text_out = ' '.join(out)
for char in string.punctuation:
if char in """(<['‘""":
text_out = text_out.replace(char+' ',char)
else:
text_out = text_out.replace(' '+char,char)
text_out = text_out.replace(" 's","'s") #fix posessive 's
return text_out
print(change_tense_spaCy(text, 'present'))
print(change_tense_spaCy(text,"future"))
###Output
Alice will be beginning to get very tired of sitting by her sister on the bank and of having nothing to do: once or twice she will have peeped into the book her sister will be reading, but it will have no pictures or conversations in it, ‘ and what is the use of a book,’ will think Alice ‘ without pictures or conversations?’ So she will be considering in her own mind (as well as she could, for the hot day will make her feel very sleepy and stupid), whether the pleasure of making a daisy- chain would be worth the trouble of getting up and picking the daisies, when suddenly White Rabbit with pink eyes will run close by her.
###Markdown
Looking good! However, it will fail if we make the following change to the last sentence:
###Code
text = "White rabbits with pink eyes ran close by her."
change_tense_spaCy(text, 'present')
###Output
_____no_output_____
###Markdown
This fails because the verb "ran" conjugates to "runs" if the subject is singular, but conjugates to "run" if the subject is plural. To fix this, we'll have to figure out a way to tell the verb the number of its subject.
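Pattern's `conjugate()` accepts a number argument (the next cell relies on exactly that), so the difference can be checked directly. This is just an illustrative check; the expected results are noted in the comments:

```python
# Illustrative: present-tense conjugation of 'ran' for singular vs. plural subjects
print(conjugate('ran', PRESENT))                # expected 'runs' (defaults to third person singular)
print(conjugate('ran', PRESENT, None, PLURAL))  # expected 'run'
```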
###Code
from spacy.symbols import NOUN
SUBJ_DEPS = {'agent', 'csubj', 'csubjpass', 'expl', 'nsubj', 'nsubjpass'}
def _get_conjuncts(tok):
"""
Return conjunct dependents of the leftmost conjunct in a coordinated phrase,
e.g. "Burton, [Dan], and [Josh] ...".
"""
return [right for right in tok.rights
if right.dep_ == 'conj']
def is_plural_noun(token):
"""
Returns True if token is a plural noun, False otherwise.
Args:
token (``spacy.Token``): parent document must have POS information
Returns:
bool
"""
if token.doc.is_tagged is False:
raise ValueError('token is not POS-tagged')
return True if token.pos == NOUN and token.lemma != token.lower else False
def get_subjects_of_verb(verb):
"""Return all subjects of a verb according to the dependency parse."""
subjs = [tok for tok in verb.lefts
if tok.dep_ in SUBJ_DEPS]
# get additional conjunct subjects
subjs.extend(tok for subj in subjs for tok in _get_conjuncts(subj))
return subjs
def is_plural_verb(token):
if token.doc.is_tagged is False:
raise ValueError('token is not POS-tagged')
subjects = get_subjects_of_verb(token)
plural_score = sum([is_plural_noun(x) for x in subjects])/len(subjects)
return plural_score > .5
conjugate??
def change_tense_spaCy(text, to_tense):
doc = nlp(unicode(text))
out = []
out.append(doc[0].text)
for word_pair in pairwise(doc):
if (word_pair[0].string == 'will' and word_pair[1].pos_ == u'VERB') \
or word_pair[1].tag_ == u'VBD' or word_pair[1].tag_ == u'VBP':
if to_tense == 'present':
if is_plural_verb(word_pair[1]):
out.append(conjugate(word_pair[1].text, PRESENT, None, PLURAL))
else:
out.append(conjugate(word_pair[1].text, PRESENT))
elif to_tense == 'past':
out.append(conjugate(word_pair[1].text, PAST))
elif to_tense == 'future':
out.append('will')
out.append(conjugate(word_pair[1].text, 'inf'))
elif word_pair[1].text == 'will' and word_pair[1].tag_ == 'MD':
pass
else:
out.append(word_pair[1].text)
text_out = ' '.join(out)
for char in string.punctuation:
if char in """(<['‘""":
text_out = text_out.replace(char+' ',char)
else:
text_out = text_out.replace(' '+char,char)
text_out = text_out.replace(" 's","'s") #fix posessive 's
return text_out
text_plural_check = "Rabbits with white fur ran close by her."
change_tense_spaCy(text_plural_check, 'present')
nlp = English()
sent = u"I was shooting an elephant"
doc=nlp(sent)
sub_toks = [tok for tok in doc if (tok.dep_ == "nsubj") ]
print(sub_toks)
# Finding a verb with a subject from below — good
verbs = set()
for possible_subject in doc:
if possible_subject.dep == nsubj and possible_subject.head.pos == VERB:
verbs.add((possible_subject, possible_subject.head))
verbs
text2 = "We will see about that"
sentences = parse(text2).split()
sentences
pprint(parsetree("I walk to the store"))
pairwise(sentences[0])[0]
parse("I will walk").split()
text2 = """Dr. Dichter's interest in community psychiatry began as a fourth year resident when he and a co-resident ran a psychiatric inpatient and outpatient program at Fort McCoy Wisconsin treating formally institutionalized chronically mentally ill Cuban refugees from the Mariel Boatlift. He came to Philadelphia to provide short-term inpatient treatment, alleviating emergency room congestion. There he first encountered the problems of homelessness and was particularly interested in the relationship between the homeless and their families. Dr. Dichter has been the Director of an outpatient department and inpatient unit, as well as the Director of Family Therapy at AEMC. His work with families focused on the impact of chronic mental illness on the family system. He was the first Medical Director for a Medicaid Managed Care Organization and has consulted with SAMHSA, CMS and several states assisting them to monitor access and quality of care for their public patients. He currently is the Medical Director for Pathways to Housing PA, where he has assists chronically homeless to maintain stable housing and recover from the ravages of mental illness and substance abuse."""
text2
change_tense_spaCy(text2,'future')
s = parsetree(text2,relations=True)[0]
' '.join([chunk.string for chunk in s.chunks])
s.string
conjugate('focussed','inf',parse=False)
tenses('focused')
from stat_parser import Parser
parser = Parser()
text = "He came to Philadelphia to provide short-term inpatient treatment, alleviating emergency room congestion."
text = "I will be there."
result = parser.parse(text)
result
sentence = result
LABELS = [x._label for x in sentence[0]]
vps = [x for x in sentence[0] if x._label == 'VP']
#verbs = x for x in vps
WORDS,POS = zip(*result.pos())
vps[0].pos()
vps[0]
doc
#fix formatting
import string
##TODO: fix spacing around single and double quotes
###Output
_____no_output_____ |
notebooks/TickMarks_Part2.ipynb | ###Markdown
Part 2 of Tick Marks, Grids and Labels: Tick Marks - Margins

This page is primarily based on the following page at the Circos documentation site:

- [2. Tick Marks - Margins](????????????)

That page is found as part number 4 of the ??? part ['Tick Marks, Grids and Labels' section](http://circos.ca/documentation/tutorials/quick_start/) of [the larger set of Circos tutorials](http://circos.ca/documentation/tutorials/).

Go back to Part 1 by clicking [here &#8592;](TickMarks_Part1.ipynb).

----

4 --- Tick Marks, Grids and Labels
==================================

2. Tick Marks - Margins
-----------------------

If your ticks are densely spaced, they may overlap and form dreaded tick blobs. Likewise, tick labels can start to overlap and lose their legibility. To mitigate this problem, you can insist that adjacent ticks (or labels) are separated by a minimum distance.

Here I show how to manage spacing between tick marks. The next tutorial discusses label spacing.

minimum tick mark separation

The tick\_separation parameter controls the minimum distance between two tick marks. Note that this parameter applies to tick marks only, not to labels. Labels have their own minimum distance parameter, covered in the next session. Of course, if a tick mark is not drawn, neither will its label.

```ini
# define minimum separation for all ticks
tick_separation = 3p
...
```

```ini
# define minimum separation for a specific tick group
tick_separation = 2p
...
```

The primary purpose of the tick\_separation parameter is to allow automatic suppression of ticks if you shrink the image size, change the ideogram position radius, change the ideogram scale or, in general, perform any adjustment to the image that changes the base/pixel resolution.

Since Circos supports local scale adjustments (at the level of ideogram, or region of ideogram), the tick separation parameter is used to dynamically show/hide ticks across the image. You should keep this value at 2-3 pixels at all times, so that your tick marks do not run into each other.

Tick marks are suppressed on a group-by-group basis. In other words, tick separation is checked separately for each \<tick\> block. For example, if you define 1u, 5u and 10u ticks, these will be checked for overlap independently (Circos does not check if the 10u tick overlaps with a tick from another group, such as the 1u tick).

This approach is slightly different than the method that was initially implemented, which compared inter-tick distances across tick groups.

The tick mark thickness plays no role in determining the distance between ticks. The tick-to-tick distance is calculated based on the positions of the ticks.

In the first example image in this tutorial, three ideograms are drawn, each at a different scale. Depending on the magnification, ticks are suppressed for some ideograms because they are closer than the tick\_separation parameter. For example, 0.25u and 0.5u ticks do not appear on hs1 and 0.25u ticks do not appear on hs2.

In the second example image, only one chromosome is shown but its scale is smoothly expanded. Region 100-110 Mb is magnified at 10x, with the scale in the neighbourhood decreasing smoothly from 10x to 1x. All tick marks are shown within the magnified area; away from it, as the scale returns to 1x, some ticks disappear.
minimum tick distance to ideogram edge

To suppress ticks near the edge of the ideogram, use min\_distance\_to\_edge. This parameter can be used globally in the \<ticks\> block to affect all ticks, or locally with a \<tick\> block to affect an individual tick group.

```ini
min_distance_to_edge = 10p
...
min_distance_to_edge = 5p
...
```

The corresponding parameter to suppress labels near an ideogram edge is min\_label\_distance\_to\_edge.

----

Generating the plot produced by this example code

The following two cells will generate the plot. The first cell adjusts the current working directory.
###Code
%cd ../circos-tutorials-0.67/tutorials/4/2/
%%bash
../../../../circos-0.69-6/bin/circos -conf circos.conf
###Output
debuggroup summary 0.43s welcome to circos v0.69-6 31 July 2017 on Perl 5.022000
debuggroup summary 0.44s current working directory /home/jovyan/circos-tutorials-0.67/tutorials/4/2
debuggroup summary 0.44s command ../../../../circos-0.69-6/bin/circos -conf circos.conf
debuggroup summary 0.44s loading configuration from file circos.conf
debuggroup summary 0.44s found conf file circos.conf
debuggroup summary 0.62s debug will appear for these features: output,summary
debuggroup summary 0.62s bitmap output image ./circos.png
debuggroup summary 0.62s SVG output image ./circos.svg
debuggroup summary 0.62s parsing karyotype and organizing ideograms
debuggroup summary 0.73s karyotype has 24 chromosomes of total size 3,095,677,436
debuggroup summary 0.73s applying global and local scaling
debuggroup summary 0.74s allocating image, colors and brushes
debuggroup summary 2.91s drawing 1 ideograms of total size 249,250,622
debuggroup summary 2.91s drawing highlights and ideograms
debuggroup output 3.55s generating output
debuggroup output 4.35s created PNG image ./circos.png (158 kb)
debuggroup output 4.35s created SVG image ./circos.svg (58 kb)
###Markdown
View the plot in this page using the following cell.
###Code
from IPython.display import Image
Image("circos.png")
###Output
_____no_output_____ |
Seaborn - Loading Dataset.ipynb | ###Markdown
Seaborn | Part-1: Loading Datasets: When working with Seaborn, we can either use one of the [built-in datasets](https://github.com/mwaskom/seaborn-data) that Seaborn offers or we can load a Pandas DataFrame. Seaborn is part of the [PyData](https://pydata.org/) stack hence accepts Pandas’ data structures.Let us begin by importing few built-in datasets but before that we shall import few other libraries as well that our Seaborn would depend upon:
###Code
# Importing intrinsic libraries:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Once we have imported the required libraries, now it is time to load built-in dataset. The dataset we would be dealing with in this illustration is [Iris Flower Dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set).
###Code
# Loading built-in Datasets:
iris = sns.load_dataset("iris")
###Output
_____no_output_____
###Markdown
Similarly we may load other dataset as well and for illustration sake, I shall code few of them down here (though won't be referencing to):
###Code
# Refer to 'Dataset Source Reference' for list of all built-in Seaborn datasets.
tips = sns.load_dataset("tips")
exercise = sns.load_dataset("exercise")
titanic = sns.load_dataset("titanic")
flights = sns.load_dataset("flights")
###Output
_____no_output_____
###Markdown
Let us take a sneak peek as to how this Iris dataset looks like and we shall be using Pandas to do so:
###Code
iris.head(10)
###Output
_____no_output_____
###Markdown
Iris dataset actually has 50 samples from each of three species of Iris flower (Setosa, Virginica and Versicolor). Four features were measured (in centimetres) from each sample: Length and Width of the Sepals and Petals. Let us try to have a summarized view of this dataset:
###Code
iris.describe()
###Output
_____no_output_____
###Markdown
`.describe()` is a very useful method in Pandas as it generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. Without getting in-depth into analysis here, let us try to plot something simple from this dataset:
###Code
sns.set()
%matplotlib inline
# Later in the course I shall explain why above 2 lines of code have been added.
sns.swarmplot(x="species", y="petal_length", data=iris)
###Output
_____no_output_____
###Markdown
This beautiful representation of data we see above is known as a `Swarm Plot` with minimal parameters. I shall be covering this in detail later on but for now I just wanted you to have a feel of serenity we're getting into. Let us now try to load a random dataset and the one I've picked for this illustration is [PoliceKillingsUS](https://github.com/washingtonpost/data-police-shootings) dataset. This dataset has been prepared by The Washington Post (they keep updating it on runtime) with every fatal shooting in the United States by a police officer in the line of duty since Jan. 1, 2015.
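(A quick aside before switching datasets: the same plotting call accepts more parameters, for example a `hue` grouping, along the lines of this sketch using the `tips` dataset loaded earlier.)

```python
# Illustrative: the same swarm plot idea, grouped by an extra categorical column via `hue`
sns.swarmplot(x="day", y="total_bill", hue="sex", data=tips)
plt.show()
```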
###Code
# Loading Pandas DataFrame:
df = pd.read_csv("C:/Users/Alok/Downloads/PoliceKillingsUS.csv", encoding="windows-1252")
###Output
_____no_output_____
###Markdown
Just the way we looked into the Iris dataset, let us now have a preview of this dataset as well. We won't be getting into deep analysis of this dataset because our agenda is only to visualize the content within. So, let's do this:
###Code
df.head(10)
###Output
_____no_output_____
###Markdown
This dataset is pretty self-descriptive and has a limited number of features (may read as columns).

`race`:
- `W`: White, non-Hispanic
- `B`: Black, non-Hispanic
- `A`: Asian
- `N`: Native American
- `H`: Hispanic
- `O`: Other
- `None`: unknown

And `gender` indicates:
- `M`: Male
- `F`: Female
- `None`: unknown

The `threat_level` column includes incidents where officers or others were shot at, threatened with a gun, attacked with other weapons or physical force, etc. The `attack` category is meant to flag the highest level of threat. The `other` and `undetermined` categories represent all remaining cases. `Other` includes many incidents where officers or others faced significant threats.

The `threat` column and the `fleeing` column are not necessarily related. Also, `attacks` represent a status immediately before fatal shots by police, while `fleeing` could begin slightly earlier and involve a chase. Lastly, `body_camera` indicates if an officer was wearing a body camera and it may have recorded some portion of the incident.

Let us now look into the descriptive statistics:
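Before the summary statistics, the categorical columns described above can also be eyeballed with simple frequency counts; an illustrative check along these lines works:

```python
# Illustrative: frequency counts for the categorical columns described above
print(df['race'].value_counts(dropna=False))
print(df['gender'].value_counts(dropna=False))
sns.countplot(x="race", data=df)
```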
###Code
df.describe()
###Output
_____no_output_____
###Markdown
These stats in particular do not really make much sense. Instead let us try to visualize age of people who were claimed to be armed as per this dataset.Quick Note: Two special lines of code that we added earlier won't be required again. As promised, I shall reason that in upcoming lectures.
###Code
sns.stripplot(x="armed", y="age", data=df)
###Output
_____no_output_____ |
Student_model_with _KD(BBC_news_dataset).ipynb | ###Markdown
Data---
###Code
# Imports assumed for this notebook (the original import cell is not shown here)
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense, Dropout, Flatten, Bidirectional, GRU
from tensorflow.keras.models import Model
news_df = pd.read_csv("/content/drive/MyDrive/Data/A4/TrainData.csv")
news_df['Category'].value_counts()
from sklearn.preprocessing import OneHotEncoder
enc = OneHotEncoder(handle_unknown = 'ignore')
enc.fit(np.array(news_df["Category"]).reshape(-1,1))
y_train = enc.transform(np.array(news_df["Category"]).reshape(-1,1)).toarray()
y_train[0]
!pip install contractions
import contractions
# Data Cleaning
def clean_text(text):
clean_texts = ""
expanded_words = ""
# remove everything except alphabets
# text = re.sub("[^a-zA-Z]", " ", text)
words = text.split(" ")
for word in words:
word = contractions.fix(word)
tokens = word.split(" ")
for token in tokens:
if(len(token) > 1):
expanded_words = expanded_words + ' ' + token
clean_texts = clean_texts + ' ' + expanded_words
# remove whitespaces
clean_texts = ' '.join(clean_texts.split())
clean_texts = clean_texts.lower()
return clean_texts
# creating clean text feature
news_df['clean_text'] = news_df['Text'].apply(clean_text)
X = news_df.loc[:,'clean_text']
max_seq_length = max([len(text.split(" ")) for text in X])
avg_seq_length = np.mean([len(text.split(" ")) for text in X])
print("max_seq_length = " , max_seq_length)
print("avg_seq_length = " , avg_seq_length)
print(X[0])
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
max_words = 1000 #3000
tokenizer = Tokenizer(oov_token = "OOV")
tokenizer.fit_on_texts(X)
sequence_train = tokenizer.texts_to_sequences(X)
sequence_train = pad_sequences(sequence_train,padding='post', maxlen=max_words)
x_train = np.asarray(sequence_train)
# Inspect the dimensions of our training and test data (this is helpful to debug)
print('x_train shape:', x_train.shape)
print('y_train shape:', y_train.shape)
news_test_df = pd.read_csv("/content/drive/MyDrive/Data/A4/TestData_Inputs.csv")
news_test_df2 = pd.read_excel("/content/drive/MyDrive/Data/A4/Assignment4_TestLabels.xlsx")
# creating clean text feature
news_test_df['clean_text'] = news_test_df['Text'].apply(clean_text)
# news_test_df['clean_text'] = news_test_df['clean_text'].apply(lambda x: remove_stopwords(x))
X_test = news_test_df.loc[:,'clean_text']
y_test = enc.transform(np.array(news_test_df2['Label - (business, tech, politics, sport, entertainment)']).reshape(-1,1)).toarray()
sequence_test = tokenizer.texts_to_sequences(X_test)
sequence_test = pad_sequences(sequence_test,padding='post', maxlen=max_words)
X_test = np.asarray(sequence_test)
# Inspect the dimensions of our training and test data (this is helpful to debug)
print('x_test shape:', X_test.shape)
print('y_test shape:', y_test.shape)
news_test_df.head()
dict(list((tokenizer.word_index).items())[0:10])
# embedding_matrix_w2v = np.load("/content/drive/MyDrive/Data/A4/A4_matrix_w2v2.npy")
# embedding_matrix_ft = np.load("/content/drive/MyDrive/Data/A4/A4_matrix_ft2.npy")
# embedding_matrix_glove = np.load("/content/drive/MyDrive/Data/A4/A4_matrix_glove2.npy")
###Output
_____no_output_____
###Markdown
Positional Encoding: a vector added to the embedding to encode positional information. See https://www.tensorflow.org/tutorials/text/transformer#positional_encoding
###Code
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
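# Quick sanity check (added illustration, not part of the original notebook):
# the encoding has shape (1, position, d_model), e.g. (1, 1000, 64) below.
demo_pos_encoding = positional_encoding(1000, 64)
print(demo_pos_encoding.shape)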
###Output
_____no_output_____
###Markdown
Multi-Headed Attention. The attention is computed with shape (batch_size, num_heads, seq_length, depth). For each of [query, value, key] we reshape from (batch_size, seq_length, d_model) -> (batch_size, seq_length, num_heads, depth) and then permute to (batch_size, num_heads, seq_length, depth), where depth = d_model / num_heads. The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients and resulting in a very hard softmax.
###Code
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model = 512, num_heads = 8, causal=False, dropout=0.0):
super(MultiHeadAttention, self).__init__()
assert d_model % num_heads == 0
depth = d_model // num_heads
self.w_query = tf.keras.layers.Dense(d_model)
self.split_reshape_query = tf.keras.layers.Reshape((-1,num_heads,depth))
self.split_permute_query = tf.keras.layers.Permute((2,1,3))
self.w_value = tf.keras.layers.Dense(d_model)
self.split_reshape_value = tf.keras.layers.Reshape((-1,num_heads,depth))
self.split_permute_value = tf.keras.layers.Permute((2,1,3))
self.w_key = tf.keras.layers.Dense(d_model)
self.split_reshape_key = tf.keras.layers.Reshape((-1,num_heads,depth))
self.split_permute_key = tf.keras.layers.Permute((2,1,3))
self.attention = tf.keras.layers.Attention(causal=causal, dropout=dropout)
self.join_permute_attention = tf.keras.layers.Permute((2,1,3))
self.join_reshape_attention = tf.keras.layers.Reshape((-1,d_model))
self.dense = tf.keras.layers.Dense(d_model)
def call(self, inputs, mask=None, training=None):
q = inputs[0]
v = inputs[1]
k = inputs[2] if len(inputs) > 2 else v
query = self.w_query(q)
query = self.split_reshape_query(query)
query = self.split_permute_query(query)
value = self.w_value(v)
value = self.split_reshape_value(value)
value = self.split_permute_value(value)
key = self.w_key(k)
key = self.split_reshape_key(key)
key = self.split_permute_key(key)
if mask is not None:
if mask[0] is not None:
mask[0] = tf.keras.layers.Reshape((-1,1))(mask[0])
mask[0] = tf.keras.layers.Permute((2,1))(mask[0])
if mask[1] is not None:
mask[1] = tf.keras.layers.Reshape((-1,1))(mask[1])
mask[1] = tf.keras.layers.Permute((2,1))(mask[1])
attention = self.attention([query, value, key], mask=mask)
attention = self.join_permute_attention(attention)
attention = self.join_reshape_attention(attention)
x = self.dense(attention)
return x
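# Illustrative shape check (an added sketch, not part of the original notebook):
# self-attending a batch of 2 sequences of length 10 with d_model=64 and num_heads=4
# returns a tensor of the same (batch_size, seq_length, d_model) shape, i.e. (2, 10, 64).
mha_demo = MultiHeadAttention(d_model=64, num_heads=4)
demo_inputs = tf.random.uniform((2, 10, 64))
print(mha_demo([demo_inputs, demo_inputs, demo_inputs]).shape)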
###Output
_____no_output_____
###Markdown
Encoder Layers
###Code
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model = 512, num_heads = 8, dff = 2048, dropout = 0.0):
super(EncoderLayer, self).__init__()
self.multi_head_attention = MultiHeadAttention(d_model, num_heads)
self.dropout_attention = tf.keras.layers.Dropout(dropout)
self.add_attention = tf.keras.layers.Add()
self.layer_norm_attention = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dense1 = tf.keras.layers.Dense(dff, activation='relu')
self.dense2 = tf.keras.layers.Dense(d_model)
self.dropout_dense = tf.keras.layers.Dropout(dropout)
self.add_dense = tf.keras.layers.Add()
self.layer_norm_dense = tf.keras.layers.LayerNormalization(epsilon=1e-6)
def call(self, inputs, mask=None, training=None):
# print(mask)
attention = self.multi_head_attention([inputs,inputs,inputs], mask = [mask,mask])
attention = self.dropout_attention(attention, training = training)
x = self.add_attention([inputs , attention])
x = self.layer_norm_attention(x)
# x = inputs
## Feed Forward
dense = self.dense1(x)
dense = self.dense2(dense)
dense = self.dropout_dense(dense, training = training)
x = self.add_dense([x , dense])
x = self.layer_norm_dense(x)
return x
###Output
_____no_output_____
###Markdown
The `causal=True` argument of the attention layer automatically masks future tokens in the sequence (this is what decoder-style self-attention would use; the encoder layers below keep the default `causal=False`). Encoder Blocks
###Code
class Encoder(tf.keras.layers.Layer):
def __init__(self, input_vocab_size, num_layers = 4, d_model = 512, num_heads = 8, dff = 2048, maximum_position_encoding = 10000, dropout = 0.0):
super(Encoder, self).__init__()
self.d_model = d_model
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model, mask_zero=True)
self.pos = positional_encoding(maximum_position_encoding, d_model)
self.encoder_layers = [ EncoderLayer(d_model = d_model, num_heads = num_heads, dff = dff, dropout = dropout) for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(dropout)
def call(self, inputs, mask=None, training=None):
x = self.embedding(inputs)
# positional encoding
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32)) # scaling by the sqrt of d_model, not sure why or if needed??
x += self.pos[: , :tf.shape(x)[1], :]
x = self.dropout(x, training=training)
#Encoder layer
embedding_mask = self.embedding.compute_mask(inputs)
for encoder_layer in self.encoder_layers:
x = encoder_layer(x, mask = embedding_mask)
return x
def compute_mask(self, inputs, mask=None):
return self.embedding.compute_mask(inputs)
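# Illustrative shape check (an added sketch): a tiny encoder applied to one padded
# sequence of 5 token ids returns one d_model-dimensional vector per position, i.e. (1, 5, 64).
encoder_demo = Encoder(input_vocab_size=100, num_layers=1, d_model=64, num_heads=4, dff=128)
print(encoder_demo(tf.constant([[5, 3, 9, 0, 0]])).shape)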
###Output
_____no_output_____
###Markdown
Transformer model
###Code
num_layers = 1
d_model = 64
dff = 128
num_heads = 4
dropout_rate = 0.4
MAX_LENGTH = max_words
alpha = 0.2
input_vocab_size = len(tokenizer.index_word) + 1
input = tf.keras.layers.Input(shape=(MAX_LENGTH,))
encoder = Encoder(input_vocab_size, num_layers = num_layers, d_model = d_model, num_heads = num_heads,
dff = dff, dropout = dropout_rate)
x = encoder(input)
x = Bidirectional(GRU(64, dropout=0.2))(x)
output = Dense(5, activation='sigmoid')(x)
# flatten = Flatten()(x)
# dense1 = Dense(256, activation="relu")(flatten)
# dropout = Dropout(0.2)(dense1)
# output = Dense(5, activation="sigmoid")(dropout)
model = Model(input, output)
teacher_model = tf.keras.models.load_model("/content/drive/MyDrive/Data/Teacher_model")
teacher_y_pred = teacher_model.predict(x_train, verbose=1)
T_y_pred = K.constant(teacher_y_pred)
def custom_loss_function(y_true, y_pred):
    # Hard-label (student) loss: element-wise binary cross-entropy against the true one-hot labels
    student_loss = - y_true * tf.math.log(y_pred) - (1 - y_true) * tf.math.log(1 - y_pred)
    # Distillation loss: cross-entropy against the teacher's soft predictions (T_y_pred)
    distil_loss = - T_y_pred * tf.math.log(y_pred) - (1 - T_y_pred) * tf.math.log(1 - y_pred)
    # alpha weights the hard-label term against the distillation term (alpha = 0.2 above)
    loss = alpha * student_loss + (1 - alpha) * distil_loss
    print(tf.reduce_mean(loss, axis=-1))  # debug print; shows the symbolic loss tensor when the graph is traced
    return tf.reduce_mean(loss, axis=-1)
model.compile(loss = custom_loss_function, optimizer='adam', metrics=['accuracy'])
model.summary()
print(teacher_y_pred.shape)
type(teacher_y_pred)
###Output
(1490, 5)
###Markdown
Training
###Code
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode = 'min' ,patience=6)
history = model.fit(x_train , y_train,
batch_size=16,
epochs=15,
verbose=2,
callbacks=[callback],
validation_split=0.2)
score, acc = model.evaluate(X_test, y_test)
###Output
23/23 [==============================] - 3s 143ms/step - loss: 0.0708 - accuracy: 0.9537
|
Python_Stock/Apply_Mathematics_Trading_Investment/Manhattan_Distance_Stock.ipynb | ###Markdown
Manhattan Distance. The Manhattan distance between two vectors A and B is given by the formula Σ|Ai – Bi|, where i is the ith element in each vector. The Manhattan distance is calculated as the sum of the absolute differences between the two vectors. It is related to the L1 vector norm and to the sum-of-absolute-errors and mean-absolute-error metrics. This distance is used to measure the dissimilarity between two vectors and is commonly used in machine learning algorithms.
###Code
import numpy as np
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
symbol = 'AMD'
start = '2018-01-01'
end = '2019-01-01'
# Read data
dataset = yf.download(symbol,start,end)
# View Columns
dataset.head()
Open = np.array(dataset['Open'])
Close = np.array(dataset['Adj Close'])
Open
Close
def manhattan_distance(a, b):
manhattan = sum(abs(value1-value2) for value1, value2 in zip(a,b))
return manhattan
manhattan_distance(Open, Close)
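# The same distance in vectorized form with NumPy (added illustration); both lines
# agree with the loop-based helper above, since the L1 norm of the difference
# vector is exactly the sum of absolute differences.
print(np.sum(np.abs(Open - Close)))
print(np.linalg.norm(Open - Close, ord=1))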
###Output
_____no_output_____ |
Clustering_&_Retrieval/Week4/Assignment2/.ipynb_checkpoints/4_em-with-text-data_blank-checkpoint.ipynb | ###Markdown
Fitting a diagonal covariance Gaussian mixture model to text data. In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component. * Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix. * A model with many parameters requires more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences. Both of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is:\begin{align*}\hat{\Sigma}_k &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_i-\hat{\mu}_k)(x_i - \hat{\mu}_k)^T\end{align*}Note that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by\begin{align*}\hat{\Sigma}_{k, v, w} &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})(x_{iw} - \hat{\mu}_{kw})\end{align*}When we assume that this is a diagonal matrix, then non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation. \begin{align*}\hat{\sigma}^2_{k, v} &= \hat{\Sigma}_{k, v, v} \\&= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})^2\end{align*}In this section, we will use an EM implementation to fit a Gaussian mixture model with **diagonal** covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term. We'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import necessary packages
###Code
import graphlab
###Output
_____no_output_____
###Markdown
We also have a Python file containing implementations for several functions that will be used during the course of this assignment.
###Code
from em_utilities import *
###Output
_____no_output_____
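###Markdown
As a rough illustration of the diagonal-covariance M-step from the equations above, a minimal NumPy sketch could look like the cell below (names such as `resp` for the responsibilities are assumptions made here; the assignment's actual implementation is the one provided in `em_utilities.py`).
###Code
import numpy as np
def m_step_diagonal_covariances(X, resp, means, min_variance=1e-10):
    # X: (N, M) data, resp: (N, K) soft assignments r_ik, means: (K, M) cluster means
    soft_counts = resp.sum(axis=0)  # N_k^soft for each cluster
    covs = np.empty_like(means)
    for k in range(means.shape[0]):
        squared_diff = (X - means[k]) ** 2                            # (x_iv - mu_kv)^2 per dimension
        covs[k] = np.dot(resp[:, k], squared_diff) / soft_counts[k]   # responsibility-weighted average
    return np.maximum(covs, min_variance)                             # guard against zero variances
###Output
_____no_output_____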
###Markdown
Load Wikipedia data and extract TF-IDF features Load Wikipedia data and transform each of the first 5000 documents into a TF-IDF representation.
###Code
wiki = graphlab.SFrame('people_wiki.gl/').head(5000)
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])
###Output
_____no_output_____
###Markdown
Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data.
###Code
tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')
###Output
_____no_output_____
###Markdown
As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector.
###Code
tf_idf = normalize(tf_idf)
###Output
_____no_output_____
###Markdown
We can check that the length (Euclidean norm) of each row is now 1.0, as expected.
###Code
for i in range(5):
doc = tf_idf[i]
print(np.linalg.norm(doc.todense()))
###Output
_____no_output_____
###Markdown
EM in high dimensionsEM for high-dimensional data requires some special treatment: * E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python. * All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data. * Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow. We provide the complete implementation for you in the file `em_utilities.py`. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment. You are expected to answer some quiz questions using the results of clustering. **Initializing mean parameters using k-means**Recall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm.
###Code
from sklearn.cluster import KMeans
np.random.seed(5)
num_clusters = 25
# Use scikit-learn's k-means to simplify workflow
kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1)
kmeans_model.fit(tf_idf)
centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_
means = [centroid for centroid in centroids]
###Output
_____no_output_____
###Markdown
**Initializing cluster weights**We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above.
###Code
num_docs = tf_idf.shape[0]
weights = []
for i in xrange(num_clusters):
# Compute the number of data points assigned to cluster i:
num_assigned = ... # YOUR CODE HERE
w = float(num_assigned) / num_docs
weights.append(w)
###Output
_____no_output_____
###Markdown
**Initializing covariances**To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block.
###Code
covs = []
for i in xrange(num_clusters):
member_rows = tf_idf[cluster_assignment==i]
cov = (member_rows.power(2) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \
+ means[i]**2
cov[cov < 1e-8] = 1e-8
covs.append(cov)
###Output
_____no_output_____
###Markdown
**Running EM**Now that we have initialized all of our parameters, run EM.
###Code
out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10)
out['loglik']
###Output
_____no_output_____
###Markdown
Interpret clustering results In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters.Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix). A sample output may be:```==========================================================Cluster 0: Largest mean parameters in cluster Word Mean Variance football 1.08e-01 8.64e-03season 5.80e-02 2.93e-03club 4.48e-02 1.99e-03league 3.94e-02 1.08e-03played 3.83e-02 8.45e-04...```
###Code
# Fill in the blanks
def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word):
print('')
print('==========================================================')
num_clusters = len(means)
for c in xrange(num_clusters):
print('Cluster {0:d}: Largest mean parameters in cluster '.format(c))
print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance'))
# The k'th element of sorted_word_ids should be the index of the word
# that has the k'th-largest value in the cluster mean. Hint: Use np.argsort().
sorted_word_ids = ... # YOUR CODE HERE
for i in sorted_word_ids[:5]:
print '{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word['category'][i],
means[c][i],
covs[c][i])
print '\n=========================================================='
'''By EM'''
visualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word)
###Output
_____no_output_____
###Markdown
**Quiz Question**. Select all the topics that have a cluster in the model created above. [multiple choice] Comparing to random initialization Create variables for randomly initializing the EM algorithm. Complete the following code block.
###Code
np.random.seed(5)
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the standard univariate normal distribution (mean 0, variance 1).
# YOUR CODE HERE
mean = ...
# Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.
# YOUR CODE HERE
cov = ...
# Initially give each cluster equal weight.
# YOUR CODE HERE
weight = ...
random_means.append(mean)
random_covs.append(cov)
random_weights.append(weight)
###Output
_____no_output_____
###Markdown
**Quiz Question**: Try fitting EM with the random initial parameters you created above. (Use `cov_smoothing=1e-5`.) Store the result to `out_random_init`. What is the final loglikelihood that the algorithm converges to? **Quiz Question:** Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means? **Quiz Question**: For the above model, `out_random_init`, use the `visualize_EM_clusters` method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means?
###Code
# YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word.
...
###Output
_____no_output_____ |
notebooks/ensemble_ex_02.ipynb | ###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in the scikit-learn random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the statistical performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
rf = RandomForestRegressor(n_estimators=3)
rf.fit(data_train, target_train)
y_pred = rf.predict(data_test)
print(f'MAE: {mean_absolute_error(y_pred, target_test):0.02f} g')
###Output
MAE: 366.03 g
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
import matplotlib.pyplot as plt
import seaborn as sns
sub_penguins = penguins[(penguins["Flipper Length (mm)"]>=170) & (penguins["Flipper Length (mm)"]<=230)]
sub_penguins.sort_values(by='Flipper Length (mm)', inplace=True)
sns.scatterplot(x="Flipper Length (mm)", y="Body Mass (g)", data=sub_penguins, color='black', alpha=0.5)
sub_penguins['rf_predict']=rf.predict(sub_penguins["Flipper Length (mm)"].to_numpy().reshape(-1,1))
for idx, estimator in enumerate(rf.estimators_):
print('estimator'+str(idx))
sub_penguins['estimator'+str(idx)] = estimator.predict(sub_penguins["Flipper Length (mm)"].to_numpy().reshape(-1,1))
plt.plot(sub_penguins["Flipper Length (mm)"], sub_penguins['estimator'+str(idx)], label='estimator'+str(idx))
plt.plot(sub_penguins["Flipper Length (mm)"], sub_penguins["rf_predict"], label='RF prediction', color= 'orange')
plt.legend()
plt.show();
###Output
estimator0
estimator1
estimator2
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in scikit-learn's random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_name = "Flipper Length (mm)"
target_name = "Body Mass (g)"
data, target = penguins[[feature_name]], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the generalization performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in scikit-learn's random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_name = "Flipper Length (mm)"
target_name = "Body Mass (g)"
data, target = penguins[[feature_name]], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the generalization performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=3,n_jobs=2)
forest.fit(data_train, target_train)
from sklearn.metrics import mean_absolute_error
target_predicted = forest.predict(data_test)
error = mean_absolute_error(target_test, target_predicted)
print(f"Mean absolute error is: {error:.3f}")
###Output
Mean absolute error is: 349.940
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
penguins.describe()
penguins_min = penguins[penguins[feature_name]>=170.0]
penguins_min_max = penguins_min[penguins_min[feature_name]<=230.0]
penguins_min_max.min()
data, target = penguins_min_max[[feature_name]], penguins_min_max[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(x=data_train[feature_name], y=target_train, color="black",
alpha=0.5)
_ = plt.title(f"Training data for the Penguins dataset with {feature_name} between 170.0 and 230.0")
forest.fit(data_train, target_train)
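# Added sketch completing the remaining steps of the exercise: plot each tree's
# prediction and the forest's prediction over the filtered flipper-length range
# (the `grid` name below is introduced here for illustration).
import numpy as np
grid = pd.DataFrame(np.linspace(170, 230, num=300), columns=[feature_name])
for idx, tree in enumerate(forest.estimators_):
    plt.plot(grid[feature_name], tree.predict(grid), linestyle="--", alpha=0.8, label=f"Tree #{idx}")
plt.plot(grid[feature_name], forest.predict(grid), label="Random forest")
_ = plt.legend()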
###Output
_____no_output_____
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in the scikit-learn random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the generalization performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
# The target (body mass in grams) is continuous, so the regressor variant is the right choice here
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
random_forest = RandomForestRegressor(n_estimators=3)
random_forest.fit(data_train, target_train)
target_predicted = random_forest.predict(data_test)
mean_absolute_error(target_test, target_predicted)
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
import numpy as np
data_ranges = pd.DataFrame(np.linspace(170, 235, num=300), columns=data.columns)
tree_predictions = []
for tree in random_forest.estimators_:
tree_predictions.append(tree.predict(data_ranges))
forest_predictions = random_forest.predict(data_ranges)
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(data=penguins, x=feature_names[0], y=target_name, color="black", alpha=0.5)
# plot tree predictions
for tree_idx, predictions in enumerate(tree_predictions):
plt.plot(data_ranges, predictions, label=f"Tree #{tree_idx}", linestyle="--", alpha=0.8)
plt.plot(data_ranges, forest_predictions, label=f"Random forest")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
📝 Exercise 02. The aim of this exercise is to explore some attributes available in the scikit-learn random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the statistical performance on the testing set.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in the scikit-learn random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the generalization performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in scikit-learn's random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the generalization performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in the scikit-learn random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the statistical performance on the testing set.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
📝 Exercise M6.02. The aim of this exercise is to explore some attributes available in the scikit-learn random forest. First, we will fit the penguins regression dataset.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
penguins = pd.read_csv("../datasets/penguins_regression.csv")
feature_names = ["Flipper Length (mm)"]
target_name = "Body Mass (g)"
data, target = penguins[feature_names], penguins[target_name]
data_train, data_test, target_train, target_test = train_test_split(
data, target, random_state=0)
###Output
_____no_output_____
###Markdown
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. Create a random forest containing three trees. Train the forest and check the statistical performance on the testing set in terms of mean absolute error.
###Code
# Write your code here.
###Output
_____no_output_____
###Markdown
The next steps of this exercise are to: create a new dataset containing the penguins with a flipper length between 170 mm and 230 mm; plot the training data using a scatter plot; plot the decision of each individual tree by predicting on the newly created dataset; and plot the decision of the random forest using this newly created dataset. Tip: The trees contained in the forest that you created can be accessed with the attribute estimators_.
###Code
# Write your code here.
###Output
_____no_output_____ |
python-numpy-basics/Dictionary.ipynb | ###Markdown
Dictionaries in Python. A dictionary is a collection which is ordered, changeable and does not allow duplicates. Dictionaries are written with curly brackets and hold key and value pairs. Example:
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
print(student)
# if I add a duplicate key, its new value will overwrite the existing value
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024,
"grad_year":2023
}
print('Dictionary after adding duplicates')
print(student)
#The new value of grad_year is overwritten
###Output
{'id': '191022987', 'branch': 'IT', 'grad_year': 2024}
Dictionary after adding duplicates
{'id': '191022987', 'branch': 'IT', 'grad_year': 2023}
###Markdown
Accessing Items. You can access the items of a dictionary by referring to the key name inside square brackets, or by using the `get()` method:
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
#get the value of 'id' key
student_id=student["id"]
print(student_id)
# now by get() method
student_id=student.get("id")
print(student_id)
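# get() also accepts a default value that is returned when the key is missing
# (an added illustration; "grade" is not in the dictionary at this point):
print(student.get("grade", "not graded yet"))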
###Output
191022987
191022987
###Markdown
Adding and updating a Dictionary. Adding an item to the dictionary is done by using a new index key and assigning a value to it. The update() method will update the dictionary with the items from a given argument; if the item does not exist, it will be added.
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
student["grade"]="BB"
print(student)
#The branch key already exists after updating it changes to "EXTC"
student.update({"branch":"EXTC"})
print(student)
#The address key doesn't exist so after updating address key is added
student.update({"address":"Mumbai"})
print(student)
###Output
{'id': '191022987', 'branch': 'IT', 'grad_year': 2024, 'grade': 'BB'}
{'id': '191022987', 'branch': 'EXTC', 'grad_year': 2024, 'grade': 'BB'}
{'id': '191022987', 'branch': 'EXTC', 'grad_year': 2024, 'grade': 'BB', 'address': 'Mumbai'}
###Markdown
Removing Items. The **pop()** method removes the item with the specified key name. The **popitem()** method removes the last inserted item (in versions before 3.7, a random item is removed instead). The **del** keyword removes the item with the specified key name; it can also delete the dictionary completely. The **clear()** method empties the dictionary.
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
student.pop("branch")
print(student)
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
student.popitem()
print(student)
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
del student["branch"]
print(student)
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
student.clear()
print(student)
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
del student
#this will give an error as the dictionary itself is deleted
print(student)
###Output
{'id': '191022987', 'grad_year': 2024}
{'id': '191022987', 'branch': 'IT'}
{'id': '191022987', 'grad_year': 2024}
{}
###Markdown
Dictionary keys() method
The **keys()** method returns a view object. The view object contains the keys of the dictionary, as a list. The view object will reflect any changes done to the dictionary, see the example below.
###Code
# When an item is added in the dictionary, the view object also gets updated:
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
x = student.keys()
print("x before adding item in the dictionary")
print(x)
student["grade"] = "BB"
print("x after adding item in the dictionary")
print(x)
###Output
x before adding item in the dictionary
dict_keys(['id', 'branch', 'grad_year'])
x after adding item in the dictionary
dict_keys(['id', 'branch', 'grad_year', 'grade'])
###Markdown
Dictionary values() method
The **values()** method returns a view object. The view object contains the values of the dictionary, as a list. The view object will reflect any changes done to the dictionary, see the example below.
###Code
# When a value is changed in the dictionary, the view object also gets updated:
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
x = student.values()
print("x before changing an item in the dictionary")
print(x)
student["branch"] = "EXTC"
print("x after changing an item in the dictionary")
print(x)
###Output
x before changing an item in the dictionary
dict_values(['191022987', 'IT', 2024])
x after changing an item in the dictionary
dict_values(['191022987', 'EXTC', 2024])
###Markdown
Looping through a dictionary
You can loop through a dictionary by using a `for` loop. When looping through a dictionary, the return values are the keys of the dictionary, but there are methods to return the values as well.
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
#Print all key names in the dictionary, one by one:
for x in student:
print(x)
print("\n")#line break
#Print all values in the dictionary, one by one:
for x in student:
print(student[x])
print("\n")#line break
#You can also use the values() method to return values of a dictionary:
for x in student.values():
print(x)
print("\n")#line break
#You can use the keys() method to return the keys of a dictionary:
for x in student.keys():
print(x)
print("\n")#line break
#Loop through both keys and values, by using the items() method:
for x, y in student.items():
print(x, y)
###Output
id
branch
grad_year
191022987
IT
2024
191022987
IT
2024
id
branch
grad_year
id 191022987
branch IT
grad_year 2024
###Markdown
Copying a Dictionary
You cannot copy a dictionary simply by typing `dict2 = dict1`, because `dict2` will only be a reference to `dict1`, and changes made in `dict1` will automatically also be made in `dict2`. There are ways to make a copy; one way is to use the built-in dictionary method **copy()**.
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
st = student.copy()
print(st)
student["branch"]="CS"
print("student",student)
"""In the output below you can see that updating branch key in student dictionary,
st dictionary has not changed"""
print("st",st)
###Output
{'id': '191022987', 'branch': 'IT', 'grad_year': 2024}
student {'id': '191022987', 'branch': 'CS', 'grad_year': 2024}
st {'id': '191022987', 'branch': 'IT', 'grad_year': 2024}
###Markdown
Another way to make a copy is to use the built-in function **dict()**.
###Code
student = {
"id": "191022987",
"branch": "IT",
"grad_year":2024
}
st = dict(student)
print(st)
student["branch"]="CS"
print("student",student)
"""In the output below you can see that updating branch key in student dictionary,
st dictionary has not changed"""
print("st",st)
###Output
{'id': '191022987', 'branch': 'IT', 'grad_year': 2024}
student {'id': '191022987', 'branch': 'CS', 'grad_year': 2024}
st {'id': '191022987', 'branch': 'IT', 'grad_year': 2024}
###Markdown
Nested Dictionaries
A dictionary can contain other dictionaries; this is called a nested dictionary.
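Values inside an inner dictionary can be read by chaining the outer and inner keys. A small sketch (reusing the same hypothetical student records):
```python
student = {
    "student1": {"id": "191022987", "branch": "IT", "grad_year": 2024},
    "student2": {"id": "191022100", "branch": "EXTC", "grad_year": 2023},
}
# Chain the outer key and the inner key to reach a nested value
print(student["student2"]["branch"])  # EXTC
```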
###Code
student = {
"student1":{
"id": "191022987",
"branch": "IT",
"grad_year":2024
},
"student2":{
"id": "191022100",
"branch": "EXTC",
"grad_year":2023
}
}
print("student1 :",student["student1"])
###Output
student1 : {'id': '191022987', 'branch': 'IT', 'grad_year': 2024}
|
src/shl-traditional-features.ipynb | ###Markdown
Using traditional models and feature engineering to classify SHL timeseries
###Code
import zipfile
import tempfile
import pathlib
import pandas as pd
import numpy as np
shl_dataset_label_order = [
'Null',
'Still',
'Walking',
'Run',
'Bike',
'Car',
'Bus',
'Train',
'Subway',
]
class SHLDataset:
def __init__(
self,
acc_x, acc_y, acc_z,
acc_mag,
mag_x, mag_y, mag_z,
mag_mag,
gyr_x, gyr_y, gyr_z,
gyr_mag,
labels
):
self.acc_x = acc_x
self.acc_y = acc_y
self.acc_z = acc_z
self.acc_mag = acc_mag
self.mag_x = mag_x
self.mag_y = mag_y
self.mag_z = mag_z
self.mag_mag = mag_mag
self.gyr_x = gyr_x
self.gyr_y = gyr_y
self.gyr_z = gyr_z
self.gyr_mag = gyr_mag
self.labels = labels
def concat_inplace(self, other):
self.acc_x = np.concatenate((self.acc_x, other.acc_x), axis=0)
self.acc_y = np.concatenate((self.acc_y, other.acc_y), axis=0)
self.acc_z = np.concatenate((self.acc_z, other.acc_z), axis=0)
self.acc_mag = np.concatenate((self.acc_mag, other.acc_mag), axis=0)
self.mag_x = np.concatenate((self.mag_x, other.mag_x), axis=0)
self.mag_y = np.concatenate((self.mag_y, other.mag_y), axis=0)
self.mag_z = np.concatenate((self.mag_z, other.mag_z), axis=0)
self.mag_mag = np.concatenate((self.mag_mag, other.mag_mag), axis=0)
self.gyr_x = np.concatenate((self.gyr_x, other.gyr_x), axis=0)
self.gyr_y = np.concatenate((self.gyr_y, other.gyr_y), axis=0)
self.gyr_z = np.concatenate((self.gyr_z, other.gyr_z), axis=0)
self.gyr_mag = np.concatenate((self.gyr_mag, other.gyr_mag), axis=0)
self.labels = np.concatenate((self.labels, other.labels), axis=0)
def load_shl_dataset(dataset_dir: pathlib.Path, nrows=None):
acc_x = np.nan_to_num(pd.read_csv(dataset_dir / 'Acc_x.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Acc_x Import Done')
acc_y = np.nan_to_num(pd.read_csv(dataset_dir / 'Acc_y.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Acc_y Import Done')
acc_z = np.nan_to_num(pd.read_csv(dataset_dir / 'Acc_z.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Acc_z Import Done')
acc_mag = np.sqrt(acc_x**2 + acc_y**2 + acc_z**2)
print('Acc_mag Import Done')
mag_x = np.nan_to_num(pd.read_csv(dataset_dir / 'Mag_x.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Mag_x Import Done')
mag_y = np.nan_to_num(pd.read_csv(dataset_dir / 'Mag_y.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Mag_y Import Done')
mag_z = np.nan_to_num(pd.read_csv(dataset_dir / 'Mag_z.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Mag_z Import Done')
mag_mag = np.sqrt(mag_x**2 + mag_y**2 + mag_z**2)
print('Mag_mag Import Done')
gyr_x = np.nan_to_num(pd.read_csv(dataset_dir / 'Gyr_x.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Gyr_x Import Done')
gyr_y = np.nan_to_num(pd.read_csv(dataset_dir / 'Gyr_y.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Gyr_y Import Done')
gyr_z = np.nan_to_num(pd.read_csv(dataset_dir / 'Gyr_z.txt', header=None, sep=' ', nrows=nrows).to_numpy())
print('Gyr_z Import Done')
gyr_mag = np.sqrt(gyr_x**2 + gyr_y**2 + gyr_z**2)
print('Gyr_mag Import Done')
labels = np.nan_to_num(pd.read_csv(dataset_dir / 'Label.txt', header=None, sep=' ', nrows=nrows).mode(axis=1).to_numpy().flatten())
print('Labels Import Done')
return SHLDataset(
acc_x, acc_y, acc_z,
acc_mag,
mag_x, mag_y, mag_z,
mag_mag,
gyr_x, gyr_y, gyr_z,
gyr_mag,
labels
)
def load_zipped_shl_dataset(zip_dir: pathlib.Path, tqdm=None, nrows=None, subdir_in_zip='train'):
with tempfile.TemporaryDirectory() as unzip_dir:
with zipfile.ZipFile(zip_dir, 'r') as zip_ref:
if tqdm:
for member in tqdm(zip_ref.infolist(), desc=f'Extracting {zip_dir}'):
zip_ref.extract(member, unzip_dir)
else:
zip_ref.extractall(unzip_dir)
train_dir = pathlib.Path(unzip_dir) / subdir_in_zip
sub_dirs = [x for x in train_dir.iterdir() if x.is_dir()]  # keep only sub-directories
result_dataset = None
for sub_dir in sub_dirs:
sub_dataset = load_shl_dataset(train_dir / sub_dir, nrows=nrows)
if result_dataset is None:
result_dataset = sub_dataset
else:
result_dataset.concat_inplace(sub_dataset)
del sub_dataset
return result_dataset
from pathlib import Path
# We are going to train the models on a small subsample of the whole dataset
# The assumption behind this is that traditional models require significantly
# less training data
DATASET_DIRS = [
Path('shl-dataset/challenge-2019-train_torso.zip'),
Path('shl-dataset/challenge-2019-train_bag.zip'),
Path('shl-dataset/challenge-2019-train_hips.zip'),
Path('shl-dataset/challenge-2020-train_hand.zip'),
]
NROWS_PER_DATASET = 5000
import numpy as np
from tqdm import tqdm
from sklearn.utils.class_weight import compute_class_weight
# Join all datasets
acc_mag_conc = None
mag_mag_conc = None
gyr_mag_conc = None
y_conc = None
for dataset_dir in DATASET_DIRS:
# Load dataset from zip file into temporary directory
dataset = load_zipped_shl_dataset(dataset_dir, tqdm=tqdm, nrows=NROWS_PER_DATASET)
if acc_mag_conc is None:
acc_mag_conc = dataset.acc_mag
else:
acc_mag_conc = np.concatenate((acc_mag_conc, dataset.acc_mag), axis=0)
if mag_mag_conc is None:
mag_mag_conc = dataset.mag_mag
else:
mag_mag_conc = np.concatenate((mag_mag_conc, dataset.mag_mag), axis=0)
if gyr_mag_conc is None:
gyr_mag_conc = dataset.gyr_mag
else:
gyr_mag_conc = np.concatenate((gyr_mag_conc, dataset.gyr_mag), axis=0)
if y_conc is None:
y_conc = dataset.labels
else:
y_conc = np.concatenate((y_conc, dataset.labels), axis=0)
del dataset
# Check that we don't have NaNs in our dataset
assert not np.isnan(acc_mag_conc).any()
assert not np.isnan(mag_mag_conc).any()
assert not np.isnan(gyr_mag_conc).any()
import joblib
from sklearn.preprocessing import PowerTransformer
acc_scaler = joblib.load('models/acc-scaler.joblib')
mag_scaler = joblib.load('models/mag-scaler.joblib')
gyr_scaler = joblib.load('models/gyr-scaler.joblib')
# Fit the loaded scalers and transform the magnitude signals
acc_mag_scaled = acc_scaler.fit_transform(acc_mag_conc)
del acc_mag_conc
mag_mag_scaled = mag_scaler.fit_transform(mag_mag_conc)
del mag_mag_conc
gyr_mag_scaled = gyr_scaler.fit_transform(gyr_mag_conc)
del gyr_mag_conc
import numpy as np
from scipy import signal
from scipy.special import entr
def magnitude(x,y,z):
return np.sqrt(x**2 + y**2 + z**2)
def entrop(pk,axis=0):
pk = pk / np.sum(pk, axis=axis, keepdims=True)
vec = entr(pk)
S = np.sum(vec, axis=axis)
return S
def autocorr(x,axis=0):
result = np.correlate(x, x, mode='full')
return result[result.size // 2:]
# Statistical Feature Calculation
acc_mean = np.mean(acc_mag_scaled,axis=1)
acc_std = np.std(acc_mag_scaled,axis=1)
acc_max = np.max(acc_mag_scaled,axis=1)
acc_min = np.min(acc_mag_scaled,axis=1)
mag_mean = np.mean(mag_mag_scaled,axis=1)
mag_std = np.std(mag_mag_scaled,axis=1)
mag_max = np.max(mag_mag_scaled,axis=1)
mag_min = np.min(mag_mag_scaled,axis=1)
gyr_mean = np.mean(gyr_mag_scaled,axis=1)
gyr_std = np.std(gyr_mag_scaled,axis=1)
gyr_max = np.max(gyr_mag_scaled,axis=1)
gyr_min = np.min(gyr_mag_scaled,axis=1)
# Frequency Domain Feature Calculation
fs = 100
acc_FREQ,acc_PSD = signal.welch(acc_mag_scaled,fs,nperseg=500,axis=1)
mag_FREQ,mag_PSD = signal.welch(mag_mag_scaled,fs,nperseg=500,axis=1)
gyr_FREQ,gyr_PSD = signal.welch(gyr_mag_scaled,fs,nperseg=500,axis=1)
# Max PSD value
acc_PSDmax = np.max(acc_PSD,axis=1)
mag_PSDmax = np.max(mag_PSD,axis=1)
gyr_PSDmax = np.max(gyr_PSD,axis=1)
acc_PSDmin = np.min(acc_PSD,axis=1)
mag_PSDmin = np.min(mag_PSD,axis=1)
gyr_PSDmin = np.min(gyr_PSD,axis=1)
# Frequency Entropy
acc_entropy = entrop(acc_PSD,axis=1)
mag_entropy = entrop(mag_PSD,axis=1)
gyr_entropy = entrop(gyr_PSD,axis=1)
# Frequency Center
acc_fc = np.sum((acc_FREQ*acc_PSD),axis=1) / np.sum(acc_PSD,axis=1)
mag_fc = np.sum((mag_FREQ*mag_PSD),axis=1) / np.sum(mag_PSD,axis=1)
gyr_fc = np.sum((gyr_FREQ*gyr_PSD),axis=1) / np.sum(gyr_PSD,axis=1)
# Autocorrelation Calculation
acc_acr = np.apply_along_axis(autocorr,1,acc_mag_scaled)
mag_acr = np.apply_along_axis(autocorr,1,mag_mag_scaled)
gyr_acr = np.apply_along_axis(autocorr,1,gyr_mag_scaled)
acc_features = np.stack((acc_mean,acc_std,acc_max,acc_min,acc_PSDmax,acc_PSDmin,acc_entropy,acc_fc),axis=1)
mag_features = np.stack((mag_mean,mag_std,mag_max,mag_min,mag_PSDmax,mag_PSDmin,mag_entropy,mag_fc),axis=1)
gyr_features = np.stack((gyr_mean,gyr_std,gyr_max,gyr_min,gyr_PSDmax,gyr_PSDmin,gyr_entropy,gyr_fc),axis=1)
X = np.concatenate([acc_features,mag_features,gyr_features],axis=1)
print("X shape: ",X.shape)
print("y shape: ",y_conc.shape)
print("Feature Extraction Done")
# Install imblearn, a package with functionalities to balance our dataset
import sys
!{sys.executable} -m pip install imbalanced-learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y_conc, test_size=0.33, random_state=1337)
# Weight train dataset classes using SMOTE
from imblearn.over_sampling import SMOTE
oversampler = SMOTE()
X_train, y_train = oversampler.fit_resample(X_train, y_train)
# Check that classes are now balanced
print(compute_class_weight('balanced', classes=np.unique(y_train), y=y_train))
from sklearn.metrics import accuracy_score, f1_score
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
names = [
"KNN", "SVM",
"DT", "RF", "MLP",
]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
]
import tempfile
import os
from joblib import dump
accuracies = []
f1scores = []
model_sizes = []
for model_name, model in zip(names, classifiers):
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
accuracies.append(accuracy)
print(f'Acc of {model_name}: {accuracy}')
f1 = f1_score(y_test, y_pred, average='weighted')
f1scores.append(f1)
print(f'F1 of {model_name}: {f1}')
with tempfile.TemporaryDirectory() as tmp_dir:
filename = f"{tmp_dir}/model.joblib"
dump(model, filename)
model_size = os.path.getsize(filename)
model_sizes.append(model_size)
print(f'Size of {model_name} in Bytes: {model_size}')
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1, 3)
fig.set_size_inches(6 * 3, 4)
bar0 = axs[0].barh(names, [a * 100 for a in accuracies], color='grey')
axs[0].set_xlabel('Accuracy in %')
axs[0].set_xlim(0, 100)
for rect in bar0:
width = rect.get_width()
axs[0].text(1.15*rect.get_width(), rect.get_y()+0.5*rect.get_height(),
'{0:.2f}%'.format(width),
ha='center', va='center')
bar1 = axs[1].barh(names, [f1 * 100 for f1 in f1scores], color='grey')
axs[1].set_xlabel('F1-Score in %')
axs[1].set_xlim(0, 100)
for rect in bar1:
width = rect.get_width()
axs[1].text(1.15*rect.get_width(), rect.get_y()+0.5*rect.get_height(),
'{0:.2f}%'.format(width),
ha='center', va='center')
bar2 = axs[2].barh(names, [s / 1_000 for s in model_sizes], color='grey')
axs[2].set_xlim(0, 10000)
axs[2].set_xlabel('Size in kB')
for rect in bar2:
width = rect.get_width()
axs[2].text(max(750, 1.15*rect.get_width()), rect.get_y()+0.5*rect.get_height(),
'{}kB'.format(int(width)),
ha='center', va='center')
for i in range(3):
axs[i].grid(
b = True,
color ='grey',
linestyle ='-.',
linewidth = 0.5,
alpha = 0.2
)
plt.savefig(
f'../images/shl/traditional-models.pdf',
dpi=1200,
bbox_inches='tight'
)
plt.show()
###Output
_____no_output_____ |
notebooks/01-census-api-scraper.ipynb | ###Markdown
Compiling census data This script downloads 2016 ACS census data into a `.csv` file.
###Code
import pandas as pd
import requests
from census import Census
from us import states
# To reproduce the data below, you'll need to save your
# Census API key to `../data/census-api-key.txt`.
# You can obtain a key here: https://api.census.gov/data/key_signup.html
api_key = open("../data/census-api-key.txt").read().strip()
c = Census(api_key)
###Output
_____no_output_____
###Markdown
The counties of the New York-Newark-Jersey City, NY-NJ-PA Metropolitan Statistical Area, sourced [from here](https://www.bea.gov/regional/docs/msalist.cfm?mlist=45):
* 34003 — "Bergen, NJ"
* 34013 — "Essex, NJ"
* 34017 — "Hudson, NJ"
* 34019 — "Hunterdon, NJ"
* 34023 — "Middlesex, NJ"
* 34025 — "Monmouth, NJ"
* 34027 — "Morris, NJ"
* 34029 — "Ocean, NJ"
* 34031 — "Passaic, NJ"
* 34035 — "Somerset, NJ"
* 34037 — "Sussex, NJ"
* 34039 — "Union, NJ"
* 36005 — "Bronx, NY"
* 36027 — "Dutchess, NY"
* 36047 — "Kings, NY"
* 36059 — "Nassau, NY"
* 36061 — "New York, NY"
* 36071 — "Orange, NY"
* 36079 — "Putnam, NY"
* 36081 — "Queens, NY"
* 36085 — "Richmond, NY"
* 36087 — "Rockland, NY"
* 36103 — "Suffolk, NY"
* 36119 — "Westchester, NY"
* 42103 — "Pike, PA"
###Code
nyc_met_area = [
{"state_code":"34", "county_code": "003", "county_name": "Bergen, NJ"},
{"state_code":"34", "county_code": "013", "county_name": "Essex, NJ"},
{"state_code":"34", "county_code": "017", "county_name": "Hudson, NJ"},
{"state_code":"34", "county_code": "019", "county_name": "Hunterdon, NJ"},
{"state_code":"34", "county_code": "023", "county_name": "Middlesex, NJ"},
{"state_code":"34", "county_code": "025", "county_name": "Monmouth, NJ"},
{"state_code":"34", "county_code": "027", "county_name": "Morris, NJ"},
{"state_code":"34", "county_code": "029", "county_name": "Ocean, NJ"},
{"state_code":"34", "county_code": "031", "county_name": "Passaic, NJ"},
{"state_code":"34", "county_code": "035", "county_name": "Somerset, NJ"},
{"state_code":"34", "county_code": "037", "county_name": "Sussex, NJ"},
{"state_code":"34", "county_code": "039", "county_name": "Union, NJ"},
{"state_code":"36", "county_code": "005", "county_name": "Bronx, NY"},
{"state_code":"36", "county_code": "027", "county_name": "Dutchess, NY"},
{"state_code":"36", "county_code": "047", "county_name": "Kings, NY"},
{"state_code":"36", "county_code": "059", "county_name": "Nassau, NY"},
{"state_code":"36", "county_code": "061", "county_name": "New York, NY"},
{"state_code":"36", "county_code": "071", "county_name": "Orange, NY"},
{"state_code":"36", "county_code": "079", "county_name": "Putnam, NY"},
{"state_code":"36", "county_code": "081", "county_name": "Queens, NY"},
{"state_code":"36", "county_code": "085", "county_name": "Richmond, NY"},
{"state_code":"36", "county_code": "087", "county_name": "Rockland, NY"},
{"state_code":"36", "county_code": "103", "county_name": "Suffolk, NY"},
{"state_code":"36", "county_code": "119", "county_name": "Westchester, NY"},
{"state_code":"42", "county_code": "103", "county_name": "Pike, PA"}
]
# Full API variable list available here https://api.census.gov/data/2016/acs/acs5/variables/
categories = [
'NAME', # county name
'B01001_001E', # Total population
'B19013_001E', # Median income
'B25077_001E', # Median home value
'B15011_001E', # Total population age 25+ years with a bachelor's degree or higher
'B03002_003E', # Not Hispanic or Latino!!White alone
'B03002_004E', # Not Hispanic or Latino!!Black or African American alone
'B02001_004E', # American Indian and Alaska Native Alone
'B03002_006E', # Not Hispanic or Latino!!Asian alone
'B03002_007E', # Not Hispanic or Latino!!Native Hawaiian and Other Pacific Islander alone
'B03002_008E', # Not Hispanic or Latino!!Some other race alone
'B03002_009E', # Not Hispanic or Latino!!Two or more races
'B03002_012E', # Hispanic or Latino
]
def get_acs_data(state_code, county_code):
results = c.acs5.state_county_tract(
categories,
state_code,
county_code,
Census.ALL,
year = 2016
)
return [ {
'geoid': res['state'] + res['county'] + res['tract'],
'name': res['NAME'],
'total_population': res['B01001_001E'],
'median_income': res['B19013_001E'],
'median_home_value': res['B25077_001E'],
'educational_attainment': res['B15011_001E'],
'white_alone': res['B03002_003E'],
'black_alone': res['B03002_004E'],
'native': res['B02001_004E'],
'asian': res['B03002_006E'],
'native_hawaiian_pacific_islander': res['B03002_007E'],
'some_other_race_alone': res['B03002_008E'],
'two_or_more': res['B03002_009E'],
'hispanic_or_latino': res['B03002_012E']
} for res in results ]
census_data = []
for county in nyc_met_area:
print(county["county_name"])
census_data += get_acs_data(county["state_code"], county["county_code"])
census_data = pd.DataFrame(census_data)[[
'geoid',
'name',
'total_population',
'median_income',
'median_home_value',
'educational_attainment',
'white_alone',
'black_alone',
'native',
'asian',
'native_hawaiian_pacific_islander',
'some_other_race_alone',
'two_or_more',
'hispanic_or_latino',
]]
census_data.head()
len(census_data)
census_data.to_csv(
"../output/2016_census_data.csv",
index = False
)
###Output
_____no_output_____
###Markdown
Tract counts by county:
###Code
(
census_data
.assign(
state_code = lambda df: df["geoid"].str.slice(0, 2),
county_code = lambda df: df["geoid"].str.slice(2, 5)
)
.groupby([
"state_code",
"county_code"
])
.size()
.to_frame("tracts")
.reset_index()
.merge(
pd.DataFrame(nyc_met_area),
how = "outer",
on = [
"state_code",
"county_code"
]
)
.sort_values("tracts", ascending = False)
)
###Output
_____no_output_____ |
2_Curso/Laboratorio/SAGE-noteb/IPYNB/PROGR/COMPL/56-COMPL-cython.ipynb | ###Markdown
Without Cython
This program generates $N$ random integers between $1$ and $M$, squares them once obtained, and returns the sum of the squares. It therefore computes the squared length of a random vector with integer coordinates in the interval $[1,M]$.
###Code
def cuadrados(N,M):
res = 0
for muda in xrange(N):
x = randint(1,M)
res += x*x
return res
for n in srange(3,8):
%time A = cuadrados(10^n,10^6)
###Output
CPU times: user 12 ms, sys: 4 ms, total: 16 ms
Wall time: 11.7 ms
CPU times: user 56 ms, sys: 36 ms, total: 92 ms
Wall time: 70.5 ms
CPU times: user 644 ms, sys: 76 ms, total: 720 ms
Wall time: 539 ms
CPU times: user 3.58 s, sys: 212 ms, total: 3.79 s
Wall time: 3.54 s
CPU times: user 33.1 s, sys: 392 ms, total: 33.5 s
Wall time: 33.1 s
###Markdown
With Cython
This section must use the Python 2 kernel. We carry out the same computation:
###Code
%load_ext cython
%%cython -a
import math
import random
def cuadrados_cy(long long N, long long M):
cdef long long res = 0
cdef long long muda
cdef long long x
for muda in xrange(N):
x = random.randint(1,M)
res += math.pow(x,2)
return res
for n in range(3,8):
%time A = cuadrados_cy(10^n,10^6)
###Output
CPU times: user 4 ms, sys: 0 ns, total: 4 ms
Wall time: 2.1 ms
CPU times: user 16 ms, sys: 16 ms, total: 32 ms
Wall time: 20.2 ms
CPU times: user 220 ms, sys: 76 ms, total: 296 ms
Wall time: 202 ms
CPU times: user 1.7 s, sys: 144 ms, total: 1.84 s
Wall time: 1.66 s
CPU times: user 15.6 s, sys: 140 ms, total: 15.7 s
Wall time: 15.6 s
###Markdown
Optimizing the generation of random numbers: this section must use the Sage kernel. The *-a* option does not work when calling Cython here, so we cannot see the code's Python dependencies. The first part of the cell, up to *def main():*, generates random integers between $1$ and $10^6$ using external libraries that compile to C. This fragment can be reused.
###Code
%%cython
cdef extern from "gsl/gsl_rng.h":
ctypedef struct gsl_rng_type:
pass
ctypedef struct gsl_rng:
pass
gsl_rng_type *gsl_rng_mt19937
gsl_rng *gsl_rng_alloc(gsl_rng_type * T)
cdef gsl_rng * r = gsl_rng_alloc(gsl_rng_mt19937)
cdef extern from "gsl/gsl_randist.h":
long int uniform "gsl_rng_uniform_int"(gsl_rng * r, unsigned long int n)
def main():
cdef int n
n = uniform(r,1000000)
return n
cdef long f(long x):
return x**2
import random
def cuadrados_cy2(int N):
cdef long res = 0
cdef int muda
for muda in range(N):
res += f(main())
return res
for n in srange(3,8):
%time A = cuadrados_cy2(10^n)
###Output
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 175 µs
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 1.2 ms
CPU times: user 12 ms, sys: 0 ns, total: 12 ms
Wall time: 10.5 ms
CPU times: user 80 ms, sys: 0 ns, total: 80 ms
Wall time: 80 ms
CPU times: user 540 ms, sys: 0 ns, total: 540 ms
Wall time: 540 ms
###Markdown
A similar problem without random numbers:
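As a sanity check (an addition to the original worksheet, not part of it), the loop below computes $\sum_{k=0}^{N-1} k^2$, which has the closed form $(N-1)N(2N-1)/6$; a small pure-Python sketch:
```python
def sum_of_squares_formula(N):
    # Closed form of sum_{k=0}^{N-1} k^2
    return (N - 1) * N * (2 * N - 1) // 6

N = 10**5
assert sum_of_squares_formula(N) == sum(k * k for k in range(N))
```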
###Code
%%cython
def cuadrados_cy3(long long int N):
cdef long long int res = 0
cdef long long int k
for k in range(N):
res += k**2
return res
for n in srange(3,8):
%time A = cuadrados_cy3(10^n)
def cuadrados5(N):
res = 0
for k in range(N):
res += k**2
return res
for n in srange(3,8):
%time A = cuadrados5(10^n)
###Output
CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 932 µs
CPU times: user 8 ms, sys: 0 ns, total: 8 ms
Wall time: 6.77 ms
CPU times: user 56 ms, sys: 0 ns, total: 56 ms
Wall time: 55.3 ms
CPU times: user 364 ms, sys: 24 ms, total: 388 ms
Wall time: 385 ms
CPU times: user 3.12 s, sys: 136 ms, total: 3.26 s
Wall time: 3.26 s
|
Extra-Day-14/target-encoding.ipynb | ###Markdown
Introduction Most of the techniques we've seen in this course have been for numerical features. The technique we'll look at in this lesson, *target encoding*, is instead meant for categorical features. It's a method of encoding categories as numbers, like one-hot or label encoding, with the difference that it also uses the *target* to create the encoding. This makes it what we call a **supervised** feature engineering technique.
###Code
import pandas as pd
autos = pd.read_csv("data/autos.csv")
###Output
_____no_output_____
###Markdown
Target Encoding
A **target encoding** is any kind of encoding that replaces a feature's categories with some number derived from the target. A simple and effective version is to apply a group aggregation from Lesson 3, like the mean. Using the *Automobiles* dataset, this computes the average price of each vehicle's make:
###Code
autos.head()
autos["make_encoded"] = autos.groupby("make")["price"].transform("mean")
autos[["make", "price", "make_encoded"]].head(10)
###Output
_____no_output_____
###Markdown
This kind of target encoding is sometimes called a **mean encoding**. Applied to a binary target, it's also called **bin counting**. (Other names you might come across include: likelihood encoding, impact encoding, and leave-one-out encoding.)

Smoothing

An encoding like this presents a couple of problems, however. First are *unknown categories*. Target encodings create a special risk of overfitting, which means they need to be trained on an independent "encoding" split. When you join the encoding to future splits, Pandas will fill in missing values for any categories not present in the encoding split. These missing values you would have to impute somehow.

Second are *rare categories*. When a category only occurs a few times in the dataset, any statistics calculated on its group are unlikely to be very accurate. In the *Automobiles* dataset, the `mercury` make only occurs once. The "mean" price we calculated is just the price of that one vehicle, which might not be very representative of any Mercuries we might see in the future. Target encoding rare categories can make overfitting more likely.

A solution to these problems is to add **smoothing**. The idea is to blend the *in-category* average with the *overall* average. Rare categories get less weight on their category average, while missing categories just get the overall average.

In pseudocode:
```
encoding = weight * in_category + (1 - weight) * overall
```
where `weight` is a value between 0 and 1 calculated from the category frequency.

An easy way to determine the value for `weight` is to compute an **m-estimate**:
```
weight = n / (n + m)
```
where `n` is the total number of times that category occurs in the data. The parameter `m` determines the "smoothing factor". Larger values of `m` put more weight on the overall estimate.

In the *Automobiles* dataset there are three cars with the make `chevrolet`. If you chose `m=2.0`, then the `chevrolet` category would be encoded with 60% of the average Chevrolet price plus 40% of the overall average price.
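As a concrete illustration (a sketch, not the course's reference implementation), the m-estimate blend can be written directly with pandas; the column names and `m` follow the Automobiles example above:
```python
def m_estimate_encode(df, feature, target, m=2.0):
    # Overall mean of the target
    overall = df[target].mean()
    # Per-category count and mean
    per_cat = df.groupby(feature)[target].agg(["count", "mean"])
    # Blend the in-category mean with the overall mean
    weight = per_cat["count"] / (per_cat["count"] + m)
    smoothed = weight * per_cat["mean"] + (1 - weight) * overall
    return df[feature].map(smoothed)

autos["make_smoothed"] = m_estimate_encode(autos, "make", "price", m=2.0)
```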
###Code
print(f"autos shape: {autos.shape}")
print(f"Chevrolet shape: {autos[autos.make == 'chevrolet'].shape}")
print(f"Average price of all vehicles: {autos.price.mean()}")
print(f"Average price of chevrolet: {autos[autos.make == 'chevrolet'].price.mean()}")
###Output
autos shape: (193, 26)
Chevrolet shape: (3, 26)
Average price of all vehicles: 13285.025906735751
Average price of chevrolet: 6007.0
###Markdown
```
weight = n / (n + m) = 3 / (3 + 2) = 3 / 5 = 0.6
chevrolet = 0.6 * 6000.00 + 0.4 * 13285.03
```
When choosing a value for `m`, consider how noisy you expect the categories to be. Does the price of a vehicle vary a great deal within each make? Would you need a lot of data to get good estimates? If so, it could be better to choose a larger value for `m`; if the average price for each make were relatively stable, a smaller value could be okay.

Use Cases for Target Encoding

Target encoding is great for:
- High-cardinality features: A feature with a large number of categories can be troublesome to encode: a one-hot encoding would generate too many features and alternatives, like a label encoding, might not be appropriate for that feature. A target encoding derives numbers for the categories using the feature's most important property: its relationship with the target.
- Domain-motivated features: From prior experience, you might suspect that a categorical feature should be important even if it scored poorly with a feature metric. A target encoding can help reveal a feature's true informativeness.

Example - MovieLens1M

The [*MovieLens1M*](https://www.kaggle.com/grouplens/movielens-20m-dataset) dataset contains one-million movie ratings by users of the MovieLens website, with features describing each user and movie. This hidden cell sets everything up:
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
warnings.filterwarnings('ignore')
df = pd.read_csv("data/movielens1m.csv")
df = df.astype(np.uint8, errors='ignore') # reduce memory footprint
print("Number of Unique Zipcodes: {}".format(df["Zipcode"].nunique()))
###Output
Number of Unique Zipcodes: 3439
###Markdown
With over 3000 categories, the `Zipcode` feature makes a good candidate for target encoding, and the size of this dataset (over one-million rows) means we can spare some data to create the encoding. We'll start by creating a 25% split to train the target encoder.
###Code
X = df.copy()
y = X.pop('Rating')
# Encoding split
X_encode = X.sample(frac=0.25)
y_encode = y[X_encode.index]
# Training split
X_pretrain = X.drop(X_encode.index)
y_train = y[X_pretrain.index]
###Output
_____no_output_____
###Markdown
The `category_encoders` package in `scikit-learn-contrib` implements an m-estimate encoder, which we'll use to encode our `Zipcode` feature.
###Code
from category_encoders import MEstimateEncoder
# Create the encoder instance. Choose m to control noise.
encoder = MEstimateEncoder(cols=["Zipcode"], m=5.0)
# Fit the encoder on the encoding split.
encoder.fit(X_encode, y_encode)
# Encode the Zipcode column to create the final training data
X_train = encoder.transform(X_pretrain)
###Output
_____no_output_____
###Markdown
Let's compare the encoded values to the target to see how informative our encoding might be.
###Code
plt.figure(dpi=90)
ax = sns.distplot(y, kde=False, norm_hist=True)
ax = sns.kdeplot(X_train.Zipcode, color='r', ax=ax)
ax.set_xlabel("Rating")
ax.legend(labels=['Zipcode', 'Rating']);
###Output
_____no_output_____ |
5.5.4- Challenge What test to use.ipynb | ###Markdown
Were people more trusting in 2012, or 2014?
###Code
for country in countries:
fig, ax1 = plt.subplots(1, 2, figsize=(5, 2))
ax1[0].hist(df['ppltrst'][(df['cntry'] == country) & (df['year']==6)])
ax1[1].hist(df['ppltrst'][(df['cntry'] == country) & (df['year']==7)])
fig.text(0.5, 1, country)
ax1[0].set_title('2012')
ax1[1].set_title('2014')
plt.show()
for country in countries:
print(country)
print('2012 mean: ' + str(df['ppltrst'][(df['cntry'] == country) & (df['year']==6)].mean()))
print('2014 mean: ' + str(df['ppltrst'][(df['cntry'] == country) & (df['year']==7)].mean()))
print(stats.ttest_rel(df['ppltrst'][(df['cntry'] == country) & (df['year']==6)],
df['ppltrst'][(df['cntry'] == country) & (df['year']==7)],
nan_policy='omit'
))
###Output
CH
2012 mean: 5.677878395860285
2014 mean: 5.751617076326003
Ttest_relResult(statistic=-0.6586851756725737, pvalue=0.5102943511301135)
CZ
2012 mean: 4.362519201228879
2014 mean: 4.424657534246576
Ttest_relResult(statistic=-0.5001638336887216, pvalue=0.617129268240474)
DE
2012 mean: 5.214285714285714
2014 mean: 5.357142857142857
Ttest_relResult(statistic=-0.18399501804849683, pvalue=0.8568563797095805)
ES
2012 mean: 5.114591920857379
2014 mean: 4.895127993393889
Ttest_relResult(statistic=2.4561906976601646, pvalue=0.014181580725320284)
NO
2012 mean: 6.64931506849315
2014 mean: 6.598630136986301
Ttest_relResult(statistic=0.5073077081124404, pvalue=0.61209257015177)
SE
2012 mean: 6.058498896247241
2014 mean: 6.257709251101321
Ttest_relResult(statistic=-2.0671082026033982, pvalue=0.03900781670958545)
###Markdown
It seems that only Norway and Spain decreased in trust, and Norway's decrease was slight. Norway also appears to have had a much smaller sample here, so its mean would be more sensitive than Spain's, and Norway may not have experienced such a large decrease at all (the group sizes are checked in the sketch below). Were people happier in 2012 or 2014?
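A quick check of the group sizes behind the trust comparison above (a small sketch using the same dataframe):
```python
for country in countries:
    n_2012 = df['ppltrst'][(df['cntry'] == country) & (df['year'] == 6)].count()
    n_2014 = df['ppltrst'][(df['cntry'] == country) & (df['year'] == 7)].count()
    print(country, n_2012, n_2014)
```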
###Code
for country in countries:
fig, ax1 = plt.subplots(1, 2, figsize=(5, 2))
ax1[0].hist(df['happy'][(df['cntry'] == country) & (df['year']==6)])
ax1[1].hist(df['happy'][(df['cntry'] == country) & (df['year']==7)])
fig.text(0.5, 1, country)
ax1[0].set_title('2012')
ax1[1].set_title('2014')
plt.show()
for country in countries:
print(country)
print('2012 mean: ' + str(df['happy'][(df['cntry'] == country) & (df['year']==6)].mean()))
print('2014 mean: ' + str(df['happy'][(df['cntry'] == country) & (df['year']==7)].mean()))
print(stats.ttest_rel(df['happy'][(df['cntry'] == country) & (df['year']==6)],
df['happy'][(df['cntry'] == country) & (df['year']==7)],
nan_policy='omit'
))
###Output
CH
2012 mean: 8.088311688311688
2014 mean: 8.116429495472186
Ttest_relResult(statistic=-0.319412957862232, pvalue=0.7495001355429063)
CZ
2012 mean: 6.7708978328173375
2014 mean: 6.914110429447852
Ttest_relResult(statistic=-1.4561384833039597, pvalue=0.1458454843389451)
DE
2012 mean: 7.428571428571429
2014 mean: 7.857142857142857
Ttest_relResult(statistic=-0.8062257748298549, pvalue=0.4346138707734991)
ES
2012 mean: 7.548679867986799
2014 mean: 7.41996699669967
Ttest_relResult(statistic=1.613832417735418, pvalue=0.10682451556479494)
NO
2012 mean: 8.25171939477304
2014 mean: 7.9151846785225715
Ttest_relResult(statistic=4.2856826576235925, pvalue=2.067453013405473e-05)
SE
2012 mean: 7.907386990077177
2014 mean: 7.946961325966851
Ttest_relResult(statistic=-0.5581637086030012, pvalue=0.5768709591233714)
###Markdown
It seems that Spain and Norway again decreased in their means here; other than that, all countries appeared happier. Since all of the histograms are non-normal, a rank-based test would also be a reasonable choice here; see the sketch below.
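Because the 2012 and 2014 respondents are independent samples, a rank-based comparison can be done with the Mann–Whitney U test (a hedged sketch, not the test run above):
```python
from scipy import stats

for country in countries:
    a = df['happy'][(df['cntry'] == country) & (df['year'] == 6)].dropna()
    b = df['happy'][(df['cntry'] == country) & (df['year'] == 7)].dropna()
    stat, p = stats.mannwhitneyu(a, b, alternative='two-sided')
    print(country, stat, p)
```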
###Code
male = df['tvtot'][(df['gndr'] == 1.0) & (df['year']==6)]
female = df['tvtot'][(df['gndr'] == 2.0) & (df['year']==6)]
fig, ax1 = plt.subplots(1, 2, figsize=(7, 4))
ax1[0].hist(male)
ax1[1].hist(female)
fig.text(0.5, 1, 'TV')
ax1[0].set_title('Male')
ax1[1].set_title('Female')
plt.show()
print('Male mean: ' + str( male.mean()))
print('Male sample count:', male.count())
print('Male variance:', male.var())
print('Female variance:', female.var())
print('Female mean: ' + str(female.mean()))
print('female sample count:', female.count())
print(stats.ttest_ind(male,
female,
nan_policy='omit'
))
###Output
Male mean: 3.901906090190609
Male sample count: 2151
female sample count: 2140
Male variance: 3.9350242721070847
Female variance: 4.2002724218234535
Female mean: 3.944392523364486
Ttest_indResult(statistic=-0.6899928109209502, pvalue=0.49023604027095813)
###Markdown
There is not much of a difference in the means for these samples. I used an independent-samples t-test, because the sample sizes and variances were similar. Who was more likely to believe people were fair in 2012, people living with a partner or people living alone?
###Code
partner = df['pplfair'][(df['partner'] == 1.0)]
no_partner = df['pplfair'][(df['partner'] == 2.0)]
fig, ax1 = plt.subplots(1, 2, figsize=(7, 4))
ax1[0].hist(partner)
ax1[1].hist(no_partner)
fig.text(0.5, 1, 'Are People Fair?')
ax1[0].set_title('Partner')
ax1[1].set_title('No Partner')
plt.show()
print('Partner mean: ' + str(partner.mean()))
print('Partner sample count:', partner.count())
print('Partner variance:', partner.var())
print('Partner Standard Deviation:', partner.std())
print('No Partner variance:', no_partner.var())
print('No Partner mean: ' + str(no_partner.mean()))
print('No Partner sample count:', no_partner.count())
print('No Partner Standard Deviation', no_partner.std())
print(stats.ttest_ind(partner,
no_partner,
nan_policy='omit'))
###Output
Partner mean: 6.063890473474045
Partner sample count: 5259
Partner variance: 4.451223431135867
Partner Standard Deviation: 2.1097922720343507
No Partner variance: 4.665807447987503
No Partner mean: 5.911280487804878
No Partner sample count: 3280
No Partner Standard Deviation 2.1600480198337033
Ttest_indResult(statistic=3.221397103615396, pvalue=0.001280455731833167)
###Markdown
After looking at this, it would be reasonable to say that the data supports the idea that people with partners find others more fair. I used a t-test because the distributions looked relatively normal (a small negative skew) and the standard deviations were almost the same. With the partner mean more than two standard errors above the no-partner mean, it is reasonable to conclude that the groups differ (this is checked in the sketch below). The sample sizes are somewhat unequal, but both are over 3000. Pick three or four of the countries in the sample and compare how often people met socially in 2014. Are there differences, and if so, which countries stand out?
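The "two standard errors" claim can be checked directly (a small sketch):
```python
import numpy as np

diff = partner.mean() - no_partner.mean()
se_diff = np.sqrt(partner.var() / partner.count()
                  + no_partner.var() / no_partner.count())
print("difference in means:", diff)
print("standard error of the difference:", se_diff)
print("difference / SE:", diff / se_diff)
```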
###Code
czech = df['sclmeet'][(df['cntry'] == 'CZ') & (df['year'] == 7)].dropna()
norway = df['sclmeet'][(df['cntry'] == 'NO') & (df['year'] == 7)].dropna()
spain = df['sclmeet'][(df['cntry'] == 'ES')& (df['year'] == 7)].dropna()
fig, axs = plt.subplots(1, 3, figsize=(8, 6))
axs[0].hist(czech)
axs[1].hist(norway)
axs[2].hist(spain)
fig.text(0.25, 1, '2014 Social Meetings')
axs[0].set_title('CZ')
axs[1].set_title('NO')
axs[2].set_title('SP')
plt.tight_layout()
plt.show()
F, p = stats.f_oneway(czech, norway, spain)
print('F score: ' + str(F))
print('P-value: ' + str(p))
print('Czech Republic mean: ' + str(czech.mean()))
print('Czech Republic count: ' + str(czech.count()))
print('Norway mean: ' + str(norway.mean()))
print('Norway count: ' + str(norway.count()))
print('Spain mean: ' + str(spain.mean()))
print('Spain count: ' + str(spain.count()))
print('Norway and Czech Republic: ' + str(stats.ttest_ind(norway, czech)))
print('Spain and Norway: ' + str(stats.ttest_ind(spain, norway)))
print('Czech Republic and Spain: ' + str(stats.ttest_ind(czech, spain)))
###Output
Norway and Czech Republic: Ttest_indResult(statistic=11.269186128577815, pvalue=3.0334022155191707e-28)
Spain and Norway: Ttest_indResult(statistic=-0.632916395870007, pvalue=0.5268628350318294)
Czech Republic and Spain: Ttest_indResult(statistic=-11.400026538179093, pvalue=3.7676844407353374e-29)
###Markdown
The Czech Republic had a much lower mean, but also a much smaller sample, which made the better comparison the one between Norway and Spain; that comparison was closer to achieving significance. Pick three or four of the countries in the sample and compare how often people took part in social activities, relative to others their age, in 2014. Are there differences, and if so, which countries stand out?
###Code
czech = df['sclact'][(df['cntry'] == 'CZ') & (df['year'] == 7)].dropna()
norway = df['sclact'][(df['cntry'] == 'NO') & (df['year'] == 7)].dropna()
spain = df['sclact'][(df['cntry'] == 'ES')& (df['year'] == 7)].dropna()
fig, axs = plt.subplots(1, 3, figsize=(8, 6))
axs[0].hist(czech)
axs[1].hist(norway)
axs[2].hist(spain)
fig.text(0.25, 1, '2014 Social Meetings by Age')
axs[0].set_title('CZ')
axs[1].set_title('NO')
axs[2].set_title('SP')
plt.tight_layout()
plt.show()
F, p = stats.f_oneway(czech, norway, spain)
print('F score: ' + str(F))
print('P-value: ' + str(p))
###Output
F score: 16.607418390848494
P-value: 6.82063334451585e-08
###Markdown
A low p-value indicates a difference in at least one of the group means.
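A common follow-up to a significant one-way ANOVA is a post-hoc test that controls for multiple comparisons, such as Tukey's HSD; a hedged sketch (it assumes `statsmodels` is installed):
```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = pd.concat([czech, norway, spain])
groups = ['CZ'] * len(czech) + ['NO'] * len(norway) + ['ES'] * len(spain)
print(pairwise_tukeyhsd(scores, groups))
```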
###Code
print('Czech Republic mean: ' + str(czech.mean()))
print('Czech Republic count: ' + str(czech.count()))
print('Norway mean: ' + str(norway.mean()))
print('Norway count: ' + str(norway.count()))
print('Spain mean: ' + str(spain.mean()))
print('Spain count: ' + str(spain.count()))
print('Norway and Czech Republic: ' + str(stats.ttest_ind(norway, czech)))
print('Spain and Norway: ' + str(stats.ttest_ind(spain, norway)))
print('Czech Republic and Spain: ' + str(stats.ttest_ind(czech, spain)))
# There is not a large difference in the means, but the pairwise t-tests...
###Output
_____no_output_____ |
mh_book/docs/models/rf.ipynb | ###Markdown
Random Forest Classifier
We begin our analysis with the random forest classifier. A random forest is a meta-estimator that fits a number of decision tree classifiers on various subsamples of the data and averages their predictions to improve accuracy.
###Code
# Load required packages
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix, classification_report, precision_score, recall_score, f1_score
from fairlearn.metrics import MetricFrame
from fairlearn.reductions import GridSearch, EqualizedOdds
import shap
import plotly.express as px
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Modelling Company Employees
###Code
# Load data into dataframe
df = pd.read_csv('./../../../datasets/preprocessed_ce.csv')
###Output
_____no_output_____
###Markdown
Splitting data
###Code
tgt_col = 'have you ever sought treatment for a mental health disorder from a mental health professional?'
y = df[tgt_col]
X = df.drop(tgt_col, axis=1)
###Output
_____no_output_____
###Markdown
Let's check if the data is imbalanced or not.
###Code
# Split data into trainining and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=42)
# Keep copy of original variables
X_train_ori = X_train.copy()
X_test_ori = X_test.copy()
###Output
_____no_output_____
###Markdown
Categorical features encoding Before we move forward to encode categorical features, it is necessary to identify them first.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1225 entries, 0 to 1224
Data columns (total 55 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 are you self-employed? 1225 non-null int64
1 how many employees does your company or organization have? 1225 non-null object
2 is your employer primarily a tech company/organization? 1225 non-null float64
3 is your primary role within your company related to tech/it? 1225 non-null float64
4 does your employer provide mental health benefits as part of healthcare coverage? 1225 non-null object
5 do you know the options for mental health care available under your employer-provided health coverage? 1225 non-null object
6 has your employer ever formally discussed mental health (for example, as part of a wellness campaign or other official communication)? 1225 non-null object
7 does your employer offer resources to learn more about mental health disorders and options for seeking help? 1225 non-null object
8 is your anonymity protected if you choose to take advantage of mental health or substance abuse treatment resources provided by your employer? 1225 non-null object
9 if a mental health issue prompted you to request a medical leave from work, how easy or difficult would it be to ask for that leave? 1225 non-null object
10 would you feel more comfortable talking to your coworkers about your physical health or your mental health? 1225 non-null object
11 would you feel comfortable discussing a mental health issue with your direct supervisor(s)? 1225 non-null object
12 have you ever discussed your mental health with your employer? 1225 non-null float64
13 would you feel comfortable discussing a mental health issue with your coworkers? 1225 non-null object
14 have you ever discussed your mental health with coworkers? 1225 non-null float64
15 have you ever had a coworker discuss their or another coworker's mental health with you? 1225 non-null float64
16 overall, how much importance does your employer place on physical health? 1225 non-null float64
17 overall, how much importance does your employer place on mental health? 1225 non-null float64
18 do you have previous employers? 1225 non-null int64
19 was your employer primarily a tech company/organization? 1225 non-null float64
20 have your previous employers provided mental health benefits? 1225 non-null object
21 were you aware of the options for mental health care provided by your previous employers? 1225 non-null object
22 did your previous employers ever formally discuss mental health (as part of a wellness campaign or other official communication)? 1225 non-null object
23 did your previous employers provide resources to learn more about mental health disorders and how to seek help? 1225 non-null object
24 was your anonymity protected if you chose to take advantage of mental health or substance abuse treatment resources with previous employers? 1225 non-null object
25 would you have felt more comfortable talking to your previous employer about your physical health or your mental health? 1225 non-null object
26 would you have been willing to discuss your mental health with your direct supervisor(s)? 1225 non-null object
27 did you ever discuss your mental health with your previous employer? 1225 non-null float64
28 would you have been willing to discuss your mental health with your coworkers at previous employers? 1225 non-null object
29 did you ever discuss your mental health with a previous coworker(s)? 1225 non-null float64
30 did you ever have a previous coworker discuss their or another coworker's mental health with you? 1225 non-null float64
31 overall, how much importance did your previous employer place on physical health? 1225 non-null float64
32 overall, how much importance did your previous employer place on mental health? 1225 non-null float64
33 do you currently have a mental health disorder? 1225 non-null object
34 have you had a mental health disorder in the past? 1225 non-null object
35 have you ever sought treatment for a mental health disorder from a mental health professional? 1225 non-null int64
36 do you have a family history of mental illness? 1225 non-null object
37 if you have a mental health disorder, how often do you feel that it interferes with your work when being treated effectively? 1225 non-null object
38 if you have a mental health disorder, how often do you feel that it interferes with your work when not being treated effectively (i.e., when you are experiencing symptoms)? 1225 non-null object
39 have your observations of how another individual who discussed a mental health issue made you less likely to reveal a mental health issue yourself in your current workplace? 1225 non-null object
40 how willing would you be to share with friends and family that you have a mental illness? 1225 non-null int64
41 would you be willing to bring up a physical health issue with a potential employer in an interview? 1225 non-null object
42 would you bring up your mental health with a potential employer in an interview? 1225 non-null object
43 are you openly identified at work as a person with a mental health issue? 1225 non-null float64
44 if they knew you suffered from a mental health disorder, how do you think that team members/co-workers would react? 1225 non-null float64
45 have you observed or experienced an unsupportive or badly handled response to a mental health issue in your current or previous workplace? 1225 non-null object
46 have you observed or experienced supportive or well handled response to a mental health issue in your current or previous workplace? 1225 non-null object
47 overall, how well do you think the tech industry supports employees with mental health issues? 1225 non-null float64
48 would you be willing to talk to one of us more extensively about your experiences with mental health issues in the tech industry? (note that all interview responses would be used anonymously and only with your permission.) 1225 non-null float64
49 what is your age? 1225 non-null float64
50 what is your gender? 1225 non-null object
51 what country do you live in? 1225 non-null object
52 what is your race? 1225 non-null object
53 what country do you work in? 1225 non-null object
54 year 1225 non-null int64
dtypes: float64(18), int64(5), object(32)
memory usage: 526.5+ KB
###Markdown
Looking at the information of the dataframe, there are quite a lot of features which have the data type "object". It is not necessary that all the features with data type "object" be categorical features. There may be certain columns which contain binary values that can be represented by booleans. It is better to check the columns one by one, but for now I will go with the assumption that all the columns with data type "object" are categorical columns.
###Code
cat_cols = df.select_dtypes(include=['object']).columns
###Output
_____no_output_____
###Markdown
There are 32 columns out of 55 which are categorical in nature. Out of those, after examining the data manually, we can infer that one of them is ordinal in nature and the others can be treated as nominal columns. The column - "how many employees does your company or organization have?" - which gives information regarding the size of the company, can be treated as an ordinal column.
###Code
# Encoding ordinal column for training data
X_train['how many employees does your company or organization have?'] = X_train['how many employees does your company or organization have?'].replace({'1-5':1,
'6-25':2,
'26-100':3,
'100-500':4,
'500-1000':5,
'More than 1000':6})
# Encoding ordinal column for testing data
X_test['how many employees does your company or organization have?'] = X_test['how many employees does your company or organization have?'].replace({'1-5':1,
'6-25':2,
'26-100':3,
'100-500':4,
'500-1000':5,
'More than 1000':6})
# Encoding nominal columns for training data
for column in cat_cols:
dummy = pd.get_dummies(X_train[column], prefix=str(column))
X_train = pd.concat([X_train, dummy], axis=1)
X_train.drop(column, axis=1, inplace=True)
# Encoding nominal columns for testing data
for column in cat_cols:
dummy = pd.get_dummies(X_test[column], prefix=str(column))
X_test = pd.concat([X_test, dummy], axis=1)
X_test.drop(column, axis=1, inplace=True)
# Fill value 0 for mismatched columns
mis_cols = list(set(X_train.columns) - set(X_test.columns))
X_test[mis_cols] = 0
###Output
_____no_output_____
###Markdown
Imbalance check
###Code
y.value_counts()
###Output
_____no_output_____
###Markdown
The data is imbalanced. In order to use any machine learning algorithm effectively, we need to either oversample the minority class or downsample the majority class. Considering the fact that we have relatively few records in the data set, it is better to oversample, and only the training data needs to be oversampled. For oversampling, the Synthetic Minority Oversampling Technique (SMOTE) will be used.
###Code
# Oversample the minority class in the target variable
oversample = SMOTE()
X_train, y_train = oversample.fit_resample(X_train.values, y_train.ravel())
###Output
_____no_output_____
###Markdown
Model training
There are various parameters which the random forest algorithm uses to train the model. Our aim is to find those parameters, also known as hyperparameters, which yield the model with the best fit.
###Code
# Declare parameters for grid search
# Declare the classifier
clf = RandomForestClassifier(class_weight="balanced", bootstrap=True, oob_score=True)
# Declare the paramter grid for searching
param_grid = dict(
n_estimators = [100, 200, 400],
criterion = ['gini', 'entropy'],
max_depth = [10, 20, 40, None],
max_features = ['sqrt', 'log2', None],
max_samples = [0.4, 0.8, None]
)
# Train the model
rf_clf = GridSearchCV(clf, param_grid, scoring='f1', n_jobs=7, cv=5, verbose=2)
rf_clf.fit(X_train, y_train)
rf_clf.best_estimator_
# Save and load the model if required
import joblib
# joblib.dump(rf_clf.best_estimator_, './../../../models/rf_clf.pkl')
rf_clf = joblib.load('./../../../models/rf_clf.pkl')
# Predict outcomes with test set
# y_pred = rf_clf.best_estimator_.predict(X_test)
y_pred = rf_clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Model Evaluation
In order to compute sensitivity and specificity, we need values such as true positives, true negatives, false positives and false negatives. These values can be easily obtained from the confusion matrix.
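For reference, the two quantities are defined as $\text{sensitivity} = \frac{TP}{TP + FN}$ (the fraction of people who sought treatment that the model correctly flags) and $\text{specificity} = \frac{TN}{TN + FP}$ (the fraction of people who did not seek treatment that the model correctly identifies).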
###Code
# Get values from the confusion matrix
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
# Compute sensitivity
sensitivity = tp/(tp+fn)
print(f"Sensitivity: {sensitivity} \n")
# Compute specificity
specificity = tn/(tn+fp)
print(f"Specificity: {specificity} \n")
# Compute f1 score
f1 = f1_score(y_test, y_pred)
print(f"F1 score: {f1} \n")
# Compute classicfication report
print(classification_report(y_test, y_pred))
###Output
Sensitivity: 0.8662420382165605
Specificity: 0.8068181818181818
F1 score: 0.8774193548387097
precision recall f1-score support
0 0.77 0.81 0.79 88
1 0.89 0.87 0.88 157
accuracy 0.84 245
macro avg 0.83 0.84 0.83 245
weighted avg 0.85 0.84 0.85 245
###Markdown
From the above report, it can be inferred that the model finds it difficult to predict people who do not need to seek help from a mental health professional, and that can be acceptable: it won't harm any of us to visit a mental health professional even if we do not need help for any mental health issue. On the other hand, the model is quite good at flagging the cases where that much-needed help is required. An average F1 score of 0.83 and an overall F1 score of 0.88 are quite good considering the amount of data that we are training with. Though the model is better at predicting the individuals who need help than the ones who do not, the values of specificity and sensitivity are not far apart, and hence the overall performance of the model is laudable.

Fairness evaluation

There are certain sensitive columns present in the data for which the model should be as unbiased as possible. Following are the columns for which we expect the model to be as fair as possible:
1. Feature revealing the gender of the participant ('what is your gender?')
2. Feature revealing the race of the participant ('what is your race?')

Disparity check with respect to gender
###Code
# Create fairness metrics with respect to gender
fair_metrics_sex = MetricFrame({'f1': f1_score, 'precision': precision_score, 'recall': recall_score},
y_test, y_pred, sensitive_features=X_test_ori['what is your gender?'])
# Display overall metrics
fair_metrics_sex.overall
###Output
_____no_output_____
###Markdown
The overall metrics remain the same as the ungrouped metrics calculated above. The overall precision and recall are close enough, indicating that most of the people selected by the model as needing help for mental health issues are relevant, and also that most of the relevant people are picked up by the model. Obviously, though, there is still a large scope for model improvement.
###Code
# Display metrcis by group
fair_metrics_sex.by_group
###Output
_____no_output_____
###Markdown
The model finds it easier to flag female participants who need help than the other gender groups.
###Code
diff_metrics = pd.DataFrame(fair_metrics_sex.difference(method='between_groups'), columns=['Difference'])
diff_metrics['Percentage'] = diff_metrics['Difference']*100
diff_metrics
###Output
_____no_output_____
###Markdown
 On a positive note, the difference between the minimum and the maximum group metric is not huge. Disparity check with respect to race
###Code
# Create fairness metrics
fair_metrics_race = MetricFrame({'f1': f1_score, 'precision': precision_score, 'recall': recall_score},
y_test, y_pred, sensitive_features=X_test_ori['what is your race?'])
# Display overall metrics
fair_metrics_race.overall
# Display metrics by group
fair_metrics_race.by_group
###Output
_____no_output_____
###Markdown
 The model works perfectly for Black or African American participants but worst for Asian participants. Moreover, for people belonging to more than one race, everyone the model selects for help is relevant (high precision), but it could not identify all of those who are relevant (lower recall). For white participants the model works quite well, and the disparity relative to the best-performing group is quite small.
###Code
diff_metrics = pd.DataFrame(fair_metrics_race.difference(method='between_groups'), columns=['Difference'])
diff_metrics['Percentage'] = diff_metrics['Difference']*100
diff_metrics
###Output
_____no_output_____
###Markdown
 The disparity between the scores is large and should be mitigated. Mitigated Model Training Here we will be utilizing the best estimator that we trained using grid search. For the fairness constraint, we will be using the equalized odds method.**Equalized Odds Parity** This parity is defined for binary classification. Let $X$ denote the feature vector, $A$ a sensitive feature, and $Y$ the true labels. The parity constraint, defined over the distribution of $(X, A, Y)$, is that a classifier $h$ satisfies equalized odds under a distribution over $(X, A, Y)$ if its prediction $h(X)$ is conditionally independent of the sensitive feature $A$ given the true label $Y$. Mathematically, it can be expressed as $E[h(X) | A=a, Y=y] = E[h(X) | Y=y]$ for all $a$ and $y$.
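As a rough, illustrative check (a sketch added here, assuming the `y_test`, `y_pred` and `X_test_ori` objects defined above), fairlearn's `equalized_odds_difference` summarizes how far the current unmitigated predictions are from this criterion for each sensitive feature:
###Code
# Sketch: aggregate equalized-odds gap (the larger of the TPR and FPR differences
# between groups) for the unmitigated predictions
from fairlearn.metrics import equalized_odds_difference
for col in ['what is your gender?', 'what is your race?']:
    gap = equalized_odds_difference(y_test, y_pred,
                                    sensitive_features=X_test_ori[col])
    print(f"Equalized-odds difference w.r.t. {col}: {gap:.3f}")
###Output
_____no_output_____
###Markdown
 With that baseline in mind, we train the mitigated model.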
###Code
# Declare paramters for mitigated model training
best_estimator = RandomForestClassifier(class_weight='balanced', criterion='entropy',
max_depth=10, max_features='sqrt',
max_samples=0.8,oob_score=True, bootstrap=True)
# Declare the constraint for training
constraint = EqualizedOdds(difference_bound=0.01)
# Select sensitive features
X_train_rc = pd.DataFrame(X_train, columns=X_test.columns)
sensitive_features_columns = [column for index, column in enumerate(X_test.columns) if ('what is your gender?' in column) or ('what is your race?' in column)]
# Re-arrange the training data
X_train_sf = X_train_rc[sensitive_features_columns]
X_train_rc.drop(sensitive_features_columns, axis=1, inplace=True)
# Train the model
mitigator = GridSearch(best_estimator, constraint, grid_size=100)
mitigator.fit(X_train_rc, y_train, sensitive_features=X_train_sf)
# Save and load the model if required
# import joblib
# joblib.dump(mitigator, './../../../models/mitigated_rf_clf.pkl')
# mitigated_rf = joblib.load('./../../../models/mitigated_rf_clf.pkl')
# Apply the transformations to the testing data
X_test_rc = X_test.drop(sensitive_features_columns, axis=1)
# Predict using the mitigated models
y_pred_mitigated = mitigator.predict(X_test_rc)
###Output
_____no_output_____
###Markdown
 Mitigated Model Evaluation For the mitigated model, the evaluation metrics remain the same: sensitivity, specificity, F1 score, precision and recall.
###Code
# Get values from the confusion matrix
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_mitigated).ravel()
# Compute sensitivity
sensitivity = tp/(tp+fn)
print(f"Sensitivity: {sensitivity} \n")
# Compute specificity
specificity = tn/(tn+fp)
print(f"Specificity: {specificity} \n")
# Compute f1 score
f1 = f1_score(y_test, y_pred_mitigated)  # evaluate the mitigated predictions
print(f"F1 score: {f1} \n")
# Compute classicfication report
print(classification_report(y_test, y_pred_mitigated))
###Output
Sensitivity: 0.8598726114649682
Specificity: 0.7727272727272727
F1 score: 0.8774193548387097
precision recall f1-score support
0 0.76 0.77 0.76 88
1 0.87 0.86 0.87 157
accuracy 0.83 245
macro avg 0.81 0.82 0.81 245
weighted avg 0.83 0.83 0.83 245
###Markdown
 For the mitigated model, there is no huge difference in the evaluation metrics compared to the unmitigated model. The average F1 score decreased by just 0.02 and the overall F1 did not take a major hit, although the gap between sensitivity and specificity has widened significantly. Some balance between performance and fairness is necessary, and here there is no major hit to the model's performance. Fairness Evaluation for Mitigated Models Here again, we evaluate the model with respect to the selected sensitive features related to the gender and race of the participants.
###Code
# Create fairness metrics with respect to gender
fair_metrics_sex = MetricFrame({'f1': f1_score, 'precision': precision_score, 'recall': recall_score},
y_test, y_pred_mitigated, sensitive_features=X_test_ori['what is your gender?'])
# Display overall metrics
fair_metrics_sex.overall
###Output
_____no_output_____
###Markdown
 The recall has decreased by just 0.01, which is a small price to pay for the increase in the fairness of the model.
###Code
# Display metrics by group
fair_metrics_sex.by_group
###Output
_____no_output_____
###Markdown
 The model still struggles for participants whose gender is other than binary, but the difference in parity has decreased. ```{margin}Research in progress for decreasing the parity.```
###Code
diff_metrics = pd.DataFrame(fair_metrics_sex.difference(method='between_groups'), columns=['Difference'])
diff_metrics['Percentage'] = diff_metrics['Difference']*100
diff_metrics
###Output
_____no_output_____
###Markdown
 There is roughly a 3% decrease in the gender disparity of the model for all the evaluation metrics. Even though it is not huge, the model could decrease the disparity without compromising its performance, which is remarkable.
###Code
# Create fairness metrics
fair_metrics_race = MetricFrame({'f1': f1_score, 'precision': precision_score, 'recall': recall_score},
y_test, y_pred_mitigated, sensitive_features=X_test_ori['what is your race?'])
# Display metrics by group
fair_metrics_race.by_group
diff_metrics = pd.DataFrame(fair_metrics_race.difference(method='between_groups'), columns=['Difference'])
diff_metrics['Percentage'] = diff_metrics['Difference']*100
diff_metrics
###Output
_____no_output_____
###Markdown
 There is no improvement in the model's parity difference with respect to the race of the participants. On the positive note, the model still performs quite well for all races except one. One reason the model finds it difficult to make a good call for Asian participants is that the quality of the available data is not up to the mark; the number of data points for this group is also too small for the model to derive any significant information. Improving the data quality and quantity might help the model better predict the need to seek help from mental health professionals. Model Interpretation The mitigated (fairness) models are not supported by the shap package for computing SHAP values and hence cannot be used for interpreting the model. But the unmitigated model is not too different from the mitigated model, so for our analysis we will use that one. We begin with the feature importances of the random forest classifier.
###Code
# Create a feature importance dataframe
feat_imp_data = zip(list(X_test.columns), rf_clf.feature_importances_)
feat_imp_df = pd.DataFrame(columns=['column', 'feature_importance'], data=feat_imp_data)
# Sort feature importance
feat_imp_df.sort_values('feature_importance', ascending=False, inplace=True)
fig = px.bar(feat_imp_df[:20], x='feature_importance', y='column', orientation='h')
fig.update_layout(width=2400)
fig.show()
###Output
_____no_output_____
###Markdown
 At a glance, having a past history of a mental health disorder is the most important feature for predicting the need to seek help from a mental health professional. This is closely followed by features conveying the present mental state, how the employee perceives that a mental health disorder affects his/her work, willingness to share the status of a mental health disorder with family members, the extent of comfort in discussing the issue with a colleague, and the age of the participant.
###Code
# Compute SHAP values
explainer = shap.explainers.Tree(rf_clf, X_train, feature_names=X_test.columns)
shap_values = explainer.shap_values(X_test, check_additivity=False)
shap.summary_plot(shap_values[1], X_test, X_test.columns, title="SHAP summary plot", plot_size=(16.0, 30.0))
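# Illustrative sketch (added): the same SHAP values summarized as a mean(|SHAP value|)
# bar chart, which gives a global ranking comparable to the feature importances above
shap.summary_plot(shap_values[1], X_test, X_test.columns, plot_type="bar")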
###Output
_____no_output_____ |
labs/Lab6_Classification_PCA/Singular Value Decomposition.ipynb | ###Markdown
Singular Value Decomposition and Applications References: SVD Image Compression Notebook Introduction The singular value decomposition of a matrix has many applications. Here I'll focus on an introduction to singular value decomposition and an application in clustering articles by topic. In another notebook (link) I show how singular value decomposition can be used in image compression.Any matrix $A$ can be decomposed to three matrices $U$, $\Sigma$, and $V$ such that $A = U \Sigma V$, this is called singular value decomposition. The columns of $U$ and $V$ are orthonormal and $\Sigma$ is diagonal. Most scientific computing packages have a function to compute the singular value decomposition, I won't go into the details of how to find $U$, $\Sigma$ and $V$ here. Some sources write the decomposition as $A = U \Sigma V^T$, so that their $V^T$ is our $V$. The usage in this notebook is consistent with how numpy's singular value decomposition function returns $V$. Example with a small matrix $A$: If $A = \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix}$ $A$ can be written as $U \Sigma V$ where $U$, $\Sigma$, and $V$ are, rounded to 2 decimal places:$U = \begin{bmatrix} -0.23 & -0.97 \\ -0.97 & 0.23 \end{bmatrix}$ $S = \begin{bmatrix} 2.29 & 0 \\ 0 & 0.87 \end{bmatrix}$ $V = \begin{bmatrix} -0.53 & -0.85 \\ -0.85 & 0.53 \end{bmatrix}$ Interpretation Although the singular value decomposition has interesting properties from a linear algebra standpoint, I'm going to focus here on some of its applications and skip the derivation and geometric interpretations.Let $A$ be a $m \times n$ matrix with column vectors $\vec{a}_1, \vec{a}_2, ..., \vec{a}_n$. In the singular value decomposition of $A$, $U$ will be $m \times m$, $\Sigma$ will be $m \times n$ and $V$ will be $n \times n$. We denote the column vectors of $U$ as $\vec{u}_1, \vec{u}_2, ..., \vec{u}_m$ and $V$ as $\vec{v}_1, \vec{v}_2, ..., \vec{v}_n$, similarly to $A$. We'll call the values along the diagonal of $\Sigma$ as $\sigma_1, \sigma_2, ...$.We have that $A = U \Sigma V$ where:$U = \begin{bmatrix} \\ \\ \\ \vec{u}_1 & \vec{u}_2 & \dots & \vec{u}_m \\ \\ \\ \end{bmatrix}$$\Sigma = \begin{bmatrix} \sigma_1 & 0 & \dots \\ 0 & \sigma_2 & \dots \\ \vdots & \vdots & \ddots \end{bmatrix}$$V = \begin{bmatrix} \\ \\ \\ \vec{v}_1 & \vec{v}_2 & \dots & \vec{v}_n \\ \\ \\ \end{bmatrix}$Because $\Sigma$ is diagonal, the columns of $A$ can be written as:$\vec{a}_i = \vec{u}_1 * \sigma_1 * V_{1,i} + \vec{u}_2 * \sigma_2 * V_{2,i} + ... = U * \Sigma * \vec{v}_i$ This is equivalent to creating a vector $\vec{w}_i$, where the elements of $\vec{w}_i$ are the elements of $\vec{v}_i$, weighted by the $\sigma$'s:$\vec{w}_i = \begin{bmatrix} \sigma_1V_{1,i} \\ \sigma_2V_{2,i} \\ \sigma_3V_{3,i} \\ \vdots \end{bmatrix} = \Sigma * \vec{v}_i$ Then $\vec{a}_i = U * \vec{w}_i$. That is to say that every column $\vec{a}_i$ of $A$ is expressed by a sum over all the columns of $U$, weighted by the values in the $i^{th}$ column of $V$, and the $\sigma$'s. By convention the order of the columns in $U$ and rows in $V$ is chosen such that the values in $\Sigma = \begin{bmatrix} \sigma_1 & 0 & \dots \\ 0 & \sigma_2 & \dots \\ \vdots & \vdots & \ddots \end{bmatrix}$ obey $\sigma_1 > \sigma_2 > \sigma_3 > ...$. This means that as a whole, the first column of $U$ and the first row of $V$ contribute more to the final values of $A$ than subsequent columns. 
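As a quick numerical check of the small example above (an illustrative sketch, not part of the original write-up), the next cell computes the decomposition of that $2 \times 2$ matrix with numpy and confirms that the factors reproduce $A$:
###Code
import numpy as np
# The small example matrix from above
A_small = np.array([[1., 0.],
                    [1., 2.]])
U_s, s_s, V_s = np.linalg.svd(A_small)  # V_s is numpy's "Vh", i.e. the V used in this notebook's convention
print(np.round(U_s, 2))
print(np.round(s_s, 2))  # singular values, roughly [2.29, 0.87] as quoted above
print(np.round(V_s, 2))
# The factors should reconstruct A (up to floating point error)
print(np.allclose(U_s.dot(np.diag(s_s)).dot(V_s), A_small))
###Output
_____no_output_____
###Markdown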
 This has applications in image compression (link to another notebook) and reducing the dimensionality of data by selecting the most important components. Brief discussion of dimensionality This section isn't required to understand how singular value decomposition is useful, but I've included it for completeness.If $A$ is $m \times n$ ($m$ rows and $n$ columns), $U$ will be $m \times m$, $\Sigma$ will be $m \times n$ and $V$ will be $n \times n$. However, there are only $r = rank(A)$ non-zero values in $\Sigma$, i.e. $\sigma_1, ..., \sigma_r \neq 0$; $\sigma_{r+1}, ..., \sigma_n = 0$. Therefore columns of $U$ beyond the $r^{th}$ column and rows of $V$ beyond the $r^{th}$ row do not contribute to $A$ and are usually omitted, leaving $U$ an $m \times r$ matrix, $\Sigma$ an $r \times r$ diagonal matrix and $V$ an $r \times n$ matrix. Example with data: Singular value decomposition can be used to classify similar objects (for example, news articles on a particular topic). Note above that similar $\vec{a_i}$'s will have similar $\vec{v_i}$'s. Imagine four blog posts, two about skiing and two about hockey. I've made up some data about five different words and the number of times they appear in each post:
###Code
import pandas as pd
c_names = ['post1', 'post2', 'post3', 'post4']
words = ['ice', 'snow', 'tahoe', 'goal', 'puck']
post_words = pd.DataFrame([[4, 4, 6, 2],
[6, 1, 0, 5],
[3, 0, 0, 5],
[0, 6, 5, 1],
[0, 4, 5, 0]],
index = words,
columns = c_names)
post_words.index.names = ['word:']
post_words
###Output
_____no_output_____
###Markdown
 It looks like posts 1 and 4 pertain to skiing, while posts 2 and 3 are about hockey. Imagine the DataFrame post_words as the matrix $A$, where the entries represent the number of times a given word appears in the post. The singular value decomposition of $A$ can be calculated using numpy.
###Code
import numpy as np
U, sigma, V = np.linalg.svd(post_words)
print ("V = ")
print (np.round(V, decimals=2) )
###Output
V =
[[-0.4 -0.57 -0.63 -0.35]
[-0.6 0.33 0.41 -0.6 ]
[ 0.6 -0.41 0.32 -0.61]
[-0.34 -0.63 0.58 0.39]]
###Markdown
Recall that $\vec{a}_i = U * \Sigma * \vec{v}_i$, that is each column $\vec{v}_i$ of $V$ defines the entries in that column, $\vec{a}_i$, of our data matrix, $A$. Let's label V with the identities of the posts using a DataFrame:
###Code
V_df = pd.DataFrame(V, columns=c_names)
V_df
###Output
_____no_output_____
###Markdown
Note how post1 and post4 agree closely in value in the first two rows of $V$, as do post2 and post3. This indicates that posts 1 and 4 contain similar words (in this case words relating to skiing). However, the agreement is less close in the last two rows, even among related posts. This is because the weights of the last two rows, $\sigma_3$ and $\sigma_4$, are small compared to $\sigma_1$ and $\sigma_2$. Let's look at the values for the $\sigma$'s.
###Code
sigma
###Output
_____no_output_____
###Markdown
$\sigma_1$ and $\sigma_2$ are about an order of magnitude greater than $\sigma_3$ and $\sigma_4$, indicating that the values in the first two rows of $V$ are much more important than the values in the last two. In fact we could closely reproduce $A$ using just the first two rows of $V$ and first two columns of $U$, with an error of at most 1 word:
###Code
A_approx = np.matrix(U[:, :2]) * np.diag(sigma[:2]) * np.matrix(V[:2, :])
print ("A calculated using only the first two components:\n")
print (pd.DataFrame(A_approx, index=words, columns=c_names))
print ("\nError from actual value:\n")
print (post_words - A_approx)
###Output
A calculated using only the first two components:
post1 post2 post3 post4
ice 3.197084 4.818556 5.325736 2.792675
snow 5.619793 0.588201 0.384675 5.412204
tahoe 4.043943 0.071665 -0.123639 3.917015
goal 0.682117 5.089628 5.762122 0.336491
puck 0.129398 4.219523 4.799185 -0.143946
Error from actual value:
post1 post2 post3 post4
word:
ice 0.802916 -0.818556 0.674264 -0.792675
snow 0.380207 0.411799 -0.384675 -0.412204
tahoe -1.043943 -0.071665 0.123639 1.082985
goal -0.682117 0.910372 -0.762122 0.663509
puck -0.129398 -0.219523 0.200815 0.143946
###Markdown
To help visualize the similarity between posts, $V$ can be displayed as an image. Notice how the similar posts (1 and 4, 2 and 3) have similar color values in the first two rows:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(V, interpolation='none')
plt.xticks(range(len(c_names)))
plt.yticks(range(len(words)))
plt.ylim([len(words) - 1.5, -.5])
ax = plt.gca()
ax.set_xticklabels(c_names)
ax.set_yticklabels(range(1, len(words) + 1))
plt.title("$V$")
plt.colorbar();
###Output
_____no_output_____
###Markdown
Another thing the singular value decomposition tells us is what most defines the different categories of posts. The skiing posts have very different values from the hockey posts in the second row of $V$, i.e. $V_{2,1} \approx V_{2, 4}$ and $V_{2,2} \approx V_{2, 3}$ but $V_{2,1} \neq V_{2, 2}$.Recall from above that:$\vec{a}_i = \vec{u}_1 * \sigma_1 * V_{1,i} + \vec{u}_2 * \sigma_2 * V_{2,i} + ...$ Thus the posts differ very much in how much the values in $\vec{u}_2$ contribute to their final word count. Here is $\vec{u}_2$:
###Code
pd.DataFrame(U[:,1], index=words)
###Output
_____no_output_____
###Markdown
 From this we can conclude that, at least in this small data set, the words 'snow' and 'tahoe' identify a different class of posts from the words 'goal' and 'puck'. Identifying similar research papers using singular value decomposition Moving on from the simple example above, here is an application using singular value decomposition to find similar research papers.I've collected several different papers for analysis. Unfortunately due to the sorry state of open access for scientific papers I can't share the full article text that was used for analysis. Cell, for example, cautions that "you may not copy, display, distribute, modify, publish, reproduce, store, transmit, post, ..." Yikes. However I did choose articles such that you should be able to download the pdf's from the publisher for free.Here are the papers included in analysis (with shortened names in parentheses):Two papers on the molecular motor ClpX, describing very similar experiments:ClpX(P) Generates Mechanical Force to Unfold and Translocate Its Protein Substrates (clpx1)Single-Molecule Protein Unfolding and Translocation by an ATP-Fueled Proteolytic Machine (clpx2)Papers on a very different molecular motor, dynein:Lis1 Acts as a “Clutch” between the ATPase and Microtubule-Binding Domains of the Dynein Motor (dyn-lis1)Single-Molecule Analysis of Dynein Processivity and Stepping Behavior (dyn-steps1)Dynein achieves processive motion using both stochastic and coordinated stepping (dyn-steps2)Insights into dynein motor domain function from a 3.3-A crystal structure (dyn-structure)A paper on T-cell signaling:Biophysical mechanism of T-cell receptor triggering in a reconstituted system (tcell) Reading in the data To start, we'll need to read in the word counts for each paper. I used python PDFMiner to convert the pdf documents to plain text. I also used a list of "stop words" (link), words such as "the", and "and", that appear in all English documents.
###Code
with open('input/stopwords.txt') as f:
stopwords = f.read().strip().split(',')
stopwords = set(stopwords) # use a set for fast membership testing
import collections
import os
import re
def word_count(fname):
"""Return a collections.Counter instance counting
the words in file fname."""
with open(fname) as f:
file_content = f.read()
words = re.split(r'\W+', file_content.lower())
words = [word for word in words
if len(word) > 3 and word not in stopwords]
word_count = collections.Counter(words)
return word_count
file_list = ['input/papers/' + f for f in os.listdir('input/papers/')
if f.endswith('.txt')]
word_df = pd.DataFrame()
for fname in file_list:
word_counter = word_count(fname)
file_df = pd.DataFrame.from_dict(word_counter,
orient='index')
file_df.columns = [fname.replace('input/papers/', '').replace('.txt', '')]
# normalize word count by the total number of words in the file:
    file_df.iloc[:, 0] = file_df.values.flatten() / float(file_df.values.sum())
word_df = word_df.join(file_df, how='outer', )
word_df = word_df.fillna(0)
print "Number of unique words: %s" % len(word_df)
###Output
Number of unique words: 5657
###Markdown
Here are the results, sorted by the most common words in the first paper:
###Code
word_df.sort_values(by=word_df.columns[0], ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Now to calculate the singular value decomposition of this data.
###Code
U, sigma, V = np.linalg.svd(word_df)
###Output
_____no_output_____
###Markdown
Here is a look at $V$, with the column names added:
###Code
v_df = pd.DataFrame(V, columns=word_df.columns)
v_df.apply(lambda x: np.round(x, decimals=2))
###Output
_____no_output_____
###Markdown
Here are the values of $V$ represented as an image:
###Code
plt.imshow(V, interpolation='none')
ax = plt.gca()
plt.xticks(range(len(v_df.columns.values)))
plt.yticks(range(len(v_df.index.values)))
plt.title("$V$")
ax.set_xticklabels(v_df.columns.values, rotation=90)
plt.colorbar();
###Output
_____no_output_____
###Markdown
 Note how in the above image, in the first three rows the similarities between the clpx papers are apparent, as well as between the first three dyn papers. The last dyn paper is somewhat different, but this is to be expected since it is a structure paper and the other three dyn papers involve more microscopy. In terms of comparing the papers, singular value decomposition allowed us to reduce the 5657 different words found in the papers into 6 values that are pre-sorted in order of importance! Quantifying similarity Now we'll look in more detail at how similar each paper is to the others. I've defined a function to calculate the distance between two column vectors of $V$, weighted by the weights in $\Sigma$. For $\vec{v}_i$ and $\vec{v}_j$ the function calculates $\|\Sigma * (\vec{v}_i - \vec{v}_j)\|$. This function is applied to every pairwise combination of $\vec{v}_i$ and $\vec{v}_j$, giving a metric of how similar two papers are (smaller values are more similar).
###Code
def dist(col1, col2, sigma=sigma):
"""Return the norm of (col1 - col2), where the differences in
    each dimension are weighted by the values in sigma."""
return np.linalg.norm(np.array(col1 - col2) * sigma)
dist_df = pd.DataFrame(index=v_df.columns, columns=v_df.columns)
for cname in v_df.columns:
dist_df[cname] = v_df.apply(lambda x: dist(v_df[cname].values, x.values))
plt.imshow(dist_df.values, interpolation='none')
ax = plt.gca()
plt.xticks(range(len(dist_df.columns.values)))
plt.yticks(range(len(dist_df.index.values)))
ax.set_xticklabels(dist_df.columns.values, rotation=90)
ax.set_yticklabels(dist_df.index.values)
plt.title("Similarity between papers\nLower value = more similar")
plt.colorbar()
dist_df
###Output
_____no_output_____
###Markdown
 The two clpx papers and the two dyn-steps papers are most similar to each other, as expected, while all the dyn papers do bear some similarity to each other. For a quicker readout, I've grouped the data into three similarity levels (in practice this could be done automatically with a clustering algorithm).
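The next cell is an illustrative sketch (not in the original write-up) of such automatic grouping, applying hierarchical clustering from `scipy` to the pairwise distances in `dist_df`:
###Code
# Sketch: cluster the papers automatically from the pairwise distance matrix
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
condensed = squareform(dist_df.values.astype(float), checks=False)  # condensed distance vector
Z = linkage(condensed, method='average')
cluster_labels = fcluster(Z, t=3, criterion='maxclust')  # ask for three clusters
print(dict(zip(dist_df.columns, cluster_labels)))
###Output
_____no_output_____
###Markdown
 Below, a similar grouping is done by hand with fixed distance thresholds: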
###Code
levels = [0.06, 0.075]
binned_df = dist_df.copy()
binned_df[(dist_df <= levels[0]) & (dist_df > 0)] = 1
binned_df[(dist_df <= levels[1]) & (dist_df > levels[0])] = 2
binned_df[(dist_df < 1) & (dist_df > levels[1])] = 3
plt.imshow(binned_df.values, interpolation='none')
ax = plt.gca()
plt.xticks(range(len(binned_df.columns.values)))
plt.yticks(range(len(binned_df.index.values)))
ax.set_xticklabels(binned_df.columns.values, rotation=90)
ax.set_yticklabels(binned_df.index.values)
plt.title("Similarity between papers\nLower value = more similar")
plt.colorbar();
###Output
_____no_output_____
###Markdown
Finally, let's output a list for each paper of the other papers, sorted in order of decreasing similarity:
###Code
for paper in dist_df.columns:
    sim_papers_df = dist_df.sort_values(by=paper)[paper]
sim_papers = sim_papers_df.drop([paper]).index
    print('Papers most similar to ' + paper + ':')
    print(', '.join(sim_papers))
    print('\n')
###Output
Papers most similar to clpx1:
clpx2, dyn-structure, dyn-steps1, tcell, dyn-steps2, dyn-lis1
Papers most similar to clpx2:
clpx1, dyn-structure, dyn-steps1, tcell, dyn-steps2, dyn-lis1
Papers most similar to dyn-lis1:
dyn-steps1, dyn-steps2, dyn-structure, clpx2, clpx1, tcell
Papers most similar to dyn-steps1:
dyn-steps2, dyn-lis1, dyn-structure, clpx2, clpx1, tcell
Papers most similar to dyn-steps2:
dyn-steps1, dyn-lis1, dyn-structure, clpx2, clpx1, tcell
Papers most similar to dyn-structure:
dyn-steps1, clpx2, dyn-steps2, clpx1, dyn-lis1, tcell
Papers most similar to tcell:
clpx2, dyn-structure, clpx1, dyn-steps1, dyn-steps2, dyn-lis1
|
fastai/nbs/09c_vision.widgets.ipynb | ###Markdown
Vision widgets> ipywidgets for images
###Code
#export
@patch
def __getitem__(self:Box, i): return self.children[i]
#export
def widget(im, *args, **layout):
"Convert anything that can be `display`ed by IPython into a widget"
o = Output(layout=merge(*args, layout))
with o: display(im)
return o
im = Image.open('images/puppy.jpg').to_thumb(256,512)
VBox([widgets.HTML('Puppy'),
widget(im, max_width="192px")])
#export
def _update_children(change):
for o in change['owner'].children:
if not o.layout.flex: o.layout.flex = '0 0 auto'
#export
def carousel(children=(), **layout):
"A horizontally scrolling carousel"
def_layout = dict(overflow='scroll hidden', flex_flow='row', display='flex')
res = Box([], layout=merge(def_layout, layout))
res.observe(_update_children, names='children')
res.children = children
return res
ts = [VBox([widget(im, max_width='192px'), Button(description='click')])
for o in range(3)]
carousel(ts, width='450px')
#export
def _open_thumb(fn, h, w): return Image.open(fn).to_thumb(h, w).convert('RGBA')
#export
class ImagesCleaner:
"A widget that displays all images in `fns` along with a `Dropdown`"
def __init__(self, opts=(), height=128, width=256, max_n=30):
opts = ('<Keep>', '<Delete>')+tuple(opts)
store_attr('opts,height,width,max_n')
self.widget = carousel(width='100%')
def set_fns(self, fns):
self.fns = L(fns)[:self.max_n]
ims = parallel(_open_thumb, self.fns, h=self.height, w=self.width, progress=False,
n_workers=min(len(self.fns)//10,defaults.cpus))
self.widget.children = [VBox([widget(im, height=f'{self.height}px'), Dropdown(
options=self.opts, layout={'width': 'max-content'})]) for im in ims]
def _ipython_display_(self): display(self.widget)
def values(self): return L(self.widget.children).itemgot(1).attrgot('value')
def delete(self): return self.values().argwhere(eq('<Delete>'))
def change(self):
idxs = self.values().argwhere(not_(in_(['<Delete>','<Keep>'])))
return idxs.zipwith(self.values()[idxs])
fns = get_image_files('images')
w = ImagesCleaner(('A','B'))
w.set_fns(fns)
w
w.delete(),w.change()
#export
def _get_iw_info(learn, ds_idx=0):
dl = learn.dls[ds_idx].new(shuffle=False, drop_last=False)
inp,probs,targs,preds,losses = learn.get_preds(dl=dl, with_input=True, with_loss=True, with_decoded=True)
inp,targs = L(zip(*dl.decode_batch((inp,targs), max_n=9999)))
return L([dl.dataset.items,targs,losses]).zip()
#export
@delegates(ImagesCleaner)
class ImageClassifierCleaner(GetAttr):
"A widget that provides an `ImagesCleaner` with a CNN `Learner`"
def __init__(self, learn, **kwargs):
vocab = learn.dls.vocab
self.default = self.iw = ImagesCleaner(vocab, **kwargs)
self.dd_cats = Dropdown(options=vocab)
self.dd_ds = Dropdown(options=('Train','Valid'))
self.iwis = _get_iw_info(learn,0),_get_iw_info(learn,1)
self.dd_ds.observe(self.on_change_ds, 'value')
self.dd_cats.observe(self.on_change_ds, 'value')
self.on_change_ds()
self.widget = VBox([self.dd_cats, self.dd_ds, self.iw.widget])
def _ipython_display_(self): display(self.widget)
def on_change_ds(self, change=None):
info = L(o for o in self.iwis[self.dd_ds.index] if o[1]==self.dd_cats.value)
self.iw.set_fns(info.sorted(2, reverse=True).itemgot(0))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
|
Colab_instruction.ipynb | ###Markdown
Mount Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
 Clone project from GitHub
###Code
!git clone https://github.com/shaobaili3/CS39-EXPLAINABLE-NEURAL-NETWORK.git
!pwd
###Output
fatal: destination path 'CS39-EXPLAINABLE-NEURAL-NETWORK' already exists and is not an empty directory.
/content
|
04_dataframe.ipynb | ###Markdown
 <img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
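###Markdown
 A quick way to see how that work was split up (an illustrative sketch using the accounts dataframe defined above; `npartitions` and `map_partitions` are standard Dask DataFrame attributes/methods):
###Code
# One partition per input CSV file
print(df.npartitions)
# Row count of each partition (each partition is an in-memory pandas DataFrame)
df.map_partitions(len).compute()
###Output
_____no_output_____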
###Markdown
 What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real Data Let's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
 Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
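For reference, the other two options would look roughly like the following (an illustrative sketch; `sample` and `assume_missing` are standard `dd.read_csv` keyword arguments, but the sample size shown is arbitrary and the variable names are just for this example):
###Code
# Option 2: read a larger sample (in bytes) before inferring dtypes
df_opt2 = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
                      parse_dates={'Date': [0, 1, 2]},
                      sample=10000000)
# Option 3: assume columns inferred as int may actually contain missing values (floats)
df_opt3 = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
                      parse_dates={'Date': [0, 1, 2]},
                      assume_missing=True)
###Output
_____no_output_____
###Markdown
 Here we go with the first option and specify the dtypes explicitly: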
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
 Computations with `dask.dataframe` We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums:
```python
maxes = []
for fn in filenames:
    df = pd.read_csv(fn)
    maxes.append(df.DepDelay.max())

final_max = max(maxes)
```
We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with
```python
df = pd.read_csv(filename, dtype=dtype)
df.DepDelay.max()
```
`dask.dataframe` lets us write pandas-like code that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.columns
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, lets compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
 But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
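# (Added sketch, not in the original notebook.) A quick look at the partition
# structure described above: `npartitions` is the number of backing pandas
# DataFrames and `divisions` holds the known index boundaries between
# partitions (a tuple of Nones when they are unknown).
df.npartitions, df.divisions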
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
    pass  # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
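# (Added, hedged.) One possible single-call implementation, independent of the
# stub above; it may well differ from the solution file loaded below.
# map_partitions hands each pandas partition (a DataFrame) to the function:
def compute_departure_timestamp_sketch(part):
    hours_td = pd.to_timedelta(part.CRSDepTime // 100, unit='h')
    minutes_td = pd.to_timedelta(part.CRSDepTime % 100, unit='m')
    return part.Date + hours_td + minutes_td
df.map_partitions(compute_departure_timestamp_sketch).head()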
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame` API. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
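# (Added sketch, not part of the original notebook.) Conceptually, the graph
# Dask builds for len(df) amounts to one pandas subtotal per matching CSV
# file followed by a sum, much like this hand-written version:
import pandas as pd
from glob import glob
subtotals = [len(pd.read_csv(fn)) for fn in sorted(glob(filename))]
sum(subtotals)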
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
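# (Added, hedged.) The two alternatives mentioned above, for reference:
# dd.read_csv(..., sample=1_000_000)     # infer dtypes from a larger sample, in bytes
# dd.read_csv(..., assume_missing=True)  # treat inferred ints as floats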
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
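# (Added sketch.) The dask.delayed version alluded to above would look roughly
# like this (still loop-shaped, unlike the dataframe API):
import pandas as pd
from glob import glob
filenames = sorted(glob(os.path.join('data', 'nycflights', '*.csv')))
maxes = [dask.delayed(pd.read_csv)(fn).DepDelay.max() for fn in filenames]
dask.delayed(max)(maxes).compute()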
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/04_exo1.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
%load solutions/04_exo2.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/04_exo3.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
%load solutions/04_exo4.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/04_exo5.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try by passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/04_map_partitions.py
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slower than in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup We create artifical data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
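# (Added, hedged.) One possible answer using the boolean indexing mentioned
# above; the loaded solution file may differ:
len(df[~df.Cancelled])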
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.columns
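# (Added, hedged.) One possible answer; the loaded solution file may differ:
df.groupby('Origin').DepDelay.mean().compute()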
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](04-schedulers.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](02-dask-arrays.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. We'll cover this in more detail in [Distributed DataFrames](05-distributed-dataframes-and-efficiency.ipynb).For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
    pass  # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup We create artifical data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
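# (Added, hedged.) One possible answer using df.groupby as hinted above;
# the loaded solution file may differ:
df[~df.Cancelled].groupby('Origin').Origin.count().compute()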
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.columns
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
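# (Added, hedged.) One possible answer; the loaded solution file may differ.
# Compute the per-day means, then pick the worst day:
df.groupby('DayOfWeek').DepDelay.mean().compute().idxmax()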
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](04-schedulers.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](02-dask-arrays.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. We'll cover this in more detail in [Distributed DataFrames](05-distributed-dataframes-and-efficiency.ipynb).For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
    pass  # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFrames我们通过使用 dask.delayed 在 CSV 文件目录上构建并行数据帧计算完成了第 1 章。 在本节中,我们使用 `dask.dataframe` 自动构建类似的计算,用于表格计算的常见情况。 Dask DataFrame的外观和感觉与 Pandas 数据帧相似,但Dask DataFrame运行在支持`dask.delayed`的相同基础架构上。在这个笔记本中,我们像以前一样使用相同的航线数据,但是现在我们让`dask.dataframe’`为我们构造计算,而不是写 for循环。函数可以接受`data/nycflights/*`这样的全局字符串,然后在我们所有的数据上建立并行计算。 何时使用 `dask.dataframe`Pandas非常适合存储在内存中的表格数据集。当要分析的数据集大于机器的内存时,Dask 就变得有用了。我们使用的演示数据集大约只有200MB,因此你可以在合理的时间内下载它,但是`dask.dataframe`将扩展到比内存大得多的数据集。 `dask.dataframe` 模块实现了一个分块的并行 `DataFrame` 对象,它模仿了 Pandas `DataFrame` API 的一个大集合。 一个 Dask `DataFrame` 由许多沿索引分隔的内存中的 Pandas `DataFrames` 组成。 Dask `DataFrame` 上的一个操作会以一种注意潜在并行性和内存限制的方式触发对组成 Pandas `DataFrame` 的许多 Pandas 操作。**相关文档*** [DataFrame 文档](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame 屏幕录像](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame 示例](https://examples.dask.org/dataframe.html)* [Pandas 文档](https://pandas.pydata.org/pandas-docs/stable/)**主要知识**1. Dask DataFrame对pandas用户来说应该很熟悉2. 数据流的划分对于有效执行非常重要 创建数据
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count the number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code that operates on larger than memory datasets in parallel.
###Code
%time df["DepDelay"].max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df["DepDelay"].max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) 每个机场的平均起飞延误是多少?请注意,这与您在之前的笔记本中所做的计算相同(这种方法是更快还是更慢?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared and only computed once.For example, let's compute the mean and standard deviation of the departure delay of all non-cancelled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly half the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to be done only once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graph between multiple results looks like (and what is shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
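###Markdown
The notes below talk about partitions and divisions; here is a quick, illustrative peek at how our flights DataFrame is actually split up (the exact numbers depend on how many CSV files were read):
###Code
# Each input CSV file became one pandas-backed partition of the Dask DataFrame.
print(df.npartitions)        # number of underlying pandas DataFrames
print(df.known_divisions)    # False here: read_csv did not set an index
df.get_partition(0).head()   # look at the first underlying chunk
###Output
_____no_output_____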
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than dask.dataframe. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk and expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that the exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., Amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL, `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed roughly in proportion to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper/lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames. Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of the scheduled departure time, we need to convert these integers into `pd.Timedelta` objects and then combine them with the `Date` column.In pandas we would do this using the `pd.to_timedelta` function and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df["Date"].head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But suppose Dask had not implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.reduction)Here we'll only discuss `map_partitions`, which we can use to implement `to_timedelta` ourselves:
###Code
# Look at the docs for `map_partitions`
help(df["CRSDepTime"].map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df["CRSDepTime"] // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df["CRSDepTime"] % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df["Date"] + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
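###Markdown
As noted at the start of this section, Dask does in fact ship `dd.to_timedelta`, so the same result can be obtained without `map_partitions`. A minimal sketch:
###Code
# Same computation using the built-in dask.dataframe helper instead of map_partitions.
hours_td = dd.to_timedelta(df["CRSDepTime"] // 100, unit='h')
minutes_td = dd.to_timedelta(df["CRSDepTime"] % 100, unit='m')
(df["Date"] + hours_td + minutes_td).head()
###Output
_____no_output_____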
###Markdown
Exercise: rewrite the code above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is really *huge*2. Some operations are genuinely hard to do in parallel (e.g. sorting)Additionally, some important operations like ``set_index`` can be implemented in dask.dataframe, but are slower than in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask DataFrames look and feel like Pandas dataframes, but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now, rather than writing for-loops, we let `dask.dataframe` construct the computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`?Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a subset of the Pandas `DataFrame` API. One Dask `DataFrame` is made up of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrames` in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution. Create data
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
The filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count the number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files.- A set of jobs was intelligently created for each chunk - one per original CSV file in this case.- Each file was loaded into a pandas dataframe and had `len()` applied to it.- The subtotals were combined to give the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough work to read the start of the first file and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data.
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv`, which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or the first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred from the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on the values turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (explicit is better than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes).- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
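###Markdown
The other two options mentioned above would look roughly as follows; this is only a sketch (the sample size is an arbitrary choice), so the calls are commented out rather than run.
###Code
# Sketch only, not executed:
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  sample=10_000_000)      # larger dtype-inference sample, in bytes
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  assume_missing=True)    # ints with missing values become floats
###Output
_____no_output_____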
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums.```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we still have to think about loops, intermediate results (one per file) and the final reduction (the `max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code that operates on larger-than-memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until that point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory. - This means that repeated computations will have to load all of the data in each time (run the code above again - is it faster or slower than you would expect?). As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method.
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
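###Markdown
As a side note on point 2: when the dataset fits in memory, `persist` keeps the parsed partitions around so repeated computations skip the CSV parsing. A rough sketch (the `df_in_memory` name is ours):
###Code
# Materialize the partitions in memory; the result is still a lazy Dask DataFrame,
# but backed by already-computed pandas chunks.
df_in_memory = df.persist()
%time df_in_memory.DepDelay.max().compute()
###Output
_____no_output_____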
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-cancelled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared and only computed once.For example, let's compute the mean and standard deviation of the departure delay of all non-cancelled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly half the time. This is because the task graphs for both results are merged when calling `dask.compute`, so shared operations are done only once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graph between multiple results looks like (and what is shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily).
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk and expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that the exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv` where the data is held on storage accessible to every worker node (e.g., Amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed, except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL, `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed roughly in proportion to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` is composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper/lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames. Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`.
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of the scheduled departure time, we need to convert these integers into `pd.Timedelta` objects and then combine them with the `Date` column.In pandas we would use the `pd.to_timedelta` function and a bit of arithmetic.
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But suppose Dask had not implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.reduction)Here we'll only discuss `map_partitions`, which we can use to implement `to_timedelta` ourselves.
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
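###Markdown
For comparison, `dd.to_timedelta` does exist, so the `map_partitions` workaround above is optional; a short sketch:
###Code
# Built-in dask.dataframe equivalent of the map_partitions approach above.
hours_td = dd.to_timedelta(df.CRSDepTime // 100, unit='h')
minutes_td = dd.to_timedelta(df.CRSDepTime % 100, unit='m')
(df.Date + hours_td + minutes_td).head()
###Output
_____no_output_____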
###Markdown
Exercise: rewrite the above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sorting).Additionally, some important operations like ``set_index`` work, but are slower than in Pandas because they include substantial shuffling of data and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame` API. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
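###Markdown
For reference, the other two options listed above would look roughly like the sketch below; it is illustrative only (the sample size is an arbitrary choice, and `assume_missing` would not fix this particular mismatch), so the calls are left commented out.
###Code
# Sketch only, not executed here:
# Larger dtype-inference sample (in bytes):
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  sample=10_000_000)
# Treat int-inferred columns as floats so missing values are allowed:
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  assume_missing=True)
###Output
_____no_output_____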
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
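###Markdown
As an aside on note 2 above: when the parsed data fits in the workers' memory, persisting the collection avoids re-reading the CSV files on every computation. A minimal sketch (`df_persisted` is a name introduced here):
###Code
# Hold the parsed partitions in memory so repeated computations skip CSV parsing.
df_persisted = df.persist()
%time df_persisted.DepDelay.max().compute()
###Output
_____no_output_____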
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, lets compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try by passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
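###Markdown
The notes below discuss partitions and divisions; here is a quick, illustrative look at how the flights DataFrame is split up (the exact numbers depend on how many CSV files were read):
###Code
# Each input CSV file became one pandas-backed partition of the Dask DataFrame.
print(df.npartitions)        # number of underlying pandas DataFrames
print(df.known_divisions)    # False here: read_csv did not set an index
df.get_partition(0).head()   # peek at the first underlying chunk
###Output
_____no_output_____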
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
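###Markdown
As noted at the start of this section, `dd.to_timedelta` does exist, so the same result can be obtained without `map_partitions`. A minimal sketch:
###Code
# Same computation using the built-in dask.dataframe helper instead of map_partitions.
hours_td = dd.to_timedelta(df.CRSDepTime // 100, unit='h')
minutes_td = dd.to_timedelta(df.CRSDepTime % 100, unit='m')
(df.Date + hours_td + minutes_td).head()
###Output
_____no_output_____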
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
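###Markdown
The two alternatives listed above would look roughly like the commented-out sketch below (the sample size is an arbitrary choice, and `assume_missing` would not actually fix this particular mismatch):
###Code
# Sketch only, not executed:
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  sample=10_000_000)      # larger dtype-inference sample, in bytes
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  assume_missing=True)    # ints with missing values become floats
###Output
_____no_output_____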
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
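###Markdown
One hedge against note 2 (re-loading all the data on every computation) is to persist the collection in memory when it fits; a sketch using the default scheduler (`df_cached` is a name introduced here):
###Code
# Compute the partitions once and hold them in memory for later reuse.
df_cached = df.persist()
%time df_cached.DepDelay.max().compute()
###Output
_____no_output_____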
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.columns
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try by passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
    pass  # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d flights
###Output
- Downloading NYC Flights dataset... done
- Extracting flight data... done
- Creating json data... done
** Created flights dataset! in 7.80s**
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=16)
###Output
/home/robin/.local/lib/python3.6/site-packages/distributed/node.py:155: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 43459 instead
http_address["port"], self.http_server.port
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
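Nothing has been computed yet, but the metadata is already there; a quick sketch of what you can inspect at this point (the exact numbers depend on how the CSV files were split into blocks):

```python
print(df.npartitions)   # how many pandas pieces make up this dask dataframe
print(df.divisions)     # index boundaries between partitions; unknown (all None) for CSV input
df.dtypes               # dtypes inferred from a sample at the start of the first file
```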
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
CPU times: user 442 ms, sys: 52.4 ms, total: 495 ms
Wall time: 2.6 s
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
len(df)
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
len(df[~df.Cancelled])
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
df[~df.Cancelled].groupby("Origin").Origin.count().compute()
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
df.groupby("Origin").DepDelay.mean().compute()
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
df.groupby("DayOfWeek").DepDelay.mean().compute()
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, lets compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
CPU times: user 539 ms, sys: 94.7 ms, total: 634 ms
Wall time: 3.6 s
###Markdown
But let's try by passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
CPU times: user 302 ms, sys: 68.5 ms, total: 371 ms
Wall time: 1.86 s
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
Help on method map_partitions in module dask.dataframe.core:
map_partitions(func, *args, **kwargs) method of dask.dataframe.core.Series instance
Apply Python function on each DataFrame partition.
Note that the index and divisions are assumed to remain unchanged.
Parameters
----------
func : function
Function applied to each partition.
args, kwargs :
Arguments and keywords to pass to the function. The partition will
be the first argument, and these will be passed *after*. Arguments
and keywords may contain ``Scalar``, ``Delayed`` or regular
python objects. DataFrame-like args (both dask and pandas) will be
repartitioned to align (if necessary) before applying the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty ``pd.DataFrame`` or ``pd.Series`` that matches the dtypes
and column names of the output. This metadata is necessary for
many algorithms in dask dataframe to work. For ease of use, some
alternative inputs are also available. Instead of a ``DataFrame``,
a ``dict`` of ``{name: dtype}`` or iterable of ``(name, dtype)``
can be provided (note that the order of the names should match the
order of the columns). Instead of a series, a tuple of ``(name,
dtype)`` can be used. If not provided, dask will try to infer the
metadata. This may lead to unexpected results, so providing
``meta`` is recommended. For more information, see
``dask.dataframe.utils.make_meta``.
Examples
--------
Given a DataFrame, Series, or Index, such as:
>>> import dask.dataframe as dd
>>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5],
... 'y': [1., 2., 3., 4., 5.]})
>>> ddf = dd.from_pandas(df, npartitions=2)
One can use ``map_partitions`` to apply a function on each partition.
Extra arguments and keywords can optionally be provided, and will be
passed to the function after the partition.
Here we apply a function with arguments and keywords to a DataFrame,
resulting in a Series:
>>> def myadd(df, a, b=1):
... return df.x + df.y + a + b
>>> res = ddf.map_partitions(myadd, 1, b=2)
>>> res.dtype
dtype('float64')
By default, dask tries to infer the output metadata by running your
provided function on some fake data. This works well in many cases, but
can sometimes be expensive, or even fail. To avoid this, you can
manually specify the output metadata with the ``meta`` keyword. This
can be specified in many forms, for more information see
``dask.dataframe.utils.make_meta``.
Here we specify the output is a Series with no name, and dtype
``float64``:
>>> res = ddf.map_partitions(myadd, 1, b=2, meta=(None, 'f8'))
Here we map a function that takes in a DataFrame, and returns a
DataFrame with a new column:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y))
>>> res.dtypes
x int64
y float64
z float64
dtype: object
As before, the output metadata can also be specified manually. This
time we pass in a ``dict``, as the output is a DataFrame:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y),
... meta={'x': 'i8', 'y': 'f8', 'z': 'f8'})
In the case where the metadata doesn't change, you can also pass in
the object itself directly:
>>> res = ddf.map_partitions(lambda df: df.head(), meta=ddf)
Also note that the index and divisions are assumed to remain unchanged.
If the function you're mapping changes the index/divisions, you'll need
to clear them afterwards:
>>> ddf.map_partitions(func).clear_divisions() # doctest: +SKIP
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
# TODO: implement this
hours = df.CRSDepTime // 100
    # inside `map_partitions` each partition is a plain pandas DataFrame, so call pd.to_timedelta directly
    hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
    minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
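As a rough sketch of that `set_index` trade-off (not run here, and the chosen date is just an example): you pay for one shuffle up front, and afterwards lookups on the new index can use the known divisions to touch only the relevant partitions.

```python
# expensive: shuffles the data so it is sorted by Date across partitions
by_date = df.set_index('Date')

# cheap afterwards: division-aware lookup reads only the partitions that can contain this date
jan_first = by_date.loc['1991-01-01'].compute()
```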
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
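To double-check which files matched before moving on, a quick sketch using the `filename` pattern defined a few cells above:

```python
from glob import glob

# the same glob pattern that was passed to dd.read_csv
sorted(glob(filename))
```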
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
CPU times: user 423 ms, sys: 45.1 ms, total: 468 ms
Wall time: 4.57 s
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
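If you do want to keep the parsed data around for repeated queries — and it fits in your workers' memory — one option is `persist`; a small sketch before looking at the task graph below:

```python
# keep the parsed partitions in (distributed) memory so repeated queries skip the CSV parsing
df_in_memory = df.persist()
df_in_memory.DepDelay.max().compute()   # subsequent computations reuse the in-memory partitions
```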
###Code
# notice the parallelism
df.DepDelay.max().visualize()
stop
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
# df.info()
len(df.index)
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df[df.Cancelled != True])
# df.count(np.where(df.Cancelled == True))
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df.groupby('Origin')['Cancelled'].value_counts()
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, lets compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try by passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
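Since a Dask DataFrame is just many pandas DataFrames, moving between the two is straightforward; a small sketch with toy data (the `CRSDepTime` example then continues in the next cell):

```python
import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({'x': range(10)})
ddf = dd.from_pandas(pdf, npartitions=2)   # split a pandas frame into 2 partitions
roundtrip = ddf.compute()                  # concatenate the partitions back into one pandas frame
type(roundtrip)                            # pandas.core.frame.DataFrame
```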
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
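For comparison, the `dd.to_timedelta` route mentioned above would look roughly like this (a sketch; the rest of this section deliberately avoids it and builds the same thing with `map_partitions`):

```python
import dask.dataframe as dd

hours_td = dd.to_timedelta(df.CRSDepTime // 100, unit='h')
minutes_td = dd.to_timedelta(df.CRSDepTime % 100, unit='m')
departure_timestamp = df.Date + hours_td + minutes_td
departure_timestamp.head()
```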
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
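One practical consequence: sorting a whole column just to look at the extremes is expensive, but "top N" style questions have a parallel-friendly alternative — a quick sketch before shutting the client down below:

```python
# parallel-friendly alternative to a full sort when you only want the largest values
df.DepDelay.nlargest(10).compute()
```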
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame` API. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
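For completeness, the other two options would look roughly like this — illustrative sketches only (the variable names are hypothetical and not used later, and as noted above `assume_missing` would not actually help with this particular string/float mismatch):

```python
# Option 2: infer dtypes from a larger sample of each file (value is in bytes)
df_bigger_sample = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
                               parse_dates={'Date': [0, 1, 2]},
                               sample=10_000_000)

# Option 3: treat columns inferred as int as floats, so missing values are allowed
df_assume = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
                        parse_dates={'Date': [0, 1, 2]},
                        assume_missing=True)
```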
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, lets compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try by passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily):
###Code
dask.visualize(mean_delay, std_delay)
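# Optionally pass a filename to write the merged graph to disk for easier zooming, e.g.:
# dask.visualize(mean_delay, std_delay, filename='graph.pdf')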
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
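# (Hedged aside on the data model described above): `npartitions` counts the
# pandas DataFrames backing `df`, and `divisions` holds the index bounds of
# each partition; they are unknown (None) for CSV-backed data like this.
print(df.npartitions)
print(df.divisions)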
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
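As a small, hedged aside on the `set_index` trade-off mentioned above (a sketch, not part of the original exercises): the shuffle is paid once, after which the divisions are known and index-based selections become cheap.
###Code
# Minimal sketch (assumes `df` is the flights DataFrame used above):
# set_index shuffles the data by Date, which is comparatively slow, but
# afterwards the divisions (per-partition index bounds) are known, so .loc
# selections on the index only need to touch the relevant partitions.
by_date = df.set_index('Date')
by_date.divisions[:3]
# e.g. (hypothetical, illustrative date range):
# by_date.loc['1991-01':'1991-03'].DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Finally, shut down the client.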
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup
###Code
from dask.distributed import Client
client = Client()
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
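# (Hedged alternative, shown for illustration only; option 2 from the list
# above): we could instead ask read_csv to inspect more bytes when inferring
# dtypes. Whether this is enough depends on where the problem values appear.
# df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
#                  parse_dates={'Date': [0, 1, 2]},
#                  sample=10_000_000)  # sample ~10 MB instead of the default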
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
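# (Hedged aside): one quick way to see the "one pandas object per partition"
# idea in action is to map `len` over the partitions, which returns the number
# of rows in each underlying pandas DataFrame:
df.map_partitions(len).compute()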
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. What definitely works? * Trivially parallelizable operations (fast): * Elementwise operations: ``df.x + df.y`` * Row-wise selections: ``df[df.x > 0]`` * Loc: ``df.loc[4.0:10.5]`` * Common aggregations: ``df.x.max()`` * Is in: ``df[df.x.isin([1, 2, 3])]`` * Datetime/string accessors: ``df.timestamp.month``* Cleverly parallelizable operations (also fast): * groupby-aggregate (with common aggregations): ``df.groupby(df.x).y.max()`` * value_counts: ``df.x.value_counts`` * Drop duplicates: ``df.x.drop_duplicates()`` * Join on index: ``dd.merge(df1, df2, left_index=True, right_index=True)``* Operations requiring a shuffle (slow-ish, unless on index) * Set index: ``df.set_index(df.x)`` * groupby-apply (with anything): ``df.groupby(df.x).apply(myfunc)`` * Join not on the index: ``pd.merge(df1, df2, on='name')``* Ingest operations * Files: ``dd.read_csv, dd.read_parquet, dd.read_json, dd.read_orc``, etc. * Pandas: ``dd.from_pandas`` * Anything supporting numpy slicing: ``dd.from_array`` * From any set of functions creating sub dataframes via ``dd.from_delayed``. * Dask.bag: ``mybag.to_dataframe(columns=[...])``
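As a quick, hedged illustration of a few of the operations listed above (a minimal sketch using the flights DataFrame `df` from earlier in this notebook):
###Code
# All of these are lazy until computed; a single dask.compute call shares the
# underlying read_csv work between the results.
late_flights = df[df.DepDelay > 0]            # row-wise selection
n_late = late_flights.DepDelay.count()        # common aggregation
worst = df.DepDelay.max()                     # another aggregation
per_origin = df.Origin.value_counts()         # value_counts
dask.compute(n_late, worst, per_origin)
###Output
_____no_output_____
###Markdown
With that, shut down the client.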
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup
###Code
from dask.distributed import Client
client = Client()
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.columns
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
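# (As suggested above) passing a filename saves the merged graph to disk for
# closer inspection, e.g.:
# dask.visualize(mean_delay, std_delay, filename='graph.pdf')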
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
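# (For comparison, a hedged aside): dask.dataframe also ships a to_timedelta,
# so the same timestamps can be built without map_partitions:
(df.Date + dd.to_timedelta(df.CRSDepTime // 100, unit='h') + dd.to_timedelta(df.CRSDepTime % 100, unit='m')).head()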
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. What definitely works? * Trivially parallelizable operations (fast): * Elementwise operations: ``df.x + df.y`` * Row-wise selections: ``df[df.x > 0]`` * Loc: ``df.loc[4.0:10.5]`` * Common aggregations: ``df.x.max()`` * Is in: ``df[df.x.isin([1, 2, 3])]`` * Datetime/string accessors: ``df.timestamp.month``* Cleverly parallelizable operations (also fast): * groupby-aggregate (with common aggregations): ``df.groupby(df.x).y.max()`` * value_counts: ``df.x.value_counts`` * Drop duplicates: ``df.x.drop_duplicates()`` * Join on index: ``dd.merge(df1, df2, left_index=True, right_index=True)``* Operations requiring a shuffle (slow-ish, unless on index) * Set index: ``df.set_index(df.x)`` * groupby-apply (with anything): ``df.groupby(df.x).apply(myfunc)`` * Join not on the index: ``pd.merge(df1, df2, on='name')``* Ingest operations * Files: ``dd.read_csv, dd.read_parquet, dd.read_json, dd.read_orc``, etc. * Pandas: ``dd.from_pandas`` * Anything supporting numpy slicing: ``dd.from_array`` * From any set of functions creating sub dataframes via ``dd.from_delayed``. * Dask.bag: ``mybag.to_dataframe(columns=[...])``
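A brief, hedged sketch of a couple more operations from the list above, using the flights DataFrame `df` from this notebook:
###Code
# isin-style filtering plus a groupby-aggregate, both lazy until .compute().
# (The airport codes are assumed here for illustration: the usual NYC-area
# codes EWR, JFK and LGA.)
nyc = df[df.Origin.isin(['EWR', 'JFK', 'LGA'])]
nyc.groupby('Origin').DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
We can now shut down the client.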
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame` API. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d accounts
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
###Output
_____no_output_____
###Markdown
We load the accounts data.
###Code
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Let's also read the holidays data, which we will use in the exercises
###Code
holidays = dd.read_parquet(os.path.join('data', "holidays"))
holidays.head()
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
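# (Hedged aside, not part of the original flow): note 2 above means every
# .compute() re-reads the CSV files. With the distributed client one could
# keep the parsed partitions in worker memory instead, e.g.:
# df = df.persist()              # load once, hold partitions in memory
# df.DepDelay.max().compute()    # later computations then skip read_csv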
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
6.) What holiday has the worst average departure delay?*Hint*: use [`df.merge`](https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html) to bring holiday information.*Note*: If you have prepared the dataset with `--small` argument or set the `DASK_TUTORIAL_SMALL` environment variable to `True`, you might see only a couple of holidays. This is because the small dataset contains a limited number of rows.
###Code
# Your code here
df.merge(holidays, on=["Date"], how="left").groupby("holiday").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to save the graph to disk so that we can zoom in more easily):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
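# Quick illustration of the blocked structure described above (not part of the original cell):
# one partition was created per matched CSV file, so this should report three partitions.
df.npartitions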
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
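# One possible approach (sketch only; the %load line below pulls in the official solution):
# len(df[~df.Cancelled])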
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.columns
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
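# As noted above, writing the merged graph to a file can make it easier to zoom in:
# dask.visualize(mean_delay, std_delay, filename='graph.pdf')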
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](04-schedulers.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](02-dask-arrays.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. We'll cover this in more detail in [Distributed DataFrames](05-distributed-dataframes-and-efficiency.ipynb).For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
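# Relating this to the data model described above (a small illustration, not in the original
# cell): each input CSV became one partition, and the divisions are unknown (None) because
# no index has been set.
df.npartitions, df.divisions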
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
    pass  # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 1 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)**Main Take-aways**1. Dask DataFrame should be familiar to Pandas users2. The partitioning of dataframes is important for efficient execution Create data
###Code
%run prep.py -d flights
###Output
_____no_output_____
###Markdown
Setup
###Code
from dask.distributed import Client
client = Client(n_workers=4)
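# Optional check (not in the original notebook): displaying the client shows a link to the
# diagnostic dashboard, served on port 8787 by default.
client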
###Output
_____no_output_____
###Markdown
We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs()
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
filename
###Output
_____no_output_____
###Markdown
Filename includes a glob pattern `*`, so all files in the path matching that pattern will be read into the same Dask DataFrame.
###Code
import dask.dataframe as dd
df = dd.read_csv(filename)
df.head()
# load and count number of rows
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and dtypes.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
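# For reference, sketches of the other two options listed above (left commented out):
# df = dd.read_csv(..., sample=1_000_000)     # read a larger sample (in bytes) before inferring dtypes
# df = dd.read_csv(..., assume_missing=True)  # treat columns inferred as int as floats instead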
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df[~df.Cancelled])
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df[~df.Cancelled].groupby('Origin').Origin.count().compute()
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
# Your code here
df.groupby("Origin").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby("DayOfWeek").DepDelay.mean().compute()
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory. This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
pass # TODO: implement this
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
def compute_departure_timestamp(df):
hours = df.CRSDepTime // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df.CRSDepTime % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
return df.Date + hours_timedelta + minutes_timedelta
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Limitations What doesn't work? Dask.dataframe only covers a small but well-used portion of the Pandas API.This limitation is for two reasons:1. The Pandas API is *huge*2. Some operations are genuinely hard to do in parallel (e.g. sort)Additionally, some important operations like ``set_index`` work, but are slowerthan in Pandas because they include substantial shuffling of data, and may write out to disk. Learn More* [DataFrame documentation](https://docs.dask.org/en/latest/dataframe.html)* [DataFrame screencast](https://youtu.be/AT2XtFehFSQ)* [DataFrame API](https://docs.dask.org/en/latest/dataframe-api.html)* [DataFrame examples](https://examples.dask.org/dataframe.html)* [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/)
###Code
client.shutdown()
###Output
_____no_output_____
###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo\"> Dask DataFramesWe finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similiar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`.In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe`Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints.**Related Documentation*** [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html)* [Pandas documentation](http://pandas.pydata.org/)**Main Take-aways**1. Dask.dataframe should be familiar to Pandas users2. The partitioning of dataframes is important for efficient queries Setup We create artifical data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?- Dask investigated the input path and found that there are three matching files - a set of jobs was intelligently created for each chunk - one per original CSV file in this case- each file was loaded into a pandas dataframe, had `len()` applied to it- the subtotals were combined to give you the final grand total. Real DataLet's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened?Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions.In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.- Increase the size of the `sample` keyword (in bytes)- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.In our case we'll use the first option and directly specify the `dtypes` of the offending columns.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe`We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums```pythonmaxes = []for fn in filenames: df = pd.read_csv(fn) maxes.append(df.DepDelay.max()) final_max = max(maxes)```We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with```pythondf = pd.read_csv(filename, dtype=dtype)df.DepDelay.max()````dask.dataframe` lets us write pandas-like code, that operates on larger than memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
CPU times: user 8.58 s, sys: 1.92 s, total: 10.5 s
Wall time: 5.83 s
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
ExercisesIn this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset?If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
len(df)
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken?With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlboolean-indexing).
###Code
# Your code here
len(df.loc[~df['Cancelled']])
%%timeit
(~df['Cancelled']).sum().compute()
%%timeit
len(df.loc[~df['Cancelled']])
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport?*Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
df.loc[~df['Cancelled']].groupby('Origin')['Origin'].count().compute()
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport?Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
###Code
%%time
# Your code here
df.groupby('Origin')['DepDelay'].mean().compute()
###Output
CPU times: user 8.68 s, sys: 1.91 s, total: 10.6 s
Wall time: 5.7 s
###Markdown
That seems slower. Is that expected?
###Code
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
CPU times: user 8.85 s, sys: 2.02 s, total: 10.9 s
Wall time: 6 s
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
df.groupby('DayOfWeek')['DepDelay'].mean().idxmax().compute()
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate ResultsWhen computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared, and only computed once.For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet. They're just the recipe required to get the result.If we compute them with two calls to compute, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
CPU times: user 17.4 s, sys: 4.11 s, total: 21.5 s
Wall time: 11.7 s
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
CPU times: user 8.86 s, sys: 2.05 s, total: 10.9 s
Wall time: 5.91 s
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once:- the calls to `read_csv`- the filter (`df[~df.Cancelled]`)- some of the necessary reductions (`sum`, `count`)To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay, filename='graph.pdf')
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory.During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk that expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas.We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines.Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases:1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible.2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data ModelFor the most part, a Dask DataFrame feels like a pandas DataFrame.So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](05_distributed.ipynb)).This lets Dask do operations in parallel and out of core.In [Dask Arrays](03_array.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions.It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index.We call each chunk a *partition*, and the upper / lower bounds are *divisions*.Dask *can* store information about the divisions. For now, partitions come up when you write custom functions to apply to Dask DataFrames Converting `CRSDepTime` to a timestampThis dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
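# Hypothetical sketch of the cluster-wide ingestion mentioned above (the bucket name is made
# up, and reading from S3 requires the s3fs package):
# df_s3 = dd.read_csv('s3://my-bucket/nycflights/*.csv',
#                     storage_options={'anon': True})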
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column.In pandas we'd do this using the `pd.to_timedelta` function, and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataframeWe could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then?`dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier:- [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_partitions)- [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.map_overlap)- [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.htmldask.dataframe.DataFrame.reduction)Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
Help on method map_partitions in module dask.dataframe.core:
map_partitions(func, *args, **kwargs) method of dask.dataframe.core.Series instance
Apply Python function on each DataFrame partition.
Note that the index and divisions are assumed to remain unchanged.
Parameters
----------
func : function
Function applied to each partition.
args, kwargs :
Arguments and keywords to pass to the function. The partition will
be the first argument, and these will be passed *after*. Arguments
and keywords may contain ``Scalar``, ``Delayed`` or regular
python objects.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty ``pd.DataFrame`` or ``pd.Series`` that matches the dtypes
and column names of the output. This metadata is necessary for
many algorithms in dask dataframe to work. For ease of use, some
alternative inputs are also available. Instead of a ``DataFrame``,
a ``dict`` of ``{name: dtype}`` or iterable of ``(name, dtype)``
can be provided. Instead of a series, a tuple of ``(name, dtype)``
can be used. If not provided, dask will try to infer the metadata.
This may lead to unexpected results, so providing ``meta`` is
recommended. For more information, see
``dask.dataframe.utils.make_meta``.
Examples
--------
Given a DataFrame, Series, or Index, such as:
>>> import dask.dataframe as dd
>>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5],
... 'y': [1., 2., 3., 4., 5.]})
>>> ddf = dd.from_pandas(df, npartitions=2)
One can use ``map_partitions`` to apply a function on each partition.
Extra arguments and keywords can optionally be provided, and will be
passed to the function after the partition.
Here we apply a function with arguments and keywords to a DataFrame,
resulting in a Series:
>>> def myadd(df, a, b=1):
... return df.x + df.y + a + b
>>> res = ddf.map_partitions(myadd, 1, b=2)
>>> res.dtype
dtype('float64')
By default, dask tries to infer the output metadata by running your
provided function on some fake data. This works well in many cases, but
can sometimes be expensive, or even fail. To avoid this, you can
manually specify the output metadata with the ``meta`` keyword. This
can be specified in many forms, for more information see
``dask.dataframe.utils.make_meta``.
Here we specify the output is a Series with no name, and dtype
``float64``:
>>> res = ddf.map_partitions(myadd, 1, b=2, meta=(None, 'f8'))
Here we map a function that takes in a DataFrame, and returns a
DataFrame with a new column:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y))
>>> res.dtypes
x int64
y float64
z float64
dtype: object
As before, the output metadata can also be specified manually. This
time we pass in a ``dict``, as the output is a DataFrame:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y),
... meta={'x': 'i8', 'y': 'f8', 'z': 'f8'})
In the case where the metadata doesn't change, you can also pass in
the object itself directly:
>>> res = ddf.map_partitions(lambda df: df.head(), meta=df)
Also note that the index and divisions are assumed to remain unchanged.
If the function you're mapping changes the index/divisions, you'll need
to clear them afterwards:
>>> ddf.map_partitions(func).clear_divisions() # doctest: +SKIP
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition.In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
%%time
departure_timestamp.head()
###Output
CPU times: user 667 ms, sys: 133 ms, total: 799 ms
Wall time: 792 ms
###Markdown
Exercise: Rewrite above to use a single call to `map_partitions`This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
# TODO
hours = df['CRSDepTime'] // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
minutes = df['CRSDepTime'] % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
return departure_timestamp
departure_timestamp = df.map_partitions(compute_departure_timestamp)
%%time
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____ |
src/jseg/test/data_test.ipynb | ###Markdown
Overlapping Circles
###Code
# Print all the different overlapping situation
shape = (100, 100)
fig,axs = plt.subplots(2,len(OVERLAPPING_CIRCLE_TYPES))
fig.set_size_inches(30,6)
for i,segmentation_type in enumerate(OVERLAPPING_CIRCLE_TYPES):
label, segmentation = overlapping_circles(segmentation_type= segmentation_type,
background_label = 0,
shape = shape)
axs[0,i].imshow(label)
axs[1,i].imshow(segmentation)
axs[0,i].set_title(segmentation_type)
axs[0,i].set_ylabel("Label")
axs[1,i].set_ylabel("Segmentation")
###Output
_____no_output_____ |
bs4_PTT_stock .ipynb | ###Markdown
The following cells review basic bs4 usage
###Code
num = 3000
PTT_stock_URL = 'https://www.ptt.cc/bbs/Stock/index'+str(num)+'.html'
driver = webdriver.PhantomJS(executable_path='/Users/mac/Desktop/Programming/phantomjs-2.1.1-macosx/bin/phantomjs')
driver.get(PTT_stock_URL)
PTT_page = driver.page_source
soup = BeautifulSoup(PTT_page, 'lxml')
###Output
_____no_output_____
###Markdown
The methods in bs4 operate on a single tag object; calling them on a multi-element result set will raise an error
###Code
soup.title
soup.title.string
print(soup.prettify())
a_tags = soup.find_all('a', string=re.compile('三大'))  # only keep links whose text contains the given substring ('三大' here)
a_tags
article_list = soup.find_all(href=re.compile('/bbs/Stock/M'))  # the full list of entries that link to real articles
article_list
soup.find_all(href=re.compile('/bbs/Stock/M'))[0].string  # .string can only be read from a single element
soup.find_all(href=re.compile('/bbs/Stock/M'))[0].attrs  # .attrs can only be read from a single element
soup.find_all(href=re.compile('/bbs/Stock/M'))[0].attrs['href']  # read attrs from a single element, then take 'href' from the dict
div_tags = soup.find_all('div', class_='r-ent')  # note: the keyword argument is class_ (with a trailing underscore)
div_tags
# date
div_tags[0].find('div', class_='date').string
# author
div_tags[0].find('div', class_='author').string
# number of push/boo votes
div_tags[0].find('div', class_='nrec').string
# article title and URL
div_tags[0].find('a', href=re.compile('/bbs/Stock/M'))
# article URL
div_tags[0].find('a', href=re.compile('/bbs/Stock/M')).attrs['href']
# article title
div_tags[0].find('a', href=re.compile('/bbs/Stock/M')).string
div_tags[0].find('a', href=re.compile('/bbs/Stock/M')).string
###Output
_____no_output_____
###Markdown
Summary of the information that can be extracted from one page of the article list
###Code
dict_page_topic = []
dict_page_topic_URL = []
dict_page_author = []
dict_page_date = []
dict_page_good_boo = []
for topic_content in soup.find_all('div', class_='r-ent'):
dict_page_topic.append(topic_content.find('a', href=re.compile('/bbs/Stock/M')).string)
dict_page_topic_URL.append('https://www.ptt.cc'+topic_content.find('a', href=re.compile('/bbs/Stock/M')).attrs['href'])
dict_page_author.append(topic_content.find('div', class_='author').string)
dict_page_date.append(topic_content.find('div', class_='date').string)
dict_page_good_boo.append(topic_content.find('div', class_='nrec').string)
df = pd.DataFrame({
'topic': dict_page_topic,
'topic_URL': dict_page_topic_URL,
'author': dict_page_author,
'date': dict_page_date,
'good_boo': dict_page_good_boo,
})
df
###Output
_____no_output_____
###Markdown
Use a for loop to crawl all articles on index pages 3000-3718 (roughly the last two years), skipping articles that have been deleted
###Code
dict_page_topic = []
dict_page_topic_URL = []
dict_page_author = []
dict_page_date = []
dict_page_good_boo = []
driver = webdriver.PhantomJS(executable_path='/Users/mac/Desktop/Programming/phantomjs-2.1.1-macosx/bin/phantomjs')
for i in range(3000,3718):
PTT_stock_URL = 'https://www.ptt.cc/bbs/Stock/index'+str(i)+'.html'
driver.get(PTT_stock_URL)
PTT_page = driver.page_source
soup = BeautifulSoup(PTT_page, 'lxml')
for topic_content in soup.find_all('div', class_='r-ent'):
try:
topic_content.find('a', href=re.compile('/bbs/Stock/M')).string
dict_page_topic.append(topic_content.find('a', href=re.compile('/bbs/Stock/M')).string)
dict_page_topic_URL.append('https://www.ptt.cc'+topic_content.find('a', href=re.compile('/bbs/Stock/M')).attrs['href'])
dict_page_author.append(topic_content.find('div', class_='author').string)
dict_page_date.append(topic_content.find('div', class_='date').string)
dict_page_good_boo.append(topic_content.find('div', class_='nrec').string)
        except AttributeError:  # raised when an article has been deleted and its topic link cannot be found
continue
if i%10 == 0:
print('finished page:', i)
print('Finished PTT scarping!')
df = pd.DataFrame({
'topic': dict_page_topic,
'topic_URL': dict_page_topic_URL,
'author': dict_page_author,
'date': dict_page_date,
'good_boo': dict_page_good_boo,
})
df.tail()
df.to_csv('PTT_stock_p3000_p3718.csv')
df = pd.read_csv('PTT_stock_p3000_p3718.csv', index_col=0)
df
###Output
_____no_output_____
###Markdown
Push comments from the last n days of intraday chat ('盤中閒聊') threads
###Code
n = 60  # push comments from the intraday chat threads of the most recent n days
df[df['topic'].str.contains('盤中閒聊')][-n:]
daychat_URL = df[df['topic'].str.contains('盤中閒聊')]['topic_URL'][-n:].values
daychat_URL
driver.get(daychat_URL[0])
daychat = driver.page_source
soup = BeautifulSoup(daychat, 'lxml')
soup.find_all('div', class_='push')[0]
soup.find_all('div', class_='push')[0].find('span', class_='f3 push-content').string
driver.quit()
import time
push_type = []
push_ID = []
push_content = []
push_time = []
count = 0
start = time.time()
driver = webdriver.PhantomJS(executable_path='/Users/mac/Desktop/Programming/phantomjs-2.1.1-macosx/bin/phantomjs')
for url in daychat_URL:
driver.get(url)
daychat = driver.page_source
soup = BeautifulSoup(daychat,'lxml')
for topic_content in soup.find_all('div', class_='push'):
        try:  # push (upvote) comments
push_type.append(topic_content.find('span', class_='hl push-tag').string)
push_ID.append(topic_content.find('span', class_='f3 hl push-userid').string)
push_content.append(topic_content.find('span', class_='f3 push-content').string)
push_time.append(topic_content.find('span', class_='push-ipdatetime').string)
        except AttributeError:  # fall back to boo (downvote) comments; the nested try/except handles each case in turn
try:
push_type.append(topic_content.find('span', class_='f1 hl push-tag').string)
push_ID.append(topic_content.find('span', class_='f3 hl push-userid').string)
push_content.append(topic_content.find('span', class_='f3 push-content').string)
push_time.append(topic_content.find('span', class_='push-ipdatetime').string)
            except AttributeError:  # if even the boo-comment markup cannot be parsed, just skip this entry
                print('Could not parse this entry; skipping it. (PTT warning: <div class="push center warning-box">檔案過大!部分文章無法顯示</div>)')
continue
count += 1
if count % 10 == 0:
print('finished chats:', count, 'time used(sec):', time.time()-start)
print('finished chat push scraping')
driver.quit()
df_daychat_push = pd.DataFrame({
'type': push_type,
'ID': push_ID,
'content': push_content,
'time': push_time
})
df_daychat_push
df_daychat_push.to_csv('daychat_push_60d_1006.csv')
df_daychat_push = pd.read_csv('daychat_push_60d_1006.csv', index_col=0)
df_daychat_push['ID'].value_counts()
df_daychat_push['type'].value_counts()
len(set(df_daychat_push['ID'].values))
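# Equivalent, more idiomatic pandas for counting distinct commenters (optional alternative):
df_daychat_push['ID'].nunique()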
df_daychat_push.dropna(inplace=True) #有Nan會有method不好用
df_daychat_push.to_csv('daychat_push_60d_1006.csv')
df_daychat_push.shape
df_daychat_push[df_daychat_push['content'].str.contains('崩')]
df_daychat_push[df_daychat_push['content'].str.contains('多')]
df_daychat_push[df_daychat_push['content'].str.contains('可成')]
###Output
_____no_output_____
###Markdown
For comparison: the ^TWII benchmark index, 2015/10~2017/10
###Code
import numpy as np
from pandas_datareader import data as web
TWII = web.DataReader(name='^TWII', data_source='yahoo', start='2015-10-01')
TWII.to_csv('TWII_20151001_20171006.csv')
TWII['up_1_down_0'] = np.where(TWII['Close']-TWII['Close'].shift(1)>0 , 1 , 0)
TWII['Pct_change'] = TWII['Close'].pct_change()*100 #percentage
TWII['Volatility level'] = np.where(np.abs(TWII['Pct_change'])>0.8 , 'high' , 'low')  # use a daily move of more than 0.8% as the threshold
TWII
TWII.to_csv('TWII_20151001_20171006.csv')
###Output
_____no_output_____ |
scripts/python-scripts/heatmaps/0002_getting-heatmap-effector_new.ipynb | ###Markdown
Visualizing CNN Layers
###Code
from tensorflow.keras.models import load_model
from matplotlib import pyplot
import numpy as np
import pandas as pd
from plotnine import *
###Output
_____no_output_____
###Markdown
Load model
###Code
# Load the model
model = load_model("../../results/model_ensemble/models/weights/cnn_lstm_30-0.41.hdf5")
model.summary()
###Output
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 4034, 20)] 0
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, 4034, 32) 640 input_1[0][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, 4032, 32) 1920 input_1[0][0]
__________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, 4030, 32) 3200 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 4034, 32) 128 conv1d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 4032, 32) 128 conv1d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 4030, 32) 128 conv1d_3[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 4034, 32) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 4032, 32) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 4030, 32) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 12096, 32) 0 activation_1[0][0]
activation_2[0][0]
activation_3[0][0]
__________________________________________________________________________________________________
conv1d_4 (Conv1D) (None, 12094, 64) 6208 concatenate_1[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 16) 5184 conv1d_4[0][0]
__________________________________________________________________________________________________
lstm_2 (LSTM) (None, 16) 5184 conv1d_4[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 32) 0 lstm_1[0][0]
lstm_2[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 32) 1056 concatenate_2[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 32) 0 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 33 dropout_1[0][0]
==================================================================================================
Total params: 23,809
Trainable params: 23,617
Non-trainable params: 192
__________________________________________________________________________________________________
###Markdown
Transform the CNN model's layers into data frames
###Code
def get_filter_data(layer_num):
# retrieve weights from the hidden layer
filters = model.layers[layer_num].get_weights()
# reshape layers
filters = filters[0]
# normalize filter values to 0-1 so we can visualize them
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)
# Limits for loop iterations
num_x, num_y, num_filters = np.shape(filters)
# Create data frame
filters_df = pd.DataFrame({
"x" : [],
"y" : [],
"filter_num" : [],
"value" : []
})
# Loop to save filters data into df
for x in range(num_x):
for y in range(num_y):
for filt in range(num_filters):
filters_df.loc[len(filters_df)] = np.array([x + 1, y + 1, filt + 1, filters[x,y,filt]])
# Make x, y, filter columns integers
filters_df = (
filters_df
.astype({
"x": "int64",
"y": "int64",
"filter_num": "int64"
})
)
return(filters_df)
###Output
_____no_output_____
###Markdown
Visualize each of the CNN model's layers
###Code
def plot_filters(conv_df, conv_title):
filters_fig = (
ggplot(conv_df) +
aes(x = "y", y = "x", fill = "value") +
geom_tile() +
scale_x_continuous(expand = [0,0,0,0], breaks = None) +
scale_y_continuous(expand = [0,0,0,0], breaks = None) +
facet_wrap("filter_num", ncol = 4) +
# scale_fill_manual(limits = [0,1]) +
coord_fixed() +
labs(
title = "Filters for " + conv_title + " layer",
x = "x Dimension",
y = "y Dimension"
) +
theme_light() +
theme(
figure_size = [10,6],
panel_grid_major = element_blank(),
panel_grid_minor = element_blank(),
strip_text = element_text(colour = 'black', size = 10),
strip_background = element_rect(colour = None, fill = "#BDBDBD")
)
)
return(filters_fig)
###Output
_____no_output_____
###Markdown
Run functions
###Code
conv1d_1_df = get_filter_data(1)
conv1d_2_df = get_filter_data(2)
conv1d_3_df = get_filter_data(3)
conv1d_4_df = get_filter_data(11)
plot_filters(conv1d_1_df, "Conv1D 1")
plot_filters(conv1d_2_df, "Conv1D 2")
plot_filters(conv1d_3_df, "Conv1D 3")
plot_filters(conv1d_4_df, "Conv1D 4")
###Output
_____no_output_____
###Markdown
Visualizing predictions on CNN layers
###Code
from tensorflow.keras.models import Model
from numpy import expand_dims
###Output
_____no_output_____
###Markdown
Load data
###Code
# Get the reprocessed data from .npy file
x_train = np.load('../r-scripts/getting-data-current/data-sets/x_train.npy')
y_train = np.load('../r-scripts/getting-data-current/data-sets/y_train.npy')
x_train.shape
# x_dev = np.load('../r-scripts/getting-data-current/data-sets/x_val.npy')
# y_dev = np.load('../r-scripts/getting-data-current/data-sets/y_val.npy')
# x_test = np.load('../r-scripts/getting-data-current/data-sets/x_test.npy')
# y_test = np.load('../r-scripts/getting-data-current/data-sets/y_test.npy')
###Output
_____no_output_____
###Markdown
Apply model and transform data into data frame
###Code
def get_partial_output_data(num_layer, sequences, seq_length, show_filters):
# Get feature maps
data_for_checking = x_train[sequences, :, :]
model_partial = Model(inputs = model.inputs, outputs = model.layers[num_layer].output)
feature_map = model_partial.predict(data_for_checking)
feature_map = feature_map[:, :seq_length, :]
# normalize filter values to 0-1 so we can visualize them
f_min, f_max = feature_map.min(), feature_map.max()
feature_map = (feature_map - f_min) / (f_max - f_min)
# Limits for loop iterations
num_x, num_y, num_filters = np.shape(feature_map)
# Create data frame
feature_map_df = pd.DataFrame({
"x" : [],
"y" : [],
"filter_num" : [],
"value" : []
})
# Loop to save filters data into df
for x in range(num_x):
for y in range(num_y):
for filt in range(num_filters):
feature_map_df.loc[len(feature_map_df)] = np.array([x + 1, y + 1, filt + 1, feature_map[x, y, filt]])
# Make x, y, filter columns integers
feature_map_df = (
feature_map_df
.astype({
"x": "int64",
"y": "int64",
"filter_num": "int64"
})
.query("filter_num in @show_filters")
)
return(feature_map_df)
###Output
_____no_output_____
###Markdown
Visualize layer outputs
###Code
def plot_layer_outputs(conv_df, conv_title):
outputs_fig = (
ggplot(conv_df) +
aes(x = "y", y = "x", fill = "value") +
geom_tile() +
# scale_x_continuous(expand = [0,0,0,0], breaks = np.arange(0, 4034, 1)) +
scale_x_continuous(expand = [0,0,0,0], breaks = None) +
scale_y_continuous(expand = [0,0,0,0], breaks = np.arange(0, 462, 1)) +
facet_wrap("filter_num", ncol = 4) +
# scale_fill_manual(limits = [0,1]) +
coord_fixed() +
labs(
title = "Outputs for " + conv_title + " layer",
x = "Sequence length",
y = "Sequence"
) +
theme_light() +
theme(
figure_size = [10,6],
panel_grid_major = element_blank(),
panel_grid_minor = element_blank(),
strip_text = element_text(colour = 'black', size = 10),
strip_background = element_rect(colour = None, fill = "#BDBDBD")
)
)
return(outputs_fig)
###Output
_____no_output_____
###Markdown
Run functions
###Code
# Getting the visualisation from Conv1d_1
feature_map_conv_layer1 = get_partial_output_data(
num_layer = 1,
sequences = range(0,1),
seq_length = 20,
show_filters = range(1, 32 + 1) # +1 so the last one can be shown
)
plot_layer_outputs(feature_map_conv_layer1, "Conv1D 1")
# Getting the visualisation from Conv1d_2
feature_map_conv_layer2 = get_partial_output_data(
num_layer = 2,
sequences = range(0,1),
seq_length = 20,
show_filters = range(1, 16 + 1) # +1 so the last one can be shown
)
plot_layer_outputs(feature_map_conv_layer2, "Conv1D 2")
# Getting the visualisation from concatenation
feature_map_concatenation = get_partial_output_data(
num_layer = 10,
sequences = range(0,1),
seq_length = 20,
show_filters = range(1, 16 + 1) # +1 so the last one can be shown
)
plot_layer_outputs(feature_map_concatenation, "Concatenation layer")
# Getting the visualisation from Conv1d_4
feature_map = get_partial_output_data(
num_layer = 11,
sequences = range(0,1),
seq_length = 20,
show_filters = range(1, 16 + 1) # +1 so the last one can be shown
)
plot_layer_outputs(feature_map, "Conv1D 4")
###Output
_____no_output_____ |
Real Life Data.ipynb | ###Markdown
![GMITLOGO](https://www.pchei.ie/images/college_crests/gmit_crest.jpg) Programming for Data Analysis - Project By Simona Vasiliauskaite G00263352**Main Objective**Create a data set by simulating a real-world phenomenon of your choosing.**Tasks:*** Choose a real-world phenomenon that can be measured and collect at least one-hundred data points across at least four different variables.* Investigate the types of variables involved, their likely distributions, and their relationships with each other.* Synthesise/simulate a data set as closely matching their properties as possible.* Detail research and implement the simulation in a Jupyter notebook. 1. Chosen Real Life Phenomenon I have chosen to analyse social media usage across Ireland based on population's age, gender and mobile usage and how it may impact their buying behaviour online. It goes without saying that we live in an age where technology proliferates and prevails. It shapes the way we work and live. And sad as it may be, it also dictates how we think and act too. According to the world at large, we're a bunch of digital obsessives that live through the lens of our smartphones, addicted to scrolling, refreshing and then scrolling some more. Below are some digital and social media statistics that tell us how Irish people act online in 2018. (1)**What accounts are most popular?*** 65% have a Facebook account, 69% of whom access it daily * 27% have a Linkedin account, 18% of whom access it daily * 32% have an Instagram account, 51% of whom access it daily* 29% have a Twitter account, 37% of whom access it daily* 40% of us now use WhatsApp (2)![Social Media](http://i2.wp.com/communicationshub.ie/wp-content/uploads/2018/02/account-ownership-nov17.jpg)**It also influences our purchasing decisions**Social media is the most influential tool for Irish consumers when finding inspiration for purchases, particularly for younger age groups. Millennials and Generation Z consumers are more likely to make purchases when retailers actively engage with this age group on social media. Irish consumers cited social media (38%) as the most influential channel, along with individual retailer websites, for inspiring purchases. Social media ranked higher than other online media channels, such as blogs and digital press and magazines. With 90% of 18-24 year olds using social media to inspire purchases, this is a key demographic group in terms of encouraging social media engagement.(4)* Finding information about goods and services (86%) was the most common activity carried out on the internet by Irish individuals (CSO, 2017) * Over a quarter of us have purchased online six or more times in the last three months (CSO, 2017)* Ireland is ranked ninth in the EU when it comes to online shopping, up from thirteenth the year before (European Commission, 2017)* 58% of large enterprises in Ireland experienced e-commerce sales in the last year – accounting for 43% of their sales in total (CSO, 2017)* Clothes or sports goods were the most popular online purchase in 2017, purchased by 44% of individuals.**Shopping on Mobile Phones**The smartphone has become intertwined with our daily lives, with ninety-eight percent of smartphone owners using their devices on a daily basis. Smartphone capabilities and utilities are becoming ever greater and usage continues to evolve. Websites must be mobile-enabled as mobile devices are becoming a key purchasing tool when shopping online. 
Mobile payments are set to double by 2023 so retailers need to ensure that they have smooth, effective mobile payment options in-store.(5) * 90% of Irish adults own a smartphone * The number of 65+ year olds with access to an e-reader has increased from 30% to 45%.* Access to tablets among the 65+ market has grown from 57% in 2017 to 70% in 2018.* Irish adults look at their mobile phone 57 times a day.* 16% admit to looking at their phone more than 100 times a day against a European average of 8%.* Just under one in three of us check our phone within five minutes of going to sleep.* More than half of us think we use our phone too much – nearly 60% of us think our partners do.* 68% of 18-24 year olds watch live videos or stories on social media on a daily basis. (5)**Age Group**Instagram took the top spot for people aged between 18 and 34. Facebook reclaimed the top spot for people aged 35 to 54, where Instagram dropped to third place behind Twitter. (6)
###Code
# Import Python libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Variables involved, their likely distributions, and their relationships with each other. I will investigate 4 variables across a dataset of 3.3 million social media users in Ireland.**Variables:**1. User's Age: 15-65 2. Sex: Female or Male3. Phone usage: number of times the phone is checked per day4. Apps Downloaded: Facebook, Instagram, Pinterest, Twitter, WhatsApp (Min 1 - Max 5) 1. Variable - Age DistributionLet's look at the age distribution among 3.3 million social media users in Ireland. Statistics showed that the average user's age is around 34, the minimum age for account creation is 15, and people up to age 65+ were active users. Age is a big factor in mobile usage. Millennials and Generation Z consumers are more likely to make purchases when retailers actively engage with this age group on social media.
###Code
np.random.seed(12345) # I did a seed reset so the numbers stay the same every time
age = np.random.triangular(15, 34, 65, 3300000).round(0) # generate values
age
# Create plot
plt.hist(np.random.triangular(15, 34, 65, 3300000), bins=10,
normed=True, edgecolor='k')
plt.ylabel('Frequency') # Label y axis
plt.xlabel('User Age') # Label x axis
plt.title('Social Media User Age') # Add title
plt.show() # Show Plot
###Output
_____no_output_____
###Markdown
2. Variable - Gender Distribution Gender is also a huge factor to consider when analysing purchasing behaviour. How social media content is consumed heavily depends on the type of person/gender.
###Code
# Male = 1
# Female = 2
np.random.seed(12345) # I did a seed reset so the numbers stay the same every time
gender = np.random.randint(1, 3, 3300000)  # values are 1 (male) or 2 (female); the upper bound 3 is exclusive. 3.3 million users
gender
# Plot data in a histrogram
plt.hist(gender, bins=3, edgecolor='k')
plt.ylabel("Users") # Label y axis
plt.xlabel("Gender - Male or Female") # Label x axis
plt.show() # Show plot
###Output
_____no_output_____
###Markdown
We can see from the histogram above that the gender distribution is even. 3. Variable - Phone UsageLet's see the distribution of how often a user checks their phone each day. Statistics taken from above show that on average a person checks their phone 57 times a day. The hypothesis here would be to see whether a person who is exposed to more social media content on a daily basis purchases more online than a person who spends less time online.
###Code
np.random.seed(12345) # I did a seed reset so the numbers stay the same every time
usage = np.random.normal(57, 3, 3300000).round(0)  # generate values
usage
min(usage) # Check for minimum amount of phone usage per day
max(usage) # Check for maximum amount of times a phone is used
plt.hist(usage,bins=73, edgecolor='k') # Add sample count, amount of bins, and edgecolour of the bins
plt.ylabel('Users') # Label y axis
plt.xlabel('Times') # Label x axis
plt.title('Times Looked at a Phone Daily') # Add title
plt.show() # Show Plot
###Output
_____no_output_____
###Markdown
4. Variable - Social Media AppsI would also like to see the possible distribution of the number of social media platforms a person may use or download. The statistics above state that a person has at least one social media app downloaded and that the most popular ones are Facebook, Instagram, Twitter, WhatsApp and Pinterest, so I will assume a person has at most 5 social media accounts on their phone. Online businesses use different applications to promote their products and services, so a person who has more apps downloaded is exposed to more advertisements and may be influenced to purchase more.
###Code
np.random.seed(12345) # I did a seed reset so the numbers stay the same every time
social = np.random.uniform(1, 5, 3300000).round(0) # generate values
social
plt.hist(social,bins=10, edgecolor='k') # Add sample count, amount of bins, and edgecolour of the bins
plt.ylabel('Users') # Label y axis
plt.xlabel('Social Apps') # Label x axis
plt.title('Apps Downloaded') # Add title
plt.show() # Show Plot
min(social)
max(social)
# Create data
height = [65, 32, 40, 27, 21]
bars = ('Facebook', 'Instagram', 'WhatsApp', 'LinkedIn', 'Pinterest')
# Create bars
y_pos = np.arange(len(bars))
plt.bar(y_pos, height)
# Create names on the x-axis
plt.xticks(y_pos, bars)
plt.xlabel('Social Media Apps', fontweight='bold', color = 'black')
plt.ylabel("Percentage")
plt.title("Social Media Apps Downloaded")
plt.show()
###Output
_____no_output_____
###Markdown
3. Data Simulation
###Code
# Creating a database with information gathered
np.random.seed(1234) # Added seed reset so the numbers stay the same every time
NewData = pd.DataFrame({'Age':age.round(0), 'Gender':np.random.randint(1, 3, 3300000), 'Phone Daily Usage':np.random.normal(57, 3, 3300000).round(0),'Social Media Apps':np.random.uniform(1, 5, 3300000).round()})
NewData['Gender'].replace({1:'Male', 2:'Female'}, inplace=True) # Replacing numbers with strings accordingly
NewData # Print new data frame
NewData.shape
NewData.head()
NewData.tail()
NewData["Age"].describe().round(0) # check for descriptive statistics of age variable
NewData["Phone Daily Usage"].describe().round(0) # check for descriptive statistics of daily phone usage variable
# Extact data with Males who have downloaded 5 apps and their daily usage
array = ['Male']
Male5 = NewData.loc[(NewData['Social Media Apps'] == 5) & NewData['Gender'].isin(array)]
Male5 # Print Data
# Extract data with Males who have downloaded 1 app and their daily phone usage
array = ['Male']
Male1 = NewData.loc[(NewData['Social Media Apps'] == 1) & NewData['Gender'].isin(array)]
Male1 # Print
# Extract data with Females who have downloaded 1 application
array = ['Female']
Female1 = NewData.loc[(NewData['Social Media Apps'] == 1) & NewData['Gender'].isin(array)]
Female1
# Extract data with Females who have downloaded 5 applications
array = ['Female']
Female5= NewData.loc[(NewData['Social Media Apps'] == 5) & NewData['Gender'].isin(array)]
Female5
sns.distplot( Male5["Phone Daily Usage"] , color="skyblue", label="Male")
sns.distplot( Female5["Phone Daily Usage"] , color="red", label="Female")
plt.legend()
plt.ylabel("Frequency")
plt.show()
sns.distplot( Male1["Phone Daily Usage"] , color="skyblue", label="Male")
sns.distplot( Female1["Phone Daily Usage"] , color="red", label="Female")
plt.legend()
plt.ylabel("Frequency")
plt.show()
###Output
_____no_output_____ |
chapter_optimization/adadelta.ipynb | ###Markdown
Adadelta Besides RMSProp, another commonly used optimization algorithm, Adadelta, also addresses the problem that Adagrad may have difficulty finding a useful solution in the later stages of iteration [1]. Interestingly, Adadelta has no learning-rate hyperparameter. Algorithm Like RMSProp, Adadelta uses the variable $\boldsymbol{s}_t$, an exponentially weighted moving average of the element-wise square of the mini-batch stochastic gradient $\boldsymbol{g}_t$. At time step 0, all of its elements are initialized to 0. Given the hyperparameter $0 \leq \rho < 1$, at time step $t > 0$ it computes, just as RMSProp does,$$\boldsymbol{s}_t \leftarrow \rho \boldsymbol{s}_{t-1} + (1 - \rho) \boldsymbol{g}_t \odot \boldsymbol{g}_t. $$Unlike RMSProp, Adadelta also maintains an additional state variable $\Delta\boldsymbol{x}_t$, whose elements are likewise initialized to 0 at time step 0. We use $\Delta\boldsymbol{x}_{t-1}$ to compute the change of the independent variable:$$ \boldsymbol{g}_t' \leftarrow \sqrt{\frac{\Delta\boldsymbol{x}_{t-1} + \epsilon}{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t, $$where $\epsilon$ is a constant added to maintain numerical stability, for example $10^{-5}$. Next, the independent variable is updated:$$\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \boldsymbol{g}'_t. $$Finally, we use $\Delta\boldsymbol{x}$ to record the exponentially weighted moving average of the element-wise square of the change $\boldsymbol{g}'$:$$\Delta\boldsymbol{x}_t \leftarrow \rho \Delta\boldsymbol{x}_{t-1} + (1 - \rho) \boldsymbol{g}'_t \odot \boldsymbol{g}'_t. $$As we can see, ignoring the effect of $\epsilon$, Adadelta differs from RMSProp in that it uses $\sqrt{\Delta\boldsymbol{x}_{t-1}}$ in place of the hyperparameter $\eta$. Implementation from Scratch Adadelta needs to maintain two state variables, $\boldsymbol{s}_t$ and $\Delta\boldsymbol{x}_t$, for each independent variable. We implement Adadelta following the formulas in the algorithm.
###Code
%matplotlib inline
import d2lzh as d2l
from mxnet import nd
features, labels = d2l.get_data_ch7()
def init_adadelta_states():
s_w, s_b = nd.zeros((features.shape[1], 1)), nd.zeros(1)
delta_w, delta_b = nd.zeros((features.shape[1], 1)), nd.zeros(1)
return ((s_w, delta_w), (s_b, delta_b))
def adadelta(params, states, hyperparams):
rho, eps = hyperparams['rho'], 1e-5
for p, (s, delta) in zip(params, states):
s[:] = rho * s + (1 - rho) * p.grad.square()
g = ((delta + eps).sqrt() / (s + eps).sqrt()) * p.grad
p[:] -= g
delta[:] = rho * delta + (1 - rho) * g * g
###Output
_____no_output_____
###Markdown
Train the model with the hyperparameter $\rho=0.9$.
###Code
d2l.train_ch7(adadelta, init_adadelta_states(), {'rho': 0.9}, features,
labels)
###Output
loss: 0.243955, 0.501521 sec per epoch
###Markdown
Concise Implementation Using a `Trainer` instance with the algorithm name "adadelta", we can use the Adadelta algorithm in Gluon. Its hyperparameter can be specified via `rho`.
###Code
d2l.train_gluon_ch7('adadelta', {'rho': 0.9}, features, labels)
###Output
loss: 0.243461, 0.403651 sec per epoch
###Markdown
AdaDelta Besides RMSProp, another commonly used optimization algorithm, AdaDelta, also addresses the problem that AdaGrad may have difficulty finding a useful solution in the later stages of iteration [1]. Interestingly, the AdaDelta algorithm has no learning-rate hyperparameter. Algorithm Like RMSProp, AdaDelta uses the variable $\boldsymbol{s}_t$, an exponentially weighted moving average of the element-wise square of the mini-batch stochastic gradient $\boldsymbol{g}_t$. At time step 0, all of its elements are initialized to 0. Given the hyperparameter $0 \leq \rho < 1$, at time step $t > 0$ it computes, just as RMSProp does,$$\boldsymbol{s}_t \leftarrow \rho \boldsymbol{s}_{t-1} + (1 - \rho) \boldsymbol{g}_t \odot \boldsymbol{g}_t. $$Unlike RMSProp, AdaDelta also maintains an additional state variable $\Delta\boldsymbol{x}_t$, whose elements are likewise initialized to 0 at time step 0. We use $\Delta\boldsymbol{x}_{t-1}$ to compute the change of the independent variable:$$ \boldsymbol{g}_t' \leftarrow \sqrt{\frac{\Delta\boldsymbol{x}_{t-1} + \epsilon}{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t, $$where $\epsilon$ is a constant added to maintain numerical stability, such as $10^{-5}$. Then the independent variable is updated:$$\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \boldsymbol{g}'_t. $$Finally, we use $\Delta\boldsymbol{x}_t$ to record the exponentially weighted moving average of the element-wise square of the change $\boldsymbol{g}'_t$:$$\Delta\boldsymbol{x}_t \leftarrow \rho \Delta\boldsymbol{x}_{t-1} + (1 - \rho) \boldsymbol{g}'_t \odot \boldsymbol{g}'_t. $$As we can see, ignoring the effect of $\epsilon$, AdaDelta differs from RMSProp in that it uses $\sqrt{\Delta\boldsymbol{x}_{t-1}}$ in place of the hyperparameter $\eta$. Implementation from Scratch The AdaDelta algorithm needs to maintain two state variables, namely $\boldsymbol{s}_t$ and $\Delta\boldsymbol{x}_t$, for each independent variable. We implement the algorithm following the formulas above.
###Code
%matplotlib inline
import d2lzh as d2l
from mxnet import nd
features, labels = d2l.get_data_ch7()
def init_adadelta_states():
s_w, s_b = nd.zeros((features.shape[1], 1)), nd.zeros(1)
delta_w, delta_b = nd.zeros((features.shape[1], 1)), nd.zeros(1)
return ((s_w, delta_w), (s_b, delta_b))
def adadelta(params, states, hyperparams):
rho, eps = hyperparams['rho'], 1e-5
for p, (s, delta) in zip(params, states):
s[:] = rho * s + (1 - rho) * p.grad.square()
g = ((delta + eps).sqrt() / (s + eps).sqrt()) * p.grad
p[:] -= g
delta[:] = rho * delta + (1 - rho) * g * g
###Output
_____no_output_____
###Markdown
Train the model with the hyperparameter $\rho=0.9$.
###Code
d2l.train_ch7(adadelta, init_adadelta_states(), {'rho': 0.9}, features,
labels)
###Output
loss: 0.242859, 0.365652 sec per epoch
###Markdown
Concise Implementation Using a `Trainer` instance named "adadelta", we can use the AdaDelta algorithm provided by Gluon. Its hyperparameter can be specified via `rho`.
###Code
d2l.train_gluon_ch7('adadelta', {'rho': 0.9}, features, labels)
###Output
loss: 0.243492, 0.405834 sec per epoch
|
python/d2l-en/pytorch/chapter_deep-learning-computation/custom-layer.ipynb | ###Markdown
Custom LayersOne factor behind deep learning's success is the availability of a wide range of layers that can be composed in creative ways to design architectures suitable for a wide variety of tasks. For instance, researchers have invented layers specifically for handling images, text, looping over sequential data, and performing dynamic programming. Sooner or later, you will encounter or invent a layer that does not exist yet in the deep learning framework. In these cases, you must build a custom layer. In this section, we show you how. (**Layers without Parameters**)To start, we construct a custom layer that does not have any parameters of its own. This should look familiar if you recall our introduction to blocks in :numref:`sec_model_construction`. The following `CenteredLayer` class simply subtracts the mean from its input. To build it, we simply need to inherit from the base layer class and implement the forward propagation function.
###Code
import torch
from torch import nn
from torch.nn import functional as F
class CenteredLayer(nn.Module):
def __init__(self):
super().__init__()
def forward(self, X):
return X - X.mean()
###Output
_____no_output_____
###Markdown
Let us verify that our layer works as intended by feeding some data through it.
###Code
layer = CenteredLayer()
layer(torch.FloatTensor([1, 2, 3, 4, 5]))
###Output
_____no_output_____
###Markdown
We can now [**incorporate our layer as a component in constructing more complex models.**]
###Code
net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())
###Output
_____no_output_____
###Markdown
As an extra sanity check, we can send random data through the network and check that the mean is in fact 0. Because we are dealing with floating point numbers, we may still see a very small nonzero number due to quantization.
###Code
Y = net(torch.rand(4, 8))
Y.mean()
###Output
_____no_output_____
###Markdown
[**Layers with Parameters**]Now that we know how to define simple layers, let us move on to defining layers with parameters that can be adjusted through training. We can use built-in functions to create parameters, which provide some basic housekeeping functionality. In particular, they govern access, initialization, sharing, saving, and loading model parameters. This way, among other benefits, we will not need to write custom serialization routines for every custom layer. Now let us implement our own version of the fully-connected layer. Recall that this layer requires two parameters, one to represent the weight and the other for the bias. In this implementation, we bake in the ReLU activation as a default. This layer requires two input arguments: `in_units` and `units`, which denote the number of inputs and outputs, respectively.
###Code
class MyLinear(nn.Module):
def __init__(self, in_units, units):
super().__init__()
self.weight = nn.Parameter(torch.randn(in_units, units))
self.bias = nn.Parameter(torch.randn(units,))
def forward(self, X):
linear = torch.matmul(X, self.weight.data) + self.bias.data
return F.relu(linear)
###Output
_____no_output_____
###Markdown
Next, we instantiate the `MyLinear` class and access its model parameters.
###Code
linear = MyLinear(5, 3)
linear.weight
###Output
_____no_output_____
###Markdown
We can [**directly carry out forward propagation calculations using custom layers.**]
###Code
linear(torch.rand(2, 5))
###Output
_____no_output_____
###Markdown
We can also (**construct models using custom layers.**) Once we have them, we can use them just like the built-in fully-connected layer.
###Code
net = nn.Sequential(MyLinear(64, 8), MyLinear(8, 1))
net(torch.rand(2, 64))
###Output
_____no_output_____ |
project3/.Trash-0/files/project_3_starter 11.ipynb | ###Markdown
Project 3: Smart Beta Portfolio and Portfolio Optimization OverviewSmart beta has a broad meaning, but we can say in practice that when we use the universe of stocks from an index, and then apply some weighting scheme other than market cap weighting, it can be considered a type of smart beta fund. By contrast, a purely alpha fund may create a portfolio of specific stocks, not related to an index, or may choose from the global universe of stocks. The other characteristic that makes a smart beta portfolio "beta" is that it gives its investors a diversified broad exposure to a particular market. Imagine you're a portfolio manager, and wish to try out some different portfolio weighting methods. One way to design a portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that produce better results. For instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple does not issue dividends, but has had good historical performance. The hypothesis about dividend-paying stocks may go something like this: Companies that regularly issue dividends may also be more prudent in allocating their available cash, which may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), and also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice. You may also have another hypothesis, with which you wish to design a portfolio that can then be made into an ETF. You may find that investors may wish to invest in passive beta funds, but wish to have less risk exposure (less volatility) in their investments. The goal of having a low volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon, and so are more risk averse. So the objective of your proposed portfolio is to design a portfolio that closely tracks an index, while also minimizing the portfolio variance. Also, if this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility). Smart Beta ETFs can be designed with both of these two general methods (among others): alternative weighting and minimum volatility ETF. InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `#TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity. 
PackagesWhen you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code. The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` modules contain utility functions and graph functions. The `project_tests` package contains the unit tests for all the problems. Install Packages
###Code
import sys
!{sys.executable} -m pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Load Packages
###Code
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
###Output
_____no_output_____
###Markdown
Market Data Load DataFor this universe of stocks, we'll be selecting large dollar volume stocks. We're using this universe, since it is highly liquid.
###Code
df = pd.read_csv('../../data/project_3/eod-quotemedia.csv')
percent_top_dollar = 0.2
high_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar)
df = df[df['ticker'].isin(high_volume_symbols)]
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
volume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume')
dividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends')
###Output
_____no_output_____
###Markdown
View DataTo see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
###Code
project_helper.print_dataframe(close)
###Output
_____no_output_____
###Markdown
Part 1: Smart Beta PortfolioIn Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs. Note that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index. Index WeightsThe index we'll be using is based on large dollar volume stocks. Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. For example, assume the following is close prices and volume data:``` Prices A B ...2013-07-08 2 2 ...2013-07-09 5 6 ...2013-07-10 1 2 ...2013-07-11 6 5 ...... ... ... ... Volume A B ...2013-07-08 100 340 ...2013-07-09 240 220 ...2013-07-10 120 500 ...2013-07-11 10 100 ...... ... ... ...```The weights created from the function `generate_dollar_volume_weights` should be the following:``` A B ...2013-07-08 0.126.. 0.194.. ...2013-07-09 0.759.. 0.377.. ...2013-07-10 0.075.. 0.285.. ...2013-07-11 0.037.. 0.142.. ...... ... ... ...```
###Code
def generate_dollar_volume_weights(close, volume):
"""
Generate dollar volume weights.
Parameters
----------
close : DataFrame
Close price for each ticker and date
    volume : DataFrame
Volume for each ticker and date
Returns
-------
dollar_volume_weights : DataFrame
The dollar volume weights for each ticker and date
"""
assert close.index.equals(volume.index)
assert close.columns.equals(volume.columns)
#TODO: Implement function
return None
project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights)
###Output
_____no_output_____
###Markdown
View DataLet's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap.
###Code
index_weights = generate_dollar_volume_weights(close, volume)
project_helper.plot_weights(index_weights, 'Index Weights')
###Output
_____no_output_____
###Markdown
Portfolio WeightsNow that we have the index weights, let's choose the portfolio weights based on dividends. Implement `calculate_dividend_weights` to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weights for the index, but it's using dividend data instead. For example, assume the following is `dividends` data:``` Prices A B2013-07-08 0 02013-07-09 0 12013-07-10 0.5 02013-07-11 0 02013-07-12 2 0... ... ...```The weights created from the function `calculate_dividend_weights` should be the following:``` A B2013-07-08 NaN NaN2013-07-09 0 12013-07-10 0.333.. 0.666..2013-07-11 0.333.. 0.666..2013-07-12 0.714.. 0.285..... ... ...```
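One way to reproduce the numbers in the example above is to accumulate the dividends over time and then normalize each date's running totals across stocks; the sketch below illustrates the idea on the toy data and is not necessarily the graded implementation:
```
import pandas as pd

# The toy dividends from the example above
dividends_toy = pd.DataFrame({'A': [0, 0, 0.5, 0, 2], 'B': [0, 1, 0, 0, 0]})

cumulative = dividends_toy.cumsum()                               # total dividends paid up to each date
weights_toy = cumulative.div(cumulative.sum(axis=1), axis=0)      # each stock's share of the running total
print(weights_toy)   # first row is NaN because no dividends have been paid yet
```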
###Code
def calculate_dividend_weights(dividends):
"""
Calculate dividend weights.
Parameters
----------
    dividends : DataFrame
        Dividends for each stock and date
Returns
-------
dividend_weights : DataFrame
Weights for each stock and date
"""
#TODO: Implement function
return None
project_tests.test_calculate_dividend_weights(calculate_dividend_weights)
###Output
_____no_output_____
###Markdown
View DataJust like the index weights, let's generate the ETF weights and view them using a heatmap.
###Code
etf_weights = calculate_dividend_weights(dividends)
project_helper.plot_weights(etf_weights, 'ETF Weights')
###Output
_____no_output_____
###Markdown
ReturnsImplement `generate_returns` to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns.
###Code
def generate_returns(prices):
"""
Generate returns for ticker and date.
Parameters
----------
prices : DataFrame
Price for each ticker and date
Returns
-------
returns : Dataframe
The returns for each ticker and date
"""
#TODO: Implement function
return None
project_tests.test_generate_returns(generate_returns)
###Output
_____no_output_____
###Markdown
View DataLet's generate the closing returns using `generate_returns` and view them using a heatmap.
###Code
returns = generate_returns(close)
project_helper.plot_returns(returns, 'Close Returns')
###Output
_____no_output_____
###Markdown
Weighted ReturnsWith the returns of each stock computed, we can use them to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights.
###Code
def generate_weighted_returns(returns, weights):
"""
Generate weighted returns.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
weights : DataFrame
Weights for each ticker and date
Returns
-------
weighted_returns : DataFrame
Weighted returns for each ticker and date
"""
assert returns.index.equals(weights.index)
assert returns.columns.equals(weights.columns)
#TODO: Implement function
return None
project_tests.test_generate_weighted_returns(generate_weighted_returns)
###Output
_____no_output_____
###Markdown
View DataLet's generate the ETF and index returns using `generate_weighted_returns` and view them using a heatmap.
###Code
index_weighted_returns = generate_weighted_returns(returns, index_weights)
etf_weighted_returns = generate_weighted_returns(returns, etf_weights)
project_helper.plot_returns(index_weighted_returns, 'Index Returns')
project_helper.plot_returns(etf_weighted_returns, 'ETF Returns')
###Output
_____no_output_____
###Markdown
Cumulative ReturnsTo compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF cumulative returns. Implement `calculate_cumulative_returns` to calculate the cumulative returns over time given the returns.
###Code
def calculate_cumulative_returns(returns):
"""
Calculate cumulative returns.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
Returns
-------
cumulative_returns : Pandas Series
Cumulative returns for each date
"""
#TODO: Implement function
return None
project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns)
###Output
_____no_output_____
###Markdown
View DataLet's generate the ETF and index cumulative returns using `calculate_cumulative_returns` and compare the two.
###Code
index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns)
etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index')
###Output
_____no_output_____
###Markdown
Tracking ErrorIn order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement `tracking_error` to return the tracking error between the ETF and benchmark.For reference, we'll be using the following annualized tracking error function:$$ TE = \sqrt{252} * SampleStdev(r_p - r_b) $$Where $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns.
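For instance, on two short hypothetical return series the formula above can be evaluated directly; this sketch assumes pandas' default sample standard deviation (ddof=1), and the values are made up:
```
import numpy as np
import pandas as pd

benchmark_toy = pd.Series([0.010, -0.020, 0.005, 0.000])   # hypothetical benchmark daily returns
etf_toy = pd.Series([0.012, -0.018, 0.001, 0.002])         # hypothetical ETF daily returns

te = np.sqrt(252) * (etf_toy - benchmark_toy).std()        # annualized tracking error
print(te)
```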
###Code
def tracking_error(benchmark_returns_by_date, etf_returns_by_date):
"""
Calculate the tracking error.
Parameters
----------
benchmark_returns_by_date : Pandas Series
The benchmark returns for each date
etf_returns_by_date : Pandas Series
The ETF returns for each date
Returns
-------
tracking_error : float
The tracking error
"""
assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index)
#TODO: Implement function
return None
project_tests.test_tracking_error(tracking_error)
###Output
_____no_output_____
###Markdown
View DataLet's generate the tracking error using `tracking_error`.
###Code
smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1))
print('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error))
###Output
_____no_output_____
###Markdown
Part 2: Portfolio OptimizationNow, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1.We want to both minimize the portfolio variance and also want to closely track a market cap weighted index. In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index.$Minimize \left [ \sigma^2_p + \lambda \sqrt{\sum_{1}^{m}(weight_i - indexWeight_i)^2} \right ]$ where $m$ is the number of stocks in the portfolio, and $\lambda$ is a scaling factor that you can choose.Why are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We’d expect that if the fund has the same stocks as the benchmark, and also the same weights for each stock as the benchmark, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of both the portfolio risk and distance between portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index. CovarianceImplement `get_covariance_returns` to calculate the covariance of the `returns`. We'll use this to calculate the portfolio variance.If we have $m$ stock series, the covariance matrix is an $m \times m$ matrix containing the covariance between each pair of stocks. We can use [numpy.cov](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) to get the covariance. We give it a 2D array in which each row is a stock series, and each column is an observation at the same period of time.The covariance matrix $\mathbf{P} = \begin{bmatrix}\sigma^2_{1,1} & ... & \sigma^2_{1,m} \\ ... & ... & ...\\\sigma_{m,1} & ... & \sigma^2_{m,m} \\\end{bmatrix}$
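As a small sketch of the mechanics (illustrative data only): `numpy.cov` treats each row as one variable, while our `returns` table has dates as rows and tickers as columns, so the values either need to be transposed or `rowvar=False` passed:
```
import numpy as np
import pandas as pd

# Hypothetical returns table: rows are dates, columns are tickers
returns_toy = pd.DataFrame({'A': [0.010, -0.020, 0.005], 'B': [0.002, 0.010, -0.003]})

# np.cov expects each *row* to be one stock's series, hence rowvar=False here.
# In practice, NaNs (e.g. the first return of each series) need to be handled first.
cov = np.cov(returns_toy.values, rowvar=False)   # 2 x 2 covariance matrix
print(cov)
```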
###Code
def get_covariance_returns(returns):
"""
Calculate covariance matrices.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
Returns
-------
returns_covariance : 2 dimensional Ndarray
The covariance of the returns
"""
#TODO: Implement function
return None
project_tests.test_get_covariance_returns(get_covariance_returns)
###Output
_____no_output_____
###Markdown
View DataLet's look at the covariance generated from `get_covariance_returns`.
###Code
covariance_returns = get_covariance_returns(returns)
covariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns)
covariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns))))
covariance_returns_correlation = pd.DataFrame(
covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation),
covariance_returns.index,
covariance_returns.columns)
project_helper.plot_covariance_returns_correlation(
covariance_returns_correlation,
'Covariance Returns Correlation Matrix')
###Output
_____no_output_____
###Markdown
portfolio varianceWe can write the portfolio variance $\sigma^2_p = \mathbf{x^T} \mathbf{P} \mathbf{x}$. Recall that $\mathbf{x^T} \mathbf{P} \mathbf{x}$ is called the quadratic form.We can use the cvxpy function `quad_form(x,P)` to get the quadratic form. Distance from index weightsWe want portfolio weights that track the index closely. So we want to minimize the distance between them.Recall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. Extending this to any number of dimensions is called the L2 norm. So: $\sqrt{\sum_{1}^{n}(weight_i - indexWeight_i)^2}$ Can also be written as $\left \| \mathbf{x} - \mathbf{index} \right \|_2$. There's a cvxpy function called [norm()](https://www.cvxpy.org/api_reference/cvxpy.atoms.other_atoms.htmlnorm)`norm(x, p=2, axis=None)`. The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights. objective functionWe want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights.We also want to choose a `scale` constant, which is $\lambda$ in the expression. $\mathbf{x^T} \mathbf{P} \mathbf{x} + \lambda \left \| \mathbf{x} - \mathbf{index} \right \|_2$This lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio. If you choose a higher value for `scale` ($\lambda$), you place more emphasis on staying close to the index weights than on reducing the portfolio variance.We can find the objective function using cvxpy `objective = cvx.Minimize()`. Can you guess what to pass into this function? constraintsWe can also define our constraints in a list. For example, you'd want the weights to sum to one. So $\sum_{1}^{n}x_i = 1$. You may also need to go long only, which means no shorting, so no negative weights. So $x_i \geq 0$ for all $i$. You could save a variable as `[x >= 0, sum(x) == 1]`, where x was created using `cvx.Variable()`. optimizationSo now that we have our objective function and constraints, we can solve for the values of $\mathbf{x}$.cvxpy has the constructor `Problem(objective, constraints)`, which returns a `Problem` object.The `Problem` object has a function solve(), which returns the minimum of the solution. In this case, this is the minimum variance of the portfolio.It also updates the vector $\mathbf{x}$.We can check out the values of $x_A$ and $x_B$ that gave the minimum portfolio variance by using `x.value`
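Putting those pieces together, a minimal end-to-end sketch with made-up numbers for two assets might look like the following; the names and values are illustrative, and the exact spelling of `sum` can differ between cvxpy versions:
```
import cvxpy as cvx
import numpy as np

P = np.array([[0.10, 0.01], [0.01, 0.05]])   # hypothetical covariance matrix (symmetric, positive definite)
index = np.array([0.6, 0.4])                 # hypothetical index weights
scale = 2.0                                  # lambda: how strongly we penalize deviating from the index

x = cvx.Variable(2)
objective = cvx.Minimize(cvx.quad_form(x, P) + scale * cvx.norm(x - index, p=2))
constraints = [x >= 0, cvx.sum(x) == 1]      # long only, fully invested

problem = cvx.Problem(objective, constraints)
min_value = problem.solve()                  # minimum of the objective
print(x.value)                               # the optimal weights
```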
###Code
import cvxpy as cvx
def get_optimal_weights(covariance_returns, index_weights, scale=2.0):
"""
Find the optimal weights.
Parameters
----------
covariance_returns : 2 dimensional Ndarray
The covariance of the returns
index_weights : Pandas Series
Index weights for all tickers at a period in time
    scale : float
        The penalty factor for weights that deviate from the index
Returns
-------
x : 1 dimensional Ndarray
The solution for x
"""
assert len(covariance_returns.shape) == 2
assert len(index_weights.shape) == 1
assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0]
#TODO: Implement function
return None
project_tests.test_get_optimal_weights(get_optimal_weights)
###Output
_____no_output_____
###Markdown
Optimized PortfolioUsing the `get_optimal_weights` function, let's generate the optimal ETF weights without rebalancing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights; here we use the most recent set of index weights.
###Code
raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1])
optimal_single_rebalance_etf_weights = pd.DataFrame(
np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)),
returns.index,
returns.columns)
###Output
_____no_output_____
###Markdown
With our ETF weights built, let's compare it to the index. Run the next cell to calculate the ETF returns and compare it to the index returns.
###Code
optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights)
optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index')
optim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1))
print('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error))
###Output
_____no_output_____
###Markdown
Rebalance Portfolio Over TimeThe single optimized ETF portfolio used the same weights for the entire history. These might not be the optimal weights for the entire period. Let's rebalance the portfolio over the same period instead of using the same weights. Implement `rebalance_portfolio` to rebalance a portfolio. Rebalance the portfolio every n days, where n is given as `shift_size`. When rebalancing, you should look back a certain number of days of data in the past, denoted as `chunk_size`. Using this data, compute the optimal weights using `get_optimal_weights` and `get_covariance_returns`.
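The looping logic is independent of the optimization itself; a rough, illustrative sketch of the structure is shown below. The window boundaries and which date's index weights to track are assumptions for illustration, not the graded specification:
```
# Illustrative loop structure only -- not the graded implementation.
# For every rebalance point, take the trailing `chunk_size` rows of returns
# and re-estimate the optimal weights from that window.
all_rebalance_weights_sketch = []
for end in range(chunk_size, len(returns), shift_size):
    window_returns = returns.iloc[end - chunk_size:end]
    cov = get_covariance_returns(window_returns)                       # covariance over the lookback window
    weights = get_optimal_weights(cov, index_weights.iloc[end - 1])    # track that date's index weights
    all_rebalance_weights_sketch.append(weights)
```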
###Code
def rebalance_portfolio(returns, index_weights, shift_size, chunk_size):
"""
Get weights for each rebalancing of the portfolio.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
index_weights : DataFrame
Index weight for each ticker and date
shift_size : int
The number of days between each rebalance
chunk_size : int
The number of days to look in the past for rebalancing
Returns
-------
all_rebalance_weights : list of Ndarrays
The ETF weights for each point they are rebalanced
"""
assert returns.index.equals(index_weights.index)
assert returns.columns.equals(index_weights.columns)
assert shift_size > 0
assert chunk_size >= 0
#TODO: Implement function
return None
project_tests.test_rebalance_portfolio(rebalance_portfolio)
###Output
_____no_output_____
###Markdown
Run the following cell to rebalance the portfolio using `rebalance_portfolio`.
###Code
chunk_size = 250
shift_size = 5
all_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size)
###Output
_____no_output_____
###Markdown
Portfolio TurnoverWith the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Implement `get_portfolio_turnover` to calculate the annual portfolio turnover. We'll be using the formulas used in the classroom:$ AnnualizedTurnover =\frac{SumTotalTurnover}{NumberOfRebalanceEvents} * NumberofRebalanceEventsPerYear $$ SumTotalTurnover =\sum_{t,n}{\left | x_{t,n} - x_{t+1,n} \right |} $ Where $ x_{t,n} $ are the weights at time $ t $ for equity $ n $.$ SumTotalTurnover $ is just a different way of writing $ \sum \left | x_{t_1,n} - x_{t_2,n} \right | $
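As a tiny worked example of the formula, assuming hypothetical weights for three assets and a 5-day rebalance interval (the numbers and the single-event bookkeeping are illustrative only):
```
import numpy as np

# Two hypothetical consecutive sets of portfolio weights (3 assets)
weights_before = np.array([0.5, 0.3, 0.2])
weights_after = np.array([0.4, 0.4, 0.2])

# Sum of absolute weight changes for this single rebalance event
sum_total_turnover = np.abs(weights_after - weights_before).sum()   # 0.2

# Annualize: average turnover per event, scaled by the number of events per year
shift_size = 5                         # rebalancing every 5 trading days
events_per_year = 252 / shift_size
annualized_turnover = sum_total_turnover / 1 * events_per_year
print(annualized_turnover)
```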
###Code
def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252):
"""
    Calculate portfolio turnover.
Parameters
----------
all_rebalance_weights : list of Ndarrays
The ETF weights for each point they are rebalanced
shift_size : int
The number of days between each rebalance
rebalance_count : int
Number of times the portfolio was rebalanced
n_trading_days_in_year: int
Number of trading days in a year
Returns
-------
portfolio_turnover : float
The portfolio turnover
"""
assert shift_size > 0
assert rebalance_count > 0
#TODO: Implement function
return None
project_tests.test_get_portfolio_turnover(get_portfolio_turnover)
###Output
_____no_output_____
###Markdown
Run the following cell to get the portfolio turnover from `get_portfolio_turnover`.
###Code
print(get_portfolio_turnover(all_rebalance_weights, shift_size, returns.shape[1]))
###Output
_____no_output_____ |
table-linker-full-pipeline/table-linker-full-pipeline-model-prediction.ipynb | ###Markdown
Peek at the input file
###Code
pd.read_csv(input_file_path).fillna("")
###Output
_____no_output_____
###Markdown
Canonicalize
###Code
!tl canonicalize \
-c "$wikify_column_name" \
--add-context \
{input_file_path} > {canonical}
pd.read_csv(canonical, nrows = 10)
###Output
_____no_output_____
###Markdown
Candidate Generation
###Code
%%time
!tl clean -c label -o label_clean {canonical} / \
--url $es_url --index $es_index \
get-fuzzy-augmented-matches -c label_clean \
--auxiliary-fields {aux_field} \
--auxiliary-folder $temp_dir / \
--url $es_url --index $es_index \
get-exact-matches -c label_clean \
--auxiliary-fields {aux_field} \
--auxiliary-folder {temp_dir} > {candidates}
for field in aux_field.split(','):
aux_list = []
for f in glob.glob(f'{temp_dir}/*{aux_field}.tsv'):
aux_list.append(pd.read_csv(f, sep='\t', dtype=object))
aux_df = pd.concat(aux_list).drop_duplicates(subset=['qnode']).rename(columns={aux_field: 'embedding'})
aux_df.to_csv(f'{temp_dir}/{aux_field}.tsv', sep='\t', index=False)
pd.read_csv(candidates, nrows = 10).fillna("")
###Output
_____no_output_____
###Markdown
Feature Voting
###Code
%%time
!tl smallest-qnode-number {candidates} \
/ string-similarity -i --method symmetric_monge_elkan:tokenizer=word -o monge_elkan \
/ string-similarity -i --method jaccard:tokenizer=word -c kg_descriptions context -o des_cont_jaccard \
/ string-similarity -i --method jaro_winkler -o jaro_winkler \
/ feature-voting -c "pagerank,smallest_qnode_number,monge_elkan,des_cont_jaccard" > {feature_votes}
pd.read_csv(feature_votes, nrows = 10).fillna("")
###Output
_____no_output_____
###Markdown
Compute Embedding Score using Column Vector Strategy
###Code
!tl score-using-embedding $feature_votes \
--column-vector-strategy centroid-of-singletons \
-o graph-embedding-score --embedding-file $embedding_file \
> $score_file
df = pd.read_csv(score_file).fillna("")
df.sort_values(by=['votes'], ascending=False)
###Output
_____no_output_____
###Markdown
Generate Additional Features required for Model Prediction
###Code
## TODO: Need to add these features as cli commands in Table Linker
def create_singleton_feature(df):
d = df[df['method'] == 'exact-match'].groupby(['column','row'])[['kg_id']].count()
l = list(d[d['kg_id'] == 1].index)
singleton_feat = []
for i,row in df.iterrows():
col_num,row_num = row['column'],row['row']
if (col_num,row_num) in l:
singleton_feat.append(1)
else:
singleton_feat.append(0)
df['singleton'] = singleton_feat
return df
def generate_reciprocal_rank(df):
final_list = []
grouped_obj = df.groupby(['row', 'column'])
for cell in grouped_obj:
reciprocal_rank = list(1/cell[1]['graph-embedding-score'].rank())
cell[1]['reciprocal_rank'] = reciprocal_rank
final_list.extend(cell[1].to_dict(orient='records'))
odf = pd.DataFrame(final_list)
return odf
features_df = pd.read_csv(score_file)
features_df = create_singleton_feature(features_df)
features_df['num_char'] = features_df['kg_labels'].apply(lambda x: len(x) if not(pd.isna(x)) else 0)
features_df['num_tokens'] = features_df['kg_labels'].apply(lambda x: len(x.split()) if not(pd.isna(x)) else 0)
features_df = generate_reciprocal_rank(features_df)
features_df.head().fillna("")
###Output
_____no_output_____
###Markdown
Final Ranking Score Predicted by Model
###Code
features = ['pagerank','retrieval_score','monge_elkan',
'des_cont_jaccard','jaro_winkler','graph-embedding-score',
'singleton','num_char','num_tokens','reciprocal_rank']
model = pickle.load(open(model_name,'rb'))
data = features_df[features]
predicted_score = model.predict(data)
features_df['model_prediction'] = predicted_score
features_df.to_csv(final_score,index=False)
pd.read_csv(final_score, nrows=10).fillna("")
###Output
_____no_output_____
###Markdown
Get top-k KG links (k=3 here)
###Code
!tl get-kg-links -c model_prediction -l label -k 3 $final_score > $top_k_file
pd.read_csv(top_k_file, nrows = 10)
###Output
_____no_output_____
###Markdown
Join to Produce final result
###Code
!tl join -f $input_file_path --csv -c ranking_score $top_k_file > $final_output
pd.read_csv(final_output).fillna("")
###Output
_____no_output_____
###Markdown
Clean up temporary files
###Code
shutil.rmtree(temp_dir)
###Output
_____no_output_____ |
source/sample_ml/Chapter03/3-6 GMM.ipynb | ###Markdown
GMM (Gaussian Mixture Model)
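For reference, the model fitted below is the mixture density $p(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k)$ with mixing weights $\pi_k \ge 0$, $\sum_k \pi_k = 1$, estimated by EM; after fitting, `model.weights_` holds the $\pi_k$, `model.means_` the $\mu_k$, and `model.covariances_` the $\Sigma_k$.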
###Code
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
import matplotlib.pyplot as plt
import numpy as np
data = load_iris()
X=data.data[:,:3]
print(X.shape)
n_components = 3 # number of Gaussian components
model = GaussianMixture(n_components=n_components)
model.fit(X)
y=model.predict(X)
print(model.means_) # means of each Gaussian component
print(model.covariances_) # covariances of each Gaussian component
###Output
(150, 3)
[[6.06484109 2.81865029 4.49503422]
[5.0060001 3.42800022 1.46200003]
[6.6298468 2.97153653 5.67275436]]
[[[0.28288871 0.09672907 0.25119586]
[0.09672907 0.09603064 0.11237849]
[0.25119586 0.11237849 0.37288505]]
[[0.12176497 0.09723191 0.01602799]
[0.09723191 0.14081678 0.01146397]
[0.01602799 0.01146397 0.029557 ]]
[[0.51084202 0.10986135 0.38433907]
[0.10986135 0.1197479 0.07822918]
[0.38433907 0.07822918 0.3349755 ]]]
###Markdown
GMM Density Estimation: [Density Estimation for a Gaussian mixture — scikit-learn 1.0.2 documentation](https://scikit-learn.org/stable/auto_examples/mixture/plot_gmm_pdf.html#sphx-glr-auto-examples-mixture-plot-gmm-pdf-py)
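Note that for a fitted `GaussianMixture`, `score_samples(X)` returns the per-sample log-density, so the quantity contoured below is the negative log-likelihood surface $Z(x) = -\log p(x) = -\log \sum_k \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k)$, evaluated on the meshgrid via `Z = -clf.score_samples(XX)`.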
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from sklearn import mixture
n_samples = 300
# generate random sample, two components
np.random.seed(0)
# generate spherical data centered on (20, 20)
shifted_gaussian = np.random.randn(n_samples, 2) + np.array([20, 20])
# generate zero centered stretched Gaussian data
C = np.array([[0.0, -0.7], [3.5, 0.7]])
stretched_gaussian = np.dot(np.random.randn(n_samples, 2), C)
# concatenate the two datasets into the final training set
X_train = np.vstack([shifted_gaussian, stretched_gaussian])
# fit a Gaussian Mixture Model with two components
clf = mixture.GaussianMixture(n_components=2, covariance_type="full")
clf.fit(X_train)
print(clf.means_)
print(clf.covariances_)
# display predicted scores by the model as a contour plot
x = np.linspace(-20.0, 30.0)
y = np.linspace(-20.0, 40.0)
X, Y = np.meshgrid(x, y)
XX = np.array([X.ravel(), Y.ravel()]).T
Z = -clf.score_samples(XX)
Z = Z.reshape(X.shape)
CS = plt.contour(
X, Y, Z, norm=LogNorm(vmin=1.0, vmax=1000.0), levels=np.logspace(0, 3, 10)
)
CB = plt.colorbar(CS, shrink=0.8, extend="both")
plt.scatter(X_train[:, 0], X_train[:, 1], 0.8)
plt.title("Negative log-likelihood predicted by a GMM")
plt.axis("tight")
plt.show()
###Output
[[19.91453549 19.97556345]
[-0.13607006 -0.07059606]]
[[[1.02179964e+00 3.28158679e-03]
[3.28158679e-03 9.90375215e-01]]
[[1.13328040e+01 2.25048269e+00]
[2.25048269e+00 8.77009968e-01]]]
###Markdown
GMM covariances: [GMM covariances — scikit-learn 1.0.2 documentation](https://scikit-learn.org/stable/auto_examples/mixture/plot_gmm_covariances.html#sphx-glr-auto-examples-mixture-plot-gmm-covariances-py)
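As a quick reference for the branches in `make_ellipses` below, the shape of `gmm.covariances_` depends on `covariance_type` in scikit-learn:
- 'spherical': (n_components,) -- one variance per component
- 'diag': (n_components, n_features) -- axis-aligned ellipsoids
- 'tied': (n_features, n_features) -- a single covariance shared by all components
- 'full': (n_components, n_features, n_features) -- one full covariance matrix per component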
###Code
# Author: Ron Weiss <[email protected]>, Gael Varoquaux
# Modified by Thierry Guillemot <[email protected]>
# License: BSD 3 clause
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import StratifiedKFold
colors = ["navy", "turquoise", "darkorange"]
def make_ellipses(gmm, ax):
for n, color in enumerate(colors):
if gmm.covariance_type == "full":
covariances = gmm.covariances_[n][:2, :2]
elif gmm.covariance_type == "tied":
covariances = gmm.covariances_[:2, :2]
elif gmm.covariance_type == "diag":
covariances = np.diag(gmm.covariances_[n][:2])
elif gmm.covariance_type == "spherical":
covariances = np.eye(gmm.means_.shape[1]) * gmm.covariances_[n]
v, w = np.linalg.eigh(covariances)
u = w[0] / np.linalg.norm(w[0])
angle = np.arctan2(u[1], u[0])
angle = 180 * angle / np.pi # convert to degrees
v = 2.0 * np.sqrt(2.0) * np.sqrt(v)
ell = mpl.patches.Ellipse(
gmm.means_[n, :2], v[0], v[1], 180 + angle, color=color
)
ell.set_clip_box(ax.bbox)
ell.set_alpha(0.5)
ax.add_artist(ell)
ax.set_aspect("equal", "datalim")
iris = datasets.load_iris()
# Break up the dataset into non-overlapping training (75%) and testing
# (25%) sets.
skf = StratifiedKFold(n_splits=4)
# Only take the first fold.
train_index, test_index = next(iter(skf.split(iris.data, iris.target)))
X_train = iris.data[train_index]
y_train = iris.target[train_index]
X_test = iris.data[test_index]
y_test = iris.target[test_index]
n_classes = len(np.unique(y_train))
# Try GMMs using different types of covariances.
estimators = {
cov_type: GaussianMixture(
n_components=n_classes, covariance_type=cov_type, max_iter=20, random_state=0
)
for cov_type in ["spherical", "diag", "tied", "full"]
}
n_estimators = len(estimators)
plt.figure(figsize=(3 * n_estimators // 2, 6))
plt.subplots_adjust(
bottom=0.01, top=0.95, hspace=0.15, wspace=0.05, left=0.01, right=0.99
)
for index, (name, estimator) in enumerate(estimators.items()):
# Since we have class labels for the training data, we can
# initialize the GMM parameters in a supervised manner.
estimator.means_init = np.array(
[X_train[y_train == i].mean(axis=0) for i in range(n_classes)]
)
# Train the other parameters using the EM algorithm.
estimator.fit(X_train)
h = plt.subplot(2, n_estimators // 2, index + 1)
make_ellipses(estimator, h)
for n, color in enumerate(colors):
data = iris.data[iris.target == n]
plt.scatter(
data[:, 0], data[:, 1], s=0.8, color=color, label=iris.target_names[n]
)
# Plot the test data with crosses
for n, color in enumerate(colors):
data = X_test[y_test == n]
plt.scatter(data[:, 0], data[:, 1], marker="x", color=color)
y_train_pred = estimator.predict(X_train)
train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100
plt.text(0.05, 0.9, "Train accuracy: %.1f" % train_accuracy, transform=h.transAxes)
y_test_pred = estimator.predict(X_test)
test_accuracy = np.mean(y_test_pred.ravel() == y_test.ravel()) * 100
plt.text(0.05, 0.8, "Test accuracy: %.1f" % test_accuracy, transform=h.transAxes)
plt.xticks(())
plt.yticks(())
plt.title(name)
plt.legend(scatterpoints=1, loc="lower right", prop=dict(size=12))
plt.show()
###Output
_____no_output_____ |
notebooksML101/03_Backprop_Exercise.ipynb | ###Markdown
Backpropagation Exercise. In this exercise we will use backpropagation to train a multi-layer perceptron (with a single hidden layer). We will experiment with different patterns and see how quickly or slowly the weights converge. We will see the impact and interplay of different parameters such as learning rate, number of iterations, and number of data points.
###Code
#Preliminaries
from __future__ import division, print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Fill out the code below so that it creates a multi-layer perceptron with a single hidden layer (with 4 nodes) and trains it via back-propagation. Specifically your code should: (1) initialize the weights to random values between -1 and 1; (2) perform the feed-forward computation; (3) compute the loss function; (4) calculate the gradients for all the weights via back-propagation; (5) update the weight matrices (using a learning_rate parameter); (6) execute steps 2-5 for a fixed number of iterations; (7) plot the accuracies and log loss and observe how they change over time. Once your code is running, try it for the different patterns below. - Which patterns was the neural network able to learn quickly and which took longer? - What learning rates and numbers of iterations worked well? - If you have time, try varying the size of the hidden layer and experiment with different activation functions (e.g. ReLU)
###Code
## This code below generates two x values and a y value according to different patterns
## It also creates a "bias" term (a vector of 1s)
## The goal is then to learn the mapping from x to y using a neural network via back-propagation
num_obs = 500
x_mat_1 = np.random.uniform(-1,1,size = (num_obs,2))
x_mat_bias = np.ones((num_obs,1))
x_mat_full = np.concatenate( (x_mat_1,x_mat_bias), axis=1)
# PICK ONE PATTERN BELOW and comment out the rest.
# # Circle pattern
# y = (np.sqrt(x_mat_full[:,0]**2 + x_mat_full[:,1]**2)<.75).astype(int)
# # Diamond Pattern
y = ((np.abs(x_mat_full[:,0]) + np.abs(x_mat_full[:,1]))<1).astype(int)
# # Centered square
# y = ((np.maximum(np.abs(x_mat_full[:,0]), np.abs(x_mat_full[:,1])))<.5).astype(int)
# # Thick Right Angle pattern
# y = (((np.maximum((x_mat_full[:,0]), (x_mat_full[:,1])))<.5) & ((np.maximum((x_mat_full[:,0]), (x_mat_full[:,1])))>-.5)).astype(int)
# # Thin right angle pattern
# y = (((np.maximum((x_mat_full[:,0]), (x_mat_full[:,1])))<.5) & ((np.maximum((x_mat_full[:,0]), (x_mat_full[:,1])))>0)).astype(int)
print('shape of x_mat_full is {}'.format(x_mat_full.shape))
print('shape of y is {}'.format(y.shape))
fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(x_mat_full[y==1, 0],x_mat_full[y==1, 1], 'ro', label='class 1', color='darkslateblue')
ax.plot(x_mat_full[y==0, 0],x_mat_full[y==0, 1], 'bx', label='class 0', color='chocolate')
# ax.grid(True)
ax.legend(loc='best')
ax.axis('equal');
###Output
shape of x_mat_full is (500, 3)
shape of y is (500,)
###Markdown
Here are some helper functions
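For reference, with sigmoid activations and the log loss used here, the gradients that `forward_pass` computes follow from the chain rule: $\frac{\partial J}{\partial z_3} = \hat{y} - y$, $\frac{\partial J}{\partial W_2} = a_2^{\top}(\hat{y} - y)$, and $\frac{\partial J}{\partial W_1} = x^{\top}\big[\big((\hat{y}-y)\,W_2^{\top}\big) \odot \sigma'(z_2)\big]$, where $\odot$ denotes the elementwise product and $\sigma'(z) = \sigma(z)\,(1-\sigma(z))$.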
###Code
def sigmoid(x):
"""
Sigmoid function
"""
return 1.0 / (1.0 + np.exp(-x))
def loss_fn(y_true, y_pred, eps=1e-16):
"""
Loss function we would like to optimize (minimize)
We are using Logarithmic Loss
http://scikit-learn.org/stable/modules/model_evaluation.html#log-loss
"""
y_pred = np.maximum(y_pred,eps)
y_pred = np.minimum(y_pred,(1-eps))
return -(np.sum(y_true * np.log(y_pred)) + np.sum((1-y_true)*np.log(1-y_pred)))/len(y_true)
def forward_pass(W_1, W_2):
"""
Does a forward computation of the neural network
Takes the input `x_mat` (global variable) and produces the output `y_pred`
Also produces the gradient of the log loss function
"""
global x_mat
global y
global num_
# First, compute the new predictions `y_pred`
z_2 = np.dot(x_mat, W_1)
a_2 = sigmoid(z_2)
z_3 = np.dot(a_2, W_2)
y_pred = sigmoid(z_3).reshape((len(x_mat),))
# Now compute the gradient
J_z_3_grad = -y + y_pred
J_W_2_grad = np.dot(J_z_3_grad, a_2)
a_2_z_2_grad = sigmoid(z_2)*(1-sigmoid(z_2))
J_W_1_grad = (np.dot((J_z_3_grad).reshape(-1,1), W_2.reshape(-1,1).T)*a_2_z_2_grad).T.dot(x_mat).T
gradient = (J_W_1_grad, J_W_2_grad)
# return
return y_pred, gradient
def plot_loss_accuracy(loss_vals, accuracies):
fig = plt.figure(figsize=(16, 8))
fig.suptitle('Log Loss and Accuracy over iterations')
ax = fig.add_subplot(1, 2, 1)
ax.plot(loss_vals)
ax.grid(True)
ax.set(xlabel='iterations', title='Log Loss')
ax = fig.add_subplot(1, 2, 2)
ax.plot(accuracies)
ax.grid(True)
ax.set(xlabel='iterations', title='Accuracy');
###Output
_____no_output_____
###Markdown
Complete the pseudocode below
###Code
#### Initialize the network parameters
np.random.seed(1241)
W_1 =
W_2 =
num_iter =
learning_rate =
x_mat = x_mat_full
loss_vals, accuracies = [], []
for i in range(num_iter):
### Do a forward computation, and get the gradient
## Update the weight matrices
### Compute the loss and accuracy
## Print the loss and accuracy for every 200th iteration
plot_loss_accuracy(loss_vals, accuracies)
###Output
_____no_output_____
###Markdown
SOLUTION
###Code
#### Initialize the network parameters
np.random.seed(1241)
W_1 = np.random.uniform(-1,1,size=(3,4))
W_2 = np.random.uniform(-1,1,size=(4))
num_iter = 5000
learning_rate = .001
x_mat = x_mat_full
loss_vals, accuracies = [], []
for i in range(num_iter):
### Do a forward computation, and get the gradient
y_pred, (J_W_1_grad, J_W_2_grad) = forward_pass(W_1, W_2)
## Update the weight matrices
W_1 = W_1 - learning_rate*J_W_1_grad
W_2 = W_2 - learning_rate*J_W_2_grad
### Compute the loss and accuracy
curr_loss = loss_fn(y,y_pred)
loss_vals.append(curr_loss)
acc = np.sum((y_pred>=.5) == y)/num_obs
accuracies.append(acc)
## Print the loss and accuracy for every 200th iteration
if((i%200) == 0):
print('iteration {}, log loss is {:.4f}, accuracy is {}'.format(
i, curr_loss, acc
))
plot_loss_accuracy(loss_vals, accuracies)
###Output
iteration 0, log loss is 0.7686, accuracy is 0.544
iteration 200, log loss is 0.6821, accuracy is 0.472
iteration 400, log loss is 0.6636, accuracy is 0.572
iteration 600, log loss is 0.5995, accuracy is 0.754
iteration 800, log loss is 0.5252, accuracy is 0.774
iteration 1000, log loss is 0.4993, accuracy is 0.782
iteration 1200, log loss is 0.4922, accuracy is 0.786
iteration 1400, log loss is 0.4855, accuracy is 0.792
iteration 1600, log loss is 0.4628, accuracy is 0.794
iteration 1800, log loss is 0.3892, accuracy is 0.89
iteration 2000, log loss is 0.3316, accuracy is 0.892
iteration 2200, log loss is 0.3015, accuracy is 0.9
iteration 2400, log loss is 0.2790, accuracy is 0.902
iteration 2600, log loss is 0.2594, accuracy is 0.912
iteration 2800, log loss is 0.2443, accuracy is 0.914
iteration 3000, log loss is 0.2331, accuracy is 0.916
iteration 3200, log loss is 0.2231, accuracy is 0.92
iteration 3400, log loss is 0.2109, accuracy is 0.93
iteration 3600, log loss is 0.1992, accuracy is 0.946
iteration 3800, log loss is 0.1903, accuracy is 0.948
iteration 4000, log loss is 0.1837, accuracy is 0.95
iteration 4200, log loss is 0.1785, accuracy is 0.954
iteration 4400, log loss is 0.1743, accuracy is 0.958
iteration 4600, log loss is 0.1706, accuracy is 0.958
iteration 4800, log loss is 0.1675, accuracy is 0.958
###Markdown
Plot the predicted answers, with mistakes in yellow
###Code
pred1 = (y_pred>=.5)
pred0 = (y_pred<.5)
fig, ax = plt.subplots(figsize=(8, 8))
# true predictions
ax.plot(x_mat[pred1 & (y==1),0],x_mat[pred1 & (y==1),1], 'ro', label='true positives')
ax.plot(x_mat[pred0 & (y==0),0],x_mat[pred0 & (y==0),1], 'bx', label='true negatives')
# false predictions
ax.plot(x_mat[pred1 & (y==0),0],x_mat[pred1 & (y==0),1], 'yx', label='false positives', markersize=15)
ax.plot(x_mat[pred0 & (y==1),0],x_mat[pred0 & (y==1),1], 'yo', label='false negatives', markersize=15, alpha=.6)
ax.set(title='Truth vs Prediction')
ax.legend(bbox_to_anchor=(1, 0.8), fancybox=True, shadow=True, fontsize='x-large');
###Output
_____no_output_____ |
01_notebooks/02_EDA_I.ipynb | ###Markdown
In which states is the proportion of duplicates the highest?
###Code
dup_data_states = (data
.groupby('state')
.apply(lambda df: df.duplicated(subset=['lat', 'long'], keep=False).mean())
.reset_index()
.rename(columns={0: 'pct_dups'})
.sort_values(by='pct_dups')
.reset_index(drop=True)
)
fig, ax = plt.subplots(figsize=(15, 10))
dup_data_states.plot(kind='barh', y='pct_dups', x='state',ax=ax)
for idx, row in dup_data_states.iterrows():
ax.text(row['pct_dups'],
idx,
'{0:.1%}'.format(row['pct_dups']),
va='center',
)
plt.legend(loc='upper left', bbox_to_anchor=(1, 1))
plt.show()
###Output
_____no_output_____
###Markdown
The analyst and the model may reach different decisions; this step consolidates them into a single final decision.
###Code
data = data.assign(final_decision=lambda x: np.where(x.analyst_decision.isin(['A', 'R']), x.analyst_decision,
np.where(x.model_decision.isin(['A', 'R']),
x.model_decision,
'undefined')))
agg_dups_data = (data
.assign(tag_dup=lambda x: np.where(x.duplicated(subset=['state', 'lat', 'long'], keep=False), 'has_dups', 'no_dups'))
.query('tag_dup=="has_dups"')
.groupby(['tag_dup','state', 'lat', 'long'])
.agg(len_final=('final_decision', lambda x: len(x)),
len_unique_final=('final_decision', lambda x: len(x.unique())))
.reset_index()
)
data_dups_state = (agg_dups_data[['state', 'len_unique_final']]
.groupby('state')
.apply(lambda df: df.len_unique_final.value_counts(normalize=True)*100)
.reset_index()
.pivot_table(values='len_unique_final',
index='state',
columns='level_1',
fill_value=0)
.reset_index()
.sort_values(by=[1], ascending=True)
)
fig, ax = plt.subplots(figsize=(15, 9))
data_dups_state.plot(x='state', kind='barh', stacked=True, ax=ax)
plt.legend(loc='upper left', bbox_to_anchor=(1, 1))
plt.show()
###Output
_____no_output_____
###Markdown
Do the duplicates carry more than one decision? To exclude duplicates: where the decision is unanimous, that decision is kept for all locations; otherwise the majority decision is taken; in case of a tie, a decision is selected at random.
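As a toy illustration of the majority-vote and random tie-break pattern used below (illustrative only, not part of the analysis):
import numpy as np
import pandas as pd
toy = pd.DataFrame({'lat': [1, 1, 1], 'long': [2, 2, 2],
                    'final_decision': ['A', 'A', 'R']})
np.random.seed(2020)
resolved = (toy.assign(uno=1)
               .groupby(['lat', 'long', 'final_decision'])['uno'].sum()
               .reset_index(name='count')
               .assign(random_index=lambda d: np.random.normal(size=len(d)))
               .sort_values(by=['count', 'random_index'], ascending=False)
               .drop_duplicates(subset=['lat', 'long'], keep='first'))
# 'A' wins 2-to-1; if the counts had tied, random_index would break the tie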
###Code
np.random.seed(2020)
data = (data
.assign(uno=1)
.groupby(['state','census_code','lat', 'long','final_decision'])
.agg(count=('uno', sum))
.reset_index()
.assign(random_index=lambda x: np.random.normal(size=x.shape[0]))
.sort_values(by=['state', 'lat', 'long','count', 'random_index'], ascending=False)
.drop_duplicates(subset=['census_code','state', 'lat', 'long'], keep='first')
.drop(columns=['count', 'random_index'])
.reset_index(drop=True)
)
fig, ax = plt.subplots(figsize=(9, 5))
data.final_decision.value_counts(normalize=True).plot.barh(ax=ax)
for idx, text_i in enumerate(data.final_decision.value_counts(normalize=True)[['R', 'A', 'undefined']]):
plt.text(text_i, idx, '{0:.1%}'.format(text_i))
plt.show()
data
###Output
_____no_output_____ |
contour_visualizations/contours_pipeline.ipynb | ###Markdown
Contours Visualization Pipeline This is the **pipeline** version of the contours-visualization algorithm. It does not display any visualizations. Its sole purpose is to read heat-events data, run the contours logic, and produce the artifacts (1) metadata, (2) images, and (3) video. After it uploads the files to Azure, it flushes the local disk, to keep the Kubernetes disk space clean.
###Code
!pip install opencv-python-headless
from typing import List
import itertools
import os
import shutil
import uuid
from collections import Counter
from datetime import datetime, timedelta
from pathlib import Path
import subprocess
import tempfile
import time
import warnings
import numpy as np
import pandas as pd
import xarray as xr
import zarr
import fsspec
import cv2
from matplotlib import pyplot as plt
import matplotlib.dates as mdates
from matplotlib.patches import Rectangle
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.rcParams['figure.figsize'] = 12,8
import getpass
import azure.storage.blob
from azure.storage.blob import BlobClient, BlobServiceClient
from azure.core.exceptions import ResourceExistsError, HttpResponseError
###Output
_____no_output_____
###Markdown
Please make sure to give "write" permissions when creating the SAS token. Connect to Azure
###Code
####################################
# paste the Azure SAS code.
####################################
SAS_TOKEN = getpass.getpass() # of the whole "cmip6" folder in Azure.
URL_PREFIX = 'https://nasanex30analysis.blob.core.windows.net/cmip6'
###Output
···········································································································································
###Markdown
Configure Contours Constants
###Code
####################################
# CONSTANTS
####################################
# constants for openCV countour finding
SMOOTH_RATIO = 0
MIN_AREA = 10
CONVEX = False
# constants for the rolling-window aggregation
ROLLING = 4
###Output
_____no_output_____
###Markdown
Utils to read/write Azure
###Code
####################################
# Utils
####################################
class AzureSource():
"""Class to manage interactions with the Azure blobs. The methods are somewhat hardcoded, e.g.
the blobnames and path format is fit for our naming conventions for this project."""
def __init__(self, model:str, year:int):
fn = f"Ext_max_t__Rgn_1__{year}__Abv_Avg_5_K_for_3_days__CMIP6_{model}_Avg_yrs_1950_79.nc"
self.filename = fn
abspath = f"extremes_max/{model}/Region_1/Avg_yrs_1950_79/Abv_Avg_5_K_for_3_days/{fn}"
self.abspath = abspath
def download(self):
if not os.path.isfile(self.filename):
sas_url = f"{URL_PREFIX}/{self.abspath}?{SAS_TOKEN}"
blob_client = BlobClient.from_blob_url(sas_url)
with tempfile.TemporaryFile() as f:
fp = f"{f.name}.tmp"
with open(fp, "wb") as my_blob:
download_stream = blob_client.download_blob()
my_blob.write(download_stream.readall())
os.rename(fp, self.filename)
while os.path.getsize(self.filename)/10**6 < 10: # MB
time.sleep(2)
class AzureTarget():
"""Class to manage download operations from Azure."""
def __init__(self, filename):
self.filename = filename
def upload(self, upload_folder:str):
sas_url = f"{URL_PREFIX}/{upload_folder}/{self.filename}?{SAS_TOKEN}"
blob_client = BlobClient.from_blob_url(sas_url)
with open(self.filename, "rb") as f:
if blob_client.exists():
warnings.warn(f"{self.filename} exists. Overwriting..")
blob_client.upload_blob(f, overwrite=True)
###Output
_____no_output_____
###Markdown
Setup the Pipeline to Create Contours from Dataset
###Code
####################################
# Define Contour obj
####################################
"""
Bounding-contours algorithm to find the extent of the heat events and
produce visualizations. It uses the the heat events y/n dataset
which was (supposed to be pre-) produced by the "Heatwave Analysis" algorithm.
"""
class Contour(object):
"""A single contour obj. All unit operations are managed here."""
def __init__(self, cnt:np.array, lons, lats):
self.contour = cnt
self.lons = lons
self.lats = lats
self.name = uuid.uuid4().hex[:6]
self._area = 0.0
self._smoothened = np.array([], dtype=np.int32)
self._projected = np.array([], dtype=np.float64)
self._center = ()
def __repr__(self):
return self.name
@property
def area(self):
return cv2.contourArea(self.contour)
@property
def smoothened(self):
cnt = self.contour
arc = SMOOTH_RATIO*cv2.arcLength(cnt,True)
return cv2.approxPolyDP(cnt,arc,True)
@property
def projected(self):
squeezed = self.smoothened.squeeze()
proj = [(float(self.lons[x]), float(self.lats[y])) for (x,y) in squeezed]
return np.array(proj).reshape((-1,1,2))
@property
def center(self):
M = cv2.moments(self.contour)
cX = int(M["m10"] / M["m00"])
cY = int(M["m01"] / M["m00"])
return (float(self.lons[cX]), float(self.lats[cY]))
def position_to(self, c2:object)->str:
"""Find the relative position of a Contour obj to another.
Return if c1 is inside or outside c2, or they intersect."""
f = cv2.pointPolygonTest
c1 = self.contour.squeeze().astype(float)
tf = np.array([int((f(c2.contour, x, False))) for x in c1])
if all(tf==-1):
return "outside"
elif all(tf==1):
return "inside"
else:
return "intersect"
def __add__(self, obj2:object):
"""Fuse two countor objects ('bubbles'). Better do this if they
intersect or one is enclosed inside the other."""
c1, c2 = self.contour, obj2.contour
fused = cv2.convexHull(np.vstack([c1, c2]))
new_obj = self.__class__(fused, self.lons, self.lats)
return new_obj
class ContourCollection(list):
"""Essentially just a list, except overloads behavior for "in" operator."""
def __init__(self, items:List[Contour]):
self.items = items
super(ContourCollection, self).__init__(items)
def __contains__(self, x):
result = False
for c in self.items:
if x.name==c.name and x.area==c.area:
result = True
return result
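####################################
# Toy illustration of the Contour helpers above -- not part of the pipeline.
# It only uses the classes defined in this cell; call _demo_contour_fusion()
# manually if you want to see the fusion behaviour on two overlapping squares.
####################################
def _demo_contour_fusion():
    demo_lons = np.linspace(-125.0, -100.0, 100)   # hypothetical coordinate axes
    demo_lats = np.linspace(30.0, 45.0, 100)
    sq1 = np.array([[10, 10], [10, 40], [40, 40], [40, 10]], dtype=np.int32).reshape(-1, 1, 2)
    sq2 = np.array([[30, 30], [30, 60], [60, 60], [60, 30]], dtype=np.int32).reshape(-1, 1, 2)
    c1, c2 = Contour(sq1, demo_lons, demo_lats), Contour(sq2, demo_lons, demo_lats)
    print(c1.area, c2.area)                     # 900.0 900.0 (grid-cell units)
    print(c1.position_to(c2))                   # "intersect" -- the squares overlap
    fused = c1 + c2                             # convex hull of the two squares
    print(fused.area > max(c1.area, c2.area))   # True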
####################################
# Find the independent contours for a given day
####################################
def find_daily_contours(ds:xr.Dataset)->List[ContourCollection]:
"""Give a dataset and it will loop through days and
find all contours per day, if any. This function does ~
df['contours'].rolling(window=4).sum() """
def find_contours(arr2d: np.array,
convex:bool=False,
min_area:int=150) -> List[np.array]:
"""Encapsulate islands of 1s and return contours, [(i,j),(..),].
input: day-slice of a dataset tasmax dataarray
output: list of contours (np.arrays)"""
H = arr2d.astype(np.uint8)
        ret, thresh = cv2.threshold(H, 0, 1, cv2.THRESH_BINARY)
kernel = np.ones((10,10), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
contours, hier = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if convex:
contours = [cv2.convexHull(c) for c in contours]
contours = [c for c in contours if c.shape[0]>1] # filter single points
for c in contours:
if c.ndim!=3:
print(c.shape)
lons = ds.coords['lon']
lats = ds.coords['lat']
contours = [Contour(c, lons, lats) for c in contours]
contours = [c for c in contours if c.area>min_area]
return ContourCollection(contours)
all_contours = []
dr = pd.DatetimeIndex(ds['time'].dt.floor('D').values.astype('str'))
days = []
for d in dr:
day = d.strftime("%Y-%m-%d")
extreme = ds['extreme_yn'].sel(time=day)
arr2d = extreme.values[0]
all_contours += [find_contours(arr2d, convex=CONVEX, min_area=MIN_AREA)]
days += [day]
return all_contours, days
####################################
# Rolling-window contours summation on time axis
####################################
def collapse(contours:List[Contour]) -> List[Contour]:
"""Recursive func to fuse multiple contour objects, if overlapping."""
if type(contours)==float and pd.isna(contours):
return []
conts = contours[:] # prevent mutation
for cnt1, cnt2 in itertools.combinations(conts, 2):
if cnt1.position_to(cnt2) in ("inside", "intersect"):
cnt_new = cnt1+cnt2
conts.remove(cnt1)
conts.remove(cnt2)
conts.append(cnt_new)
return collapse(conts) # recursion
return conts
def rolling_sum(all_contours:list, window:int=ROLLING)->pd.DataFrame:
"""Provide df with daily contours calculated, and it will df.rolling(w).sum()
The only reason we can't use pandas is that its .rolling method refuses sum(lists)."""
if window==1:
warnings.warn("window=1 just returns contours as-is.")
df = pd.DataFrame(dict(contours=all_contours))
for i in range(1, window):
df[f"shift{i}"] = df['contours'].shift(i)
df['rolling_append'] = df.filter(regex=r'contours|shift*', axis=1).dropna().sum(axis=1)
df['rolling_sum'] = df['rolling_append'].apply(collapse)
# drop tmp columns:
df = df[[c for c in df.columns if "shift" not in c]]
df = df.drop("rolling_append", axis=1)
assert len(ds['extreme_yn'])==len(df)
return df
####################################
# Serialize metadata ready to json
####################################
def serialize(df:pd.DataFrame) -> pd.DataFrame:
df1 = df.explode('contours')[['days','contours']].reset_index(drop=True)
df1['type'] = 'daily'
df1 = df1.rename({'contours':'contour'}, axis=1)
df2 = df.explode('rolling_sum')[['days','rolling_sum']].reset_index(drop=True)
df2['type'] = 'rolling_sum'
df2 = df2.rename({'rolling_sum':'contour'}, axis=1)
df3 = pd.concat([df1,df2], axis=0)\
.sort_values(by=['days','type'], ascending=True)\
.dropna()\
.reset_index(drop=True)
df3['name'] = [x.name for x in df3['contour']]
df3['center'] = [x.center for x in df3['contour']]
df3['area'] = [x.area for x in df3['contour']]
df3['projected'] = [x.projected for x in df3['contour']]
df3 = df3.drop('contour', axis=1)
return df3
####################################
# Generate figures for each day with contours
####################################
def validate(df:pd.DataFrame):
assert "contours" in df.columns
assert "rolling_sum" in df.columns
assert df.index.is_monotonic
def create_figures(df:pd.DataFrame, window:int, save=False, folder:str=None):
validate(df)
def add_patches(column:str, _idx:int, color:str, linewidths:int, alpha=1):
contours = df[column][df.index==_idx].values[0]
patches = [Polygon(c.projected.squeeze(), True) for c in contours]
args = dict(edgecolors=(color,), linewidths=(linewidths,), facecolor="none", alpha=alpha)
p = PatchCollection(patches, **args)
ax1.add_collection(p)
[ax1.scatter(x=c.center[0], y=c.center[1], c=color, s=3) for c in contours]
p = PatchCollection(patches, **args)
ax2.add_collection(p)
[ax2.scatter(x=c.center[0], y=c.center[1], c=color, s=3) for c in contours]
for i, idx in enumerate(df.index):
dr = pd.DatetimeIndex(ds['time'].dt.floor('D').values.astype('str'))
day = dr[idx].strftime("%Y-%m-%d")
tasmax = ds['tasmax'].sel(time=day)
tdiff = ds['above_threshold'].sel(time=day)
extreme = ds['extreme_yn'].sel(time=day)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(24,8))
im1 = extreme.squeeze().plot.imshow(ax=ax1, cmap='cividis')
        im2 = tdiff.squeeze().plot.imshow(ax=ax2, cmap='coolwarm', vmin=-4, vmax=4, alpha=0.8)
colors = 'r b c w m g y'.split()*100
for x in range(i+1):
add_patches('contours', idx-x, colors[i-x], 1.5)
if x==window:
add_patches('rolling_sum', idx, 'g', 4, alpha=0.8)
break
fig.tight_layout()
if save:
# save image locally
if not os.path.exists(folder):
os.mkdir(folder)
fig.savefig(f"{folder}/{day}.jpg")
fig.clear()
plt.close(fig)
####################################
# Compile a video from images
####################################
def create_video(files:List[str], fn_out:str)->None:
h,w,_ = cv2.imread(files[0]).shape
with tempfile.TemporaryFile() as f:
fp = f"{f.name}.avi"
fourcc = cv2.VideoWriter_fourcc(*'XVID')
video = cv2.VideoWriter(fp, fourcc, 10, (w,h))
for fn in files:
img = cv2.imread(fn)
video.write(img)
video.release()
os.rename(fp, 'out.avi')
time.sleep(2)
fn_in = 'out.avi'
cmd = f"ffmpeg -i '{fn_in}' -ac 2 -b:v 2000k -c:a aac -c:v libx264 -b:a 160k -vprofile high -bf 0 -strict experimental -f mp4 '{fn_out}'"
subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT)
os.remove('out.avi')
####################################
# Run the Pipeline
####################################
models = ["GISS_E2_1_G_ssp585", "GFDL_ESM4_ssp245", "GFDL_ESM4_ssp585", "GISS_E2_1_G_ssp245"]
years = list(range(2026,2030))
for model in models:
for year in years:
t1 = time.time()
# import dataset
################################
at = AzureSource(model, year)
at.download()
ds = xr.open_mfdataset(at.filename)
days_above = ds.attrs['Number of continuous days to be considered extreme']
kelv_above = ds.attrs['threshold']
upload_folder = f"NEWcontours_{days_above}days_{kelv_above}K/{model}"
# find contours
################################
dc, days = find_daily_contours(ds)
df_daily = rolling_sum(dc)
df_daily['days'] = days
# create metadata
################################
path_meta = f"{model}_{year}.json"
df_meta = serialize(df_daily)
df_meta.to_json(path_meta)
# create images
################################
img_folder = f"{model}_{year}"
create_figures(df_daily, window=ROLLING, save=True, folder=img_folder)
figs = sorted([str(p) for p in Path(img_folder).rglob("*.jpg")])
# create video
################################
path_video = f"{model}_{year}.mp4"
create_video(figs, path_video)
# export all to Azure
################################
AzureTarget(path_meta).upload(upload_folder)
[AzureTarget(fn).upload(upload_folder) for fn in figs]
AzureTarget(path_video).upload(upload_folder)
# delete local files
################################
os.remove(path_meta)
shutil.rmtree(img_folder)
os.remove(path_video)
os.remove(at.filename)
print(f"{model}\t{year}\t{round((time.time()-t1)/60,2)} min")
###Output
_____no_output_____ |
notebooks/rnaSeq/Omics_Pipe_GUI_RNAseq_counts_GUI_v2.ipynb | ###Markdown
Omics Pipe GUI -- RNAseq_Count_Based Pipeline. Author: K. Fisch | Email: [email protected] | Date: May 2016. Note: Before editing this notebook, please make a copy (File --> Make a copy). Table of Contents: 1. Introduction (Configuration; Parameters; User Input Required) 2. Omics Pipe RNAseq Count-based Pipeline 3. Omics Pipe Results (Raw Data Quality Control (FastQC); Alignment (STAR); Quantification (HTSeq); Differential Expression Analysis (DESeq2)) 4. Functional Enrichment Analysis 5. Network Analysis. Introduction: Omics pipe is an open-source, modular computational platform that automates ‘best practice’ multi-omics data analysis pipelines. This Jupyter notebook wraps the functionality of Omics Pipe into an easy-to-use interactive Jupyter notebook and parses the output for genomic interpretation. Read more about Omics Pipe at https://pythonhosted.org/omics_pipe/.
###Code
#Omics Pipe Overview
from IPython.display import Image
Image(filename='/data/core_analysis_pipelines/RNAseq/Omics_Pipe_RNAseq_count_based_pipeline/images/op_diagram.png', width=500, height=100)
###Output
_____no_output_____
###Markdown
Set up your Jupyter notebook to enable nbextensions and import Python modules needed
###Code
#Activate Jupyter Notebook Extensions
import notebook
E = notebook.nbextensions.EnableNBExtensionApp()
E.enable_nbextension('usability/codefolding/main')
E.enable_nbextension('usability/comment-uncomment/main')
E.enable_nbextension('usability/datestamper/main')
E.enable_nbextension('usability/dragdrop/main')
E.enable_nbextension('usability/hide_input/main')
#E.enable_nbextension('usability/read-only/main')
E.enable_nbextension('usability/runtools/main')
E.enable_nbextension('usability/search-replace/main')
E.enable_nbextension('usability/toc/main')
#disable extension
#D = notebook.nbextensions.DisableNBExtensionApp()
#D.disable_nbextension('usability/codefolding/main')
#Import Omics pipe and module dependencies
import yaml
from omics_pipe.parameters.default_parameters import default_parameters
from ruffus import *
import sys
import os
import time
import datetime
import drmaa
import csv
from omics_pipe.utils import *
from IPython.display import IFrame
import pandas
import glob
import os
import matplotlib.pyplot as plt
%matplotlib inline
#%matplotlib notebook
import qgrid
qgrid.nbinstall(overwrite=True)
qgrid.set_defaults(remote_js=True, precision=4)
from IPython.display import HTML
import mygene
now = datetime.datetime.now()
date = now.strftime("%Y-%m-%d %H:%M")
#Change top directory to locate result files
os.chdir("/data/core_analysis_pipelines/RNAseq/Omics_Pipe_RNAseq_count_based_pipeline")
###Output
_____no_output_____
###Markdown
Customize input parameters for Omics Pipe. Required: sample names and the condition for each sample. Optional: genome build, gene annotation, output paths, tool parameters, etc. See the full Omics Pipe documentation for a description of the configurable parameters.
###Code
#Omics Pipe documentation: Parameters
IFrame("https://pythonhosted.org/omics_pipe/parameter_file.html", width=700, height=250)
###Output
_____no_output_____
###Markdown
***User Input Required Here ***
###Code
###Customize parameters: Specify sample names and conditions
sample_names = ["468-3_CTRL-2","468-3_LPS-2","468-4_CTRL","468-4_LPS","468-6_CTRL","468-6_LPS","685-1_CTRL","685-1_LPS","685-2_CTRL-3","685-2_LPS-2","685-5_CTRL-3","685-5_LPS-1","685-7_CTRL-3","685-7_LPS-2","697-1_CTRL-3","697-1_LPS-1","697-2_CTRL","697-2_LPS","697-3_CTRL-3","697-3_LPS-2","697-4_CTRL-2","697-4_LPS-2","697-5_CTRL","697-5_LPS"]
condition = ["Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS","Control","LPS"]
lib_type = ["single_end"]*len(condition)
pair = ["468-3","468-3","468-4","468-4","468-6","468-6","685-1","685-1","685-2","685-2","685-5","685-5","685-7","685-7","697-1","697-1","697-2","697-2","697-3","697-3","697-4","697-4","697-5","697-5"]
genotype = ["het","het","wt","wt","mut","mut","mut","mut","wt","wt","het","het","het","het","wt","wt","mut","mut","wt","wt","het","het","mut","mut"]
#Update Metadata File
meta = {'Sample': pandas.Series(sample_names), 'condition': pandas.Series(condition) , 'libType': pandas.Series(lib_type),
'pair': pandas.Series(pair), 'genotype': pandas.Series(genotype)}
meta_df = pandas.DataFrame(data = meta)
deseq_meta_new = "/data/mccoy/new_meta.csv"
meta_df.to_csv(deseq_meta_new,index=False)
print meta_df
###Update parameters, such as GENOME, GTF_FILE, paths, etc
parameters = "/root/src/omics-pipe/tests/test_params_RNAseq_counts_AWS.yaml"
stream = file(parameters, 'r')
params = yaml.load(stream)
params.update({"SAMPLE_LIST": sample_names})
params.update({"DESEQ_META": deseq_meta_new})
params.update({"R_VERSION": '3.2.3'})
params.update({"GENOME": '/database/Mus_musculus/Mus_musculus/UCSC/mm10/Sequence/WholeGenomeFasta/genome.fa'})
params.update({"STAR_INDEX": '/database/Mus_musculus/Mus_musculus/STAR_index'})
params.update({"REF_GENES": '/database/Mus_musculus/Mus_musculus/UCSC/mm10/Annotation/Genes/genes.gtf'})
params.update({"RAW_DATA_DIR": '/data/mccoy/fastq'})
params.update({"TEMP_DIR": '/data/tmp'})
params.update({"PIPE_MULTIPROCESS": 100})
params.update({"STAR_VERSION": '2.4.5a'})
params.update({"PARAMS_FILE": '/data/mccoy/Omics_Pipe_RNAseq_params.yaml'})
params.update({"LOG_PATH": ':/data/mccoy/logs'})
params.update({"QC_PATH": "/data/mccoy/QC"})
params.update({"FLAG_PATH": "/data/mccoy/flags"})
params.update({"DESEQ_RESULTS": "/data/mccoy/deseq"})
params.update({"STAR_OPTIONS": '--readFilesCommand cat --runThreadN 8 --outSAMstrandField intronMotif --outFilterIntronMotifs RemoveNoncanonical'})
params.update({"REPORT_RESULTS": "/data/mccoy/report"})
params.update({"STAR_RESULTS": "/data/mccoy/star"})
params.update({"HTSEQ_RESULTS": "/data/mccoy/counts"})
params.update({"DESIGN": '~condition'})
#update params
default_parameters.update(params)
#write yaml file
stream = file('updated_params.yaml', 'w')
yaml.dump(params,stream)
p = Bunch(default_parameters)
#View Parameters
print "Run Parameters: \n" + str(params)
###Output
_____no_output_____
###Markdown
Omics Pipe RNAseq Count-based Pipeline. The following commands execute the Omics Pipe RNAseq Count-based Pipeline, which is based on the Nature Protocols paper Anders et al. 2013.
###Code
### Omics Pipe Pipelines
from IPython.display import Image
Image(filename='/data/core_analysis_pipelines/RNAseq/Omics_Pipe_RNAseq_count_based_pipeline/images/op_pipelines.png', width=700, height=150)
###Run Omics Pipe from the command line
!omics_pipe RNAseq_count_based /data/mccoy/updated_params.yaml
###Output
_____no_output_____
###Markdown
Omics Pipe ResultsOmics Pipe produces output files for each of the steps in the pipeline, as well as log files and run information (for reproducibility). Summarized output for each of the steps is displayed below for biological interpretation.
###Code
#Change top directory to locate result files
os.chdir("/data/mccoy")
#Display Omics Pipe Pipeline Run Status
#pipeline = './flags/pipeline_combined_%s.pdf' % date
pipeline = './flags/pipeline_combined_2016-05-16 17:41.pdf'
IFrame(pipeline, width=700, height=500)
###Output
_____no_output_____
###Markdown
Quality Control of Raw Data -- FastQC. Quality control of the raw data (fastq files) was assessed using the tool FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). The results for all samples are summarized below, and samples are given a PASS/FAIL rating.
###Code
###Summarize FastQC raw data QC results per sample
results_dir = './QC/'
# Below is the complete list of labels in the summary file
summary_labels = ["Basic Statistics", "Per base sequence quality", "Per tile sequence quality",
"Per sequence quality scores", "Per base sequence content", "Per sequence GC content",
"Per base N content", "Sequence Length Distribution", "Sequence Duplication Levels",
"Overrepresented sequences", "Adapter Content", "Kmer Content"]
# Below is the list I anticipate caring about; I leave the full list above in case it turns out later
# I anticipated wrong and need to update this one.
labels_of_interest = ["Basic Statistics", "Per base sequence quality"]
# Look for each file named summary.txt in each subdirectory named *_fastqc in the results directory
summary_wildpath = os.path.join(results_dir, '*/*_fastqc', "summary.txt")
summary_filepaths = [x for x in glob.glob(summary_wildpath)]
#print os.getcwd()
# Examine each of these files to find lines starting with "FAIL" or "WARN"
for curr_summary_path in summary_filepaths:
has_error = False
#print(divider)
with open(curr_summary_path, 'r') as f:
for line in f:
if line.startswith("FAIL") or line.startswith("WARN"):
fields = line.split("\t")
if not has_error:
                    print(fields[2].strip())  # file name header for the WARN/FAIL items below
has_error = True
if fields[1] in labels_of_interest:
print(fields[0] + "\t" + fields[1])
#Display QC results for individual samples
sample = "468-3_CTRL-2"
name = './QC/%s/%s_fastqc/fastqc_report.html' % (sample,sample)
IFrame(name, width=1000, height=600)
###Output
_____no_output_____
###Markdown
Alignment Summary Statistics -- STAR. The samples were aligned to the genome with the STAR aligner (https://github.com/alexdobin/STAR). The alignment statistics for all samples are summarized and displayed below. Samples that do not pass the alignment quality filter (good quality = aligned reads > 10 million and % aligned > 60%) are excluded from downstream analyses.
###Code
##Summarize Alignment QC Statistics
import sys
from io import StringIO
star_dir = './star/'
# Look for each file named summary.txt in each subdirectory named *_fastqc in the results directory
summary_wildpath = os.path.join(star_dir, '*/', "Log.final.out")
#summary_wildpath = os.path.join(star_dir, "*Log.final.out")
summary_filepaths = [x for x in glob.glob(summary_wildpath)]
#print summary_filepaths
alignment_stats = pandas.DataFrame()
for curr_summary_path in summary_filepaths:
#with open(curr_summary_path, 'r') as f:
filename = curr_summary_path.replace("./star/","")
filename2 = filename.replace("/Log.final.out","")
df = pandas.read_csv(curr_summary_path, sep="\t", header=None)
raw_reads = df.iloc[[4]]
y = raw_reads[1].to_frame()
aligned_reads = df.iloc[[7]]
z = aligned_reads[1].to_frame()
percent_aligned = df.iloc[[8]]
#print percent_aligned
    a = percent_aligned[1]
    e = float(str(a.values[0]).strip().rstrip('%'))  # e.g. "92.50%" -> 92.5
d = {"Sample": pandas.Series(filename2), "Raw_Reads": pandas.Series(float(y[1])),
"Aligned_Reads": pandas.Series(float(z[1])),
"Percent_Uniquely_Aligned": pandas.Series(e)}
p = pandas.DataFrame(data=d)
alignment_stats = alignment_stats.append(p)
#print alignment_stats
#View interactive table
qgrid.show_grid(alignment_stats, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})
#Barplot of number of aligned reads per sample
plt.figure(figsize=(10,10))
ax = plt.subplot(111)
alignment_stats.plot(ax=ax, kind='barh', title='# of Reads')
ax.axis(x='off')
ax.axvline(x=10000000, linewidth=2, color='Red', zorder=0)
#plt.xlabel('# Aligned Reads',fontsize=16)
for i, x in enumerate(alignment_stats.Sample):
ax.text(0, i + 0, x, ha='right', va= "bottom", fontsize='medium')
plt.savefig('./alignment_stats_%s' %date ,dpi=300) # save figure
###Flag samples with poor alignment or low numbers of reads
df = alignment_stats
failed_samples = df.loc[(df.Aligned_Reads < 10000000) | (df.Percent_Uniquely_Aligned < 60), ['Sample','Raw_Reads', 'Aligned_Reads', 'Percent_Uniquely_Aligned']]
#View interactive table
qgrid.show_grid(failed_samples, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})
#View Alignment Statistics for failed samples
for failed in failed_samples["Sample"]:
#fname = "/data/results/star/%s/Log.final.out" % failed
fname = "./star/%s/Log.final.out" % failed
with open(fname, 'r') as fin:
print failed + fin.read()
###Samples that passed QC for alignment
passed_samples = df.loc[(df.Aligned_Reads > 10000000) & (df.Percent_Uniquely_Aligned > 60), ['Sample','Raw_Reads', 'Aligned_Reads', 'Percent_Uniquely_Aligned']]
print "Number of samples that passed alignment QC = " + str(len(passed_samples))
#View interactive table
qgrid.show_grid(passed_samples, grid_options={'forceFitColumns': False, 'defaultColumnWidth': 200})
#View Alignment Statistics for passed samples
for passed in passed_samples["Sample"]:
#fname = "/data/results/star/%s/Log.final.out" % passed
fname = "./star/%s/Log.final.out" % passed
with open(fname, 'r') as fin:
print passed + fin.read()
#Create new metadata file with samples that passed QC for differential expression analyses
passed_list = passed_samples["Sample"]
meta_df_passed = meta_df.loc[meta_df.Sample.isin(passed_list)]
deseq_meta_new2 = "/data/mccoy/new_meta_QCpassed.csv"
meta_df_passed.to_csv(deseq_meta_new2,index=False)
print meta_df_passed
print passed_list
###Output
_____no_output_____
###Markdown
Counts Summary Statistics -- HTSeq. The aligned reads were quantified against the RefSeq mm10 annotation with HTSeq-count (http://www-huber.embl.de/users/anders/HTSeq/doc/count.html). The counts for all samples are summarized and displayed below. Differential Expression Analysis in R: switch to the R kernel at the top of the screen (Kernel --> Change Kernel --> R).
###Code
##Set working directory
working_dir <- "/data/mccoy"
setwd(working_dir)
date <- Sys.Date()
#Set R options
options(jupyter.plot_mimetypes = 'image/png')
options(useHTTPS=FALSE)
options(scipen=500)
###Output
_____no_output_____
###Markdown
Differential Expression Analysis -- Bioconductor DESeq2. Differential expression analysis was performed with DESeq2 in Bioconductor (https://bioconductor.org/packages/release/bioc/html/DESeq2.html). The differentially expressed genes and raw counts for all samples are summarized and displayed below.
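For reference, DESeq2 models the count $K_{ij}$ of gene $i$ in sample $j$ as negative binomial, $K_{ij} \sim \mathrm{NB}(\mu_{ij}, \alpha_i)$ with $\mu_{ij} = s_j q_{ij}$ and $\log_2 q_{ij} = x_{j\cdot} \beta_i$, where $s_j$ is the sample-specific size factor, $\alpha_i$ the gene-wise dispersion, and $\beta_i$ the log2 fold changes tested for the terms of the design (here `~condition`).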
###Code
#Load custom R scripts
source("/data/ccbb_tickets/20160504_McCoy_Prince_RNAseq_pathways/src/rnaSeq/RNA_seq_DE.R")
#Load R packages; Execute this twice to clear the log
require(limma)
require(edgeR)
require(DESeq2)
require(RColorBrewer)
require(cluster)
library(gplots)
library(SPIA)
library(graphite)
library(PoiClaClu)
library(ggplot2)
library(pathview)
library(KEGG.db)
library(mygene)
library(splitstackshape)
library(reshape)
library(hwriter)
library(ReportingTools)
library("EnrichmentBrowser")
library(IRdisplay)
library(repr)
###Output
_____no_output_____
###Markdown
Read in gene count files for each sample
###Code
#Compile individual count files
#======================================================================================================================================
#Specify working directory. Should be the name of your project, and there should be a subfolder within this
#directory named "counts" which contains the raw count files in .txt format for all samples (output from htseq in Omics Pipe)
##Set working directory
setwd(working_dir)
name<- "McCoy_Prince_RNAseq_20160517"
#Reads in count files
dir <- paste(getwd(), "/counts", sep="")
countFiles <- paste(dir, "/", dir(dir), sep='')
countNames1 <- gsub('_counts.txt', '', countFiles)
countNames <- gsub(sprintf("%s/", dir), '', countNames1)
countsDf <- NULL
for (i in countFiles) {
dat <- read.csv(i, header=F, sep="\t", na.strings="", as.is=T)
countsDf <- cbind(countsDf, dat[,2])
}
x1 <- dim(countsDf)[1]-4
x2 <- dim(countsDf)[1]
countsDf <- countsDf[-c(x1:x2),] # remove the last 5 lines, they hold no genes
rownames(countsDf) <- read.csv(i, header=F, sep="\t", na.strings="", as.is=T)$V1[-c(x1:x2)]
colnames(countsDf) <- countNames
write.csv(countsDf, sprintf("%s/%s_ALL_counts.csv", working_dir, name)) #Creates file with all counts in one file
df <- countsDf
geneCount <- df
rc <- rowSums(geneCount)
geneCount <- geneCount[rc > 0,]
N <- colSums(geneCount)
names <- names(N)
print("Top of Raw Counts File:")
head(geneCount)
###Output
_____no_output_____
###Markdown
Visualize library size distribution for all samples from number of counts
###Code
##Visualize library size distribution (# aligned reads)
par(oma=c(5,1,1,1) + 0.1)
barplot(N*1e-6,
ylab="Library size (millions)",
main=c("Library size distribution"),
names=names(N),
las=2,
cex.names=0.75
)
###Output
_____no_output_____
###Markdown
Preprocess count data and read in metadata (design file)
###Code
# Preprocess data & read in metadata
#=====================================================================================================================================
#Read in design file. Example in s3://ucsd-ccb-data-analysis/Katie/RNAseq_scripts
#meta <- read.csv(sprintf("%s_design.csv",name), header=T, stringsAsFactor=FALSE)
#Read in design file with good quality samples only
meta <- read.csv(sprintf("%s/new_meta_QCpassed.csv",working_dir), header=T, stringsAsFactors=FALSE)
geneCount <- geneCount[, meta$Sample]  # keep only the QC-passed samples, in the same order as meta
dds <- DESeqDataSetFromMatrix(countData = geneCount,
                              colData = meta,
                              design = ~condition)
#Run differential expression analysis
dds <- DESeq(dds)
###Output
_____no_output_____
###Markdown
MDS (PCA) plot
###Code
#Create MDS plot for all samples
rld <- rlog(dds)
poisd <- PoissonDistance(t(counts(dds)))
samplePoisDistMatrix <- as.matrix( poisd$dd )
rownames(samplePoisDistMatrix) <- paste( dds$dex, dds$cell, sep="-" )
mds <- data.frame(cmdscale(samplePoisDistMatrix))
mds <- cbind(mds, colData(rld))
mds <- as.data.frame(mds)
qplot(X1,X2,color=condition,data=mds,size=5, shape=genotype)
##Run all plotting code and save to PDF
pdf(sprintf("%s/%s_all_samples_plots_%s.pdf", working_dir, name,date))
par(oma=c(5,1,1,1) + 0.1)
barplot(N*1e-6,
ylab="Library size (millions)",
main=c("Library size distribution"),
names=names(N),
las=2,
cex.names=0.75
)
poisd <- PoissonDistance(t(counts(dds)))
samplePoisDistMatrix <- as.matrix( poisd$dd )
rownames(samplePoisDistMatrix) <- paste( dds$dex, dds$cell, sep="-" )
mds <- data.frame(cmdscale(samplePoisDistMatrix))
mds <- cbind(mds, colData(rld))
mds <- as.data.frame(mds)
qplot(X1,X2,color=condition,data=mds,size=5, shape=genotype)
dev.off()
###Output
_____no_output_____
###Markdown
Specify samples for desired comparisons for differential expression analysis Wt Only LPS vs Control
###Code
#Read in design file with good quality samples only
meta <- read.csv(sprintf("%s/new_meta_QCpassed.csv",working_dir), header=T, stringsAsFactors=FALSE)
#Create new meta files with subsets of samples for desired comparisons
#Failed samples 468-4-LPS, 468-6_CTRL, 697-4_LPS-2
#wt to wt LPS vs Control
desired_samples <- c("468-4_CTRL","685-2_CTRL-3","685-2_LPS-2",
"697-1_CTRL-3","697-1_LPS-1","697-3_CTRL-3","697-3_LPS-2") #removed failed sample 468-4 LPS
desired_design <- "~condition + pair"
name2 <- "WTonly_LPSvsControl"
desired_samples
desired_design
name2
#Reload count files for desired meta file
df <- countsDf
geneCount <- df
rc <- rowSums(geneCount)
geneCount <- geneCount[rc > 0,]
N <- colSums(geneCount)
names <- names(N)
#Update meta data file
#meta <- meta[match(colnames(geneCount),desired_samples),]
meta <- meta[meta$Sample %in% desired_samples,]
meta <- meta[complete.cases(meta),]
rownames(meta) <- meta$Sample
print("Meta Data File:")
meta
#Subset geneCounts for desired samples
geneCount <- geneCount[,meta$Sample]
df<-df[,meta$Sample]
check <- cbind(meta$Sample, colnames(geneCount))
group <- meta$condition
print("Top of Counts File:")
head(geneCount)
###Output
_____no_output_____
###Markdown
Normalization & Differential Expression Analysis
###Code
#Differential expression
#=====================================================================================================================================
#Normalization of raw counts using deseq
trsLength <- NA
if(is.element("length", colnames(df)))
trsLength <- df[rc > 0,"length"]
norm <- getNormData(df, group, trsLength, addRaw=TRUE)
deseq <- log2(norm$DESeq+1)
sf_deseq <- getSizeFactor(df, group)$DESeq # save the size factors of the ref cohort
write.csv(deseq, sprintf("%s/DEseq_normalized_counts_%s.csv",working_dir,name))
# nonspecic Filtering
sds <- apply(deseq,1,sd)
use <- (sds > quantile(sds, 0.75))
deseqNsf <- deseq[use,]
#Create DEseq dataset from matrix
dds <- DESeqDataSetFromMatrix(countData = geneCount,
colData = meta,
design = formula(desired_design))
#Run differential expression analysis
dds <- DESeq(dds)
ddsClean <- replaceOutliersWithTrimmedMean(dds)
ddsClean <- DESeq(ddsClean)
res <- results(ddsClean)
#res <- results(ddsClean, contrast=c("condition", "LPS", "Control")) #Specify conditions for DE comparison here if more than two conditions
res <- res[order(res$padj),]
write.csv(res, sprintf("%s/DE_genes_%s_%s_%s.csv", working_dir, name, name2, date)) #Writes results of differential expression analysis to this file in your working dir
###Output
_____no_output_____
###Markdown
Summarize Differentially Expressed Genes
###Code
#Differentially expressed genes
DE <- subset(res, padj < 0.001) #specify level of DE
DE2 <- subset(DE, abs(log2FoldChange) > 1) #specify level of DE
gene_list <- row.names(DE2)
write.csv(gene_list, sprintf("%s/DE_gene_ID_list_%s_%s_%s.csv", working_dir, name, name2, date))
print("Number of Differentially Expressed Genes padj < 0.001:")
nrow(DE)
print("Number of Differentially Expressed Genes padj < 0.001 and log2FoldChange > 1:")
nrow(DE2)
print("Top of Differentially Expressed Genes List:")
head(as.data.frame(DE2))
print("List of Differentially Expressed Genes to Cut and Copy for Enrichment Analyses")
cat(gene_list)
###Output
_____no_output_____
###Markdown
Plots for Differential Expression
###Code
##Set working directory
setwd(working_dir)
#Create distance matrix heatmap and clustering
#pdf(sprintf("%s_%s_plots_%s.pdf", name, name2,date)) #Uncomment this to save all plots to a pdf file
#png(sprintf("%s_%s_plots_%s.png", name, name2,date),res=1200, width=4,height=4, units='in') #Uncomment this to save all plots to a pdf file
rld <- rlog(dds)
distsRL <- dist(t(assay(rld)))
mat <- as.matrix(distsRL)
rownames(mat) <- colnames(mat) <- with(colData(dds), paste(Sample, genotype, sep=":"))
hmcol <- colorRampPalette(brewer.pal(9, "GnBu"))(100)
heatmap.2(mat, trace="none", col = rev(hmcol), margin=c(13, 13))
#Create MDS plot for samples in desired comparison
poisd <- PoissonDistance(t(counts(dds)))
samplePoisDistMatrix <- as.matrix( poisd$dd )
rownames(samplePoisDistMatrix) <- paste( dds$dex, dds$cell, sep="-" )
mds <- data.frame(cmdscale(samplePoisDistMatrix))
mds <- cbind(mds, colData(rld))
mds <- as.data.frame(mds)
qplot(X1,X2,color=condition,data=mds,size=5)
#Create Heatmap of differentially expressed genes
#DE <- subset(res, padj < 0.05) #specify level of DE
#DE <- subset(top$table, FDR < 0.05) #specify level of DE
DE <- subset(res, padj < 0.001) #specify level of DE
DE2 <- subset(DE, abs(log2FoldChange) > 1) #specify level of DE
#DE <- subset(res, pvalue < 0.01)
useHeat <- row.names(DE2)
deseqHeat <- deseq[useHeat,]
colnames(deseqHeat) <- with(colData(dds), paste(Sample, genotype, sep=":"))
#deseqHeat <-deseqHeat[,]
par(oma=c(5,1,1,1) + 0.1)
heatmap.2(deseqHeat,
Rowv=TRUE,
#Colv=hc,
col=rev(redgreen(75)),
scale="row",
#ColSideColors=unlist(sapply(group, mycol)),
trace="none",
key=TRUE,
cexRow=0.35,
cexCol=1,
dendrogram="both"
#labRow=TRUE
)
#Create Heatmap of top 100 differentially expressed genes
#DE <- subset(res, padj < 0.05) #specify level of DE
#DE <- subset(top$table, FDR < 0.05) #specify level of DE
DE <- subset(res, padj < 0.001) #specify level of DE
DE2 <- subset(DE, abs(log2FoldChange) > 1) #specify level of DE
DE_100 <- DE2[1:100,]
#DE <- subset(res, pvalue < 0.01)
useHeat <- row.names(DE_100)
deseqHeat <- deseq[useHeat,]
colnames(deseqHeat) <- with(colData(dds), paste(Sample, genotype, sep=":"))
#deseqHeat <-deseqHeat[,]
par(oma=c(5,1,1,1) + 0.1)
heatmap.2(deseqHeat,
Rowv=TRUE,
#Colv=hc,
col=rev(redgreen(75)),
scale="row",
#ColSideColors=unlist(sapply(group, mycol)),
trace="none",
key=TRUE,
cexRow=0.35,
cexCol=1,
dendrogram="both"
#labRow=TRUE
)
##Run all plotting code and save to PDF
##Set working directory
setwd(working_dir)
#Create distance matrix heatmap and clustering
pdf(sprintf("%s/%s_%s_plots_%s.pdf", working_dir, name, name2,date)) #Uncomment this to save all plots to a pdf file
#png(sprintf("%s_%s_plots_%s.png", name, name2,date),res=1200, width=4,height=4, units='in') #Uncomment this to save all plots to a pdf file
#Create distance matrix
rld <- rlog(dds)
distsRL <- dist(t(assay(rld)))
mat <- as.matrix(distsRL)
rownames(mat) <- colnames(mat) <- with(colData(dds), paste(Sample, genotype, sep=":"))
hmcol <- colorRampPalette(brewer.pal(9, "GnBu"))(100)
heatmap.2(mat, trace="none", col = rev(hmcol), margin=c(13, 13))
#Create MDS plot for samples in desired comparison
poisd <- PoissonDistance(t(counts(dds)))
samplePoisDistMatrix <- as.matrix( poisd$dd )
rownames(samplePoisDistMatrix) <- paste( dds$dex, dds$cell, sep="-" )
mds <- data.frame(cmdscale(samplePoisDistMatrix))
mds <- cbind(mds, colData(rld))
mds <- as.data.frame(mds)
qplot(X1,X2,color=condition,data=mds,size=5)
#Create Heatmap of differentially expressed genes
#DE <- subset(res, padj < 0.05) #specify level of DE
#DE <- subset(top$table, FDR < 0.05) #specify level of DE
DE <- subset(res, padj < 0.001) #specify level of DE
DE2 <- subset(DE, abs(log2FoldChange) > 1) #specify level of DE
#DE <- subset(res, pvalue < 0.01)
useHeat <- row.names(DE2)
deseqHeat <- deseq[useHeat,]
colnames(deseqHeat) <- with(colData(dds), paste(Sample, genotype, sep=":"))
#deseqHeat <-deseqHeat[,]
par(oma=c(5,1,1,1) + 0.1)
heatmap.2(deseqHeat,
Rowv=TRUE,
#Colv=hc,
col=rev(redgreen(75)),
scale="row",
#ColSideColors=unlist(sapply(group, mycol)),
trace="none",
key=TRUE,
cexRow=0.35,
cexCol=1,
dendrogram="both"
#labRow=TRUE
)
#Create Heatmap of top 100 differentially expressed genes
#DE <- subset(res, padj < 0.05) #specify level of DE
#DE <- subset(top$table, FDR < 0.05) #specify level of DE
DE <- subset(res, padj < 0.001) #specify level of DE
DE2 <- subset(DE, abs(log2FoldChange) > 1) #specify level of DE
DE_100 <- DE2[1:100,]
#DE <- subset(res, pvalue < 0.01)
useHeat <- row.names(DE_100)
deseqHeat <- deseq[useHeat,]
colnames(deseqHeat) <- with(colData(dds), paste(Sample, genotype, sep=":"))
#deseqHeat <-deseqHeat[,]
par(oma=c(5,1,1,1) + 0.1)
heatmap.2(deseqHeat,
Rowv=TRUE,
#Colv=hc,
col=rev(redgreen(75)),
scale="row",
#ColSideColors=unlist(sapply(group, mycol)),
trace="none",
key=TRUE,
cexRow=0.35,
cexCol=1,
dendrogram="both"
#labRow=TRUE
)
dev.off()
###Output
_____no_output_____
###Markdown
Run Functional Enrichment Analyses Prepare DE results from above as input to functional enrichment analyses
###Code
##Set working directory
setwd(working_dir)
###Annotated differential expression results for all genes
all_results <- as.data.frame(res)
id_list_all <- row.names(res)
out_all<-queryMany(id_list_all, scopes="symbol", fields="entrezgene", species="mouse")
##Merge annotations with original DE results
merged_all <- merge(all_results, out_all, by.x="row.names", by.y="query", all.x=TRUE)
merged_all_sub <- subset(merged_all, !is.na(merged_all$entrezgene))
head(merged_all_sub)
nrow(merged_all_sub)
#Prepare Differentially expressed genes and All genes for SPIA input
DE <- subset(res, padj < 0.001) #specify level of DE
DE2 <- subset(DE, abs(log2FoldChange) > 1) #specify level of DE
id_list <- row.names(DE2)
out<-queryMany(id_list, scopes="symbol", fields="entrezgene", species="mouse")
merged <- merge(data.frame(DE2), out, by.x="row.names", by.y="query", all.x=TRUE)
merged_sub <- subset(merged, !is.na(merged$entrezgene))
DE_genes1 <- as.vector(merged_sub$log2FoldChange)
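# Cap infinite log2 fold changes (which arise when one condition has zero counts) at +/-5 so SPIA receives finite values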
DE_genes2 <- gsub("Inf", 5, DE_genes1)
DE_genes2 <- as.numeric(DE_genes2)
DE_genes <- gsub("-Inf", -5, DE_genes2)
DE_genes <-as.numeric(DE_genes)
names(DE_genes) <- merged_sub$entrezgene
head(DE_genes)
ALL_genes <- merged_all_sub$entrezgene
head(ALL_genes)
###Output
_____no_output_____
###Markdown
Run Signaling Pathway Impact Analysis (SPIA)
###Code
##Set working directory
setwd(working_dir)
##Run SPIA
res = spia(de=DE_genes, all=ALL_genes, organism="mmu", nB=2000, plots=FALSE, beta=NULL, combine="fisher" ) #MAYBE NEED TO ADD DATADIR
write.csv(res, file = sprintf("%s/spia_output__%s_%s_fisher.csv", working_dir, name, date))
#View top of the results table
head(res)
###Output
_____no_output_____
###Markdown
Run EnrichmentBrowser Tool
###Code
##Set working directory
setwd(working_dir)
##Download, run, and prepare mmu databases
#setwd(working_dir)
kegg.gs.mmu <- get.kegg.genesets("mmu")
go.gs.mmu <- get.go.genesets(org="mmu", onto="BP", mode="GO.db")
pwys.mmu <- download.kegg.pathways("mmu")
mmu.grn <- compile.grn.from.kegg(pwys.mmu)
###Output
_____no_output_____
###Markdown
Prepare differential expression result data as Bioconductor ExpressionSet
###Code
##Set working directory
setwd(working_dir)
gene_ids_from_merged_all <- merged_all_sub$Row.names
merged_all_sub_unique <- merged_all_sub[!duplicated(merged_all_sub$Row.names),]
unique_genes <- gene_ids_from_merged_all[!duplicated(gene_ids_from_merged_all)]
#length(unique_genes)
exprs1 <- subset(geneCount, rownames(geneCount) %in% unique_genes)
exprs <- as.matrix(exprs1)
row.names(exprs) <- NULL
colnames(exprs) <- NULL
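# Strip dimnames from the expression matrix; sample (pdat) and feature (fdat) annotations are written out separately below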
#nrow(exprs)
write.table(exprs, sprintf("/data/mccoy/DE_exprs_%s_%s_%s.tab", name, name2, date), sep="\t",row.names = F,col.names = F)
pdat1 <- data.frame("names" =colnames(geneCount))
#pdat1
meta_merge <- merge(pdat1, meta, by.x = "names", by.y="Sample")
#meta_merge
meta_merge$condition_binary <- ifelse(meta_merge$condition == "LPS", 1, 0)
pdat2 <- data.frame(meta_merge$names, meta_merge$condition_binary, meta_merge$pair)
pdat <- as.matrix(pdat2)
row.names(pdat) <- NULL
colnames(pdat) <- NULL
write.table(pdat, sprintf("/data/mccoy/DE_pdat_%s_%s_%s.tab", name, name2, date), sep="\t",row.names = F,col.names = F)
fdat1 <- data.frame("names"= row.names(exprs1))
fdat2 <- merge(fdat1, merged_all_sub_unique, by.x="names", by.y="Row.names")
fdat <- data.frame(fdat2$entrezgene)
#nrow(fdat)
#head(fdat)
write.table(fdat, sprintf("/data/mccoy/DE_fdat_%s_%s_%s.tab", name, name2, date), sep="\t",row.names = F,col.names = F)
#Create fdat from DE results instead of built in DE function from EnrichmentBrowser
fdat_DE <- data.frame("ENTREZID" = merged_all_sub_unique$entrezgene, "FC" = merged_all_sub_unique$log2FoldChange,
"ADJ.PVAL" = merged_all_sub_unique$padj, "DESeq.STAT" = merged_all_sub_unique$stat)
row.names(fdat_DE) <- fdat_DE$ENTREZID
head(fdat_DE)
write.table(fdat_DE, sprintf("/data/mccoy/DE_fdat_DEresults_%s_%s_%s.tab", name, name2, date), sep="\t",row.names = F,col.names = F)
#Create Expression Set from real data, does not include DE expression results
eset_raw <- read.eset(exprs.file=sprintf("/data/mccoy/DE_exprs_%s_%s_%s.tab", name, name2, date), pdat.file=sprintf("/data/mccoy/DE_pdat_%s_%s_%s.tab", name, name2, date),
fdat.file=sprintf("/data/mccoy/DE_fdat_%s_%s_%s.tab", name, name2, date), data.type='rseq')
#Create ExpressionSet from real data, include DE expression results as fdata
eset_DE <- read.eset(exprs.file=sprintf("/data/mccoy/DE_exprs_%s_%s_%s.tab", name, name2, date), pdat.file=sprintf("/data/mccoy/DE_pdat_%s_%s_%s.tab", name, name2, date),
fdat.file=sprintf("/data/mccoy/DE_fdat_DEresults_%s_%s_%s.tab", name, name2, date), data.type='rseq')
#Fix column names for eset_DE and check
colnames(fData(eset_DE)) <- c("ENTREZID", "FC", "ADJ.PVAL", "DESeq.STAT")
#Recode pvalues to capture desired significance level
fData(eset_DE)$ADJ.PVAL <- as.numeric(fData(eset_DE)$ADJ.PVAL)
fData(eset_DE)$FC <- as.numeric(fData(eset_DE)$FC)
fData(eset_DE)$ADJ.PVAL[is.na(fData(eset_DE)$ADJ.PVAL)] <- 1
fData(eset_DE)$FC[is.na(fData(eset_DE)$FC)] <- 0
class(fData(eset_DE)$ADJ.PVAL)
class(fData(eset_DE)$FC)
head(fData(eset_DE))
#View plots of ExpressionSets
par(mfrow=c(1,2))
pdistr(fData(eset_DE)$ADJ.PVAL)
volcano(fData(eset_DE)$FC, fData(eset_DE)$ADJ.PVAL)
###Output
_____no_output_____
###Markdown
Run EnrichmentBrowser for KEGG and GO Gene Sets
###Code
###Run SBEA kegg for original pvalues
sbea.res.kegg <- sbea(method="ora", eset=eset_DE, gs=kegg.gs.mmu, perm=0, alpha=0.001, beta = 1, padj.method="BH",
out.file=sprintf("%s/SBEA_KEGG_results_%s_%s_%s.txt", working_dir, name, name2, date))
#, out.file="/data/mccoy/SBEA_KEGG_RESULTS_test.txt",beta = 1, sig.stat='&')
#This works and gives good concordance with Webgestalt and ToppGene
#Basic Overrepresentation Analysis
sbea.res.kegg <- sbea(method="ora", eset=eset_DE, gs=kegg.gs.mmu, perm=0, alpha=0.001, beta = 1, padj.method="BH")
gs.ranking(sbea.res.kegg,signif.only=TRUE)
###Enrichment Browser functions not being recognized for some reason. Defining them here works.
determine.edge.color <- function(edge.cons){
ifelse(edge.cons < 0, rgb(0,0,abs(edge.cons)), rgb(abs(edge.cons),0,0))
}
is.consistent <-function (grn.rel) {
act.cons <- mean(abs(grn.rel[1:2]))
if (length(grn.rel) == 2)
return(act.cons)
if (sum(sign(grn.rel[1:2])) == 0)
act.cons <- -act.cons
return(ifelse(grn.rel[3] == 1, act.cons, -act.cons))
}
#Create EnrichmentBrowser Html Report with Pathway Viz
setwd("/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports")
ea.browse(sbea.res.kegg)
#Compress and Move html results from default EnrichmentBrowser directory to desired directory
dir.create(sprintf("%s/EnrichmentBrowser", working_dir), showWarnings=FALSE)
zip(zipfile = sprintf("/data/mccoy/EnrichmentBrowser/SBEA_KEGG_results_%s_%s_%s.zip", name, name2, date), "/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports")
###Run SBEA GO for original pvalues
###sbea.res.go <- sbea(method="ora", eset=eset_DE, gs=go.gs.mmu, perm=0, alpha=0.001, beta = 1, padj.method="BH",
# #out.file=sprintf("%s/SBEA_GO_results_%s_%s_%s.txt", working_dir, name, name2, date))
#, out.file="/data/mccoy/SBEA_KEGG_RESULTS_test.txt",beta = 1, sig.stat='&')
#This works and gives good concordance with Webgestalt and ToppGene
#Basic Overrepresentation Analysis
##sbea.res.go <- sbea(method="ora", eset=eset_DE, gs=go.gs.mmu, perm=0, alpha=0.001, beta = 1, padj.method="BH")
##gs.ranking(sbea.res.go,signif.only=TRUE)
#Create EnrichmentBrowser Html Report with Pathway Viz
#this works!!
#setwd("/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports")
##ea.browse(sbea.res.go)
#Compress and Move html results from default EnrichmentBrowser directory to desired directory
##zip(zipfile = sprintf("/data/mccoy/EnrichmentBrowser/SBEA_GO_results_%s_%s_%s.zip", name, name2, date), "/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports")
###Output
_____no_output_____
###Markdown
Network Enrichment Analysis using EnrichmentBrowser
###Code
#Network based enrichment analysis using EnrichmentBrowser
#kegg.gs.mmu <- get.kegg.genesets("mmu")
#go.gs.mmu <- get.go.genesets(org="mmu", onto="BP", mode="GO.db")
#pwys.mmu <- download.kegg.pathways("mmu")
#mmu.grn <- compile.grn.from.kegg(pwys.mmu)
# perform GGEA using the compiled KEGG regulatory network
nbea.res <- nbea(method="ggea", eset=eset_DE, gs=kegg.gs.mmu, grn=mmu.grn)
gs.ranking(nbea.res)
#View network
par(mfrow=c(1,2))
ggea.graph(
gs=kegg.gs.mmu[["mmu04145_Phagosome"]],
grn=mmu.grn, eset=eset_DE)
ggea.graph.legend()
#Combine enrichment results from different analysis methods
res.list <- list(sbea.res.kegg, nbea.res)
comb.res <- comb.ea.results(res.list)
ea.browse(comb.res, graph.view=mmu.grn, nr.show=5)
#Compress and Move html results from default EnrichmentBrowser directory to desired directory
zip(zipfile = sprintf("%s/EnrichmentBrowser/SBEA_Combined_results_%s_%s_%s.zip", working_dir, name, name2, date), "/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports")
###Output
_____no_output_____
###Markdown
Visualize Enrichment results
###Code
#Display Pathway Results for KEGG pathway Lysosome
display_png(file="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04142_volc.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04142_kpath.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04142_hmap.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04142_hmap2.png")
#Display Pathway Results for KEGG pathway Phagosome
display_png(file="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04145_volc.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04145_kpath.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04145_hmap.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04145_hmap2.png")
#Display Pathway Results for KEGG pathway Leukocyte transendothelial migration
display_png(file="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04670_volc.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04670_kpath.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04670_hmap.png")
display_png(file ="/root/anaconda3/lib/R/library/EnrichmentBrowser/results/reports/mmu04670_hmap2.png")
###Output
_____no_output_____ |
jupyter_notebooks/Tutorials/objects/ModelParameterization.ipynb | ###Markdown
Model ParameterizationThe fundamental role of Model objects in pyGSTi is to simulate circuits, that is, to map circuits to outcome probability distributions. This mapping is *parameterized* by some set of real-valued parameters, meaning that the mapping between circuits and outcome distributions depends on the values of a `Model`'s parameters. Model objects have a `num_params` attribute holding the parameter count, and `to_vector` and `from_vector` methods which get or set a model's vector of parameters.`ModelMember` objects such as state preparations, operations, and measurements (POVMs) are also parameterized, and similarly possess a `num_params` attribute and `to_vector` and `from_vector` methods. For models that hold member objects to implement their operations (e.g., both explicit and implicit models), the model's parameterization is the result of combining the parameterizations of all its members.In explicit models, the parameterization is properly viewed as a mapping between the model's parameter space and the space of $d^2 \times d^2$ operation matrices and length-$d^2$ SPAM vectors. A `Model`'s contents always correspond to a valid set of parameters, which can be obtained by its `to_vector` method, and can always be initialized from a vector of parameters via its `from_vector` method. The number of parameters (obtained via `num_params`) is independent of (and need not equal!) the total number of gate-matrix and SPAM-vector elements comprising the `Model`. For example, in a "TP-parameterized" model, the first row of each operation matrix is fixed at `[1,0,...0]`, regardless of what the `Model`'s underlying parameters are. One of pyGSTi's primary capabilities is model optimization: the optimization of a fit function (often the log-likelihood) over the parameter space of an initial `Model` (often the "target" model). Thus, specifying a model's parameterization specifies the constraints under which the model is optimized, or equivalently the space of possible circuit-to-outcome-distribution mappings that are searched for a best-fit estimate. In the simplest case, each gate and SPAM vector within an `ExplicitOpModel` has an independent parameterization, so that each `pygsti.modelmembers.ModelMember`-derived object has its own separate parameters accessed by its `to_vector` and `from_vector` methods. The `ExplicitOpModel`'s parameter vector, in this case, is just the concatenation of the parameter vectors of its contents, usually in the order: 1) state preparation vectors, 2) measurement vectors, 3) gates. Operation typesOperations on quantum states exist within the `pygsti.modelmembers.operations` subpackage. Most of the classes therein represent a unique combination of a:a. category of operation that can be represented, andb. parameterization of that category of operations. For example, the `FullArbitraryOp` class can represent an arbitrary (Markovian) operation, and "fully" parameterizes the operation by exposing every element of the operation's dense process matrix as a parameter. The `StaticCliffordOp` class can only represent Clifford operations, and is "static", meaning it exposes no parameters and so cannot be changed in an optimization.
Here are brief descriptions of several of the most commonly used operation types:- The `FullArbitraryOp` class represents an arbitrary process matrix which has a parameter for every element, and thus optimizations using this gate class allow the operation matrix to be varied completely.- The `StaticArbitraryOp` class also represents an arbitrary process matrix but has no parameters, and thus is not optimized at all.- The `FullTPOp` class represents a process matrix whose first row must be `[1,0,...0]`. This corresponds to a trace-preserving (TP) gate in the Gell-Mann and Pauli-product bases. Each element in the remaining rows is a separate parameter, similar to a fully parameterized gate. Optimizations using this gate type are used to constrain the estimated gate to being trace preserving.- The `LindbladErrorgen` class defines an error generator that takes a particular Lindblad form. This class is fairly flexible, but is predominantly used to constrain optimizations to the set of infinitesimally-generated CPTP maps. To produce a gate or layer operation, error generators must be exponentiated using the `ExpErrorgenOp` class.Similarly, there are classes representing quantum states in `pygsti.modelmembers.states` and those for POVMs and POVM effects in `pygsti.modelmembers.povms`. Many of these classes run parallel to those for operations. For example, there exist `FullState` and `TPState` classes, the latter of which fixes its first element to $1/\sqrt{d}$, where $d^2$ is the vector length, as this is the appropriate value for a unit-trace state preparation.There are other operation types that simply combine or modify other operations. These types don't correspond to a particular category of operations or parameterization, they simply inherit these from the operations they act upon. They are:- The `ComposedOp` class combines zero or more other operations by applying them one after the other. This has the effect of producing a map whose process matrix would be the product of the process matrices of the factor operations. - The `ComposedErrorgen` class combines zero or more error generators by effectively summing them together.- The `EmbeddedOp` class embeds a lower-dimensional operation (e.g. a 1-qubit gate) into a higher-dimensional space (e.g. a 3-qubit space).- The `EmbeddedErrorgen` class embeds a lower-dimensional error generator into a higher-dimensional space.- The `ExpErrorgenOp` class exponentiates an error generator operation, making it into a map on quantum states.- The `RepeatedOp` class simply repeats a single operation $k$ times.These operations act as critical building blocks when constructing complex gate and circuit-layer operations, especially on many-qubit spaces. Again, there are analogous classes for states, POVMs, etc., within the other sub-packages beneath `pygsti.modelmembers`. Specifying operation types when creating modelsMany of the model construction functions take arguments dictating the type of modelmember objects to create. As described above, by changing the type of a gate you select how that gate is represented (e.g. Clifford gates can be represented more efficiently than arbitrary gates) and how it is parameterized. This in turn dictates how the overall model is parameterized.For a brief overview of the available options, here is an incomplete list of parameterization arguments and their associated `pygsti.modelmember` class. Most types start with either `"full"` or `"static"` - these indicate whether the model members have parameters or not, respectively.
Parameterizations without a prefix are "full" by default. See the related [ForwardSimulation tutorial](../algorithms/advanced/ForwardSimulationTypes.ipynb) for how each parameterization relates to the allowed types of forward simulation in PyGSTi.- `gate_type` for `modelmember.operations`: - `"static"` $\rightarrow$ `StaticArbitraryOp` - `"full"` $\rightarrow$ `FullArbitraryOp` - `"static standard"` $\rightarrow$ `StaticStandardOp` - `"static clifford"` $\rightarrow$ `StaticCliffordOp` - `"static unitary"` $\rightarrow$ `StaticUnitaryOp` - `"full unitary"` $\rightarrow$ `FullUnitaryOp` - `"full TP"` $\rightarrow$ `FullTPOp` - `"CPTP"`, `"H+S"`, etc. $\rightarrow$ `ExpErrorgenOp` + `LindbladErrorgen`- `prep_type` for `modelmember.states`: - `"computational"` $\rightarrow$ `ComputationalBasisState` - `"static pure"` $\rightarrow$ `StaticPureState` - `"full pure"` $\rightarrow$ `FullPureState` - `"static"` $\rightarrow$ `StaticState` - `"full"` $\rightarrow$ `FullState` - `"full TP"` $\rightarrow$ `TPState`- `povm_type` for `modelmember.povms`: - `"computational"` $\rightarrow$ `ComputationalBasisPOVM` - `"static pure"` $\rightarrow$ `UnconstrainedPOVM` + `StaticPureEffect` - `"full pure"` $\rightarrow$ `UnconstrainedPOVM` + `FullPureEffect` - `"static"` $\rightarrow$ `UnconstrainedPOVM` + `StaticEffect` - `"full"` $\rightarrow$ `UnconstrainedPOVM` + `FullEffect` - `"full TP"` $\rightarrow$ `TPPOVM` For convenience, the `prep_type` and `povm_type` arguments also accept `"auto"`, which will try to set the parameterization based on the given `gate_type`. An incomplete list of this `gate_type` $\rightarrow$ `prep_type` / `povm_type` mapping is:- `"auto"`, `"static standard"`, `"static clifford"` $\rightarrow$ `"computational"`- `"unitary"` $\rightarrow$ `"pure"`- All others map directly Explicit ModelsWe now illustrate how one may specify the type of parameterization in `create_explicit_model`, and change the object types of all of an `ExplicitOpModel`'s contents using its `set_all_parameterizations` method. The `create_explicit_model` function builds (layer) operations that are compositions of the ideal operations and added noise (see the [model noise tutorial](ModelNoise.ipynb)). By setting `ideal_gate_type` and similar arguments, the object type used for the initial "ideal" part of the operations is decided.
###Code
import pygsti
from pygsti.processors import QubitProcessorSpec
from pygsti.models import modelconstruction as mc
pspec = QubitProcessorSpec(1, ['Gi', 'Gxpi2', 'Gypi2']) # simple single qubit processor
model = mc.create_explicit_model(pspec)
model.print_modelmembers()
print("%d parameters" % model.num_params)
###Output
_____no_output_____
###Markdown
By default, an explicit model creates static (zero-parameter) operations of type `StaticUnitaryOp`. If we specify an `ideal_gate_type` we can change this:
###Code
model = mc.create_explicit_model(pspec, ideal_gate_type="full TP")
model.print_modelmembers()
print("%d parameters" % model.num_params)
###Output
_____no_output_____
###Markdown
Switching the parameterization to "CPTP" changes the gate types accordingly:
###Code
model.set_all_parameterizations('CPTP')
model.print_modelmembers()
print("%d parameters" % model.num_params)
###Output
_____no_output_____
###Markdown
To alter an *individual* gate or SPAM vector's parameterization, one can simply construct a replacement object of the desired type and assign it to the `Model`.
###Code
# Turning ComposedOp into a dense matrix for conversion into a dense FullTPOp
newOp = pygsti.modelmembers.operations.FullTPOp(model[('Gi', 0)].to_dense())
model['Gi'] = newOp
print("model['Gi'] =",model['Gi'])
###Output
_____no_output_____
###Markdown
**NOTE:** When a `LinearOperator` or `SPAMVec`-derived object is assigned as an element of an `ExplicitOpModel` (as above), the object *replaces* any existing object with the given key. However, if any other type of object is assigned to an `ExplicitOpModel` element, an attempt is made to initialize or update the existing gate using the assigned data (using its `set_matrix` function internally). For example:
###Code
import numpy as np
numpy_array = np.array( [[1, 0, 0, 0],
[0, 0.5, 0, 0],
[0, 0, 0.5, 0],
[0, 0, 0, 0.5]], 'd')
model['Gi'] = numpy_array # after assignment with a numpy array...
print("model['Gi'] =",model['Gi']) # this is STILL a FullTPOp object
#If you try to assign a gate to something that is either invalid or it doesn't know how
# to deal with, it will raise an exception
invalid_TP_array = np.array( [[2, 1, 3, 0],
[0, 0.5, 0, 0],
[0, 0, 0.5, 0],
[0, 0, 0, 0.5]], 'd')
try:
model['Gi'] = invalid_TP_array
except ValueError as e:
print("ERROR!! " + str(e))
###Output
_____no_output_____
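As a minimal sketch tying the earlier discussion together (reusing the same processor spec as above; the small random perturbation below is purely illustrative), the `to_vector`/`from_vector` round trip looks like this:

```python
import numpy as np
from pygsti.processors import QubitProcessorSpec
from pygsti.models import modelconstruction as mc

pspec = QubitProcessorSpec(1, ['Gi', 'Gxpi2', 'Gypi2'])
mdl = mc.create_explicit_model(pspec, ideal_gate_type="full TP")

v = mdl.to_vector()              # the model's current parameter vector
print(mdl.num_params, v.shape)   # num_params == len(v)

# Load a slightly perturbed parameter vector back into the model
mdl.from_vector(v + 0.01 * np.random.randn(mdl.num_params))
```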
###Markdown
Implicit modelsThe story is similar with implicit models. Operations are built as compositions of ideal operations and noise, and by specifying the `ideal_gate_type` and similar arguments, you can set what type of ideal operation is created. Below we show some examples with a `LocalNoiseModel`. Let's start with the default static operation type:
###Code
mdl_locnoise = pygsti.models.create_crosstalk_free_model(pspec)
mdl_locnoise.print_modelmembers()
###Output
_____no_output_____
###Markdown
Suppose we'd like to modify the gate operations. Then we should make a model with `ideal_gate_type="full"`, so the operations are `FullArbitraryOp` objects:
###Code
mdl_locnoise = pygsti.models.create_crosstalk_free_model(pspec, ideal_gate_type='full')
mdl_locnoise.print_modelmembers()
###Output
_____no_output_____
###Markdown
These can now be modified by matrix assignment, since their parameters allow them to take on any other process matrix. Let's set the process matrix (more accurately, this is the Pauli-transfer-matrix of the gate) of `"Gxpi2"` to include some depolarization:
###Code
mdl_locnoise.operation_blks['gates']['Gxpi2'] = np.array([[1, 0, 0, 0],
[0, 0.9, 0, 0],
[0, 0,-0.9, 0],
[0, 0, 0,-0.9]],'d')
###Output
_____no_output_____ |
Section08/.ipynb_checkpoints/04_connected-checkpoint.ipynb | ###Markdown
Computing connected components in an image
###Code
import itertools
import numpy as np
import networkx as nx
import matplotlib.colors as col
import matplotlib.pyplot as plt
%matplotlib inline
n = 10
img = np.random.randint(size=(n, n),
low=0, high=3)
g = nx.grid_2d_graph(n, n)
def show_image(img, ax=None, **kwargs):
ax.imshow(img, origin='lower',
interpolation='none',
**kwargs)
ax.set_axis_off()
def show_graph(g, ax=None, **kwargs):
pos = {(i, j): (j, i) for (i, j) in g.nodes()}
node_color = [img[i, j] for (i, j) in g.nodes()]
nx.draw_networkx(g,
ax=ax,
pos=pos,
node_color='w',
linewidths=3,
width=2,
edge_color='w',
with_labels=False,
node_size=50,
**kwargs)
cmap = plt.cm.Blues
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
show_image(img, ax=ax, cmap=cmap, vmin=-1)
show_graph(g, ax=ax, cmap=cmap, vmin=-1)
g2 = g.subgraph(zip(*np.nonzero(img == 2)))
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
show_image(img, ax=ax, cmap=cmap, vmin=-1)
show_graph(g2, ax=ax, cmap=cmap, vmin=-1)
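# Keep only the connected components of the label-2 subgraph that span at least 3 pixels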
components = [np.array(list(comp))
for comp in nx.connected_components(g2)
if len(comp) >= 3]
len(components)
# We copy the image, and assign a new label
# to each found component.
img_bis = img.copy()
for i, comp in enumerate(components):
img_bis[comp[:, 0], comp[:, 1]] = i + 3
# We create a new discrete color map extending
# the previous map with new colors.
colors = [cmap(.5), cmap(.75), cmap(1.),
'#f4f235', '#f4a535', '#f44b35',
'#821d10']
cmap2 = col.ListedColormap(colors, 'indexed')
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
show_image(img_bis, ax=ax, cmap=cmap2)
###Output
_____no_output_____ |
Fer2013_Model_Train.ipynb | ###Markdown
###Code
import sys, os
import pandas as pd
import numpy as np
import cv2
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.losses import categorical_crossentropy
from keras.optimizers import Adam
from keras.regularizers import l2
from keras.callbacks import ReduceLROnPlateau, TensorBoard, EarlyStopping, ModelCheckpoint
from keras.models import load_model
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_json
num_features = 64
num_labels = 3
batch_size = 64
epochs = 100
width, height = 48, 48
#Mount your google drive
#from google.colab import drive
#drive.mount('/content/drive')
#emotion_dict = {0: "Angry", 1: "Disgust", 2: "Fear", 3: "Happy", 4: "Sad", 5: "Surprise", 6: "Neutral"}
#Following are labels
#Calm 6
#Surprise & Fear 5 & 2
#Anger 0
from google.colab import drive
drive.mount('/content/drive')
#https://drive.google.com/open?id=1OnveSEG0q5CwEQeZOW3QotcL_GK4UZ2l
#/content/drive/My Drive/dataset/fer2013.csv
root_path = '/content/drive/My Drive/dataset/fer2013.csv'
data = pd.read_csv(root_path)
data.tail()
#Remove some emotion classes from the dataset so that training uses a smaller subset of labels.
indexNames_1 = data[ data['emotion'] == 1].index
indexNames_3 = data[ data['emotion'] == 3].index
indexNames_4 = data[ data['emotion'] == 4].index
data.drop(indexNames_1 , inplace=True)
#data.drop(indexNames_3 , inplace=True)
#data.drop(indexNames_4 , inplace=True)
data.drop(indexNames_3 , inplace=True)
data.drop(indexNames_4 , inplace=True)
data.groupby('emotion').size()
#update 5 with 2
data.loc[data['emotion'] == 5, 'emotion'] = 2
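# Remaining classes: 0 (anger), 2 (fear/surprise, after folding label 5 into 2) and 6 (calm/neutral) -> 3 labels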
#print(data['emotion']==1)
pixels = data['pixels'].tolist() # 1
faces = []
for pixel_sequence in pixels:
face = [int(pixel)/255 for pixel in pixel_sequence.split(' ')] # 2
if (len(face)) < 2304:
print("array length less than 2304")
continue
face = np.asarray(face).reshape(width, height) # 3
    # There is an issue with normalizing images. Keep steps 4 and 5 commented out until a solution is found.
#face = face / 255.0 # 4
#face = cv2.resize(face.astype('uint8'), (width, height)) # 5
#face = face / 255.0
faces.append(face.astype('float32'))
faces = np.asarray(faces)
faces = np.expand_dims(faces, -1) # 6
emotions = pd.get_dummies(data['emotion']).to_numpy() # 7 (as_matrix() was removed from newer pandas versions)
#print( emotions )
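# Hold out 10% of the faces for testing, then 10% of the remaining training data for validation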
X_train, X_test, y_train, y_test = train_test_split(faces, emotions, test_size=0.1, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=41)
#print(y_train)
#datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True)
# fit parameters from data
zca_whitening=False
rotation_angle=15
shift_range=0.1
zoom_range=0.1
horizontal_flip=True
time_delay=None
#datagen.fit(X_train)
datagen = ImageDataGenerator(featurewise_center=True,
featurewise_std_normalization=True,
zca_whitening=zca_whitening,
rotation_range=15,
width_shift_range=shift_range,
height_shift_range=shift_range,
horizontal_flip=horizontal_flip,
fill_mode="nearest",
zoom_range=zoom_range)
#time_delay=time_delay)
datagen.fit(X_train)
#data_gen.flow(self.images, self.labels, batch_size=batch_size, target_dimensions=target_dimensions)
#self.model.compile(optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7), loss=categorical_crossentropy, metrics=['accuracy'])
#self.model.fit_generator(generator=generator, validation_data=validation_data, epochs=epochs,
#callbacks=[ReduceLROnPlateau(), EarlyStopping(patience=3), PlotLosses()])
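# Four Conv2D blocks (64, 128, 256, 512 filters) with batch normalization, max pooling and dropout, followed by three dense layers and a 3-class softmax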
model = Sequential()
model.add(Conv2D(num_features, kernel_size=(3, 3), activation='relu', input_shape=(width, height, 1), data_format='channels_last', kernel_regularizer=l2(0.01)))
model.add(Conv2D(num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(2*num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(2*num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(2*2*num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(2*2*num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Conv2D(2*2*2*num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(2*2*2*num_features, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(2*2*2*num_features, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(2*2*num_features, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(2*num_features, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_labels, activation='softmax'))
model.summary()
model.compile(loss=categorical_crossentropy,
optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7),
metrics=['accuracy'])
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=3, verbose=1)
tensorboard = TensorBoard(log_dir='/content/drive/My Drive/dataset')
early_stopper = EarlyStopping(monitor='val_loss', min_delta=0, patience=8, verbose=1, mode='auto')
MODELPATH = '/content/drive/My Drive/dataset/model_T.h5'
checkpointer = ModelCheckpoint(MODELPATH, monitor='val_loss', verbose=1, save_best_only=True)
model.fit(np.array(X_train), np.array(y_train),
batch_size=batch_size,
epochs=60,
verbose=1,
validation_data=(np.array(X_test), np.array(y_test)),
shuffle=True,
callbacks=[lr_reducer, tensorboard, checkpointer])
#callbacks=[lr_reducer, tensorboard, early_stopper, checkpointer]
#datagen.flow(trainX, trainY, batch_size=batch_size),
model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size), epochs=1000,
verbose=1,
validation_data=(np.array(X_val), np.array(y_val)),
shuffle=True,
                    callbacks=[lr_reducer, tensorboard, checkpointer])
len(X_train)
len(y_train)
#model.fit_generator(datagen, samples_per_epoch=len(X_train), epochs=100)
model_json = model.to_json()
with open("/tmp/model.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("/tmp/model.h5")
print("Saved model to disk")
scores = model.evaluate(np.array(X_test), np.array(y_test), batch_size=batch_size)
print("Loss: " + str(scores[0]))
print("Accuracy: " + str(scores[1]))
###Output
2028/2028 [==============================] - 1s 637us/step
Loss: 1.315742569562246
Accuracy: 0.722879685005963
|
Section_5/Graph Implementation Using Adjacency Lists.ipynb | ###Markdown
Graph Implementation Using Adjacency Listsfor an undirected graph. © Joe James, 2019. Vertex ClassThe Vertex class has a constructor that sets the name of the vertex (in our example, just a letter), and creates a new empty set to store neighbors.The add_neighbor method adds the name of a neighboring vertex to the neighbors set. This set automatically eliminates duplicates.
###Code
class Vertex:
def __init__(self, n):
self.name = n
self.neighbors = set()
def add_neighbor(self, v):
self.neighbors.add(v)
###Output
_____no_output_____
###Markdown
Graph ClassThe Graph class uses a dictionary to store vertices in the format, vertex_name:vertex_object. Adding a new vertex to the graph, we first check if the object passed in is a vertex object, then we check if it already exists in the graph. If both checks pass, then we add the vertex to the graph's vertices dictionary.When adding an edge, we receive two vertex names, we first check if both vertex names are valid, then we add each to the other's neighbors set.To print the graph, we iterate through the vertices, and print each vertex name (the key) followed by its sorted neighbors list.
###Code
class Graph:
vertices = {}
def add_vertex(self, vertex):
if isinstance(vertex, Vertex) and vertex.name not in self.vertices:
self.vertices[vertex.name] = vertex
return True
else:
return False
def add_edge(self, u, v):
if u in self.vertices and v in self.vertices:
self.vertices[u].add_neighbor(v)
self.vertices[v].add_neighbor(u)
return True
else:
return False
def print_graph(self):
for key in sorted(list(self.vertices.keys())):
print(key, sorted(list(self.vertices[key].neighbors)))
###Output
_____no_output_____
###Markdown
Test CodeHere we create a new Graph object. We create a new vertex named A. We add A to the graph. Then we add a new vertex B to the graph. Then we iterate from A up to (but not including) K and add a bunch of vertices to the graph. Since the add_vertex method checks for duplicates, A and B are not added twice.
###Code
g = Graph()
a = Vertex('A')
g.add_vertex(a)
g.add_vertex(Vertex('B'))
for i in range(ord('A'), ord('K')):
g.add_vertex(Vertex(chr(i)))
###Output
_____no_output_____
###Markdown
An edge consists of two vertex names. Here we iterate through a list of edges and add each to the graph. This print_graph method doesn't give a very good visualization of the graph, but it does show the neighbors for each vertex.
###Code
edges = ['AB', 'AE', 'BF', 'CG', 'DE', 'DH', 'EH', 'FG', 'FI', 'FJ', 'GJ', 'HI']
for edge in edges:
g.add_edge(edge[0], edge[1])
g.print_graph()
###Output
A ['B', 'E']
B ['A', 'F']
C ['G']
D ['E', 'H']
E ['A', 'D', 'H']
F ['B', 'G', 'I', 'J']
G ['C', 'F', 'J']
H ['D', 'E', 'I']
I ['F', 'H']
J ['F', 'G']
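As a quick usage sketch, the adjacency structure built above can also be queried directly through the `vertices` dictionary:

```python
# Query the adjacency sets built above
print(g.vertices['F'].neighbors)           # {'B', 'G', 'I', 'J'}
print('A' in g.vertices['B'].neighbors)    # True, since edge 'AB' was added
```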
|
docs/examples/general/reinterpret.ipynb | ###Markdown
Reinterpreting TensorsSometimes the data in tensors needs to be interpreted as if it had a different type or shape. For example, reading a binary file into memory produces a flat tensor of byte-valued data, which the application code may want to interpret as an array of data of a specific shape and possibly a different type.DALI provides the following operations which affect tensor metadata (shape, type, layout):* reshape* reinterpret* squeeze* expand_dimsThese operations neither modify nor copy the data - the output tensor is just another view of the same region of memory, making these operations very cheap. Fixed Output ShapeThis example demonstrates the simplest use of the `reshape` operation, assigning a new fixed shape to an existing tensor.First, we'll import DALI and other necessary modules, and define a utility for displaying the data, which will be used throughout this tutorial.
###Code
import nvidia.dali as dali
import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def
import numpy as np
def show_result(outputs, names=["Input", "Output"], formatter=None):
if not isinstance(outputs, tuple):
return show_result((outputs,))
outputs = [out.as_cpu() if hasattr(out, "as_cpu") else out for out in outputs]
for i in range(len(outputs[0])):
print(f"---------------- Sample #{i} ----------------")
for o, out in enumerate(outputs):
a = np.array(out[i])
s = "x".join(str(x) for x in a.shape)
title = names[o] if names is not None and o < len(names) else f"Output #{o}"
l = out.layout()
if l: l += ' '
print(f"{title} ({l}{s})")
np.set_printoptions(formatter=formatter)
print(a)
def rand_shape(dims, lo, hi):
return list(np.random.randint(lo, hi, [dims]))
###Output
_____no_output_____
###Markdown
Now let's define our pipeline - it takes data from an external source and returns it both in its original form and reshaped to a fixed square shape `[5, 5]`. Additionally, the output tensors' layout is set to HW.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example1(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False)
return inp, fn.reshape(inp, shape=[5, 5], layout="HW")
pipe1 = example1(lambda: np.random.randint(0, 10, size=[25], dtype=np.int32))
pipe1.build()
show_result(pipe1.run())
###Output
---------------- Sample #0 ----------------
Input (25)
[3 6 5 4 8 9 1 7 9 6 8 0 5 0 9 6 2 0 5 2 6 3 7 0 9]
Output (HW 5x5)
[[3 6 5 4 8]
[9 1 7 9 6]
[8 0 5 0 9]
[6 2 0 5 2]
[6 3 7 0 9]]
---------------- Sample #1 ----------------
Input (25)
[0 3 2 3 1 3 1 3 7 1 7 4 0 5 1 5 9 9 4 0 9 8 8 6 8]
Output (HW 5x5)
[[0 3 2 3 1]
[3 1 3 7 1]
[7 4 0 5 1]
[5 9 9 4 0]
[9 8 8 6 8]]
---------------- Sample #2 ----------------
Input (25)
[6 3 1 2 5 2 5 6 7 4 3 5 6 4 6 2 4 2 7 9 7 7 2 9 7]
Output (HW 5x5)
[[6 3 1 2 5]
[2 5 6 7 4]
[3 5 6 4 6]
[2 4 2 7 9]
[7 7 2 9 7]]
###Markdown
As we can see, the numbers from flat input tensors have been rearranged into 5x5 matrices. Reshape with WildcardsLet's now consider a more advanced use case. Imagine you have some flattened array that represents a fixed number of columns, but the number of rows is free to vary from sample to sample. In that case, you can put a wildcard dimension by specifying its shape as `-1`. When using wildcards, the output is resized so that the total number of elements is the same as in the input.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example2(input_data):
np.random.seed(12345)
inp = fn.external_source(input_data, batch=False)
return inp, fn.reshape(inp, shape=[-1, 5])
pipe2 = example2(lambda: np.random.randint(0, 10, size=[5*np.random.randint(3, 10)], dtype=np.int32))
pipe2.build()
show_result(pipe2.run())
###Output
---------------- Sample #0 ----------------
Input (25)
[5 1 4 9 5 2 1 6 1 9 7 6 0 2 9 1 2 6 7 7 7 8 7 1 7]
Output (5x5)
[[5 1 4 9 5]
[2 1 6 1 9]
[7 6 0 2 9]
[1 2 6 7 7]
[7 8 7 1 7]]
---------------- Sample #1 ----------------
Input (35)
[0 3 5 7 3 1 5 2 5 3 8 5 2 5 3 0 6 8 0 5 6 8 9 2 2 2 9 7 5 7 1 0 9 3 0]
Output (7x5)
[[0 3 5 7 3]
[1 5 2 5 3]
[8 5 2 5 3]
[0 6 8 0 5]
[6 8 9 2 2]
[2 9 7 5 7]
[1 0 9 3 0]]
---------------- Sample #2 ----------------
Input (30)
[0 6 2 1 5 8 6 5 1 0 5 8 2 9 4 7 9 5 2 4 8 2 5 6 5 9 6 1 9 5]
Output (6x5)
[[0 6 2 1 5]
[8 6 5 1 0]
[5 8 2 9 4]
[7 9 5 2 4]
[8 2 5 6 5]
[9 6 1 9 5]]
###Markdown
Removing and Adding Unit DimensionsThere are two dedicated operators `squeeze` and `expand_dims` which can be used for removing and adding dimensions with unit extent. The following example demonstrates the removal of a redundant dimension as well as adding two new dimensions.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_squeeze_expand(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, layout="CHW")
squeezed = fn.squeeze(inp, axes=[0])
expanded = fn.expand_dims(squeezed, axes=[0, 3], new_axis_names="FC")
return inp, fn.squeeze(inp, axes=[0]), expanded
def single_channel_generator():
return np.random.randint(0, 10,
size=[1]+rand_shape(2, 1, 7),
dtype=np.int32)
pipe_squeeze_expand = example_squeeze_expand(single_channel_generator)
pipe_squeeze_expand.build()
show_result(pipe_squeeze_expand.run())
###Output
---------------- Sample #0 ----------------
Input (CHW 1x6x3)
[[[8 2 1]
[7 5 9]
[2 4 6]
[0 8 6]
[5 3 1]
[1 6 1]]]
Output (HW 6x3)
[[8 2 1]
[7 5 9]
[2 4 6]
[0 8 6]
[5 3 1]
[1 6 1]]
Output #2 (FHWC 1x6x3x1)
[[[[8]
[2]
[1]]
[[7]
[5]
[9]]
[[2]
[4]
[6]]
[[0]
[8]
[6]]
[[5]
[3]
[1]]
[[1]
[6]
[1]]]]
---------------- Sample #1 ----------------
Input (CHW 1x2x2)
[[[6 9]
[0 9]]]
Output (HW 2x2)
[[6 9]
[0 9]]
Output #2 (FHWC 1x2x2x1)
[[[[6]
[9]]
[[0]
[9]]]]
---------------- Sample #2 ----------------
Input (CHW 1x2x6)
[[[4 4 6 6 6 3]
[8 2 1 7 9 7]]]
Output (HW 2x6)
[[4 4 6 6 6 3]
[8 2 1 7 9 7]]
Output #2 (FHWC 1x2x6x1)
[[[[4]
[4]
[6]
[6]
[6]
[3]]
[[8]
[2]
[1]
[7]
[9]
[7]]]]
###Markdown
Rearranging DimensionsReshape allows you to swap, insert or remove dimensions. The argument `src_dims` allows you to specify which source dimension is used for a given output dimension. You can also insert a new dimension by specifying -1 as a source dimension index.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_reorder(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False)
return inp, fn.reshape(inp, src_dims=[1,0])
pipe_reorder = example_reorder(lambda: np.random.randint(0, 10,
size=rand_shape(2, 1, 7),
dtype=np.int32))
pipe_reorder.build()
show_result(pipe_reorder.run())
###Output
---------------- Sample #0 ----------------
Input (6x3)
[[8 2 1]
[7 5 9]
[2 4 6]
[0 8 6]
[5 3 1]
[1 6 1]]
Output (3x6)
[[8 2 1 7 5 9]
[2 4 6 0 8 6]
[5 3 1 1 6 1]]
---------------- Sample #1 ----------------
Input (2x2)
[[6 9]
[0 9]]
Output (2x2)
[[6 9]
[0 9]]
---------------- Sample #2 ----------------
Input (2x6)
[[4 4 6 6 6 3]
[8 2 1 7 9 7]]
Output (6x2)
[[4 4]
[6 6]
[6 3]
[8 2]
[1 7]
[9 7]]
###Markdown
Adding and Removing DimensionsDimensions can be added or removed by specifying `src_dims` argument or by using dedicated `squeeze` and `expand_dims` operators.The following example reinterprets single-channel data from CHW to HWC layout by discarding the leading dimension and adding a new trailing dimension. It also specifies the output layout.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_remove_add(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, layout="CHW")
return inp, fn.reshape(inp,
src_dims=[1,2,-1], # select HW and add a new one at the end
layout="HWC") # specify the layout string
pipe_remove_add = example_remove_add(lambda: np.random.randint(0, 10, [1,4,3], dtype=np.int32))
pipe_remove_add.build()
show_result(pipe_remove_add.run())
###Output
---------------- Sample #0 ----------------
Input (CHW 1x4x3)
[[[2 8 2]
[1 7 5]
[9 2 4]
[6 0 8]]]
Output (HWC 4x3x1)
[[[2]
[8]
[2]]
[[1]
[7]
[5]]
[[9]
[2]
[4]]
[[6]
[0]
[8]]]
---------------- Sample #1 ----------------
Input (CHW 1x4x3)
[[[6 5 3]
[1 1 6]
[1 1 9]
[6 9 0]]]
Output (HWC 4x3x1)
[[[6]
[5]
[3]]
[[1]
[1]
[6]]
[[1]
[1]
[9]]
[[6]
[9]
[0]]]
---------------- Sample #2 ----------------
Input (CHW 1x4x3)
[[[9 9 5]
[4 4 6]
[6 6 3]
[8 2 1]]]
Output (HWC 4x3x1)
[[[9]
[9]
[5]]
[[4]
[4]
[6]]
[[6]
[6]
[3]]
[[8]
[2]
[1]]]
###Markdown
Relative ShapeThe output shape may be calculated in relative terms, with a new extent being a multiple of a source extent.For example, you may want to combine two subsequent rows into one - doubling the number of columns and halving the number of rows. The use of relative shape can be combined with dimension rearranging, in which case the new output extent is a multiple of a _different_ source extent.The example below reinterprets the input as having twice as many _columns_ as the input had _rows_.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_rel_shape(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False)
return inp, fn.reshape(inp,
rel_shape=[0.5, 2],
src_dims=[1,0])
pipe_rel_shape = example_rel_shape(
lambda: np.random.randint(0, 10,
[np.random.randint(1,7), 2*np.random.randint(1,5)],
dtype=np.int32))
pipe_rel_shape.build()
show_result(pipe_rel_shape.run())
###Output
---------------- Sample #0 ----------------
Input (4x6)
[[5 4 8 9 1 7]
[9 6 8 0 5 0]
[9 6 2 0 5 2]
[6 3 7 0 9 0]]
Output (3x8)
[[5 4 8 9 1 7 9 6]
[8 0 5 0 9 6 2 0]
[5 2 6 3 7 0 9 0]]
---------------- Sample #1 ----------------
Input (4x6)
[[3 1 3 1 3 7]
[1 7 4 0 5 1]
[5 9 9 4 0 9]
[8 8 6 8 6 3]]
Output (3x8)
[[3 1 3 1 3 7 1 7]
[4 0 5 1 5 9 9 4]
[0 9 8 8 6 8 6 3]]
---------------- Sample #2 ----------------
Input (2x6)
[[5 2 5 6 7 4]
[3 5 6 4 6 2]]
Output (3x4)
[[5 2 5 6]
[7 4 3 5]
[6 4 6 2]]
###Markdown
Reinterpreting Data TypeThe `reinterpret` operation can view the data as if it was of different type. When a new shape is not specified, the innermost dimension is resized accordingly.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_reinterpret(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False)
return inp, fn.reinterpret(inp, dtype=dali.types.UINT32)
pipe_reinterpret = example_reinterpret(
lambda:
np.random.randint(0, 255,
[np.random.randint(1,7), 4*np.random.randint(1,5)],
dtype=np.uint8))
pipe_reinterpret.build()
def hex_bytes(x):
f = f"0x{{:0{2*x.nbytes}x}}"
return f.format(x)
show_result(pipe_reinterpret.run(), formatter={'int':hex_bytes})
###Output
---------------- Sample #0 ----------------
Input (4x12)
[[0x35 0xdc 0x5d 0xd1 0xcc 0xec 0x0e 0x70 0x74 0x5d 0xb3 0x9c]
[0x98 0x42 0x0d 0xc9 0xf9 0xd7 0x77 0xc5 0x8f 0x7e 0xac 0xc7]
[0xb1 0xda 0x54 0xdc 0x17 0xa1 0xc8 0x45 0xe9 0x24 0x90 0x26]
[0x9a 0x5c 0xc6 0x46 0x1e 0x20 0xd2 0x32 0xab 0x7e 0x47 0xcd]]
Output (4x3)
[[0xd15ddc35 0x700eeccc 0x9cb35d74]
[0xc90d4298 0xc577d7f9 0xc7ac7e8f]
[0xdc54dab1 0x45c8a117 0x269024e9]
[0x46c65c9a 0x32d2201e 0xcd477eab]]
---------------- Sample #1 ----------------
Input (5x4)
[[0x1a 0x1f 0x3d 0xe0]
[0x76 0x35 0xbb 0x1d]
[0xba 0xe9 0x99 0x5b]
[0x78 0xe8 0x4d 0x03]
[0x70 0x37 0x41 0x80]]
Output (5x1)
[[0xe03d1f1a]
[0x1dbb3576]
[0x5b99e9ba]
[0x034de878]
[0x80413770]]
---------------- Sample #2 ----------------
Input (5x8)
[[0x50 0x6d 0xbd 0x54 0xc9 0xa3 0x73 0xb6]
[0x7f 0xc9 0x79 0xcd 0xf6 0xc0 0xc8 0x5e]
[0xfe 0x09 0x27 0x19 0xaf 0x8d 0xaa 0x8f]
[0x32 0x96 0x55 0x0e 0xf0 0x0e 0xca 0x80]
[0xfb 0x56 0x52 0x71 0x4c 0x54 0x86 0x03]]
Output (5x2)
[[0x54bd6d50 0xb673a3c9]
[0xcd79c97f 0x5ec8c0f6]
[0x192709fe 0x8faa8daf]
[0x0e559632 0x80ca0ef0]
[0x715256fb 0x0386544c]]
###Markdown
Reinterpreting TensorsSometimes the data in tensors needs to be interpreted as if it had a different type or shape. For example, reading a binary file into memory produces a flat tensor of byte-valued data, which the application code may want to interpret as an array of data of a specific shape and possibly a different type.DALI provides the following operations which affect tensor metadata (shape, type, layout):* reshape* reinterpret* squeeze* expand_dimsThese operations neither modify nor copy the data - the output tensor is just another view of the same region of memory, making these operations very cheap. Fixed Output ShapeThis example demonstrates the simplest use of the `reshape` operation, assigning a new fixed shape to an existing tensor.First, we'll import DALI and other necessary modules, and define a utility for displaying the data, which will be used throughout this tutorial.
###Code
import nvidia.dali as dali
import nvidia.dali.fn as fn
from nvidia.dali import pipeline_def
import nvidia.dali.types as types
import numpy as np
def show_result(outputs, names=["Input", "Output"], formatter=None):
if not isinstance(outputs, tuple):
return show_result((outputs,))
outputs = [out.as_cpu() if hasattr(out, "as_cpu") else out for out in outputs]
for i in range(len(outputs[0])):
print(f"---------------- Sample #{i} ----------------")
for o, out in enumerate(outputs):
a = np.array(out[i])
s = "x".join(str(x) for x in a.shape)
title = names[o] if names is not None and o < len(names) else f"Output #{o}"
l = out.layout()
if l: l += ' '
print(f"{title} ({l}{s})")
np.set_printoptions(formatter=formatter)
print(a)
def rand_shape(dims, lo, hi):
return list(np.random.randint(lo, hi, [dims]))
###Output
_____no_output_____
###Markdown
Now let's define out pipeline - it takes data from an external source and returns it both in original form and reshaped to a fixed square shape `[5, 5]`. Additionally, output tensors' layout is set to HW
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example1(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp, shape=[5, 5], layout="HW")
pipe1 = example1(lambda: np.random.randint(0, 10, size=[25], dtype=np.int32))
pipe1.build()
show_result(pipe1.run())
###Output
---------------- Sample #0 ----------------
Input (25)
[3 6 5 4 8 9 1 7 9 6 8 0 5 0 9 6 2 0 5 2 6 3 7 0 9]
Output (HW 5x5)
[[3 6 5 4 8]
[9 1 7 9 6]
[8 0 5 0 9]
[6 2 0 5 2]
[6 3 7 0 9]]
---------------- Sample #1 ----------------
Input (25)
[0 3 2 3 1 3 1 3 7 1 7 4 0 5 1 5 9 9 4 0 9 8 8 6 8]
Output (HW 5x5)
[[0 3 2 3 1]
[3 1 3 7 1]
[7 4 0 5 1]
[5 9 9 4 0]
[9 8 8 6 8]]
---------------- Sample #2 ----------------
Input (25)
[6 3 1 2 5 2 5 6 7 4 3 5 6 4 6 2 4 2 7 9 7 7 2 9 7]
Output (HW 5x5)
[[6 3 1 2 5]
[2 5 6 7 4]
[3 5 6 4 6]
[2 4 2 7 9]
[7 7 2 9 7]]
###Markdown
As we can see, the numbers from flat input tensors have been rearranged into 5x5 matrices. Reshape with WildcardsLet's now consider a more advanced use case. Imagine you have some flattened array that represents a fixed number of columns, but the number of rows is free to vary from sample to sample. In that case, you can put a wildcard dimension by specifying its shape as `-1`. When using wildcards, the output is resized so that the total number of elements is the same as in the input.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example2(input_data):
np.random.seed(12345)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp, shape=[-1, 5])
pipe2 = example2(lambda: np.random.randint(0, 10, size=[5*np.random.randint(3, 10)], dtype=np.int32))
pipe2.build()
show_result(pipe2.run())
###Output
---------------- Sample #0 ----------------
Input (25)
[5 1 4 9 5 2 1 6 1 9 7 6 0 2 9 1 2 6 7 7 7 8 7 1 7]
Output (5x5)
[[5 1 4 9 5]
[2 1 6 1 9]
[7 6 0 2 9]
[1 2 6 7 7]
[7 8 7 1 7]]
---------------- Sample #1 ----------------
Input (35)
[0 3 5 7 3 1 5 2 5 3 8 5 2 5 3 0 6 8 0 5 6 8 9 2 2 2 9 7 5 7 1 0 9 3 0]
Output (7x5)
[[0 3 5 7 3]
[1 5 2 5 3]
[8 5 2 5 3]
[0 6 8 0 5]
[6 8 9 2 2]
[2 9 7 5 7]
[1 0 9 3 0]]
---------------- Sample #2 ----------------
Input (30)
[0 6 2 1 5 8 6 5 1 0 5 8 2 9 4 7 9 5 2 4 8 2 5 6 5 9 6 1 9 5]
Output (6x5)
[[0 6 2 1 5]
[8 6 5 1 0]
[5 8 2 9 4]
[7 9 5 2 4]
[8 2 5 6 5]
[9 6 1 9 5]]
###Markdown
Removing and Adding Unit DimensionsThere are two dedicated operators `squeeze` and `expand_dims` which can be used for removing and adding dimensions with unit extent. The following example demonstrates the removal of a redundant dimension as well as adding two new dimensions.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_squeeze_expand(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, layout="CHW", dtype=types.INT32)
squeezed = fn.squeeze(inp, axes=[0])
expanded = fn.expand_dims(squeezed, axes=[0, 3], new_axis_names="FC")
return inp, fn.squeeze(inp, axes=[0]), expanded
def single_channel_generator():
return np.random.randint(0, 10,
size=[1]+rand_shape(2, 1, 7),
dtype=np.int32)
pipe_squeeze_expand = example_squeeze_expand(single_channel_generator)
pipe_squeeze_expand.build()
show_result(pipe_squeeze_expand.run())
###Output
---------------- Sample #0 ----------------
Input (CHW 1x6x3)
[[[8 2 1]
[7 5 9]
[2 4 6]
[0 8 6]
[5 3 1]
[1 6 1]]]
Output (HW 6x3)
[[8 2 1]
[7 5 9]
[2 4 6]
[0 8 6]
[5 3 1]
[1 6 1]]
Output #2 (FHWC 1x6x3x1)
[[[[8]
[2]
[1]]
[[7]
[5]
[9]]
[[2]
[4]
[6]]
[[0]
[8]
[6]]
[[5]
[3]
[1]]
[[1]
[6]
[1]]]]
---------------- Sample #1 ----------------
Input (CHW 1x2x2)
[[[6 9]
[0 9]]]
Output (HW 2x2)
[[6 9]
[0 9]]
Output #2 (FHWC 1x2x2x1)
[[[[6]
[9]]
[[0]
[9]]]]
---------------- Sample #2 ----------------
Input (CHW 1x2x6)
[[[4 4 6 6 6 3]
[8 2 1 7 9 7]]]
Output (HW 2x6)
[[4 4 6 6 6 3]
[8 2 1 7 9 7]]
Output #2 (FHWC 1x2x6x1)
[[[[4]
[4]
[6]
[6]
[6]
[3]]
[[8]
[2]
[1]
[7]
[9]
[7]]]]
###Markdown
Rearranging DimensionsReshape allows you to swap, insert or remove dimensions. The argument `src_dims` allows you to specify which source dimension is used for a given output dimension. You can also insert a new dimension by specifying -1 as a source dimension index.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_reorder(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp, src_dims=[1,0])
pipe_reorder = example_reorder(lambda: np.random.randint(0, 10,
size=rand_shape(2, 1, 7),
dtype=np.int32))
pipe_reorder.build()
show_result(pipe_reorder.run())
###Output
---------------- Sample #0 ----------------
Input (6x3)
[[8 2 1]
[7 5 9]
[2 4 6]
[0 8 6]
[5 3 1]
[1 6 1]]
Output (3x6)
[[8 2 1 7 5 9]
[2 4 6 0 8 6]
[5 3 1 1 6 1]]
---------------- Sample #1 ----------------
Input (2x2)
[[6 9]
[0 9]]
Output (2x2)
[[6 9]
[0 9]]
---------------- Sample #2 ----------------
Input (2x6)
[[4 4 6 6 6 3]
[8 2 1 7 9 7]]
Output (6x2)
[[4 4]
[6 6]
[6 3]
[8 2]
[1 7]
[9 7]]
###Markdown
Adding and Removing DimensionsDimensions can be added or removed by specifying `src_dims` argument or by using dedicated `squeeze` and `expand_dims` operators.The following example reinterprets single-channel data from CHW to HWC layout by discarding the leading dimension and adding a new trailing dimension. It also specifies the output layout.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_remove_add(input_data):
np.random.seed(4321)
inp = fn.external_source(input_data, batch=False, layout="CHW", dtype=types.INT32)
return inp, fn.reshape(inp,
src_dims=[1,2,-1], # select HW and add a new one at the end
layout="HWC") # specify the layout string
pipe_remove_add = example_remove_add(lambda: np.random.randint(0, 10, [1,4,3], dtype=np.int32))
pipe_remove_add.build()
show_result(pipe_remove_add.run())
###Output
---------------- Sample #0 ----------------
Input (CHW 1x4x3)
[[[2 8 2]
[1 7 5]
[9 2 4]
[6 0 8]]]
Output (HWC 4x3x1)
[[[2]
[8]
[2]]
[[1]
[7]
[5]]
[[9]
[2]
[4]]
[[6]
[0]
[8]]]
---------------- Sample #1 ----------------
Input (CHW 1x4x3)
[[[6 5 3]
[1 1 6]
[1 1 9]
[6 9 0]]]
Output (HWC 4x3x1)
[[[6]
[5]
[3]]
[[1]
[1]
[6]]
[[1]
[1]
[9]]
[[6]
[9]
[0]]]
---------------- Sample #2 ----------------
Input (CHW 1x4x3)
[[[9 9 5]
[4 4 6]
[6 6 3]
[8 2 1]]]
Output (HWC 4x3x1)
[[[9]
[9]
[5]]
[[4]
[4]
[6]]
[[6]
[6]
[3]]
[[8]
[2]
[1]]]
###Markdown
Relative ShapeThe output shape may be calculated in relative terms, with a new extent being a multiple of a source extent.For example, you may want to combine two subsequent rows into one - doubling the number of columns and halving the number of rows. The use of relative shape can be combined with dimension rearranging, in which case the new output extent is a multiple of a _different_ source extent.The example below reinterprets the input as having twice as many _columns_ as the input had _rows_.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_rel_shape(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False, dtype=types.INT32)
return inp, fn.reshape(inp,
rel_shape=[0.5, 2],
src_dims=[1,0])
pipe_rel_shape = example_rel_shape(
lambda: np.random.randint(0, 10,
[np.random.randint(1,7), 2*np.random.randint(1,5)],
dtype=np.int32))
pipe_rel_shape.build()
show_result(pipe_rel_shape.run())
###Output
---------------- Sample #0 ----------------
Input (4x6)
[[5 4 8 9 1 7]
[9 6 8 0 5 0]
[9 6 2 0 5 2]
[6 3 7 0 9 0]]
Output (3x8)
[[5 4 8 9 1 7 9 6]
[8 0 5 0 9 6 2 0]
[5 2 6 3 7 0 9 0]]
---------------- Sample #1 ----------------
Input (4x6)
[[3 1 3 1 3 7]
[1 7 4 0 5 1]
[5 9 9 4 0 9]
[8 8 6 8 6 3]]
Output (3x8)
[[3 1 3 1 3 7 1 7]
[4 0 5 1 5 9 9 4]
[0 9 8 8 6 8 6 3]]
---------------- Sample #2 ----------------
Input (2x6)
[[5 2 5 6 7 4]
[3 5 6 4 6 2]]
Output (3x4)
[[5 2 5 6]
[7 4 3 5]
[6 4 6 2]]
###Markdown
Reinterpreting Data TypeThe `reinterpret` operation can view the data as if it was of different type. When a new shape is not specified, the innermost dimension is resized accordingly.
###Code
@pipeline_def(device_id=0, num_threads=4, batch_size=3)
def example_reinterpret(input_data):
np.random.seed(1234)
inp = fn.external_source(input_data, batch=False, dtype=types.UINT8)
return inp, fn.reinterpret(inp, dtype=dali.types.UINT32)
pipe_reinterpret = example_reinterpret(
lambda:
np.random.randint(0, 255,
[np.random.randint(1,7), 4*np.random.randint(1,5)],
dtype=np.uint8))
pipe_reinterpret.build()
def hex_bytes(x):
f = f"0x{{:0{2*x.nbytes}x}}"
return f.format(x)
show_result(pipe_reinterpret.run(), formatter={'int':hex_bytes})
###Output
---------------- Sample #0 ----------------
Input (4x12)
[[0x35 0xdc 0x5d 0xd1 0xcc 0xec 0x0e 0x70 0x74 0x5d 0xb3 0x9c]
[0x98 0x42 0x0d 0xc9 0xf9 0xd7 0x77 0xc5 0x8f 0x7e 0xac 0xc7]
[0xb1 0xda 0x54 0xdc 0x17 0xa1 0xc8 0x45 0xe9 0x24 0x90 0x26]
[0x9a 0x5c 0xc6 0x46 0x1e 0x20 0xd2 0x32 0xab 0x7e 0x47 0xcd]]
Output (4x3)
[[0xd15ddc35 0x700eeccc 0x9cb35d74]
[0xc90d4298 0xc577d7f9 0xc7ac7e8f]
[0xdc54dab1 0x45c8a117 0x269024e9]
[0x46c65c9a 0x32d2201e 0xcd477eab]]
---------------- Sample #1 ----------------
Input (5x4)
[[0x1a 0x1f 0x3d 0xe0]
[0x76 0x35 0xbb 0x1d]
[0xba 0xe9 0x99 0x5b]
[0x78 0xe8 0x4d 0x03]
[0x70 0x37 0x41 0x80]]
Output (5x1)
[[0xe03d1f1a]
[0x1dbb3576]
[0x5b99e9ba]
[0x034de878]
[0x80413770]]
---------------- Sample #2 ----------------
Input (5x8)
[[0x50 0x6d 0xbd 0x54 0xc9 0xa3 0x73 0xb6]
[0x7f 0xc9 0x79 0xcd 0xf6 0xc0 0xc8 0x5e]
[0xfe 0x09 0x27 0x19 0xaf 0x8d 0xaa 0x8f]
[0x32 0x96 0x55 0x0e 0xf0 0x0e 0xca 0x80]
[0xfb 0x56 0x52 0x71 0x4c 0x54 0x86 0x03]]
Output (5x2)
[[0x54bd6d50 0xb673a3c9]
[0xcd79c97f 0x5ec8c0f6]
[0x192709fe 0x8faa8daf]
[0x0e559632 0x80ca0ef0]
[0x715256fb 0x0386544c]]
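###Markdown
The grouping above is a plain bit-level view of the same buffer: on a little-endian host the first four bytes of Sample #0 (0x35 0xdc 0x5d 0xd1) become the 32-bit value 0xd15ddc35. A small NumPy sketch of the same reinterpretation (a little-endian host is assumed):
###Code
import numpy as np
row = np.array([0x35, 0xdc, 0x5d, 0xd1, 0xcc, 0xec, 0x0e, 0x70,
                0x74, 0x5d, 0xb3, 0x9c], dtype=np.uint8)
# view() reinterprets the bytes in place, shrinking the innermost extent by a factor of 4
print([hex(v) for v in row.view(np.uint32)])
# expected: ['0xd15ddc35', '0x700eeccc', '0x9cb35d74']
###Output
_____no_output_____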
|
KNN.ipynb | ###Markdown
KNN Program Basics
###Code
def main():
data = {0:[(1,12,5),(2,5,8),(3,6,9),(3,10,6),(3.5,8,2.9),(2,11,4.6),(2,9,9.5),(1,7,5)],
1:[(5,3,5.4),(3,2.7,5),(1.5,9,2.9),(7,2,2.9),(6,1,4.8),(3.8,1,5.9),(5.6,4,6),(4,2,5),(2,5,1)]
}
# testing point p(x,y,z)
p = (2,5,8) # change co-ordinates
# Number of neighbours
k = 2
print("The value differentiated to point 'P' is: {}".\
format(differentiator(data, p, k)))
# here we call the 'differentiator' function,
# which does all the euclidean-distance calculations
# and keeps the distance from point 'p' to every other point
import math
def differentiator(data, p, k=2):
    distance = []
    for group in data:
        print(group)
        for feature in data[group]:
            print(feature)
            euclidean_distance = math.sqrt((feature[0]-p[0])**2 + (feature[1]-p[1])**2 + (feature[2]-p[2])**2)
            distance.append((euclidean_distance, group))
    distance = sorted(distance)[:k]
    freq1 = 0
    freq2 = 0
    for d in distance:
        # vote by the class label stored in d[1]; d[0] is the euclidean distance
        if d[1] == 0:
            freq1 += 1
        elif d[1] == 1:
            freq2 += 1
    return 0 if freq1 > freq2 else 1
if __name__ == '__main__':
main()
###Output
0
(1, 12, 5)
(2, 5, 8)
(3, 6, 9)
(3, 10, 6)
(3.5, 8, 2.9)
(2, 11, 4.6)
(2, 9, 9.5)
(1, 7, 5)
1
(5, 3, 5.4)
(3, 2.7, 5)
(1.5, 9, 2.9)
(7, 2, 2.9)
(6, 1, 4.8)
(3.8, 1, 5.9)
(5.6, 4, 6)
(4, 2, 5)
(2, 5, 1)
The value differentiated to point 'P' is: 0
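###Markdown
As a cross-check of the hand-rolled differentiator, the same prediction can be reproduced with scikit-learn (assuming it is installed); with k=2 the two nearest points to p=(2, 5, 8) both carry label 0, so the predicted class is again 0.
###Code
from sklearn.neighbors import KNeighborsClassifier
data = {0: [(1,12,5),(2,5,8),(3,6,9),(3,10,6),(3.5,8,2.9),(2,11,4.6),(2,9,9.5),(1,7,5)],
        1: [(5,3,5.4),(3,2.7,5),(1.5,9,2.9),(7,2,2.9),(6,1,4.8),(3.8,1,5.9),(5.6,4,6),(4,2,5),(2,5,1)]}
X = [point for label in data for point in data[label]]
y = [label for label in data for _ in data[label]]
clf = KNeighborsClassifier(n_neighbors=2)
clf.fit(X, y)
print(clf.predict([(2, 5, 8)]))   # expected: [0]
###Output
_____no_output_____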
###Markdown
k-nearest neighbors for Divorce Predictors Data Set The DatasetThe Dataset is from UCIMachinelearning and it provides you all the relevant information needed for the prediction of Divorce. It contains 54 features and on the basis of these features we have to predict that the couple has been divorced or not. Value 1 represent Divorced and value 0 represent not divorced. Features are as follows:1. If one of us apologizes when our discussion deteriorates, the discussion ends.2. I know we can ignore our differences, even if things get hard sometimes.3. When we need it, we can take our discussions with my spouse from the beginning and correct it.4. When I discuss with my spouse, to contact him will eventually work.5. The time I spent with my wife is special for us.6. We don't have time at home as partners.7. We are like two strangers who share the same environment at home rather than family.8. I enjoy our holidays with my wife.9. I enjoy traveling with my wife.10. Most of our goals are common to my spouse.11. I think that one day in the future, when I look back, I see that my spouse and I have been in harmony with each other.12. My spouse and I have similar values in terms of personal freedom.13. My spouse and I have similar sense of entertainment.14. Most of our goals for people (children, friends, etc.) are the same.15. Our dreams with my spouse are similar and harmonious.16. We're compatible with my spouse about what love should be.17. We share the same views about being happy in our life with my spouse18. My spouse and I have similar ideas about how marriage should be19. My spouse and I have similar ideas about how roles should be in marriage20. My spouse and I have similar values in trust.21. I know exactly what my wife likes.22. I know how my spouse wants to be taken care of when she/he sick.23. I know my spouse's favorite food.24. I can tell you what kind of stress my spouse is facing in her/his life.25. I have knowledge of my spouse's inner world.26. I know my spouse's basic anxieties.27. I know what my spouse's current sources of stress are.28. I know my spouse's hopes and wishes.29. I know my spouse very well.30. I know my spouse's friends and their social relationships.31. I feel aggressive when I argue with my spouse.32. When discussing with my spouse, I usually use expressions such as ‘you always’ or ‘you never’ .33. I can use negative statements about my spouse's personality during our discussions.34. I can use offensive expressions during our discussions.35. I can insult my spouse during our discussions.36. I can be humiliating when we discussions.37. My discussion with my spouse is not calm.38. I hate my spouse's way of open a subject.39. Our discussions often occur suddenly.40. We're just starting a discussion before I know what's going on.41. When I talk to my spouse about something, my calm suddenly breaks.42. When I argue with my spouse, ı only go out and I don't say a word.43. I mostly stay silent to calm the environment a little bit.44. Sometimes I think it's good for me to leave home for a while.45. I'd rather stay silent than discuss with my spouse.46. Even if I'm right in the discussion, I stay silent to hurt my spouse.47. When I discuss with my spouse, I stay silent because I am afraid of not being able to control my anger.48. I feel right in our discussions.49. I have nothing to do with what I've been accused of.50. I'm not actually the one who's guilty about what I'm accused of.51. I'm not the one who's wrong about problems at home.52. 
I wouldn't hesitate to tell my spouse about her/his inadequacy.53. When I discuss, I remind my spouse of her/his inadequacy.54. I'm not afraid to tell my spouse about her/his incompetence. Generally, a classification workflow in Python has a straightforward and user-friendly implementation. It usually consists of these steps:1. Import packages, functions, and classes2. Get data to work with and, if appropriate, transform it3. Create a classification model and train (or fit) it with existing data4. Evaluate your model to see if its performance is satisfactory5. Apply your model to make predictions Import packages, functions, and classes
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn import metrics
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn import tree
###Output
_____no_output_____
###Markdown
Get data to work with and, if appropriate, transform it
###Code
df = pd.read_csv('divorce.csv',sep=';')
y=df.Class
x_data=df.drop(columns=['Class'])
df.head(10)
###Output
_____no_output_____
###Markdown
Data description
###Code
sns.countplot(x='Class',data=df,palette='hls')
plt.show()
count_no_sub = len(df[df['Class']==0])
count_sub = len(df[df['Class']==1])
pct_of_no_sub = count_no_sub/(count_no_sub+count_sub)
print("percentage of no divorce is", pct_of_no_sub*100)
pct_of_sub = count_sub/(count_no_sub+count_sub)
print("percentage of divorce", pct_of_sub*100)
###Output
_____no_output_____
###Markdown
Normalize data
###Code
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data)).values
x.head()
###Output
_____no_output_____
###Markdown
correlation of all atribute
###Code
plt.figure(figsize=(10,8))
sns.heatmap(df.corr(), cmap='viridis');
###Output
_____no_output_____
###Markdown
Split data set
###Code
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.5,random_state=100)
print("x_train: ",x_train.shape)
print("x_test: ",x_test.shape)
print("y_train: ",y_train.shape)
print("y_test: ",y_test.shape)
###Output
x_train: (85, 54)
x_test: (85, 54)
y_train: (85,)
y_test: (85,)
###Markdown
Create a classification model and train (or fit) it with existing data. Step 1: Import the model you want to use. Step 2: Make an instance of the model. Step 3: Train the model on the data, storing the information learned from the data. Step 4: Predict labels for new data.
###Code
K = 5
clfk = KNeighborsClassifier(n_neighbors=K)
clfk.fit(x_train, y_train.ravel())
y_predk=clfk.predict(x_test)
print("When K = {} neighnors , KNN test accuracy: {}".format(K, clfk.score(x_test, y_test)))
print("When K = {} neighnors , KNN train accuracy: {}".format(K, clfk.score(x_train, y_train)))
print(classification_report(y_test, clfk.predict(x_test)))
print("Knn(k=5) test accuracy: ", clfk.score(x_test, y_test))
ran = np.arange(1,30)
train_list = []
test_list = []
for i,each in enumerate(ran):
clfk = KNeighborsClassifier(n_neighbors=each)
clfk.fit(x_train, y_train.ravel())
test_list.append(clfk.score(x_test, y_test))
train_list.append(clfk.score(x_train, y_train))
print("Best test score is {} , K = {}".format(np.max(test_list), test_list.index(np.max(test_list))+1))
print("Best train score is {} , K = {}".format(np.max(train_list), train_list.index(np.max(train_list))+1))
###Output
When K = 5 neighbors , KNN test accuracy: 0.9764705882352941
When K = 5 neighbors , KNN train accuracy: 0.9764705882352941
precision recall f1-score support
0 0.95 1.00 0.98 41
1 1.00 0.95 0.98 44
accuracy 0.98 85
macro avg 0.98 0.98 0.98 85
weighted avg 0.98 0.98 0.98 85
Knn(k=5) test accuracy: 0.9764705882352941
Best test score is 0.9882352941176471 , K = 1
Best train score is 1.0 , K = 1
###Markdown
Report
###Code
print(classification_report(y_test, clfk.predict(x_test)))
print('Accuracy of k-nearest neighbors classifier on test set: {:.2f}'.format(clfk.score(x_test, y_test)))
###Output
precision recall f1-score support
0 0.95 1.00 0.98 41
1 1.00 0.95 0.98 44
accuracy 0.98 85
macro avg 0.98 0.98 0.98 85
weighted avg 0.98 0.98 0.98 85
Accuracy of k-nearest neighbors classifier on test set: 0.98
###Markdown
Draw Figure differnt K
###Code
plt.figure(figsize=[15,10])
plt.plot(ran,test_list,label='Test Score')
plt.plot(ran,train_list,label = 'Train Score')
plt.xlabel('Number of Neighbers')
plt.ylabel('Accuracy')
plt.xticks(ran)
plt.legend()
print("Best test score is {} , K = {}".format(np.max(test_list), test_list.index(np.max(test_list))+1))
print("Best train score is {} , K = {}".format(np.max(train_list), train_list.index(np.max(train_list))+1))
###Output
Best test score is 0.9882352941176471 , K = 1
Best train score is 1.0 , K = 1
###Markdown
Confusion Matrix
###Code
from sklearn.metrics import classification_report, confusion_matrix as cm
def confusionMatrix(y_pred,title,n):
plt.subplot(1,2,n)
ax=sns.heatmap(cm(y_test, y_pred)/sum(sum(cm(y_test, y_pred))), annot=True
,cmap='RdBu_r', vmin=0, vmax=0.52,cbar=False, linewidths=.5)
plt.title(title)
plt.ylabel('Actual outputs')
plt.xlabel('Prediction')
b, t=ax.get_ylim()
ax.set_ylim(b+.5, t-.5)
plt.subplot(1,2,n+1)
axx=sns.heatmap(cm(y_test, y_pred), annot=True
,cmap='plasma', vmin=0, vmax=40,cbar=False, linewidths=.5)
b, t=axx.get_ylim()
axx.set_ylim(b+.5, t-.5)
return
plt.figure(figsize=(8,6))
confusionMatrix(y_predk,'k-nearest neighbors',1)
plt.show()
###Output
_____no_output_____
###Markdown
Nearest neighbor classificationArguably the most simplest classification method.We are given example input vectors $x_i$ and corresponding class labels $c_i$ for $i=1,\dots, N$. The collection of pairs $\{x_i, c_i\}$ for $i=1\dots N$ is called a _data set_. Just store the dataset and for a new observed point $x$, find it's nearest neighbor $i^*$ and report $c_{i^*}$ $$i^* = \arg\min_{i=1\dots N} D(x_i, x)$$ KNN: K nearest neighborsFind the $k$ nearest neighbors and do a majority voting.
###Code
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
df = pd.read_csv(u'data/iris.txt',sep=' ')
df
X = np.hstack([
np.matrix(df.sl).T,
np.matrix(df.sw).T,
np.matrix(df.pl).T,
np.matrix(df.pw).T])
print(X[:5]) # sample view
c = np.matrix(df.c).T
print(c[:5])
###Output
[[ 5.1 3.5 1.4 0.2]
[ 4.9 3. 1.4 0.2]
[ 4.7 3.2 1.3 0.2]
[ 4.6 3.1 1.5 0.2]
[ 5. 3.6 1.4 0.2]]
[[1]
[1]
[1]
[1]
[1]]
###Markdown
The choice of the distance function (divergence) can be important. In practice, a popular choice is the Euclidean distance but this is by no means the only one.
###Code
def Divergence(x,y,p=2.):
e = np.array(x) - np.array(y)
if np.isscalar(p):
return np.sum(np.abs(e)**p)
else:
return np.sum(np.matrix(e)*p*np.matrix(e).T)
Divergence([0,0],[1,1],p=2)
W = np.matrix(np.diag([2,1]))
Divergence([0,0],[1,1],p=W)
W = np.matrix([[2,1],[1,2]])
Divergence([0,0],[1,1],p=W)
###Output
_____no_output_____
###Markdown
Equal distance contours
###Code
%run plot_normballs.py
def nearest(A,x, p=2):
'''A: NxD data matrix, N - number of samples, D - the number of features
x: test vector
returns the index of the nearest neighbor
'''
N = A.shape[0]
d = np.zeros((N,1))
md = np.inf
for i in range(N):
d[i] = Divergence(A[i,:], x, p)
if d[i]<md:
md = d[i]
min_idx = i
return min_idx
def predict(A, c, X, p=2):
L = X.shape[0]
return [np.asscalar(c[nearest(A, X[i,:], p=p)]) for i in range(L)]
x_test = np.mat('[3.3, 2.5,5.5,1.7]')
#d, idx = distance(X, x_test, p=2)
cc = predict(X, c, x_test)
print(cc)
#float(c[idx])
def leave_one_out(A, c, p=2):
N = A.shape[0]
correct = 0
for j in range(N):
md = np.inf
for i in range(N):
if i != j:
d = Divergence(A[i,:], A[j,:], p=p)
if d<md:
md = d
min_idx = i
if c[min_idx] == c[j]:
correct += 1
accuracy = 1.*correct/N
return accuracy
leave_one_out(X, c, p=np.diag([1,1,1,1]))
###Output
_____no_output_____
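###Markdown
The `nearest`/`predict` pair above is the 1-NN special case; below is a minimal sketch of the k-NN majority vote described at the start of this section, reusing `Divergence` and assuming `X` and `c` are the iris matrices loaded above.
###Code
from collections import Counter
def knn_predict(A, cls, x, k=3, p=2.):
    '''Classify x by a majority vote among its k nearest rows of A (labels in cls).'''
    d = [(Divergence(A[i, :], x, p), int(cls[i])) for i in range(A.shape[0])]
    d.sort()                                   # closest first
    votes = [label for _, label in d[:k]]
    return Counter(votes).most_common(1)[0][0]
print(knn_predict(X, c, np.mat('[3.3, 2.5, 5.5, 1.7]'), k=5))
###Output
_____no_output_____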
###Markdown
http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 7
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] + 0.02*np.random.randn(150,2) # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
weights='uniform'
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8,8))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Why do we use k = 14? Whenever we need to tune hyperparameters we use grid-search cross-validation: it evaluates the accuracy for each candidate value and reports the best k to choose. In short, whenever an algorithm takes hyperparameters, grid search (GridSearchCV) is a standard way to pick them.
###Code
from pandas import read_csv
import numpy as np
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
filename = 'pima-indians-diabetes.data.csv'
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(filename,names = names)
array = dataframe.values
x = array[: , 0:8]
y = array[: , 8]
neighbors = np.array(range(1,40))
param_grid = dict(n_neighbors = neighbors)
model_grid = KNeighborsClassifier()
grid = GridSearchCV(estimator = model_grid, param_grid = param_grid)
grid.fit(x,y)
# Identifying the best score
print(grid.best_score_)
print(grid.best_params_)
###Output
0.7578558696205755
{'n_neighbors': 14}
###Markdown
From this we came to know that the best value is k = 14. Visualizing the CV results
###Code
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
%matplotlib inline
# for the getting of k between a range of 1 to 40 we will define a range
k_range = range(1,41)
# we will create one empty for appending the k scores
k_scores = []
# iterate over different values of k; for each k record the mean cross-validated accuracy
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
scores = cross_val_score(knn, x,y, cv = 5) # By default it will consider 5 number of folds
k_scores.append(scores.mean())
plt.plot(k_range, k_scores)
plt.xlabel("values of K")
plt.ylabel("Cross validated accuracy")
###Output
_____no_output_____
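###Markdown
The same grid-search idea extends to the other k-NN hyperparameters. A sketch (assuming `x`, `y` and the data loading from the cells above are still in scope) that tunes the vote weighting and the Minkowski power alongside n_neighbors:
###Code
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
param_grid = {'n_neighbors': np.arange(1, 40),
              'weights': ['uniform', 'distance'],   # distance-weighted voting
              'p': [1, 2]}                          # 1 = Manhattan, 2 = Euclidean
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(x, y)
print(grid.best_score_)
print(grid.best_params_)
###Output
_____no_output_____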
###Markdown
Diabetes Study in Machine Learning This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.![Diabetes](https://cdn1.medicalnewstoday.com/content/images/articles/321/321097/a-doctor-writing-the-word-diabetes.jpg) CONTENT : The datasets consists of several medical predictor variables and one target variable, Outcome. Predictor variables includes the number of pregnancies the patient has had, their BMI, insulin level, age, and so on. Pregnancies: Number of times pregnant Glucose: Plasma glucose concentration a 2 hours in an oral glucose tolerance test BloodPressure: Diastolic blood pressure (mm Hg) SkinThickness: Triceps skin fold thickness (mm) Insulin: 2 Hour serum insulin (mu U/ml) BMI: Body mass index (weight in kg/(height in m)^2) DiabetesPedigreeFunction: Diabetes pedigree function Age: Age (years) Outcome: Class variable (0 or 1) 268 of 768 are 1, the others are 0 PROBLEM STATEMENT : Can you build a machine learning model to accurately predict whether or not a patient have diabetes or not?
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sb
diabetes = pd.read_csv('/Users/swaruptripathy/Desktop/Data Science and AI/datasets/diabetes.csv')
diabetes.shape
diabetes.head()
###Output
_____no_output_____
###Markdown
“Outcome” is the feature we are going to predict, 0 means No diabetes, 1 means diabetes. Of these 768 data points, 500 are labeled as 0 and 268 as 1:
###Code
print(diabetes.groupby('Outcome').size())
sb.countplot(diabetes['Outcome'],label="Count")
diabetes.info()
diabetes.describe()
diabetes.groupby('Outcome').hist(figsize=(9, 9))
sb.pairplot(diabetes)
diabetes.corr()
sb.heatmap(diabetes.corr(),annot=True)
###Output
_____no_output_____
###Markdown
k-Nearest Neighbors. The k-NN algorithm is arguably the simplest machine learning algorithm. Building the model consists only of storing the training data set. To make a prediction for a new data point, the algorithm finds the closest data points in the training data set — its “nearest neighbors.” First, let's investigate whether we can confirm the connection between model complexity and accuracy:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(diabetes.loc[:, diabetes.columns != 'Outcome'],
diabetes['Outcome'], stratify=diabetes['Outcome'],
random_state=66)
from sklearn.neighbors import KNeighborsClassifier
training_accuracy = []
test_accuracy = []
# try n_neighbors from 1 to 10
neighbors_settings = range(1, 11)
for i in neighbors_settings:
# build the model
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
# record training set accuracy
training_accuracy.append(knn.score(X_train, y_train))
# record test set accuracy
test_accuracy.append(knn.score(X_test, y_test))
plt.plot(neighbors_settings, training_accuracy, label="training accuracy")
plt.plot(neighbors_settings, test_accuracy, label="test accuracy")
plt.ylabel("Accuracy")
plt.xlabel("n_neighbors")
plt.legend()
###Output
_____no_output_____
###Markdown
The above plot shows the training and test set accuracy on the y-axis against the setting of n_neighbors on the x-axis. If we choose one single nearest neighbor, the prediction on the training set is perfect. But when more neighbors are considered, the training accuracy drops, indicating that using the single nearest neighbor leads to a model that is too complex. The best performance is somewhere around 9 neighbors. The plot suggests that we should choose n_neighbors=9. Here we are:
###Code
knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X_train, y_train)
knn.score(X_test, y_test)
print('Accuracy of K-NN classifier on training set: {:.2f}'.format(knn.score(X_train, y_train)))
print('Accuracy of K-NN classifier on test set: {:.2f}'.format(knn.score(X_test, y_test)))
X_test.head()
len(X_test)
y_test.head()
knn.predict(X_test)[0:5]
knn.predict_proba(X_test)[0:5]
y_pred = knn.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test,y_pred))
###Output
[[105 20]
[ 23 44]]
precision recall f1-score support
0 0.82 0.84 0.83 125
1 0.69 0.66 0.67 67
micro avg 0.78 0.78 0.78 192
macro avg 0.75 0.75 0.75 192
weighted avg 0.77 0.78 0.77 192
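###Markdown
Since predict_proba was shown a few cells above, a threshold-independent score is easy to add. A small sketch computing ROC AUC for the fitted k=9 model, assuming `knn`, `X_test` and `y_test` from the cells above:
###Code
from sklearn.metrics import roc_auc_score
# probability of the positive class (Outcome = 1) for each test patient
probs = knn.predict_proba(X_test)[:, 1]
print('ROC AUC of K-NN classifier on test set: {:.2f}'.format(roc_auc_score(y_test, probs)))
###Output
_____no_output_____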
###Markdown
Demo
###Code
train = pd.read_csv('project3_dataset3_train.txt', header=None,sep='\t' )
k = int(input("Enter the k nearest neighbour :"))
test = pd.read_csv('project3_dataset3_test.txt', header=None,sep='\t' )
predicted_values = knn(train, test, k)
actual_values = list(test.iloc[:,-1])
Accuracy, Precision, Recall, f1_score = metrics(actual_values ,predicted_values)
print("Accuracy : "+str(Accuracy))
print("Precision : "+str(Precision))
print("Recall : "+str(Recall))
print("f1_score : "+str(f1_score))
###Output
Enter the k nearest neighbour :9
Accuracy : 0.95
Precision : 0.9
Recall : 1.0
f1_score : 0.9473684210526316
###Markdown
**KNN Logic**
###Code
from collections import Counter  # needed for the majority vote below

# `data` (a dict mapping class label -> list of points) and `pred_pt` (the query point)
# are assumed to be defined in an earlier cell of this notebook
def knn(data, pred_pt, k):
distances = []
for grp in data:
for point in data[grp]:
dist = np.linalg.norm(np.array(pred_pt) - np.array(point))
distances.append((dist, grp))
print('All distances: ', distances)
print('k nearest neighbours: ', sorted(distances)[:k])
votes = []
for i in sorted(distances)[:k]:
votes.append(i[1])
print('k nearest neighbour classes: ', votes)
print('The predicted class is:')
return Counter(votes).most_common()[0][0]
knn(data, pred_pt, 3)
###Output
All distances: [(2.8284271247461903, 'H'), (5.0, 'H'), (4.47213595499958, 'H'), (5.656854249492381, 'L'), (5.0, 'L'), (2.8284271247461903, 'L')]
k nearest neighbours: [(2.8284271247461903, 'H'), (2.8284271247461903, 'L'), (4.47213595499958, 'H')]
k nearest neighbour classes: ['H', 'L', 'H']
The predicted class is:
###Markdown
**Visualization**
###Code
for i in data:
for j in data[i]:
plt.scatter(j[0], j[1], s=100)
plt.scatter(pred_pt[0], pred_pt[1], s=100, marker='+')
plt.show()
###Output
_____no_output_____
###Markdown
KNN (K-Nearest-Neighbors) KNN is a simple concept: define some distance metric between the items in your dataset, and find the K closest items. You can then use those items to predict some property of a test item, by having them somehow "vote" on it.As an example, let's look at the MovieLens data. We'll try to guess the rating of a movie by looking at the 10 movies that are closest to it in terms of genres and popularity.To start, we'll load up every rating in the data set into a Pandas DataFrame:
###Code
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
###Output
_____no_output_____
###Markdown
Now, we'll group everything by movie ID, and compute the total number of ratings (each movie's popularity) and the average rating for every movie:
###Code
import numpy as np
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()
###Output
_____no_output_____
###Markdown
The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
###Code
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
###Output
_____no_output_____
###Markdown
Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it.While we're at it, we'll put together everything into one big Python dictionary called movieDict. Each entry will contain the movie name, list of genre values, the normalized popularity score, and the average rating for each movie:
###Code
movieDict = {}
with open(r'ml-100k/u.item') as f:
temp = ''
for line in f:
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, genres, movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
###Output
_____no_output_____
###Markdown
For example, here's the record we end up with for movie ID 1, "Toy Story":
###Code
movieDict[1]
###Output
_____no_output_____
###Markdown
Now let's define a function that computes the "distance" between two movies based on how similar their genres are, and how similar their popularity is. Just to make sure it works, we'll compute the distance between movie ID's 2 and 4:
###Code
from scipy import spatial
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
ComputeDistance(movieDict[2], movieDict[4])
###Output
_____no_output_____
###Markdown
Remember the higher the distance, the less similar the movies are. Let's check what movies 2 and 4 actually are - and confirm they're not really all that similar:
###Code
print(movieDict[2])
print(movieDict[4])
###Output
('GoldenEye (1995)', [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0], 0.22298456260720412, 3.2061068702290076)
('Get Shorty (1995)', [0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 0.35677530017152659, 3.5502392344497609)
###Markdown
Now, we just need a little code to compute the distance between some given test movie (Toy Story, in this example) and all of the movies in our data set. Then we sort those by distance and print out the K nearest neighbors:
###Code
import operator
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distances[x][0])
return neighbors
K = 10
avgRating = 0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
    avgRating += movieDict[neighbor][3]
    print(movieDict[neighbor][0] + " " + str(movieDict[neighbor][3]))
avgRating /= float(K)
###Output
Liar Liar (1997) 3.15670103093
Aladdin (1992) 3.81278538813
Willy Wonka and the Chocolate Factory (1971) 3.63190184049
Monty Python and the Holy Grail (1974) 4.0664556962
Full Monty, The (1997) 3.92698412698
George of the Jungle (1997) 2.68518518519
Beavis and Butt-head Do America (1996) 2.78846153846
Birdcage, The (1996) 3.44368600683
Home Alone (1990) 3.08759124088
Aladdin and the King of Thieves (1996) 2.84615384615
###Markdown
While we were at it, we computed the average rating of the 10 nearest neighbors to Toy Story:
###Code
avgRating
###Output
_____no_output_____
###Markdown
How does this compare to Toy Story's actual average rating?
###Code
movieDict[1]
###Output
_____no_output_____
###Markdown
KNN
###Code
import numpy as np
def euc(x, y):
return np.linalg.norm(x - y)
def KNN(X, y, sample, k=3):
    distances = []
    # calculate the distance from the sample to every training point
    for i, x in enumerate(X):
        distances.append(euc(sample, x))
    # get the k smallest distances; sort a copy so that `distances`
    # keeps its original order for the index lookup below
    d_ord = sorted(distances)
    neigh_dists = d_ord[:k]
    neighbours = []
    neigh_classes = []
    # get the neighbours of the sample
    for neigh_dist in neigh_dists:
        idx = distances.index(neigh_dist)
        neighbours.append(X[idx])
        neigh_classes.append(y[idx])
    print('Neighbours: ', neighbours)
    print('of classes: ', neigh_classes)
###Output
_____no_output_____
###Markdown
Examples
###Code
X = np.array([
[0.15, 0.35],
[0.15, 0.28],
[0.12, 0.2],
[0.1, 0.32],
[0.06, 0.25]
])
y = np.array([1, 2, 2, 3, 3])
sample = np.array([0.1, 0.25])
KNN(X, y, sample, k=3)
KNN(X, y, sample, k=1)
###Output
Neighbours: [array([0.06, 0.25])]
of classes: [3]
###Markdown
KNN (K-Nearest-Neighbors) KNN is a simple concept: define some distance metric between the items in your dataset, and find the K closest items. You can then use those items to predict some property of a test item, by having them somehow "vote" on it.As an example, let's look at the MovieLens data. We'll try to guess the rating of a movie by looking at the 10 movies that are closest to it in terms of genres and popularity.To start, we'll load up every rating in the data set into a Pandas DataFrame:
###Code
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('e:/sundog-consult/udemy/datascience/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
###Output
_____no_output_____
###Markdown
Now, we'll group everything by movie ID, and compute the total number of ratings (each movie's popularity) and the average rating for every movie:
###Code
import numpy as np
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()
###Output
_____no_output_____
###Markdown
The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
###Code
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
###Output
_____no_output_____
###Markdown
Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it.While we're at it, we'll put together everything into one big Python dictionary called movieDict. Each entry will contain the movie name, list of genre values, the normalized popularity score, and the average rating for each movie:
###Code
movieDict = {}
with open(r'e:/sundog-consult/udemy/datascience/ml-100k/u.item') as f:
temp = ''
for line in f:
#line.decode("ISO-8859-1")
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, np.array(list(genres)), movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
###Output
_____no_output_____
###Markdown
For example, here's the record we end up with for movie ID 1, "Toy Story":
###Code
print(movieDict[1])
###Output
('Toy Story (1995)', array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 0.77358490566037741, 3.8783185840707963)
###Markdown
Now let's define a function that computes the "distance" between two movies based on how similar their genres are, and how similar their popularity is. Just to make sure it works, we'll compute the distance between movie ID's 2 and 4:
###Code
from scipy import spatial
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
ComputeDistance(movieDict[2], movieDict[4])
###Output
_____no_output_____
###Markdown
Remember the higher the distance, the less similar the movies are. Let's check what movies 2 and 4 actually are - and confirm they're not really all that similar:
###Code
print(movieDict[2])
print(movieDict[4])
###Output
('GoldenEye (1995)', array([0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]), 0.22298456260720412, 3.2061068702290076)
('Get Shorty (1995)', array([0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 0.35677530017152659, 3.5502392344497609)
###Markdown
Now, we just need a little code to compute the distance between some given test movie (Toy Story, in this example) and all of the movies in our data set. Then we sort those by distance and print out the K nearest neighbors:
###Code
import operator
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distances[x][0])
return neighbors
K = 10
avgRating = 0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
avgRating += movieDict[neighbor][3]
print (movieDict[neighbor][0] + " " + str(movieDict[neighbor][3]))
avgRating /= K
###Output
Liar Liar (1997) 3.15670103093
Aladdin (1992) 3.81278538813
Willy Wonka and the Chocolate Factory (1971) 3.63190184049
Monty Python and the Holy Grail (1974) 4.0664556962
Full Monty, The (1997) 3.92698412698
George of the Jungle (1997) 2.68518518519
Beavis and Butt-head Do America (1996) 2.78846153846
Birdcage, The (1996) 3.44368600683
Home Alone (1990) 3.08759124088
Aladdin and the King of Thieves (1996) 2.84615384615
###Markdown
While we were at it, we computed the average rating of the 10 nearest neighbors to Toy Story:
###Code
avgRating
###Output
_____no_output_____
###Markdown
How does this compare to Toy Story's actual average rating?
###Code
movieDict[1]
###Output
_____no_output_____
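###Markdown
A small design note on getNeighbors: it scores every movie and then sorts the whole list, which is fine for ~1700 titles but does more work than needed. A sketch using a partial selection with heapq.nsmallest, assuming movieDict and ComputeDistance as defined above:
###Code
import heapq
def getNeighborsPartial(movieID, K):
    # keep only the K smallest distances instead of sorting every candidate
    scored = ((ComputeDistance(movieDict[movieID], movieDict[m]), m)
              for m in movieDict if m != movieID)
    return [m for _, m in heapq.nsmallest(K, scored)]
print(getNeighborsPartial(1, 10))
###Output
_____no_output_____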
###Markdown
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My\ Drive/Colab Notebooks
!ls
# !pip3 install triplettorch
import numpy as np
import torch
import time
import os
from torch.utils.data import DataLoader
from torchvision.models import mobilenet_v2
from torchvision import transforms
from torch import nn
# from triplettorch import HardNegativeTripletMiner
# from triplettorch import AllTripletMiner
# from torch.utils.data import DataLoader
# from triplettorch import TripletDataset
from torchvision import transforms
from torchvision import datasets
import matplotlib.pyplot as plt
import torch.nn as nn
import numpy as np
import torch
import random
random.seed(0);
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic=True
# !wget http://pdd.jinr.ru/archive_full.zip
!unzip archive_full.zip -d pdd
!ls pdd
import numpy as np
import os
from torch.utils.data import Dataset
from torch.utils.data import Sampler
from torchvision.datasets import ImageFolder
class AllCropsDataset(Dataset):
def __init__(self, image_folder, subset='', transform=None, target_transform=None):
self.transform = transform
self.target_transform = target_transform
# data subset (train, test)
self.subset = subset
# store each crop data
self.datasets = []
self.crops = []
self.samples = []
self.imgs = []
self.classes = []
self.targets = []
self.class_to_idx = {}
# iterate over all folders
# with all crops
for i, d in enumerate(os.listdir(image_folder)):
self.crops.append(d)
# full path to the folder
d_path = os.path.join(image_folder, d, self.subset)
# attribute name to set attribute
attr_name = '%s_ds' % d.lower()
print("Load '%s' data" % attr_name)
# set the attribute with the specified name
setattr(self, attr_name, ImageFolder(d_path))
# add the dataset to datasets list
self.datasets.append(getattr(self, attr_name))
# get dataset attribute
ds = getattr(self, attr_name)
# add attr targets to the global targets
ds_targets = [x+len(self.classes) for x in ds.targets]
self.targets.extend(ds_targets)
# add particular classes to the global classes' list
ds_classes = []
for c in ds.classes:
new_class = '__'.join([d, c])
self.class_to_idx[new_class] = len(self.classes) + ds.class_to_idx[c]
ds_classes.append(new_class)
self.classes.extend(ds_classes)
# imgs attribute has form (file_path, target)
ds_imgs, _ = zip(*ds.imgs)
# images and samples are equal
self.imgs.extend(list(zip(ds_imgs, ds_targets)))
self.samples.extend(list(zip(ds_imgs, ds_targets)))
def __len__(self):
return len(self.samples)
def __getitem__(self, idx):
path, target = self.samples[idx]
img = self.datasets[0].loader(path)
if self.transform is not None:
img = self.transform(img)
if self.target_transform is not None:
target = self.target_transform(target)
return img, target
DATA_PATH = 'pdd'
def prepare_datasets():
train_ds = AllCropsDataset(
DATA_PATH,
subset='train',
transform=transforms.Compose([
transforms.Resize(224),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
# transforms.Normalize([0.4352, 0.5103, 0.2836], [0.2193, 0.2073, 0.2047])]),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
target_transform=torch.tensor)
test_ds = AllCropsDataset(
DATA_PATH,
subset='test',
transform=transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
target_transform=torch.tensor)
# print statistics
print('Train size:', len(train_ds))
print('Test size:', len(test_ds))
print('Number of samples in the dataset:', len(train_ds))
print('Crops in the dataset:', train_ds.crops)
print('Total number of classes in the dataset:', len(train_ds.classes))
print('Classes with the corresponding targets:')
print(train_ds.class_to_idx)
return train_ds, test_ds
import numpy as np
import shutil
import os
from glob import glob
from tqdm import tqdm
# from tqdm.notebook import tqdm
TEST_SIZE = 0.2
RS = 42
def _remove_path_if_exists(path):
if os.path.exists(path):
if os.path.isfile(path):
os.remove(path)
else:
shutil.rmtree(path)
def _makedir_and_copy2(path, dirname, fnames):
path_for_saving_files = os.path.join(path, dirname)
os.makedirs(path_for_saving_files)
for fname in fnames:
shutil.copy2(fname, path_for_saving_files)
def datadir_train_test_split(origin_path, test_size, random_state=0):
"""Splits the data in directory on train and test.
# Arguments
origin_path: path to the original directory
test_size: the size of test data fraction
# Returns
Tuple of paths: `(train_path, test_path)`.
"""
print("\n\nSplit `%s` directory" % origin_path)
print("Test size: %.2f" % test_size)
print("Random state: {}".format(random_state))
train_path = os.path.join(origin_path, 'train')
test_path = os.path.join(origin_path, 'test')
_remove_path_if_exists(train_path)
_remove_path_if_exists(test_path)
try:
subfolders = glob(os.path.join(origin_path, "*", ""))
# if train/test split is already done
if set(subfolders) == set(['train', 'test']):
return (train_path, test_path)
# if train/test split is required
# recreate train/test folders
os.makedirs(train_path)
os.makedirs(test_path)
for folder in tqdm(subfolders, total=len(subfolders), ncols=57):
# collect all images
img_fnames = []
for ext in ["*.jpg", "*.png", "*jpeg"]:
img_fnames.extend(
glob(os.path.join(folder, ext)))
# set random state parameter
rs = np.random.RandomState(random_state)
# shuffle array
rs.shuffle(img_fnames)
# split on train and test
n_test_files = int(len(img_fnames)*test_size)
test_img_fnames = img_fnames[:n_test_files]
train_img_fnames = img_fnames[n_test_files:]
# copy train files into `train_path/folder`
folder_name = os.path.basename(os.path.dirname(folder))
_makedir_and_copy2(train_path, folder_name, train_img_fnames)
# copy test files into `test_path/folder`
_makedir_and_copy2(test_path, folder_name, test_img_fnames)
for folder in subfolders:
shutil.rmtree(folder)
except:
_remove_path_if_exists(train_path)
_remove_path_if_exists(test_path)
raise
return (train_path, test_path)
def split_on_train_and_test():
for crop in os.listdir('pdd'):
crop_path = os.path.join('pdd', crop)
_ = datadir_train_test_split(crop_path,
test_size=0.2,
random_state=42)
split_on_train_and_test()
BATCH_SIZE = 16
train_ds, test_ds = prepare_datasets()
train_loader = torch.utils.data.DataLoader(train_ds, pin_memory=True, batch_size=BATCH_SIZE, shuffle=True, num_workers=BATCH_SIZE)
test_loader = torch.utils.data.DataLoader(test_ds, pin_memory=True, batch_size=BATCH_SIZE, shuffle=True, num_workers=BATCH_SIZE)
type(train_ds)
plt.imshow(train_ds[12][0].permute(1,2,0))
###Output
_____no_output_____
###Markdown
A plain CNN with 1024 features
###Code
def simple_conv_block(in_channels,
out_channels,
kernel_size,
stride,
padding,
pool_size,
pool_stride):
return nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
nn.ReLU(),
nn.BatchNorm2d(out_channels),
nn.MaxPool2d(pool_size, pool_stride))
import torch.nn.functional as F
class Model(nn.Module):
'''Feature extractor'''
def __init__(self, output_dim=1024):
super(Model, self).__init__()
self.output_dim = output_dim
self.cnn1 = simple_conv_block(3, 32, 10, 1, 1, 2, 2)
self.cnn2 = simple_conv_block(32, 64, 7, 1, 1, 2, 2)
self.cnn3 = simple_conv_block(64, 128, 5, 1, 1, 2, 2)
self.cnn4 = simple_conv_block(128, 256, 3, 1, 1, 2, 2)
self.cnn5 = simple_conv_block(256, 512, 3, 1, 1, 2, 2)
self.feature_proj = nn.Sequential(
nn.Flatten(),
nn.Linear(512*7*7, self.output_dim),
nn.ReLU()
)
self.mlp = nn.Sequential(
nn.Linear(self.output_dim, 512),
nn.ReLU(),
nn.Linear(512,256),
nn.ReLU()
)
self.fc = nn.Sequential(
# nn.Linear(self.output_dim, 15),
nn.Linear(256, 15),
nn.LogSoftmax()
)
def forward(self, x):
x = self.cnn1(x)
x = self.cnn2(x)
x = self.cnn3(x)
x = self.cnn4(x)
x = self.cnn5(x)
x = self.feature_proj(x)
x=self.mlp(x)
x = self.fc(x)
# print(x.shape)
# x = self.cnn1(x)
# x = self.cnn2(x)
# x = self.cnn3(x)
# x = self.cnn4(x)
# x = self.cnn5(x)
# print(x.shape)
# x = x.view(x.size()[0], -1)
# print(x.shape)
# x = F.relu(self.feature_proj(x))
# print(x.shape)
# x = F.log_softmax(self.fc(x), dim=1)
# x = x.view(x.size()[0], -1)
# # x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
# x = self.fc1(x)
# x = self.act3(x)
# x = self.fc2(x)
# x = self.act4(x)
# x = self.fc3(x)
# x=self.sm(x)
return x
###Output
_____no_output_____
###Markdown
A plain CNN with 2048 features
###Code
def simple_conv_block(in_channels,
out_channels,
kernel_size,
stride,
padding,
pool_size,
pool_stride):
return nn.Sequential(
nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
nn.ReLU(),
nn.BatchNorm2d(out_channels),
nn.MaxPool2d(pool_size, pool_stride))
import torch.nn.functional as F
class Model(nn.Module):
'''Feature extractor'''
def __init__(self, output_dim=2048):
super(Model, self).__init__()
self.output_dim = output_dim
self.cnn1 = simple_conv_block(3, 32, 10, 1, 1, 2, 2)
self.cnn2 = simple_conv_block(32, 64, 7, 1, 1, 2, 2)
self.cnn3 = simple_conv_block(64, 128, 5, 1, 1, 2, 2)
self.cnn4 = simple_conv_block(128, 256, 3, 1, 1, 2, 2)
self.cnn5 = simple_conv_block(256, 512, 3, 1, 1, 2, 2)
self.cnn6 = simple_conv_block(512, 1024, 3, 1, 1, 2, 2)
self.cnn7 = simple_conv_block(1024, output_dim, 3, 1, 1, 2, 2)
# self.feature_proj = nn.Sequential(
# nn.Flatten(),
# nn.Linear(512*7*7, self.output_dim),
# nn.ReLU()
# )
# self.mlp = nn.Sequential(
# nn.Linear(self.output_dim, 512),
# nn.ReLU(),
# nn.Linear(512,256),
# nn.ReLU()
# )
self.fc = nn.Sequential(
# nn.Linear(1, 15),
# nn.Linear(256, 15),
# nn.Conv2d(self.output_dim, 15, 1, 1),
# nn.ReLU(),
# # nn.Linear(512*7*7, self.output_dim),
nn.Flatten(),
nn.Linear(self.output_dim, 15),
nn.LogSoftmax(dim=1)
)
def forward(self, x):
x = self.cnn1(x)
x = self.cnn2(x)
x = self.cnn3(x)
x = self.cnn4(x)
x = self.cnn5(x)
x = self.cnn6(x)
x = self.cnn7(x)
# x = self.feature_proj(x)
# x=self.mlp(x)
x = self.fc(x)
return x
###Output
_____no_output_____
###Markdown
Transfer learning
###Code
###Output
_____no_output_____
###Markdown
###Code
try:
import torchbearer
except:
!pip install -q torchbearer
import torchbearer
print(torchbearer.__version__)
try:
import pycm
except:
!pip install -q pycm
import pycm
import torchbearer
from torchbearer.callbacks import imaging
inv_normalize = transforms.Normalize(
mean=[-0.485/0.229, -0.456/0.224, -0.406/0.255],
std=[1/0.229, 1/0.224, 1/0.255]
)
make_grid = imaging.MakeGrid(torchbearer.INPUT, num_images=64, nrow=8, transform=inv_normalize)
make_grid = make_grid.on_test().to_pyplot().to_file('sample.png')
# model=Model()
# model.state_dict=Model().load_state_dict(torch.load('CNNmodelNLLloss.pt'))
# model = models.resnet50(pretrained=True)
# # Disable grad for all conv layers
# for param in model.parameters():
# param.requires_grad = False
from torchvision import datasets, models, transforms
model =models.mobilenet_v2(pretrained=True)
for param in model.parameters():
param.requires_grad = False
model.classifier[0] = nn.Linear(model.last_channel, 15)
model.classifier[1]=nn.LogSoftmax(dim=1)
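# Transfer learning in a nutshell: the pretrained MobileNetV2 backbone is frozen above
# (requires_grad=False), and only the swapped-in classification head is trained.
# The stock classifier is Sequential(Dropout, Linear(last_channel, 1000)); replacing
# classifier[0] with Linear(last_channel, 15) and classifier[1] with LogSoftmax gives a
# 15-class head whose log-probabilities pair with the NLLLoss defined below.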
from torchbearer.callbacks import EarlyStopping
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
loss = torch.nn.NLLLoss()
# loss=torch.nn.BCELoss()
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
import torchbearer
from torchbearer import Trial
from torchbearer.callbacks import Best
import sys
# if 'tensorboardX' in sys.modules:
# import tensorboardX
# from torchbearer.callbacks import TensorBoard
# callbacks = [TensorBoard(write_batch_metrics=True)]
# else:
# callbacks = []
checkpoint = Best('bestmodel.pt', monitor='val_acc', mode='max')
# callbacks.append(make_grid)
stopping = EarlyStopping(monitor='val_acc', patience=5, mode='max')
from torchbearer.callbacks import PyCM
cm = PyCM().on_val().to_pyplot( title='Confusion Matrix: {epoch}')
# print_normalized_matrix()
# to_pyplot(normalize=True,)
#
# Decay LR by a factor of 0.1 every 7 epochs
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
from torchsummary import summary
summary(model, input_size=(3, 224, 224))
help(mobilenet_v2)
print(model.last_channel)
trial = Trial(model, optimizer, loss, metrics=['acc', 'loss'], callbacks=[checkpoint,cm]).to(device)
trial.with_train_generator(train_loader).with_val_generator(test_loader)
trial.to(device)
history = trial.run(epochs=70, verbose=2)
###Output
_____no_output_____
###Markdown
Test
###Code
model_test1 =models.mobilenet_v2(pretrained=True)
model_test1 = torch.nn.Sequential(*(list(model_test1.children())[:-1]))
# model_test1.classifier[1] = nn.Linear(model_test1.last_channel, 15)
for param in model_test1.parameters():
param.requires_grad = False
model_test1.to(device)
model_test1.eval()
test_x_numpy=[]
test_x1_numpy=[]
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = model_test1(inputs).detach().cpu().numpy()
targets= targets.detach().cpu().numpy()
if (outputs.shape[0]==16):
test_x_numpy.append(outputs)
test_x_numpy=np.vstack(test_x_numpy)
print(test_x_numpy.shape)
# model_test1.fc = nn.Sequential(
# nn.Linear(1280, 15),
# nn.LogSoftmax(dim=1))
model_test1 =models.mobilenet_v2(pretrained=True)
# model_test1.classifier[0] = nn.Linear(model_test1.last_channel, 15)
# model_test1.classifier[1]=nn.LogSoftmax(dim=1)
model_test1.train()
trial = Trial(model_test1, optimizer, loss, metrics=['acc', 'loss'], callbacks=[checkpoint]).to(device)
trial.with_train_generator(train_loader).with_val_generator(test_loader)
trial.to(device)
history = trial.run(epochs=1, verbose=2)
model_test1 = torch.nn.Sequential(*(list(model_test1.children())[:-1]))
model_test1.eval()
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = model_test1(inputs).detach().cpu().numpy()
targets= targets.detach().cpu().numpy()
if (outputs.shape[0]==16):
test_x1_numpy.append(outputs)
test_x1_numpy=np.vstack(test_x1_numpy)
print(test_x1_numpy.shape)
np.testing.assert_allclose(test_x_numpy,test_x1_numpy)
###Output
_____no_output_____
###Markdown
Transfer learning with batch norm
###Code
model_test1 =models.mobilenet_v2(pretrained=True)
model_test1.classifier[0] = nn.Linear(model_test1.last_channel, 15)
model_test1.classifier[1]=nn.LogSoftmax(dim=1)
model_test1.to(device)
# model_test1.train()
# trial = Trial(model_test1, optimizer, loss, metrics=['acc', 'loss'], callbacks=[checkpoint]).to(device)
# trial.with_train_generator(train_loader).with_val_generator(test_loader)
# trial.to(device)
# history = trial.run(epochs=1, verbose=2)
model_test1 = torch.nn.Sequential(*(list(model_test1.children())[:-1]))
model_test1.eval()
test_x_numpy=[]
train_x_numpy=[]
test_y_numpy=[]
train_y_numpy=[]
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(train_loader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = model_test1(inputs).detach().cpu().numpy()
targets= targets.detach().cpu().numpy()
if (outputs.shape[0]==16):
train_x_numpy.append(outputs)
train_y_numpy.append(targets)
train_x_numpy=np.vstack(train_x_numpy)
train_y_numpy=np.hstack(train_y_numpy)
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = model_test1(inputs).detach().cpu().numpy()
targets= targets.detach().cpu().numpy()
if (outputs.shape[0]==16):
test_x_numpy.append(outputs)
test_y_numpy.append(targets)
test_x_numpy=np.vstack(test_x_numpy)
test_y_numpy=np.hstack(test_y_numpy)
x_train=torch.FloatTensor(train_x_numpy)
x_test=torch.FloatTensor(test_x_numpy)
y_train=torch.FloatTensor(train_y_numpy)
y_test=torch.FloatTensor(test_y_numpy)
# classifier = nn.Sequential(OrderedDict([
# ('fc1', nn.Linear(25088, 4096)),
# ('relu', nn.ReLU()),
# ('fc2', nn.Linear(4096, 102)),
# ('output', nn.LogSoftmax(dim=1))
# ]))
# classifier = nn.Sequential(
# nn.Linear(1280, 15),
# nn.LogSoftmax(dim=1))
# trial.with_train_generator(train_loader).with_val_generator(test_loader)
# trial.to(device)
# history = trial.run(epochs=70, verbose=2)
y_train = torch.tensor(y_train, dtype=torch.long)
y_test = torch.tensor(y_test, dtype=torch.long)
trial = Trial(cla, optimizer, loss, metrics=['acc', 'loss'], callbacks=[checkpoint]).to(device)
trial.with_train_data(x_train, y_train).with_val_data(x_test,y_test)
trial.to(device)
history = trial.run(epochs=50, verbose=2)
class cl(torch.nn.Module):
    def __init__(self):
        super(cl, self).__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),
            # after the spatial mean-pool in forward(), each sample is a single
            # 1280-dim MobileNetV2 feature vector, so the head maps 1280 -> 15 classes
            # (the previous 8960*7 = 1280*7*7 input size only applies without the pooling)
            nn.Linear(1280, 15),
            nn.LogSoftmax(dim=1)
        )
    def forward(self, x):
        x = x.mean(3).mean(2)   # global average pool over H and W
        x = self.fc(x)
        return x
cla=cl()
cla.to(device)
torch.save(model,'CNNmodelNLLloss.pt')
torch.save(model.state_dict(),'CNNmodelNLLloss.pt')
model.load_state_dict(torch.load('bestmodel.pt'))  # load weights in place; load_state_dict does not return the model
model.eval()
model = torch.nn.Sequential(*(list(model.children())[:-1]))
model
# from torchsummary import summary
# summary(model, input_size=(3, 256, 256))
# model(torch.rand(1, 3, 256, 256).to(device)).shape
print(history)
###Output
_____no_output_____
###Markdown
Converting features to NumPy
###Code
# for img in train_ds:
# print(img)
# ipt=torch.FloatTensor(img)
# # ipt.unsqueeze_(0)
from tqdm import tqdm
# from tqdm.notebook import tqdm
# i=0
from sklearn import metrics
from sklearn.neighbors import KNeighborsClassifier
count=0
scorsum=0
train_x_numpy=[]
train_y_numpy=[]
with torch.no_grad():
    for batch_idx, (inputs, targets) in enumerate(train_loader):
        inputs, targets = inputs.to(device), targets.to(device)
        outputs = model(inputs).detach().cpu().numpy()
        targets = targets.detach().cpu().numpy()
        if (outputs.shape[0]==16):
            print(outputs.shape)
            # flatten each sample to one feature vector, keeping the sample order
            # (reshape(2048,16).transpose() would mix features from different samples)
            print(outputs.reshape(16, -1).shape)
            print(targets.shape)
            # knn=KNeighborsClassifier(n_neighbors=1)
            # knn.fit(outputs,targets)
            train_x_numpy.append(outputs.reshape(16, -1))
            train_y_numpy.append(targets)
test_x_numpy=[]
test_y_numpy=[]
with torch.no_grad():
    for batch_idx, (inputs, targets) in enumerate(test_loader):
        inputs, targets = inputs.to(device), targets.to(device)
        outputs = model(inputs).detach().cpu().numpy()
        targets = targets.detach().cpu().numpy()
        # y_pred=knn.predict(outputs)
        # scor=metrics.accuracy_score(targets,y_pred)
        # scorsum=scorsum+scor
        # count=count+1
        if (outputs.shape[0]==16):
            test_x_numpy.append(outputs.reshape(16, -1))
            test_y_numpy.append(targets)
# print(scorsum/count)
# for b, batch in enumerate(train_loader):
# labels, data =
# data = torch.cat( [ datum for datum in data ], axis = 0 )
# labels = torch.cat( [ label for label in labels ], axis = 0 )
# embeddings = model( data.cuda( ) ).detach( ).cpu( ).numpy( )
# labels = labels.numpy( )
# test_embeddings.append( embeddings )
# test_labels.append( labels )
# while i < len(train_ds):
# ipt= torch.FloatTensor(train_ds[i][0]).to(device)
# ipt.unsqueeze_(0)
# probs = torch.exp(model.forward(ipt))
# probsTrainNP=probs.cpu().detach().numpy()
# TrainNP=np.append(TrainNP,probsTrainNP)
# # print(probsTrainNP)
# i=i+1
print(len(train_loader))
# type(like_x_list)
# outputs.shape
# outputs.reshape(1024,5).shape
print(len(train_ds))
outputs.shape[0]
from sklearn.preprocessing import normalize
import sklearn.preprocessing
train_x_numpy=normalize(np.vstack(train_x_numpy),norm='l2')
train_y_numpy=np.hstack(train_y_numpy)
test_y_numpy=np.hstack(test_y_numpy)
test_x_numpy=normalize(np.vstack(test_x_numpy),norm='l2')
print(train_y_numpy.shape)
print(train_x_numpy.shape)
print(test_y_numpy.shape)
print(test_x_numpy.shape)
# X = normalize(numpy.vstack([X_0, X_1]), norm='l2')
# from numpy import array
# data = [[[[11, 22],
# [33, 44],
# [55, 66]]]]
# data=array(data)
# data.shape
# data.reshape(3,2).sh
print(type(train_y_numpy))
print(train_y_numpy.shape)
train_y_numpy
# like_x_list = [train_x_numpy(BATCH_SIZE, 2048).astype('float32') for _ in range(len(train_loader))]
# like_x_list = [np.random.rand(1, 1024).astype('float32') for _ in range(100)]
print (train_x_numpy.shape)
# print (train_x_numpy.reshape(-1,1).shape)
xreshpe=train_x_numpy.reshape(-1,1)
print(xreshpe.shape)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics.pairwise import cosine_similarity
# train_x_numpy=[np.random.rand(, 1024).astype('float32') for _ in range(100)]
# x_train_reshape=train_x_numpy.numpy().reshape(-1,1)
# x_train_reshape=train_x_numpy.
# y_train_reshape=train_y_numpy.reshape(-1,1)
# np.asarray(train_x_numpy).reshape(-1,1)
from sklearn import metrics
# k_range=range(1,26)
k=1
scores={}
scores_list=[]
# for k in k_range:
# note: cosine_similarity is a similarity (larger means closer), not a distance, so passing it
# as `metric` either errors on 1-D inputs or inverts the neighbor ordering;
# the next cell uses scipy's cosine *distance* instead
knn=KNeighborsClassifier(n_neighbors=k,metric=cosine_similarity)
# knn.fit(np.asarray(train_x_numpy).reshape(-1,1),np.asarray (train_y_numpy).reshape(-1,1))
knn.fit(train_x_numpy,train_y_numpy.reshape(-1,1))
y_pred=knn.predict(test_x_numpy)
scores[k]=metrics.accuracy_score(test_y_numpy,y_pred)
print(scores[k])
from sklearn.neighbors import KNeighborsClassifier
from scipy.spatial.distance import cosine
# train_x_numpy=[np.random.rand(, 1024).astype('float32') for _ in range(100)]
# x_train_reshape=train_x_numpy.numpy().reshape(-1,1)
# x_train_reshape=train_x_numpy.
# y_train_reshape=train_y_numpy.reshape(-1,1)
# np.asarray(train_x_numpy).reshape(-1,1)
from sklearn import metrics
# k_range=range(1,26)
k=1
scores={}
scores_list=[]
# for k in k_range:
knn=KNeighborsClassifier(n_neighbors=k,metric=cosine)
# knn.fit(np.asarray(train_x_numpy).reshape(-1,1),np.asarray (train_y_numpy).reshape(-1,1))
knn.fit(train_x_numpy,train_y_numpy)
y_pred=knn.predict(test_x_numpy)
scores[k]=metrics.accuracy_score(test_y_numpy,y_pred)
print(scores[k])
scores
from sklearn.ensemble import GradientBoostingRegressor
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.datasets import load_boston
from sklearn.metrics import mean_absolute_error
from sklearn.ensemble import GradientBoostingClassifier
boost = GradientBoostingClassifier()
boost.fit(train_x_numpy,train_y_numpy)
y_pred = boost.predict(test_x_numpy)
acc=metrics.accuracy_score(test_y_numpy,y_pred)
print(acc)
from sklearn.metrics import confusion_matrix
confusion_matrix(test_y_numpy, y_pred)
###Output
_____no_output_____
###Markdown
Nearest neighbor classification: arguably the simplest classification method. We are given example input vectors $x_i$ and corresponding class labels $c_i$ for $i=1,\dots, N$. The collection of pairs $\{x_i, c_i\}$ for $i=1,\dots,N$ is called a _data set_. Just store the data set and, for a new observed point $x$, find its nearest neighbor $i^*$ and report $c_{i^*}$: $$i^* = \arg\min_{i=1\dots N} D(x_i, x)$$ KNN (K nearest neighbors): find the $k$ nearest neighbors and do a majority vote (a tiny worked example is included at the top of the next code cell).
###Code
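# Added illustration (not part of the original notebook): a tiny 1-NN example that
# applies the rule defined above, i* = argmin_i D(x_i, x), using squared Euclidean distance.
import numpy as np
X_demo = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])  # example inputs x_i
c_demo = np.array([0, 0, 1])                              # corresponding labels c_i
x_new = np.array([4.0, 4.5])                              # new observed point x
i_star = np.argmin(np.sum((X_demo - x_new)**2, axis=1))   # nearest neighbor index i*
print(c_demo[i_star])                                     # reports label 1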
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
df = pd.read_csv(u'data/iris.txt',sep=' ')
df
X = np.hstack([
np.matrix(df.sl).T,
np.matrix(df.sw).T,
np.matrix(df.pl).T,
np.matrix(df.pw).T])
print(X[:5]) # sample view
c = np.matrix(df.c).T
print(c[:5])
###Output
[[ 5.1 3.5 1.4 0.2]
[ 4.9 3. 1.4 0.2]
[ 4.7 3.2 1.3 0.2]
[ 4.6 3.1 1.5 0.2]
[ 5. 3.6 1.4 0.2]]
[[1]
[1]
[1]
[1]
[1]]
###Markdown
The choice of the distance function (divergence) can be important. In practice, a popular choice is the Euclidean distance, but this is by no means the only one.
###Code
def Divergence(x,y,p=2.):
e = np.array(x) - np.array(y)
if np.isscalar(p):
return np.sum(np.abs(e)**p)
else:
return np.sum(np.matrix(e)*p*np.matrix(e).T)
Divergence([0,0],[1,1],p=2)
W = np.matrix(np.diag([2,1]))
Divergence([0,0],[1,1],p=W)
W = np.matrix([[2,1],[1,2]])
Divergence([0,0],[1,1],p=W)
###Output
_____no_output_____
###Markdown
Equal distance contours
###Code
%run plot_normballs.py
def nearest(A,x, p=2):
'''A: NxD data matrix, N - number of samples, D - the number of features
x: test vector
returns the index of the nearest neighbor
'''
N = A.shape[0]
d = np.zeros((N,1))
md = np.inf
for i in range(N):
d[i] = Divergence(A[i,:], x, p)
if d[i]<md:
md = d[i]
min_idx = i
return min_idx
def predict(A, c, X, p=2):
L = X.shape[0]
return [np.asscalar(c[nearest(A, X[i,:], p=p)]) for i in range(L)]
x_test = np.mat('[3.3, 2.5,5.5,1.7]')
#d, idx = distance(X, x_test, p=2)
cc = predict(X, c, x_test)
print(cc)
#float(c[idx])
def leave_one_out(A, c, p=2):
N = A.shape[0]
correct = 0
for j in range(N):
md = np.inf
for i in range(N):
if i != j:
d = Divergence(A[i,:], A[j,:], p=p)
if d<md:
md = d
min_idx = i
if c[min_idx] == c[j]:
correct += 1
accuracy = 1.*correct/N
return accuracy
leave_one_out(X, c, p=np.diag([1,1,1,1]))
###Output
_____no_output_____
###Markdown
http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
###Code
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 3
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] + 0.02*np.random.randn(150,2) # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
weights='uniform'
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8,8))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Plotting a contour graph
###Code
# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_point, y_point = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_point[:, 0].min() - 1, stop = X_point[:, 0].max() + 1, step = 0.01),
np.arange(start = X_point[:, 1].min() - 1, stop = X_point[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_point)):
plt.scatter(X_point[y_point == j, 0], X_point[y_point == j, 1],
c = ListedColormap(('green', 'blue'))(i), label = j)
plt.title('K-NN Training set')
plt.xlabel('Age')
plt.ylabel('Salary')
plt.legend()
# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_point, y_point = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_point[:, 0].min() - 1, stop = X_point[:, 0].max() + 1, step = 0.01),
np.arange(start = X_point[:, 1].min() - 1, stop = X_point[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_point)):
plt.scatter(X_point[y_point == j, 0], X_point[y_point == j, 1],
c = ListedColormap(('green', 'blue'))(i), label = j)
plt.title('K-NN Test set')
plt.xlabel('Age')
plt.ylabel('Salary')
plt.legend()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
Amir Shokri St code : 9811920009 E-mail : [email protected] K-Nearest Neighbour (KNN)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
for dirname, _, filenames in os.walk('/Users/Amirsh.nll/Downloads/glass'):
for filename in filenames:
print(os.path.join(dirname, filename))
data = pd.read_csv('glass.csv', encoding ='latin1')
data.info()
data.head(20000)
y = data['Type'].values
y = y.reshape(-1,1)
x_data = data.drop(['Type'],axis = 1)
print(x_data)
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data)).values
x.head(20000)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.5,random_state=100)
y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)
print("x_train: ",x_train.shape)
print("x_test: ",x_test.shape)
print("y_train: ",y_train.shape)
print("y_test: ",y_test.shape)
from sklearn.neighbors import KNeighborsClassifier
K = 1
knn = KNeighborsClassifier(n_neighbors=K)
knn.fit(x_train, y_train.ravel())
print("When K = {} neighbors, KNN test accuracy: {}".format(K, knn.score(x_test, y_test)))
print("When K = {} neighbors, KNN train accuracy: {}".format(K, knn.score(x_train, y_train)))
ran = np.arange(1,30)
train_list = []
test_list = []
for i,each in enumerate(ran):
knn = KNeighborsClassifier(n_neighbors=each)
knn.fit(x_train, y_train.ravel())
test_list.append(knn.score(x_test, y_test))
train_list.append(knn.score(x_train, y_train))
plt.figure(figsize=[15,10])
plt.plot(ran,test_list,label='Test Score')
plt.plot(ran,train_list,label = 'Train Score')
plt.xlabel('Number of Neighbors')
plt.ylabel('Accuracy')
plt.xticks(ran)
plt.legend()
print("Best test score is {} and K = {}".format(np.max(test_list), test_list.index(np.max(test_list))+1))
print("Best train score is {} and K = {}".format(np.max(train_list), train_list.index(np.max(train_list))+1))
###Output
Best test score is 0.6448598130841121 and K = 1
Best train score is 1.0 and K = 1
###Markdown
KNN Classifier. Database from: https://www.freecodecamp.org/news/how-to-build-your-first-neural-network-to-predict-house-prices-with-keras-f8db83049159/
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('/data/housepricedata.csv')
dataset.head()
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 10].values
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Fitting K-NN to the Training set
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
#Predicting the Test set results
y_pred = classifier.predict(X_test)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(y_test, y_pred)
c = classification_report(y_test, y_pred)
cm
# Applying k-Fold Cross Validation
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
accuracies.mean()
#Classification Metrics
print(c)
###Output
precision recall f1-score support
0 0.88 0.91 0.90 179
1 0.91 0.88 0.90 186
accuracy 0.90 365
macro avg 0.90 0.90 0.90 365
weighted avg 0.90 0.90 0.90 365
###Markdown
COMPSCI 589 HW1 Name: Haochen Wang SECTION 0: Load Libraries
###Code
import sklearn.model_selection
from scipy import stats
import numpy as np
import csv
import math
import matplotlib.pyplot as plt
from operator import itemgetter
from collections import Counter
###Output
_____no_output_____
###Markdown
SECTION 1: Evaluating KNN
###Code
#Load the Iris data file using python csv module
knn_file = open('iris.csv')
csvreader = csv.reader(knn_file)
knnd = []
for row in csvreader:
knnd.append(row)
knndata = []
for row in knnd:
c = []
c.append(float(row[0]))
c.append(float(row[1]))
c.append(float(row[2]))
c.append(float(row[3]))
c.append(row[4])
knndata.append(c)
# print(knndata)
# Implementing Helper functions
# Normalize module
def mini(col):
min = col[0]
for val in col:
if val < min:
min = val
return min
def maxi(col):
max = col[0]
for val in col:
if val > max:
max = val
return max
def normalizationall(col, max, min):
newarr = []
for val in col:
newarr.append((val-min)/(max-min))
return newarr
def normalization(col):
min = mini(col)
max = maxi(col)
newarr = []
for val in col:
newarr.append((val-min)/(max-min))
return newarr, min, max
def vote(arr):
return max(set(arr), key=arr.count)
# Split the Training and Testing Data
def split(dat, ranumber):
traknn, tesknn = sklearn.model_selection.train_test_split(dat, train_size=0.8, test_size=0.2, random_state=ranumber, shuffle=True)
return traknn, tesknn
# trainknn, testknn = split(knndata, 589)
# Euclidean distance
def edistance(a, b):
a = np.array(a)
b = np.array(b)
s = np.linalg.norm(a - b)
return s
# print(edistance([1,1,1,4],[5,5,5,2]))
# KD-Tree
# I will do it later if I have enough time.
# KNN Helpers
# def seperate_d_c(data):
# dat = []
# cat = []
# all = []
# for row in data:
# da = []
# da.append(float(row[0]))
# da.append(float(row[1]))
# da.append(float(row[2]))
# da.append(float(row[3]))
# dat.append(da)
# al = da.copy()
# al.append(row[4])
# all.append(al)
# cat.append(row[4])
# return dat, cat, all
# trainknndata, trainknncat, ktr = seperate_d_c(trainknn)
# testknndata, testknncat, kte = seperate_d_c(testknn)
def transpose(dat):
a = []
a.append([row[0] for row in dat])
a.append([row[1] for row in dat])
a.append([row[2] for row in dat])
a.append([row[3] for row in dat])
if len(dat[0]) > 4:
a.append([row[4] for row in dat])
return a
def normaltab(traindat, testdat):
trainnom = []
testnom = []
i = 0
for col in traindat:
trarr = []
tearr = []
if i < 4:
trarr, trmin, trmax = normalization(col)
tearr = normalizationall(testdat[i], trmax, trmin)
trainnom.append(trarr)
testnom.append(tearr)
i+=1
if len(traindat) == 5:
trainnom.append(traindat[4])
testnom.append(testdat[4])
return trainnom, testnom
def transback(dat):
ret = []
i = 0
while i < len(dat[0]):
row = []
for col in dat:
row.append(col[i])
ret.append(row)
i+=1
return ret
def distarray(normpt, normeddat):
pt1 = normpt[:-1]
cat1 = normpt[-1]
disarray = []
for ins in normeddat:
pt2 = ins[:-1]
cat2 = ins[-1]
dis = edistance(pt1, pt2)
disarray.append([dis,cat2])
return sorted(disarray, key=itemgetter(0))
def normflow(train, test):
ttrainknn = transpose(train)
ttestknn = transpose(test)
normttrain, normttest = normaltab(ttrainknn,ttestknn)
nrmtr, nrmte = transback(normttrain), transback(normttest)
return nrmtr, nrmte
# we use nrmtr, nrmte; they stand for normalized train & normalized test.
#KNN
def knn(k, traindat, testdat):
predict = []
correct = [col[-1] for col in testdat]
for datpt in testdat:
distlist = distarray(datpt, traindat)
catlist = [col[1] for col in distlist[:k]]
predict.append(vote(catlist))
return predict, correct
def knntrains(k, rand, dat):
trainknn, testknn = split(dat, rand)
normedtrain, normedtest =normflow(trainknn, testknn)
predict, correct = knn(k, normedtrain, normedtest)
return predict, correct
def knntraintrain(k, rand, dat):
trainknn, testknn = split(dat, rand)
normedtrain, normedtest =normflow(trainknn, testknn)
predict, correct = knn(k, normedtrain, normedtrain)
return predict, correct
def accuracy(pred, corr):
i = 0
blist = []
while i < len(pred):
blist.append(pred[i]==corr[i])
i+=1
return (Counter(blist)[True])/len(blist)
def kaccuracytest(k, r, data):
p, c = knntrains(k, r, data)
acc = accuracy(p, c)
return acc
def kaccuracytrain(k, r, data):
p, c = knntraintrain(k, r, data)
acc = accuracy(p, c)
return acc
# print(kaccuracytest(19, 589, knndata))
# print(kaccuracytrain(19, 589, knndata))
# The Statistical Process for the kNN
def statdatatest(data):
k = 1
result_list = []
while k <= 51:
random = 11589
alist = []
while random < 11689:
alist.append(kaccuracytest(k, random, data))
random += 5
result_list.append(alist)
k+=2
return np.array(result_list)
def statdatatrain(data):
k = 1
result_list = []
while k <= 51:
random = 11589
alist = []
while random < 11689:
alist.append(kaccuracytrain(k, random, data))
random += 5
result_list.append(alist)
k+=2
return np.array(result_list)
# narray = statdatatest(knndata)
# print(narray.std(axis=1))
k = np.arange(1,52,2)
narraytrain = statdatatrain(knndata)
narraytest = statdatatest(knndata)
acctrain = narraytrain.mean(axis=1)
# print(acctrain)
acctest = narraytest.mean(axis=1)
stdtrain = narraytrain.std(axis=1)
stdtest = narraytest.std(axis=1)
# print(stdtrain)
###Output
_____no_output_____
###Markdown
Q1.1 (12 Points) In the first graph, you should show the value of k on the horizontal axis, and on the vertical axis, the average accuracy of models trained over the training set, given that particular value of k. Also show, for each point in the graph, the corresponding standard deviation; you should do this by adding error bars to each point. The graph should look like the one in Figure 2 (though the “shape” of the curve you obtain may be different, of course).
###Code
# Q1.1
# plt.scatter(k, acctrain)
plt.errorbar(k, acctrain, yerr=stdtrain, fmt="-o", color = 'r', alpha = 0.6)
plt.title("KNN using Training Data")
plt.xlabel("K value")
plt.ylabel("Accuracy")
plt.show()
###Output
_____no_output_____
###Markdown
Q1.2 (12 Points) In the second graph, you should show the value of k on the horizontal axis, and on the vertical axis, the average accuracy of models trained over the testing set, given that particular value of k. Also show, for each point in the graph, the corresponding standard deviation by adding error bars to the point.
###Code
# Q1.2
# plt.scatter(k, acctest)
plt.errorbar(k, acctest, yerr=stdtest, fmt="-o", color = 'blue', alpha = 0.6)
plt.title("KNN using Testing Data")
plt.xlabel("K value")
plt.ylabel("Accuracy")
plt.show()
# print(acctrain,stdtrain)
# print(acctest,stdtest)
# ww = np.percentile(acctest, 25, interpolation = 'midpoint')
# print(ww)
plt.errorbar(k, acctrain, yerr=stdtrain, fmt="-o", color = 'red', alpha = 0.6, label= "Train")
plt.errorbar(k, acctest, yerr=stdtest, fmt="-o", color = 'blue', alpha = 0.6, label= "Test")
plt.legend()
plt.title("KNN using Testing vs. Training Data")
plt.xlabel("K value")
plt.ylabel("Accuracy")
plt.show()
###Output
_____no_output_____
###Markdown
0. Dependencies
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# database
from sklearn.datasets import load_iris
###Output
_____no_output_____
###Markdown
1. Introduction 2. Data
###Code
iris = load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['class'] = iris.target
df['class'] = df['class'].map({0:iris.target_names[0], 1:iris.target_names[1], 2:iris.target_names[2]})
df.head(10)
df.describe()
x = iris.data
y = iris.target.reshape(-1, 1)
print(x.shape, y.shape)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42, stratify=y)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
###Output
(105, 4) (105, 1)
(45, 4) (45, 1)
###Markdown
3. Implementation: Distance Metrics
###Code
def l1_distance(a, b):
return np.sum(np.abs(a - b), axis=1)
def l2_distance(a, b):
return np.sqrt(np.sum((a - b)**2, axis=1))
###Output
_____no_output_____
###Markdown
Classifier
###Code
class kNearestNeighbor(object):
def __init__(self, n_neighbors=1, dist_func=l1_distance):
self.n_neighbors = n_neighbors
self.dist_func = dist_func
def fit(self, x, y):
self.x_train = x
self.y_train = y
def predict(self, x):
y_pred = np.zeros((x.shape[0], 1), dtype=self.y_train.dtype)
for i, x_test in enumerate(x):
distances = self.dist_func(self.x_train, x_test)
nn_index = np.argsort(distances)
nn_pred = self.y_train[nn_index[:self.n_neighbors]].ravel()
y_pred[i] = np.argmax(np.bincount(nn_pred))
return y_pred
###Output
_____no_output_____
###Markdown
4. Testing
###Code
knn = kNearestNeighbor(n_neighbors=3)
knn.fit(x_train, y_train)
y_pred = knn.predict(x_test)
print('Accuracy: {:.2f}%'.format(accuracy_score(y_test, y_pred)*100))
knn = kNearestNeighbor()
knn.fit(x_train, y_train)
list_res = []
for p in [1, 2]:
knn.dist_func = l1_distance if p == 1 else l2_distance
for k in range(1, 10, 2):
knn.n_neighbors = k
y_pred = knn.predict(x_test)
acc = accuracy_score(y_test, y_pred)*100
list_res.append([k, 'l1_distance' if p == 1 else 'l2_distance', acc])
df = pd.DataFrame(list_res, columns=['k', 'dist. func.', 'accuracy'])
df
###Output
_____no_output_____
###Markdown
Comparison with Scikit-learn
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=2)
knn.fit(x_train, y_train.ravel())
list_res = []
for p in [1, 2]:
knn.p = p
for k in range(1, 10, 2):
knn.n_neighbors = k
y_pred = knn.predict(x_test)
acc = accuracy_score(y_test, y_pred)*100
list_res.append([k, 'l1_distance' if p == 1 else 'l2_distance', acc])
df = pd.DataFrame(list_res, columns=['k', 'dist. func.', 'accuracy'])
df
###Output
_____no_output_____
###Markdown
Import Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Calculate distance
###Code
def calculateEucledianDistance(data,centroidInex):
dist= np.zeros((len(data),len(centroidInex)))
for i in range(len(centroidInex)):
for j in range (len(data)):
dist[j][i]=pow(sum(pow(centroidInex[i] - data.loc[j,:],2)),1/2)
return dist
def calculatemanhatanDistance(data,centroidInex):
dist= np.zeros((len(data),len(centroidInex)))
for i in range(len(centroidInex)):
for j in range (len(data)):
dist[j][i]=sum(abs(centroidInex[i] - data.loc[j,:]))
return dist
###Output
_____no_output_____
###Markdown
Calculate Clusters
###Code
def GetClustures(dist):
distdf=pd.DataFrame(dist)
clusture= np.zeros(len(dist))
for i in range(len(dist)):
clusture[i]=distdf.loc[i,:].idxmin()
return clusture
###Output
_____no_output_____
###Markdown
RMSE
###Code
def GetRootMeanSquareError(dist):
# note: despite its name, this returns the mean of each point's distance to its nearest centroid
distdf=pd.DataFrame(dist)
error=distdf[:].min(axis=1).mean()
return error
###Output
_____no_output_____
###Markdown
Update Centroids
###Code
def UpdateCentroids(data,clusture):
data["centroid"]=clusture
cent=data.groupby("centroid").mean()
data.drop('centroid', axis=1,inplace=True)
return cent.values
###Output
_____no_output_____
###Markdown
Set Initial parameters here
###Code
#required parameters
numberOfDataPoints=500
dimensions=12
actualClustures=5
numberOfClustures=15
iterations=10
delta=0.1
###Output
_____no_output_____
###Markdown
Random data generation from a normal distribution
###Code
npData=np.zeros((1,dimensions))
for i in range(actualClustures):
#Generate data with different means and standard deviations
npData=np.append(npData,np.random.normal(20+i*30,5,(numberOfDataPoints//numberOfClustures,dimensions)),axis=0)
#remove the placeholder first row that was only needed to set the array shape
npData=np.delete(npData,0,0)
###Output
_____no_output_____
###Markdown
Data distribution
###Code
data=pd.DataFrame(npData)
data.describe()
###Output
_____no_output_____
###Markdown
Algorithm
###Code
deltaError=0
RMSE= np.zeros(numberOfClustures-1)
#loop k=2 to number of clustures
for m in range(2,numberOfClustures+1):
#calculating clusture's centroid
centroids=np.random.randint(low=data.min()[0]+10,high=data.max()[0]-10,size=(m),dtype=int)
centroids.reshape(m,1)
centroids=np.repeat(centroids,len(data.columns))
centroids=centroids.reshape(m,len(data.columns))
#show a plot of the initial centroid positions on the data
print("Before:")
plt.scatter(data[0],data[1])
plt.scatter(centroids[:,0],centroids[:,1])
plt.show()
#convergence can be controlled via the number of iterations and delta; change the values according to your requirements
for i in range(iterations):
#delete target clusture column if exist
if "centroid" in data.columns:
data.drop('centroid', axis=1,inplace=True)
#calculating Euclidean distance
dist=calculateEucledianDistance(data,centroids)
#calculating root mean square error
error=GetRootMeanSquareError(dist)
print("Iterations RMSE :", error)
#find clustures
clusture=GetClustures(dist)
#update centroids
centroids=UpdateCentroids(data,clusture.astype(int))
#there is a possibility that a centroid is lost because of the random pick; adjust it back into the data range
while(len(centroids)!=m):
centroid=np.random.randint(low=data.min()[0]+10,high=data.max()[0]-10,size=(1),dtype=int)
centroid=centroid.repeat(len(data.columns))
centroid=centroid.reshape(1,len(data.columns))
centroids=np.vstack((centroids,centroid))
centroid=0
#Delta stopping condition for convergence
if abs(error-deltaError)< delta:
break
deltaError=error
#RMSE for a given value of K
RMSE[m-2]=error
print("RMSE :",error)
#plot after centroids convergence
print("After:")
plt.scatter(data[0],data[1])
plt.scatter(centroids[:,0],centroids[:,1])
plt.show()
plt.plot(range(2,numberOfClustures+1),RMSE)
deltaError=0
RMSE= np.zeros(numberOfClustures-1)
#loop k=2 to number of clustures
for m in range(2,numberOfClustures+1):
#calculating clusture's centroid
centroids=np.random.randint(low=data.min()[0]+10,high=data.max()[0]-10,size=(m),dtype=int)
centroids.reshape(m,1)
centroids=np.repeat(centroids,len(data.columns))
centroids=centroids.reshape(m,len(data.columns))
#show a plot of the initial centroid positions on the data
print("Before:")
plt.scatter(data[0],data[1])
plt.scatter(centroids[:,0],centroids[:,1])
plt.show()
#convergence can be controlled via the number of iterations and delta; change the values according to your requirements
for i in range(iterations):
#delete target clusture column if exist
if "centroid" in data.columns:
data.drop('centroid', axis=1,inplace=True)
#calculating Manhattan distance for cluster assignment
dist=calculatemanhatanDistance(data,centroids)
#calculating root mean square error (still measured with Euclidean distance)
dist_error=calculateEucledianDistance(data,centroids)
error=GetRootMeanSquareError(dist_error)
print("Iterations RMSE :", error)
#find clustures
clusture=GetClustures(dist)
#update centroids
centroids=UpdateCentroids(data,clusture.astype(int))
#there is a possibility that a centroid is lost because of the random pick; adjust it back into the data range
while(len(centroids)!=m):
centroid=np.random.randint(low=data.min()[0]+10,high=data.max()[0]-10,size=(1),dtype=int)
centroid=centroid.repeat(len(data.columns))
centroid=centroid.reshape(1,len(data.columns))
centroids=np.vstack((centroids,centroid))
centroid=0
#Delta stopping condition for convergence
if abs(error-deltaError)< delta:
break
deltaError=error
#RMSE for a given value of K
RMSE[m-2]=error
print("RMSE :",error)
#plot after centroids convergence
print("After:")
plt.scatter(data[0],data[1])
plt.scatter(centroids[:,0],centroids[:,1])
plt.show()
###Output
Before:
###Markdown
Plot RMSE vs K
###Code
plt.plot(range(2,numberOfClustures+1),RMSE)
###Output
_____no_output_____
###Markdown
K Nearest Neighbor: this algorithm selects the k nearest neighbors of a given data point and assigns a label according to that neighborhood. Advantages: * No assumption about the data * Insensitive to outliers. Disadvantages: * Requires huge memory * Requires computation at prediction time. It is often called an instance-based or lazy method: it saves all the instances and searches for the closest elements. K is a very important hyper-parameter. After finding the labels of the K nearest neighbors it uses some aggregating technique: * Majority Voting (classification) * Weighted Voting (classification) * Uniform (regression) * Distance weighted (regression). Let's create a dummy dataset and see how it works (a small sketch of the two classification voting rules is included at the top of the next code cell).
###Code
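# Added illustration (not part of the original notebook) of the two classification
# aggregation rules listed above; the toy labels and distances below are made up.
import numpy as np
neighbor_labels = np.array([0, 0, 1])        # labels of the 3 nearest neighbors
neighbor_dists = np.array([2.0, 1.8, 0.1])   # their distances to the query point
majority = np.bincount(neighbor_labels).argmax()                               # majority voting -> class 0
weighted = np.bincount(neighbor_labels, weights=1.0/neighbor_dists).argmax()   # weighted voting -> class 1
print(majority, weighted)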
import numpy as np
feature_data = np.asarray([[0.0,1.0],
[-0.01,1.1],
[1.1,0.01],
[.99,-0.01]])
labels= np.asarray([1,1,0,0])
###Output
_____no_output_____
###Markdown
Visualize the data
###Code
import matplotlib.pyplot as plt
plt.scatter(feature_data[:,0],feature_data[:,1], (labels+1)*15,(labels+1)*15)
###Output
_____no_output_____
###Markdown
Implementation of KNN
###Code
from numpy import *
import operator
def classifyKNN(test_x,X,y,k):
# change the y label to vector
y=np.reshape(y,(y.shape[0],))
dataSetSize = X.shape[0]
diffMat = np.tile(test_x, (dataSetSize,1)) - X
sqDiffMat = diffMat**2
sqDistances = sqDiffMat.sum(axis=1)
distances = sqDistances**0.5
sortedDistIndicies = distances.argsort()
classCount={}
for i in range(k):
voteIlabel = y[sortedDistIndicies[i]]
classCount[voteIlabel] = classCount.get(voteIlabel,0) + 1
sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
return sortedClassCount[0][0]
###Output
_____no_output_____
###Markdown
Test with a simple point
###Code
classifyKNN([0.8,0],feature_data,labels,3)
###Output
_____no_output_____
###Markdown
Let's read a larger dataset. In this dataset there are three features and a class label. The class label has three discrete levels: * A: does not like * B: likes for a long period * C: likes for a small period. The three numeric features are: * Number of frequent flyer miles earned per year * Percentage of time spent playing video games * Liters of ice cream consumed per week
###Code
import pandas as pn
data=pn.read_csv("https://raw.githubusercontent.com/swakkhar/MachineLearning/master/KNNDataSet.txt",sep='\t',header=None)
data
print(type(data))
import numpy as np
data = np.asarray(data)
data
data_X=data[:,:-1]
data_y=data[:,-1]
data_X.shape
print(data_y.shape)
data_y=np.asarray([0 if i == 'A' else 1 if i == 'B' else 2 for i in data_y])
import matplotlib.pyplot as plt
plt.scatter(data_X[:,1],data_X[:,2],data_y*15+5,data_y*15+5)
plt.scatter(data_X[:,0],data_X[:,2],data_y*15+5,data_y*15+5)
plt.scatter(data_X[:,0],data_X[:,1],data_y*15+5,data_y*15+5)
def TestWithOutNormalization():
hoRatio = 0.10
m = data_X.shape[0]
numTestVecs = int(m*hoRatio)
errorCount = 0.0
for i in range(numTestVecs):
classifierResult = classifyKNN(data_X[i,:],data_X[numTestVecs:m,:],data_y[numTestVecs:m],3)
#print "the classifier came back with: %d, the real answer is: %d"% (classifierResult, datingLabels[i])
if classifierResult != data_y[i]:
errorCount += 1.0
print (errorCount/float(numTestVecs))
TestWithOutNormalization()
###Output
0.24
###Markdown
Min-max normalization example for the values -1, 2, 3: $(-1 - (-1)) / (3 - (-1)) = 0$ and $(2 - (-1)) / 4 = 0.75$ (the maximum, 3, maps to 1). A quick check with autoNorm is included right after its definition below.
###Code
def autoNorm(dataSet):
minVals = dataSet.min(0)
maxVals = dataSet.max(0)
ranges = maxVals - minVals
normDataSet = zeros(shape(dataSet))
m = dataSet.shape[0]
normDataSet = dataSet - tile(minVals, (m,1))
normDataSet = normDataSet/tile(ranges, (m,1))
return normDataSet, ranges, minVals
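# Quick sanity check (added for illustration): reproduces the worked example above,
# where the values -1, 2 and 3 normalize to 0, 0.75 and 1.
check_norm, check_ranges, check_min = autoNorm(np.array([[-1.0], [2.0], [3.0]]))
print(check_norm.ravel())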
def TestWithNormalization():
hoRatio = 0.10
m = data_X.shape[0]
norm_X,r,mv=autoNorm(data_X) # first call normalization
numTestVecs = int(m*hoRatio)
errorCount = 0.0
for i in range(numTestVecs):
classifierResult = classifyKNN(norm_X[i,:],norm_X[numTestVecs:m,:],data_y[numTestVecs:m],3)
#print "the classifier came back with: %d, the real answer is: %d"% (classifierResult, datingLabels[i])
if classifierResult != data_y[i]:
errorCount += 1.0
print (errorCount/float(numTestVecs))
TestWithNormalization()
###Output
0.05
###Markdown
Work with handwritten digits
###Code
digits_X= pn.read_csv("https://raw.githubusercontent.com/swakkhar/MachineLearning/master/Codes/X.csv",header=None)
digits_y= pn.read_csv("https://raw.githubusercontent.com/swakkhar/MachineLearning/master/Codes/Y.csv",header=None)
digits_X=np.asarray(digits_X)
digits_y=np.asarray(digits_y)
digits_X.shape
digits_y.shape
import matplotlib.pyplot as plt
def digitShow(x):
plt.imshow(x);
plt.colorbar()
plt.show()
roW_indeX=np.random.randint(0,5000)
print(roW_indeX)
digitShow((np.reshape(digits_X[roW_indeX,:],(20,20))).T)
print(digits_y[roW_indeX,0])
def TestDigitData():
hoRatio = 0.10
m = digits_X.shape[0]
numTestVecs = int(m*hoRatio)
errorCount = 0.0
# note: unlike TestDigitDataShuffled below, the data is NOT shuffled here
for i in range(numTestVecs):
classifierResult = classifyKNN(digits_X[i,:],digits_X[numTestVecs:m,:],digits_y[numTestVecs:m],3)
#print "the classifier came back with: %d, the real answer is: %d"% (classifierResult, datingLabels[i])
if classifierResult != digits_y[i]:
errorCount += 1.0
print (errorCount/float(numTestVecs))
TestDigitData()
def TestDigitDataShuffled():
hoRatio = 0.10
m = digits_X.shape[0]
numTestVecs = int(m*hoRatio)
errorCount = 0.0
# we must randomize the data before sending it to the classifier
from sklearn.utils import shuffle
shuffled_X, shuffled_y = shuffle(digits_X,digits_y, random_state=0)
for i in range(numTestVecs):
classifierResult = classifyKNN(shuffled_X[i,:],shuffled_X[numTestVecs:m,:],shuffled_y[numTestVecs:m],3)
#print "the classifier came back with: %d, the real answer is: %d"% (classifierResult, datingLabels[i])
if classifierResult != shuffled_y[i]:
errorCount += 1.0
print (errorCount/float(numTestVecs))
TestDigitDataShuffled()
###Output
0.062
###Markdown
Let's play with a regression problem
###Code
np.random.seed(0)
reg_X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
reg_y = np.sin(reg_X).ravel()
# Add noise to targets
reg_y[::5] += 1 * (0.5 - np.random.rand(8))
plt.scatter(reg_X, reg_y, color='darkorange', label='data')
def regressionKNNUniform(tx,X,y,k):
y=np.reshape(y,(y.shape[0],))
dataSetSize = X.shape[0]
diffMat = np.tile(tx, (dataSetSize,1)) - X
sqDiffMat = diffMat**2
sqDistances = sqDiffMat.sum(axis=1)
distances = sqDistances**0.5
sortedDistIndicies = distances.argsort()
predicted_value=0
for i in range(k):
predicted_value = predicted_value+ y[sortedDistIndicies[i]] * 1
return predicted_value / k
y_u = [regressionKNNUniform(t,reg_X,reg_y,3) for t in T]
plt.scatter(reg_X, reg_y, color='darkorange', label='data')
plt.plot(T, y_u, color='navy', label='prediction')
def regressionKNNweighted(tx,X,y,k):
y=np.reshape(y,(y.shape[0],))
dataSetSize = X.shape[0]
diffMat = np.tile(tx, (dataSetSize,1)) - X
sqDiffMat = diffMat**2
sqDistances = sqDiffMat.sum(axis=1)
distances = sqDistances**0.5
sortedDistIndicies = distances.argsort()
predicted_value=0
s_w = 0
for i in range(k):
predicted_value = predicted_value+ y[sortedDistIndicies[i]] * (1.0/distances[sortedDistIndicies[i]])
s_w = s_w+ 1.0/distances[sortedDistIndicies[i]]
return predicted_value / s_w
y_u = [regressionKNNweighted(t,reg_X,reg_y,3) for t in T]
plt.scatter(reg_X, reg_y, color='darkorange', label='data')
plt.plot(T, y_u, color='navy', label='prediction')
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 10
# import some data to play with
iris = datasets.load_iris()
# we only take the first two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['orange', 'cyan', 'cornflowerblue'])
cmap_bold = ListedColormap(['darkorange', 'c', 'darkblue'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,
edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
###Output
_____no_output_____
###Markdown
Loading the CSV data
###Code
# note: DataFrame.from_csv has been removed in newer pandas; pandas.read_csv(..., index_col=0) is the modern equivalent
taxi_data = pandas.DataFrame.from_csv("/Users/KYD/Downloads/refined_taxi_data_v4.csv")
###Output
_____no_output_____
###Markdown
Creating the data, target, training set, and test set
###Code
taxi_data_data = taxi_data.as_matrix(['total_amount', 'pickup_hour'])
taxi_data_target = taxi_data.as_matrix(['area'])
taxi_data_data_training = taxi_data_data[:710000]
taxi_data_data_test = taxi_data_data[710000:-1]
taxi_data_target_training = taxi_data_target[:710000]
taxi_data_target_test = taxi_data_target[710000:-1]
taxi_data_data
###Output
_____no_output_____
###Markdown
Creating the color maps
###Code
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
###Output
_____no_output_____
###Markdown
Building the KNN model
###Code
n_neighbors = 15
clf = neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
clf.fit(taxi_data_data_training, taxi_data_target_training.ravel())
###Output
_____no_output_____
###Markdown
Making predictions
###Code
clf.predict([[12, 5]])  # a single sample must be passed as a 2-D array
clf.score(taxi_data_data_test, taxi_data_target_test)
###Output
_____no_output_____
###Markdown
Creating the plot (drawing the decision boundary)
###Code
x_min, x_max = taxi_data_data_training[:, 0].min() - 1, taxi_data_data_training[:, 0].max() + 1
y_min, y_max = taxi_data_data_training[:, 1].min() - 1, taxi_data_data_training[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.scatter(taxi_data_data_training[:, 0], taxi_data_data_training[:, 1], c=taxi_data_target_training, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification")
###Output
_____no_output_____
###Markdown
K Nearest Neighbors: implementing the algorithm to train on a set of data and return the prediction based on the 'k' nearest items. Imports
###Code
import numpy as np
from collections import Counter
###Output
_____no_output_____
###Markdown
Test data: we're going to create some test data. Let's say we've got cats, dogs and dinosaurs with 3 measures (ear length, tail length, leg length). Labels: 0 - cat, 1 - dog, 2 - dinosaur
###Code
dic={0:'cat',1:'dog',2:'dinosaur'}
X=np.random.rand(30,3)
X[:10,0]+=0.2
X[10:20,0]+=0.5
X[:10,1]+=2
X[10:20,1]+=6
X[:10,2]+=4
X[10:20,2]+=6
X[20:,:]+=10
m,n=X.shape
y=np.zeros((m,1))
y[:10]=0
y[10:20]=1
y[20:]=2
# set k
k=7
# our example case
example=np.array([[8,5.5,6]])
# Find distance for each case from the example
X_y_distance=(np.sum((X-example)**2,1).reshape(m,1))**0.5
# Add the categorical data to the array
X_y_distance=np.concatenate((X_y_distance,y),axis=1)
# Sort the data by the closest to further matches
X_y_distance=X_y_distance[X_y_distance[:,0].argsort()]
# Create a Counter for all items to k
cnt=Counter(X_y_distance[:k,1])
# print the item that has the max value
for item in cnt.keys():
if cnt[item]==max(cnt.values()):
print(dic[item])
print(cnt)
###Output
dinosaur
Counter({2.0: 4, 1.0: 3})
###Markdown
KNN as class
###Code
class KNN:
"""K Nearest Neighbors algorithm
Parameters
------------
X : numpy array
Array should hold all relevant criteria.
Data should be organized by case x feature
y : numpy array
Can be a flat array or with dimensions case x 1.
Can hold categorical data as strings or integers:
np.array(['a','b','a','c']) or np.array([0,1,0,2])
Available methods
-------------
predict : function
Used for the prediction of nearest neighbor"""
def __init__(self,X,y):
self.y_dic={}
self.X=X
self.m,self.n=X.shape
self.y=self.categorize(y.reshape(self.m,1))
def categorize(self,y):
m,n=y.shape
new_y=np.zeros((m,1))
unique_y = np.unique(y)
for i, item in enumerate(unique_y):
self.y_dic[i]=item
new_y[y==item]=i
return new_y
def predict(self,case,k):
"""Prediction method
Parameters
------------
case : numpy array
An array of features
k : integer
The number of nearest neighbors
that should be compared.
For best results use odd numbers
Returns
------------
return_item : string/int
Returns case classified based
on K neighbors"""
return_item=[]
X_y_distance=(np.sum((self.X-case)**2,1).reshape(self.m,1))**0.5
X_y_distance=np.concatenate((X_y_distance,self.y),axis=1)
X_y_distance=X_y_distance[X_y_distance[:,0].argsort()]
cnt=Counter(X_y_distance[:k,1])
for item in cnt.keys():
if cnt[item]==max(cnt.values()):
#print(self.y_dic[item])
return_item.append(self.y_dic[item])
if len(return_item)>1:
print('More than one item returned. Please set k to odd')
else:
return_item=return_item[0]
print('Nearest item: {0}'.format(return_item))
return return_item
###Output
_____no_output_____
###Markdown
Testing
###Code
a=KNN(X,y)
a.predict(example,7)
###Output
Nearest item: 2.0
###Markdown
Create a function to calculate the distance between any two rows; a quick check is included after the definition below.
###Code
# function to calculate distance between two rows
def distance(row1, row2):
# row[:-1] beacause the label shouldn't be included
distance = (row2[:-1] - row1[:-1])**2
sum = 0
for i in distance:
sum += i
return math.sqrt(sum)
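# Quick check (added for illustration): the labels (last column) are ignored, so only the
# feature differences [3, 4] contribute and the result is sqrt(3**2 + 4**2) = 5.0.
print(distance(np.array([1, 1, 0]), np.array([4, 5, 1])))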
###Output
_____no_output_____
###Markdown
Next, we will create a function that returns an array containing the distances between a given row and the whole training dataset.
###Code
# find the distance array between the sample row and the complete dataset
# returns an array of distances
def distances_array(example, dataset):
distances = np.empty((0,2), int)
id_ = 0
for row in dataset:
distances = np.append(distances, np.array([[id_, distance(example, row)]]), axis=0) # must return id and distance
id_ += 1
return distances
###Output
_____no_output_____
###Markdown
To get the K nearest neighbors, we need to sort the array returned by distances_array() and return the first K neighbors as an array.
###Code
# k neighbours array
# outputs the nearest K neighbors to a data example
# returns an array with the nearest K neighbours and their id's
def k_neighbours(row, K, dataset):
all_data = distances_array(row, dataset)
sorted_all_data = all_data[np.argsort(all_data[:,1])]
KNN = sorted_all_data[:K]
return KNN
###Output
_____no_output_____
###Markdown
The predict function below uses the row and k_neighbours() to predict the outcome and return the accuracy:
###Code
def predict2(row, k, dataset_train):
x = k_neighbours(row, k, dataset_train)
ids = np.array([int(id[0]) for id in x])
neighbors_labels = []
for example in range(len(dataset_train)):
for id_ in ids:
if id_ == example:
neighbors_labels.append(int(dataset_train[example][-1]))
true_label = row[-1]
neighbors_labels_set = set(neighbors_labels)
# we need to find p and accuracy
# case 1: len(set(neighbour labels)) = 1
if len(neighbors_labels_set) == 1:
# neigb label consists only of true label
if true_label in neighbors_labels_set:
p = neighbors_labels[0]
accuracy = 1
return p, accuracy
# same but tl is not in neigh label
else:
# len(neighbors_labels_set) == 1 and true_label not in neighbors_labels_set:
p = neighbors_labels[0]
accuracy = 0
return p, accuracy
# case 2: neighbor labels are mixed values
else:
nominees = find_ties(neighbors_labels)
max_label = max(neighbors_labels, key = neighbors_labels.count)
# pred label is in the neigh labels and not in the nominees then it has the majority
if true_label == max_label and max_label not in nominees:
p = max_label
accuracy = 1
return p, accuracy
elif true_label != max_label and max_label not in nominees:
p = max_label
accuracy = 0
return p, accuracy
# tl is in nominees
elif true_label in nominees:
p = nominees
accuracy = 1/len(p)
return p, accuracy
else:
p = nominees
accuracy = 0
return p, accuracy
###Output
_____no_output_____
###Markdown
find_ties() is a helper function that examines the list of neighbor labels and returns the set of tied outcomes; a quick check is included right after the definition below.
###Code
def find_ties(outcomes):
# find number of labels in the final array
unique_items, counts = np.unique(outcomes, return_counts=True)
nominees = []
k = 0
for i in counts:
m = 0
for j in counts:
if i == j and unique_items[k] != unique_items[m]:
nominees.append(unique_items[k])
m += 1
k += 1
return set(nominees)
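# Quick check (added for illustration): labels 1 and 2 are tied with two votes each,
# so both are returned as nominees, while the single vote for label 3 is dropped.
print(find_ties([1, 1, 2, 2, 3]))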
###Output
_____no_output_____
###Markdown
Start here
###Code
# load our training and testing sets
data_train = np.loadtxt('UCI_Dataset/pendigits_training.txt')
data_test = np.loadtxt('UCI_Dataset/pendigits_test.txt')
k=5
# normalize the data (note: the original code normalized data_test from the already-normalized
# training data, which is a bug; both splits should use the training-set statistics)
train_mean, train_std = np.mean(data_train), np.std(data_train)
data_train = (data_train - train_mean)/train_std
data_test = (data_test - train_mean)/train_std
result = []
id_=0
for row in data_test[:50]:
p,a = predict2(row, k, data_train)
result.append([id_, row[-1], p, a])
id_ += 1
# print something to show that it's working
if id_%20 == 0:
print('reached id: ', id_)
# print results
result
# save output to file:
with open('5nn2.txt', 'w') as f:
for item in result:
f.write("%s\n" % item)
data_train[:-1]
###Output
_____no_output_____
###Markdown
k-nearest neighbors algorithm: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
# Assign column names to the dataset
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']
# Read dataset to pandas dataframe
dataset = pd.read_csv(url, names=names)
dataset.head()
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)
###Output
_____no_output_____
###Markdown
http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
###Output
_____no_output_____
###Markdown
* https://en.wikipedia.org/wiki/Confusion_matrix * http://blog.exsilio.com/all/accuracy-precision-recall-f1-score-interpretation-of-performance-measures/
###Code
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
error = []
# Calculating error for K values between 1 and 40
for i in range(1, 40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
pred_i = knn.predict(X_test)
error.append(np.mean(pred_i != y_test))
plt.figure(figsize=(12, 6))
plt.plot(range(1, 40), error, color='red', linestyle='dashed', marker='o',
markerfacecolor='blue', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K Value')
plt.ylabel('Mean Error')
###Output
_____no_output_____
###Markdown
KNN works on Euclidean distance. Let's see the implementation of Euclidean distance
###Code
from math import sqrt
#Creating the data points
point1 = [2,4]
point2 = [4,7]
#euclidean distance = sqrt(summation_till_dimensions((q - p)^2))
#Example for two dimension
euclidean_distance = sqrt((point1[0] - point2[0]) ** 2 + (point1[1] - point2[1]) ** 2)
print(euclidean_distance)
###Output
3.605551275463989
###Markdown
**KNN Algorithm** Creating a manual dataset to perform KNN:
###Code
import numpy as np
from math import sqrt
from matplotlib import style
import matplotlib.pyplot as plt
from collections import Counter
import warnings
style.use('fivethirtyeight')
dataset = {'g':[[1,2],[2,3],[3,1]], 'b': [[6,5],[7,7],[8,6]]}
###Output
_____no_output_____
###Markdown
Below is the KNN algorithm: we initially define an empty list of distances and then populate it with each Euclidean distance and the group it belongs to. Here the Euclidean distance is obtained more efficiently with NumPy, as shown below.
###Code
def k_nearest_neighbors(data , predict, k=3):
distances = []
for group in data:
for features in data[group]:
euclidean_distance = np.linalg.norm(np.array(features) - np.array(predict))
distances.append([euclidean_distance,group])
#Now getting the group in sorted order of distance for required k neighbors
groups = [i[1] for i in sorted(distances)[:k]]
#From the above groups picking the most common group
result_group_list = Counter(groups).most_common(1)[0]
result_group = result_group_list[0]
return result_group
###Output
_____no_output_____
###Markdown
Defining our new feature for prediction
###Code
new_features = [5,7]
###Output
_____no_output_____
###Markdown
Predicting the above-defined new feature with our algorithm and printing the predicted group
###Code
results = k_nearest_neighbors(dataset , new_features, k=3)
print(results)
###Output
b
###Markdown
Visualizing our predicted data with star (*) marker and the group color.
###Code
[[plt.scatter(j[0],j[1], s =100, color =i) for j in dataset[i]] for i in dataset]
plt.scatter(new_features[0],new_features[1],color = results,s =150,marker="*")
plt.show()
###Output
_____no_output_____
###Markdown
**Applying the algorithm on sklearn's breast cancer dataset**
###Code
from sklearn.datasets import load_breast_cancer
import pandas as pd
import numpy as np
import random
from sklearn import preprocessing
#Loading the data and forming the dataframe
cancer = load_breast_cancer()
df = pd.DataFrame(np.c_[cancer['data'], cancer['target']],
columns= np.append(cancer['feature_names'], ['target']))
print(df)
###Output
mean radius mean texture ... worst fractal dimension target
0 17.99 10.38 ... 0.11890 0.0
1 20.57 17.77 ... 0.08902 0.0
2 19.69 21.25 ... 0.08758 0.0
3 11.42 20.38 ... 0.17300 0.0
4 20.29 14.34 ... 0.07678 0.0
.. ... ... ... ... ...
564 21.56 22.39 ... 0.07115 0.0
565 20.13 28.25 ... 0.06637 0.0
566 16.60 28.08 ... 0.07820 0.0
567 20.60 29.33 ... 0.12400 0.0
568 7.76 24.54 ... 0.07039 1.0
[569 rows x 31 columns]
###Markdown
The dataset above contains various measurements of breast tumors and their categories, i.e., 1.0 represents a benign and 0.0 a malignant tumor. Applying the above KNN to the data
###Code
#converting everything to float and to a list so the data stays consistent after shuffling
full_data = df.astype(float).values.tolist()
#shuffling
random.shuffle(full_data)
#Train test split from scratch
test_size = 0.2
train_set = {0:[],1:[]}
test_set = {0:[],1:[]}
train_data = full_data[:-int(test_size*len(full_data)) ] #first 80% of data
test_data = full_data[-int(test_size*len(full_data)): ] #Last 20% data
#populating dictionary for knn function
for i in train_data:
train_set[i[-1]].append(i[:-1]) #append the features under key 0 if the label in the train data is malignant (0), otherwise under key 1
for i in test_data:
test_set[i[-1]].append(i[:-1])
#Now predict comes from the test set and data comes from the train set in the knn function
correct = 0
total =0
for test in test_set:
for data in test_set[test]:
group = k_nearest_neighbors(train_set,data,k=5)
if test == group:
correct += 1
total+=1
print('Accuracy:',correct/total)
###Output
_____no_output_____
###Markdown
Amir Shokri St code : 9811920009 E-mail : [email protected] K-Nearest Neighbour (KNN)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
for dirname, _, filenames in os.walk('/Users/Amirsh.nll/Downloads/KNN-AmirShokri'):
for filename in filenames:
print(os.path.join(dirname, filename))
data = pd.read_csv('genderclassifier.csv', encoding ='latin1')
data.info()
data = data.drop(['_unit_id', '_golden', '_unit_state', '_trusted_judgments', '_last_judgment_at', 'profile_yn', 'profile_yn:confidence', 'created', 'description', 'gender_gold', 'link_color', 'profile_yn_gold', 'profileimage', 'sidebar_color', 'text', 'tweet_coord', 'tweet_created', 'tweet_id', 'tweet_location', 'user_timezone', 'gender:confidence', 'gender', 'name'],axis=1)
data.head(20000)
y = data['tweet_count'].values
y = y.reshape(-1,1)
x_data = data.drop(['tweet_count'],axis = 1)
print(x_data)
x = (x_data - np.min(x_data)) / (np.max(x_data) - np.min(x_data)).values
x.head(20000)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.5,random_state=100)
y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)
print("x_train: ",x_train.shape)
print("x_test: ",x_test.shape)
print("y_train: ",y_train.shape)
print("y_test: ",y_test.shape)
from sklearn.neighbors import KNeighborsClassifier
K = 1
knn = KNeighborsClassifier(n_neighbors=K)
knn.fit(x_train, y_train.ravel())
print("When K = {} neighbors, KNN test accuracy: {}".format(K, knn.score(x_test, y_test)))
print("When K = {} neighbors, KNN train accuracy: {}".format(K, knn.score(x_train, y_train)))
ran = np.arange(1,30)
train_list = []
test_list = []
for i,each in enumerate(ran):
knn = KNeighborsClassifier(n_neighbors=each)
knn.fit(x_train, y_train.ravel())
test_list.append(knn.score(x_test, y_test))
train_list.append(knn.score(x_train, y_train))
plt.figure(figsize=[15,10])
plt.plot(ran,test_list,label='Test Score')
plt.plot(ran,train_list,label = 'Train Score')
plt.xlabel('Number of Neighbors')
plt.ylabel('Accuracy')
plt.xticks(ran)
plt.legend()
print("Best test score is {} and K = {}".format(np.max(test_list), test_list.index(np.max(test_list))+1))
print("Best train score is {} and K = {}".format(np.max(train_list), train_list.index(np.max(train_list))+1))
###Output
Best test score is 0.015561097256857856 and K = 1
Best train score is 0.4479800498753117 and K = 1
###Markdown
Implementation Notes: KNN is a straightforward algorithm. In order to classify a new data point, it finds the k nearest neighbors of that point and classifies according to the majority label. Here are some notes regarding my implementation. Note 1: I'm using the standard Euclidean distance, that is $$ d(x,y) = \sqrt{\sum_{i}{(x_i-y_i)^2}} $$ which is the Euclidean norm. Note 2: choosing the k smallest elements in an array is a well-known problem: 1. Trivial, $O(k\cdot n)$: iterate k times and pick the next minimum element. 2. Better, $O(n\cdot \log(n))$: sort the array keeping the original indices and pick the first k. 3. Best, $O(n)$: the optimal solution is a selection algorithm; here I use numpy's argpartition, which implements the "introselect" algorithm (a small demonstration is included at the top of the next code cell).
###Code
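# Added illustration (not in the original notebook) of note 2: np.argpartition returns the
# indices of the k smallest distances without a full sort (average O(n)), which is exactly
# what the classifier below relies on.
import numpy as np
d = np.array([7.0, 2.0, 9.0, 1.0, 5.0, 3.0])
k = 3
via_partition = set(np.argpartition(d, k)[:k])  # unordered indices of the 3 smallest values
via_full_sort = set(np.argsort(d)[:k])          # same indices via an O(n log n) sort
print(via_partition == via_full_sort)           # True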
class kNNClassifier:
def __init__(self, n_neighbors):
self.n_neighbors = n_neighbors
self.data = np.empty((1,1))
self.labels = np.empty((1,1))
def fit(self, X, y):
self.data = X
self.labels= y
def _predict_one_point(self,point):
dist = np.linalg.norm(self.data-point,axis=1) # note 1
k_smallets = np.argpartition(dist, self.n_neighbors)[:self.n_neighbors] #note 2
label_count = np.unique(self.labels[k_smallets],return_counts=True)
return label_count[0][label_count[1].argmax()]
def predict(self, X):
preds = np.zeros((X.shape[0],1))
for i in range(X.shape[0]):
preds[i]=self._predict_one_point(X[i])
return preds.T[0]
def score(self, predictions, true_labels):
# returns the fraction of misclassified points (an error rate); accuracy is 1 minus this value
return (np.count_nonzero(predictions-true_labels.astype("int"))/predictions.size)
###Output
_____no_output_____
###Markdown
Here we are testing its performance on the MNIST dataset while comparing it to sklearn's performance. Load Data
###Code
mnist = fetch_openml('mnist_784', as_frame=False)
data = mnist['data']
labels = mnist['target']
idx = np.random.RandomState(0).choice(70000, 11000)
train = data[idx[:10000], :].astype(int)
train_labels = labels[idx[:10000]]
test = data[idx[10000:], :].astype(int)
test_labels = labels[idx[10000:]]
###Output
_____no_output_____
###Markdown
Testing accuracy
###Code
X_train, Y_train = train[:1000],train_labels[:1000]
accuracy_map = {"k":[], "my_classifier": [], "sklearn_classifier": []}
for k in [1,2,5,10,30,60,100]:
accuracy_map["k"].append(k)
knn_b = kNNClassifier(k)
knn_b.fit(X_train,Y_train)
preds_b = knn_b.predict(test)
score_b = 1-knn_b.score(preds_b,test_labels)
accuracy_map["my_classifier"].append(score_b)
sklearn_knn = KNeighborsClassifier(n_neighbors=k)
sklearn_knn.fit(X_train,Y_train)
sklearn_score = sklearn_knn.score(test,test_labels)
accuracy_map["sklearn_classifier"].append(sklearn_score)
accuracy_table = pd.DataFrame(accuracy_map)
accuracy_table
sns.lineplot(x='k', y="my_classifier", data=accuracy_table)
###Output
_____no_output_____
###Markdown
Training the knn model on MSR data and evaluating on 20% of the same dataset.
###Code
X_train, X_test, y_train, y_test = train_test_split(msr, y_msr, train_size=0.8,
random_state=33, shuffle=True)
msr_vectorizer = CountVectorizer(max_features=1000)
bow_train = msr_vectorizer.fit_transform(X_train['token'])
sparse_matrix_train = pd.DataFrame(bow_train.toarray(), columns = msr_vectorizer.get_feature_names())
X_train_count = concat_loc_sum(sparse_matrix_train, X_train)
bow_test = msr_vectorizer.transform(X_test['token'])
sparse_matrix_test = pd.DataFrame(bow_test.toarray(), columns = msr_vectorizer.get_feature_names())
X_test_count = concat_loc_sum(sparse_matrix_test, X_test)
msr_model = KNeighborsClassifier(n_neighbors=20)
msr_model.fit(X_train_count, y_train)
preds = msr_model.predict(X_test_count)
print(classification_report(y_test, preds))
print('f1', f1_score(y_test, preds))
###Output
precision recall f1-score support
0 0.77 0.95 0.85 283
1 0.93 0.71 0.80 278
accuracy 0.83 561
macro avg 0.85 0.83 0.83 561
weighted avg 0.85 0.83 0.83 561
f1 0.8032786885245903
###Markdown
Evaluating MSR model on new data
###Code
new_data = pd.read_csv('data/new/raw_new_dataset.csv')
y_new = new_data['class']
new = new_data.drop(columns=['class'])
X_new = msr_vectorizer.transform(new['token'])
sparse_matrix_new = pd.DataFrame(X_new.toarray(), columns = msr_vectorizer.get_feature_names())
X_new_count = concat_loc_sum(sparse_matrix_new, new_data)
new_preds = msr_model.predict(X_new_count)
print(classification_report(y_new, new_preds))
print('f1', f1_score(y_new, new_preds))
###Output
precision recall f1-score support
0 0.50 0.97 0.66 724
1 0.64 0.06 0.11 737
accuracy 0.51 1461
macro avg 0.57 0.51 0.38 1461
weighted avg 0.57 0.51 0.38 1461
f1 0.10918114143920596
###Markdown
Supervised Learning: Classification with Random Forest

Importing the libraries
###Code
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Data acquisition
###Code
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True, as_frame=False)
mnist.target = mnist.target.astype(np.int8) # convert the labels from string to int
type(mnist)
mnist.details
mnist.DESCR
mnist.data.shape
mnist.target.shape
# X,y = mnist.data.values, mnist.target.to_numpy() # convert to np arrays
X,y = mnist['data'], mnist['target']
X[30000]
digito = X[10999].reshape(28,28)
###Output
_____no_output_____
###Markdown
2. Data visualization
###Code
plt.imshow(digito, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
plt.show()
y[10999]
###Output
_____no_output_____
###Markdown
3. Preprocessing
###Code
X_train, y_train, X_test, y_test = X[:60000], y[:60000], X[60000:], y[60000:]
X_test.shape
y_test.shape
X_train.shape
y_train.shape
index = np.random.permutation(60000)
X_train, y_train = X_train[index], y_train[index]
index = np.random.permutation(10000)
X_test, y_test = X_test[index], y_test[index]
###Output
_____no_output_____
###Markdown
5. Fitting the model
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
knn = KNeighborsClassifier()
knn.get_params().keys()
param_grid = {
'n_neighbors': [3,5,7],
'weights' : ['uniform', 'distance'],
'n_jobs': [-1]
}
grid_search = GridSearchCV(knn, param_grid, cv=5, scoring='accuracy')
grid_search.fit(X_train, y_train)
grid_search.best_params_
grid_search.best_score_
knn_best = KNeighborsClassifier(n_neighbors= 3, weights= 'distance', n_jobs= -1)
knn_best.fit(X_train,y_train)
knn_predictions = knn_best.predict(X_test)
acc = sum(knn_predictions == y_test)/len(knn_predictions)
print(acc)
###Output
0.9717
###Markdown
6. Evaluating the model
###Code
from sklearn.metrics import accuracy_score
accuracy_score(knn_predictions,y_test)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,knn_predictions)
###Output
_____no_output_____
###Markdown
Precision Score
###Code
from sklearn.metrics import precision_score, recall_score
precision_score(y_test, knn_predictions, average='weighted')
###Output
_____no_output_____
###Markdown
Recall Score
###Code
recall_score(y_test, knn_predictions, average='weighted')
###Output
_____no_output_____
###Markdown
F1-score
###Code
from sklearn.metrics import f1_score
f1_score(y_test,knn_predictions, average='weighted')
###Output
_____no_output_____
###Markdown
2 - Shift()
###Code
from scipy.ndimage.interpolation import shift
def show_images(images, titles) -> None:
n: int = len(images)
f = plt.figure(figsize=(10, 10))
for i in range(n):
# Debug, plot figure
f.add_subplot(1, n, i + 1)
plt.imshow(images[i])
plt.title(titles[i])
plt.axis('off')
plt.show(block=True)
###Output
_____no_output_____
###Markdown
Shifting with shift()
###Code
index = 7 # a random image from X_train
img = X_train[index].reshape(28,28) # reshape the image so the shift() function works
img.shape
pixels = 5 # number of pixels to shift (the assignment asks for 1 pixel)
right = [0,pixels]
top = [-pixels,0]
left = [0,-pixels]
bottom = [pixels,0]
img_shifted_right = shift(img, right, cval=0, order=0)
img_shifted_top = shift(img, top, cval=0, order=0)
img_shifted_left = shift(img, left, cval=0, order=0)
img_shifted_bottom = shift(img, bottom, cval=0, order=0)
images = [img, img_shifted_right, img_shifted_top, img_shifted_left, img_shifted_bottom]
titles = ['original','right', 'top', 'left', 'bottom']
show_images(images, titles) # function to plot and confirm the shifts
test = img_shifted_right.reshape(-1) # reshape the image back to its original dimension (784,)
print('reshape -1: ', test.shape)
print('label: ',y_train[index])
new_X_train = [[]]*300000
new_y_train = [[]]*300000
def shift_img(img, lb, cont, direction):
    pixels = 5 # number of pixels to shift
    right = [0,pixels]
    top = [-pixels,0]
    left = [0,-pixels]
    bottom = [pixels,0]
    if direction == 'right':
        img_shifted_right = shift(img, right, cval=0, order=0) # shift the image right by the defined number of pixels
        img_shifted_right = img_shifted_right.reshape(-1) # reshape the image back to its original dimension (784,)
        new_X_train[cont] = img_shifted_right.copy()
    elif direction == 'left':
        img_shifted_left = shift(img, left, cval=0, order=0) # shift the image left by the defined number of pixels
        img_shifted_left = img_shifted_left.reshape(-1)
        new_X_train[cont] = img_shifted_left.copy()
    elif direction == 'top':
        img_shifted_top = shift(img, top, cval=0, order=0) # shift the image up by the defined number of pixels
        img_shifted_top = img_shifted_top.reshape(-1)
        new_X_train[cont] = img_shifted_top.copy()
    elif direction == 'bottom':
        img_shifted_bottom = shift(img, bottom, cval=0, order=0) # shift the image down by the defined number of pixels
        img_shifted_bottom = img_shifted_bottom.reshape(-1)
        new_X_train[cont] = img_shifted_bottom.copy()
def main():
loop = True
    x = 0 # X_train counter
    cont = 0 # counter for the new dataset
    c_dir = 0 # direction counter
    xt = 0 # counter for the original X_train
while loop:
directions = ['right', 'left', 'top', 'bottom']
if x < 60000:
            img = X_train[x].reshape(28,28) # reshape the image so the shift() function works
lb = y_train[x]
if cont < 60000:
shift_img(img, lb, cont, directions[c_dir]) # right
new_y_train[cont] = lb
cont+=1
x+=1
elif cont >= 60000 and cont < 120000:
shift_img(img, lb, cont, directions[c_dir]) # left
new_y_train[cont] = lb
cont+=1
x+=1
elif cont >= 120000 and cont < 180000:
shift_img(img, lb, cont, directions[c_dir]) # top
new_y_train[cont] = lb
cont+=1
x+=1
elif cont >= 180000 and cont < 240000:
shift_img(img, lb, cont, directions[c_dir]) # bottom
new_y_train[cont] = lb
cont+=1
x+=1
elif cont >= 240000 and cont < 300000:
new_X_train[cont] = X_train[xt].copy()
new_y_train[cont] = lb
cont+=1
xt+=1
else:
x = 0
c_dir += 1
if cont >= 300000:
            loop = False # end of the loop
import time
start = time.time()
print('Start of execution')
main()
end = time.time()
print('End of execution')
print(end - start) # seconds
new_X_train = np.array(new_X_train)
new_y_train = np.array(new_y_train)
type(new_X_train)
type(new_y_train)
print(new_X_train.shape, new_y_train.shape)
print(X_train.shape, y_train.shape)
num = new_X_train[40400].reshape(28,28)
plt.imshow(num, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
plt.show()
print(new_y_train[40400])
param_grid = {
'n_neighbors': [3,5,7],
'weights' : ['uniform', 'distance'],
'n_jobs': [-1]
}
grid_search = GridSearchCV(knn, param_grid, cv=5, scoring='accuracy')
knn_best = KNeighborsClassifier(n_neighbors=3, weights='distance', n_jobs=-1)
knn_best.fit(new_X_train,new_y_train)
knn_predictions = knn_best.predict(X_test) # predict on the held-out test set, not on the augmented training data
acc = sum(knn_predictions == y_test)/len(knn_predictions)
print(acc)
###Output
_____no_output_____
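###Markdown
The loop above fills the 300,000-row augmented arrays with explicit counters per 60,000-image block. Shown below only as a hedged alternative sketch (it assumes the same `X_train`, `y_train` and 5-pixel shift as above, and is not part of the original notebook), the same augmentation can be expressed by looping over the four offsets and stacking the results:
###Code
from scipy.ndimage.interpolation import shift
import numpy as np

def shift_all(images, offset):
    # shift every 784-pixel image by `offset` (rows, cols) and flatten it back
    return np.array([shift(img.reshape(28, 28), offset, cval=0, order=0).reshape(-1)
                     for img in images])

pixels = 5
offsets = [[0, pixels], [0, -pixels], [-pixels, 0], [pixels, 0]]  # right, left, top, bottom
shifted_blocks = [shift_all(X_train, off) for off in offsets]

aug_X_train = np.concatenate(shifted_blocks + [X_train])
aug_y_train = np.concatenate([y_train] * (len(offsets) + 1))
print(aug_X_train.shape, aug_y_train.shape)  # expected: (300000, 784) (300000,)
###Output
_____no_output_____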
###Markdown
Confusion matrix
###Code
from sklearn.metrics import accuracy_score
accuracy_score(knn_predictions,y_test)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test,knn_predictions)
###Output
_____no_output_____
###Markdown
Precision Score
###Code
from sklearn.metrics import precision_score, recall_score
precision_score(y_test, knn_predictions, average='weighted')
###Output
_____no_output_____
###Markdown
Recall Score
###Code
recall_score(y_test, knn_predictions, average='weighted')
###Output
_____no_output_____
###Markdown
F1-score
###Code
from sklearn.metrics import f1_score
f1_score(y_test,knn_predictions, average='weighted')
###Output
_____no_output_____
###Markdown
KNN - Applied to Medicine

84198, Daiane Estenio \ 85398, Luís Paulino Fontes

https://www.komen.org/wp-content/uploads/How-Hormones-Affect-Breast-Cancer_Portuguese.pdf
https://www.espacodevida.org.br/seu-espaco/clinico/o-que-grau-de-agressividade-do-cncer
http://www.oncoguia.org.br/conteudo/linfonodos-e-cancer/6814/1/
###Code
# Libs
import math
# Math
def distancia_euclidiana(p1, p2):
total = 0
for i in range(len(p1)):
total += (p1[i] - p2[i]) ** 2
return math.sqrt(total)
def escala_normalizada(x, v_max, v_min):
return (x - v_min) / (v_max - v_min)
# Util
def ler_arquivo(filename, keys):
    amostras = []
    total_descarte = 0
    with open(filename, "r") as dataset: # read the given file (the name needs its extension)
        for instancia in dataset.readlines():
            x = instancia.replace("\n", "").split(",")
            try: # try to add the sample
                amostra_normalizada = normalizar_arquivo(x, keys)
                amostras.append(amostra_normalizada)
            except ValueError: # on error, just increment the discard counter
                total_descarte += 1
    with open("output.data", "w") as output:
        for item in amostras:
            item_string = str(item).replace("[","").replace("]","")
            output.write(f"{item_string}\n")
    print(f"Total discarded samples: {total_descarte}") # show the total number of discarded samples
return amostras
def normalizar_arquivo(amostra, names):
    amostra_normalizada = []
    for indice in range(len(amostra)):
        itens = names[indice] # get the possible values for that key
        v_min = 0
        decimal = 2
        normalize = 1
        if type(itens) is dict: # check whether the values come with extra options
            temp_itens = itens
            itens = temp_itens["data"] # get the values for that key
            if "remove" in temp_itens:
                continue
            if "min" in temp_itens: # if a minimum is specified, use it instead of the default
                v_min = temp_itens["min"]
            if "decimal" in temp_itens: # if a decimal precision is given, use it instead of the default
                decimal = temp_itens["decimal"]
            if "reverse" in temp_itens:
                itens.reverse()
            if "normalize" in temp_itens:
                normalize = temp_itens["normalize"]
            del temp_itens
        valor_atual = amostra[indice] # current position in the sample
        valor = valor_atual
        if normalize:
            v_max = len(itens) - 1 # total list length - 1 (lists start at 0)
            item_indice = itens.index(valor_atual) # index of the sample value in the list of possible values
            valor = escala_normalizada(item_indice, v_max, v_min)
        amostra_normalizada.append(arred(float(valor), decimal))
    return amostra_normalizada
def arred(valor, decimal = None):
if decimal is None:
return valor
if decimal == 0:
return round(valor)
return round(valor, decimal)
# Analysis
def info_dataset(amostras, classe, info=True):
    output1, output2 = 0,0
    for amostra in amostras:
        if amostra[classe] == 1:
            output1 += 1 # patient without recurrences
        else:
            output2 += 1 # patient with recurrences
    if info == True:
        print(f"Total number of samples..........: {len(amostras)}")
        print(f"Total normal (no recurrence).....: {output1}")
        print(f"Total altered (with recurrence)..: {output2}")
    return [len(amostras), output1, output2]
def separar_amostras(amostras, porcentagem, classe):
_, output1, output2 = info_dataset(amostras, classe)
treinamento = []
teste = []
max_output1 = int(porcentagem*output1)
max_output2 = int((1 - porcentagem)*output2)
total_output1 = 0
total_output2 = 0
for amostra in amostras:
if(total_output1 + total_output2) < (max_output1 + max_output2):
            # add to the training set
treinamento.append(amostra)
if amostra[classe] == 1 and total_output1 < max_output1:
total_output1 += 1
else:
total_output2 += 1
else:
            # add to the test set
teste.append(amostra)
return [treinamento, teste]
def knn(treinamento, nova_amostra, classe, k):
    distancias = {}
    tamanho_treino = len(treinamento)
    # compute the Euclidean distance to every training sample
    for i in range(tamanho_treino):
        d = distancia_euclidiana(treinamento[i], nova_amostra)
        distancias[i] = d
    # get the k nearest neighbors
    k_vizinhos = sorted(distancias, key=distancias.get)[:k] # returns the first k indices, sorted by distance
    # voting
    qtd_output1 = 0
    qtd_output2 = 0
    for indice in k_vizinhos:
        if treinamento[indice][classe] == 1: # normal output
            qtd_output1 += 1
        else: # altered output
            qtd_output2 += 1
    if qtd_output1 > qtd_output2:
        return 1
    else:
        return 0
# Definition
names = {
# Class
0:{
"data": ["recurrence-events", "no-recurrence-events"],
"decimal": 0,
"reverse": 1
},
# Age
1:{
"data": ["10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "70-79", "80-89", "90-99"],
"decimal": 1
},
# Menopause
2:["lt40", "ge40", "premeno"],
# Tumor-size
3:{
"data": ["0-4", "5-9", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40-44", "45-49", "50-54", "55-59"],
"decimal": 1
},
# Inv-nodes
4:{
"data": ["0-2", "3-5", "6-8", "9-11", "12-14", "15-17", "18-20", "21-23", "24-26", "27-29", "30-32", "33-35", "36-39"],
"decimal": 1,
"min": 6
},
# Node-caps
5:{
"data": ["yes", "no"],
},
# Deg-malig
6:{
"data": ["1","2","3"],
"normalize": 0,
"decimal": 0
},
# Breast
7:{
"data": ["left","right"],
"decimal": 0
},
# Breast-quad
8:{
"data": ["left_up", "left_low", "right_up", "right_low", "central"],
},
# Irradiant
9:{
"data": ["yes", "no"],
"decimal": 0,
"reverse": 1
}
}
# Test
acertos = 0
pos_classe = 0
porcentagem = 0.8
k = 17
amostras = ler_arquivo("breast-cancer.data", names)
treinamento, teste = separar_amostras(amostras, porcentagem, pos_classe)
for amostra in teste:
classe_retornada = knn(treinamento, amostra, pos_classe, k)
# print(classe_retornada, amostra[pos_classe])
if amostra[pos_classe] == classe_retornada:
acertos += 1
print(f"Total de treinamento..: {len(treinamento)}")
print(f"Total de testes.......: {len(teste)}")
print(f"Total de acertos......: {acertos}")
print(f"Porcentagem de acerto.: {arred(100*acertos/len(teste), 0)} %")
###Output
_____no_output_____
###Markdown
Breast Cancer Diagnosis
###Code
from sklearn.datasets import load_breast_cancer
dataset = load_breast_cancer()
###Output
_____no_output_____
###Markdown
Part 1: Getting started

First off, take a look at the `data`, `target` and `feature_names` entries in the `dataset` dictionary. They contain the information we'll be working with here. Then, create a Pandas DataFrame called `df` containing the data and the targets, with the feature names as column headings. If you need help, see [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) for more details on how to achieve this.

* How many features do we have in this dataset? 30
* What are the target classes? [0 1]
* What do these target classes signify? ['malignant' 'benign']
* How many participants tested `Malignant`? 212
* How many participants tested `Benign`? 357
###Code
import numpy as np
import pandas as pd
print ("dataset features: ", dataset.data.shape[1])
print ("target classes: ", np.unique(dataset.target))
print ("target classes signify: ", dataset.target_names)
print ("participants tested Malignant: ", np.sum(dataset.target == 0))
print ("participants tested Benign: ", np.sum(dataset.target == 1))
# create dataframe df
df = pd.DataFrame(data= dataset.data, columns= dataset.feature_names)
# add column 'targets'
df['targets']=dataset.target.reshape(-1,1)
# add column 'targets_type'
df['targets_type']= pd.Series(['malignant' if item==0 else 'benign' for item in dataset.target])
df.head()
###Output
dataset features: 30
target classes: [0 1]
target classes signify: ['malignant' 'benign']
participants tested Malignant: 212
participants tested Benign: 357
###Markdown
Use `seaborn.lmplot` ([help here](https://seaborn.pydata.org/generated/seaborn.lmplot.html)) to visualize a few features of the dataset. Draw a plot where the x-axis is "mean radius", the y-axis is "mean texture," and the color of each datapoint indicates its class. Do this once again for different features for the x- and y-axis and see how the data is distributed. **[1]**Standardizing the data is often critical in machine learning. Show a plot as above, but with two features with very different scales. Standardize the data and plot those features again. What's different? Why? **[1]**It is best practice to have a training set (from which there is a rotating validation subset) and a test set. Our aim here is to (eventually) obtain the best accuracy we can on the test set (we'll do all our tuning on the training/validation sets, however). To tune `k` (our hyperparameter), we employ cross-validation ([Help](https://scikit-learn.org/stable/modules/cross_validation.html)). Cross-validation automatically selects validation subsets from the data that you provided. Split the dataset into a train and a test set **"70:30"**, use **``random_state=0``**. The test set is set aside (untouched) for final evaluation, once hyperparameter optimization is complete. **[1]****
###Code
import seaborn as sns
# 'mean radius' vs 'mean texture'
sns.lmplot (x='mean radius', y='mean texture', data=df, hue= 'targets_type', fit_reg= False)
# 'mean radius' vs 'mean area'
sns.lmplot (x='mean radius', y='mean area', data=df, hue= 'targets_type', fit_reg= False)
# Standardize the features
stand_features= (df.iloc[:,0:30] - df.iloc[:,0:30].mean()) / df.iloc[:,0:30].std()
df_stand = pd.DataFrame.copy(df)
df_stand.iloc[:,0:30] = stand_features
df_stand.head(5)
# Plot features
sns.lmplot (x='mean radius', y='mean texture', data=df_stand, hue= 'targets_type', fit_reg= False)
sns.lmplot (x='mean radius', y='mean area', data=df_stand, hue= 'targets_type', fit_reg= False)
# After standardization, the features have mean zero and standard deviation 1, so their scale ranges are smaller.
# However, the point patterns of the scatter plots stay the same.
from sklearn.model_selection import train_test_split
# Without standardization
x_train, x_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.3, random_state=0)
# With standardization
x_train_stand, x_test_stand, y_train_stand, y_test_stand = train_test_split(np.array(df_stand.iloc[:,0:30]),
np.array(df_stand.iloc[:,30]) , test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Part 2: KNN Classifier without Standardization Normally, standardizing data is a key step in preparing data for a KNN classifier. However, for educational purposes, let's first try to build a model without standardization. Let's create a KNN classifier to predict whether a patient has a malignant or benign tumor. Follow these steps: 1. Train a KNN Classifier using cross-validation on the dataset. Sweep `k` (number of neighbours) from 1 to 100, and show a plot of the mean cross-validation accuracy vs `k`. 2. What is the best `k`? Comment on which `k`s lead to underfitted or overfitted models. 3. Can you get the same accuracy (roughly) with fewer features using a KNN model? You're free to use trial-and-error to remove features (try at least 5 combinations), or use a more sophisticated approach like [Backward Elimination](https://towardsdatascience.com/backward-elimination-for-feature-selection-in-machine-learning-c6a3a8f8cef4). Describe your findings using a graph or table (or multiple!). 2.1 plot of the mean cross-validation accuracy vs k
###Code
from sklearn import neighbors
from sklearn.model_selection import cross_val_score
import matplotlib.pyplot as plt
# knn = neighbors.KNeighborsClassifier (n_neighbors=1)
# scores = cross_val_score(knn, x_train, y_train, cv=5)
# scores.mean()
x = [k for k in range(1,101)]
y1 = [cross_val_score(neighbors.KNeighborsClassifier (n_neighbors=k),
x_train, y_train, cv=5).mean() for k in range(1,101)]
plt.plot(x,y1,label="without feature selection")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Training data (without Standardization)")
###Output
_____no_output_____
###Markdown
2.2 find best k
###Code
print ('best k=', x[y1.index(max(y1))], ', with highest accuracy')
# The accuracy drops when k decreases from its best value, which leads to overfitted models.
# The accuracy drops when k increases from its best value, which leads to underfitted models.
###Output
best k= 10 , with highest accuracy
###Markdown
2.3 feature reduction (backward elimination)
###Code
# helper function 'Find_largest_pval':
# find t-stat and p-val of coefficients
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from scipy import stats
def Find_largest_pval (x,y):
lm = LinearRegression()
lm.fit(x,y)
y_pridiction = lm.predict(x)
# beta = (x'x)^-1 x'y
beta = np.append(lm.intercept_, lm.coef_)
# MSE = sum ((yi-yi^)^2)/ n-1-k
n = x.shape[0]
k = x.shape[1]
MSE = (sum ((y-y_pridiction)**2) / (n-1-k))
# var(beta) = (x'x)^-1 MSE
new_x = pd.DataFrame(x)
new_x.insert(0,'c0',np.ones(n))
var_beta = (np.linalg.inv(new_x.T @ new_x) * MSE).diagonal()
tstat = beta/np.sqrt(var_beta)
pval =[2*(1-stats.t.cdf(np.abs(i),n-1-k)) for i in tstat]
# create dataframe
reg_result = pd.DataFrame ({"Coefficients":beta, "T statistcs":tstat, "P-value":pval}).round(decimals=4)
return reg_result.sort_values(by='P-value',ascending=False)
# example show output of helper function:
Find_largest_pval (x_train,y_train).head()
# helper function 'feature_reduction':
# remove non-significant features by Backward Elimination
def feature_reduction (x_train, y_train, x_test):
# removes the highest p-value greater than alpha
alpha = 0.05
while Find_largest_pval(x_train,y_train).iloc[0,2] > alpha:
        # index of the row whose p-value is largest
i = Find_largest_pval(x_train,y_train).index[0]
x_train = np.delete(x_train,i, axis=1)
x_test = np.delete(x_test,i, axis=1)
# output: non significant features have been removed
return x_train, x_test
# plot
x_train_red = feature_reduction (x_train, y_train, x_test)[0]
x = [k for k in range(1,101)]
y2 = [cross_val_score(neighbors.KNeighborsClassifier (n_neighbors=k),
x_train_red, y_train, cv=5).mean() for k in range(1,101)]
plt.plot(x,y1,label="without feature selection")
plt.plot(x,y2,label="with feature selection")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Training data (without Standardization)")
# When model complexity decreases, the training error increases. That point can be seen in the
# following plot: the accuracy decreases after feature selection.
###Output
_____no_output_____
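###Markdown
For reference, the quantities computed inside `Find_largest_pval` above are the usual OLS coefficient t-tests (here $X$ is the design matrix with an intercept column, $n$ the number of samples and $k$ the number of features): $$\hat{\beta} = (X^{T}X)^{-1}X^{T}y, \qquad \mathrm{MSE} = \frac{\sum_{i}(y_i-\hat{y}_i)^{2}}{n-1-k}, \qquad \widehat{\mathrm{Var}}(\hat{\beta}) = \mathrm{MSE}\,(X^{T}X)^{-1}, \qquad t_j = \frac{\hat{\beta}_j}{\sqrt{\widehat{\mathrm{Var}}(\hat{\beta}_j)}}$$ Two-sided p-values are then taken from a t-distribution with $n-1-k$ degrees of freedom, and backward elimination repeatedly drops the feature with the largest p-value above $\alpha = 0.05$.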
###Markdown
Part 3: Standardization Standardizing the data usually means scaling our data to have a mean of zero and a standard deviation of one. Note: When we standardize a dataset, do we care if the data points are in our training set or test set? Yes! The training set is available for us to train a model - we can use it however we want. The test set, however, represents a subset of data that is not available for us during training. For example, the test set can represent the data that someone who bought our model would use to see how the model performs (which they are not willing to share with us).Therefore, we cannot compute the mean or standard deviation of the whole dataset to standardize it - we can only calculate the mean and standard deviation of the training set. However, when we sell a model to someone, we can say what our scalers (mean and standard deviation of our training set) was. They can scale their data (test set) with our training set's mean and standard deviation. Of course, there is no guarantee that the test set would have a mean of zero and a standard deviation of one, but it should work fine.**To summarize: We fit the StandardScaler only on the training set. We transform both training and test sets with that scaler.**1. Create a KNN classifier with standardized data ([Help](https://scikit-learn.org/stable/modules/preprocessing.html)), and reproduce all steps in Part 2. 2. Does standardization lead to better model performance? Is performance better or worst? Discuss. 3.1 repeat part2 with standardized data
###Code
x = [k for k in range(1,101)]
y3 = [cross_val_score(neighbors.KNeighborsClassifier (n_neighbors=k),
x_train_stand, y_train_stand, cv=5).mean() for k in range(1,101)]
# feture reduction (backward elimination)
x_train_stand_red = feature_reduction (x_train_stand, y_train_stand, x_test_stand) [0]
y4 = [cross_val_score(neighbors.KNeighborsClassifier (n_neighbors=k),
x_train_stand_red, y_train_stand, cv=5).mean() for k in range(1,101)]
print ('without feature selection, best k=', x[y3.index(max(y3))], ', with highest accuracy')
print ('with feature selection, best k=', x[y4.index(max(y4))], ', with highest accuracy')
# When model complexity decreases, the training error increases. That point can be seen in the
# following plot: the accuracy decreases after feature selection.
plt.plot(x,y3,label="without feature selection")
plt.plot(x,y4,label="with feature selection")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Training data (with Standardization)")
###Output
without feature selection, best k= 12 , with highest accuracy
with feature selection, best k= 14 , with highest accuracy
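###Markdown
Part 3 stresses fitting the scaler on the training set only, whereas the standardization used earlier in this notebook was computed from the statistics of the full DataFrame. A minimal sketch of the recommended pattern with scikit-learn's `StandardScaler`, assuming the unstandardized `x_train`/`x_test` split created above:
###Code
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
x_train_std = scaler.fit_transform(x_train)   # fit on the training set only
x_test_std = scaler.transform(x_test)         # reuse the training mean/std on the test set

print(x_train_std.mean(axis=0)[:3].round(3))  # ~0 for the training features
print(x_test_std.mean(axis=0)[:3].round(3))   # not exactly 0, and that's expected
###Output
_____no_output_____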
###Markdown
3.2 standardization lead to better model performance?
###Code
plt.plot(x,y1,label="without standardization")
plt.plot(x,y3,label="with standardization")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Training data (without future selction)")
plt.plot(x,y2,label="without standardization")
plt.plot(x,y4,label="with standardization")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Training data (with future selction)")
# Standardization have improve the accuracy for data before and after future selection.
###Output
_____no_output_____
###Markdown
Part 4: Test Data Now that you've created several models, pick your best one (highest accuracy) and apply it to the test dataset you had initially set aside. Discuss.
###Code
# If we only consider how the models perform on training data, the best model is the one with standardization
# and without feature selection.
from sklearn.metrics import accuracy_score
x_test_stand_red = feature_reduction(x_train_stand, y_train_stand, x_test_stand) [1]
# model without feature selection
knn1 = neighbors.KNeighborsClassifier (n_neighbors=12)
knn1.fit(x_train_stand, y_train_stand)
print ("accuracy of model without feature selection: ", accuracy_score(y_test_stand, knn1.predict(x_test_stand)))
# model with feature selection
knn2 = neighbors.KNeighborsClassifier (n_neighbors=14)
knn2.fit(x_train_stand_red, y_train_stand)
print ("accuracy of model with feature selection: ", accuracy_score(y_test_stand, knn2.predict(x_test_stand_red)))
# However, sometimes a model with low training error may have high testing error. We also have
# to consider how the model performs on the testing set.
y5, y6 = [], []
for k in range(1,101):
knn = neighbors.KNeighborsClassifier(n_neighbors=k)
knn.fit(x_train_stand, y_train_stand)
y5.append(accuracy_score(y_test_stand, knn.predict(x_test_stand)))
knn1 = neighbors.KNeighborsClassifier(n_neighbors=k)
knn1.fit(x_train_stand_red, y_train_stand)
y6.append(accuracy_score(y_test_stand, knn1.predict(x_test_stand_red)))
# FS means feature selection
plt.plot(x,y5,label="test data without FS")
plt.plot(x,y6,label="test data with FS")
plt.plot(x,y3,'--',label="train data without FS")
plt.plot(x,y4,'--',label="train data with FS")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Standardized training and testing data")
# After considering the performance on testing data, the best model is still the one with standardization and
# without feature selection. The model with feature selection may be too simple (underfitting).
###Output
_____no_output_____
###Markdown
Part 5: New Dataset Find an appropriate classification dataset online and train a KNN model to make predictions.* Introduce your dataset. * Create a KNN classifier using the tools you've learned. * Present your results. Hint: you can find various datasets here: https://www.kaggle.com/datasets and here: https://scikit-learn.org/stable/datasets/index.htmltoy-datasets.To use a dataset in Colab, you can upload it in your Google drive and access it in Colab ([help here](https://medium.com/analytics-vidhya/how-to-fetch-kaggle-datasets-into-google-colab-ea682569851a)), or you can download the dataset on your local machine and upload it directly to Colab using the following script.```from google.colab import filesuploaded = files.upload()```When submitting your project on Quercus, please make sure you are also uploading your dataset so we can fully run your notebook.
###Code
from sklearn.datasets import load_wine
wineset = load_wine()
###Output
_____no_output_____
###Markdown
5.1 Introduce your dataset

* How many features do we have in this dataset? 13
* What are the target classes? [0 1 2]
* What do these target classes signify? ['class_0' 'class_1' 'class_2']
* How many wines tested `class_0`? 59
* How many wines tested `class_1`? 71
* How many wines tested `class_2`? 48
###Code
print ("dataset features: ", wineset.data.shape[1])
print ("dataset features: ", wineset.data.shape[0])
print ("target classes: ", np.unique(wineset.target))
print ("target classes signify: ", wineset.target_names)
print ("participants tested 'class_0': ", np.sum(wineset.target == 0))
print ("participants tested 'class_1': ", np.sum(wineset.target == 1))
print ("participants tested 'class_2': ", np.sum(wineset.target == 2))
# create dataframe wine
wine = pd.DataFrame(data= wineset.data, columns= wineset.feature_names)
# add column 'targets'
wine['targets']=wineset.target.reshape(-1,1)
# add column 'targets_type'
wine['targets_type']= pd.Series(['class_0' if item==0
else 'class_1' if item==1
else 'class_2' for item in wineset.target])
wine.head()
###Output
dataset features: 13
dataset features: 178
target classes: [0 1 2]
target classes signify: ['class_0' 'class_1' 'class_2']
participants tested 'class_0': 59
participants tested 'class_1': 71
participants tested 'class_2': 48
###Markdown
5.2 Create a KNN classifier using the tools you've learned.
###Code
# Standardize the features
wine_stand_features= (wine.iloc[:,0:13] - wine.iloc[:,0:13].mean()) / wine.iloc[:,0:13].std()
wine_stand = pd.DataFrame.copy(wine)
wine_stand.iloc[:,0:13] = wine_stand_features
wine_stand.head(5)
# Split the dataset into a train and a test set "70:30"
x_train_winestand, x_test_winestand, y_train_winestand, y_test_winestand = train_test_split(np.array(wine_stand.iloc[:,0:13]),
np.array(wine_stand.iloc[:,13]) , test_size=0.3, random_state=0)
# the wine training set is small, so sweep k from 1 to 99 only
x = [k for k in range(1,100)]
y11 = [cross_val_score(neighbors.KNeighborsClassifier (n_neighbors=k),
x_train_winestand, y_train_winestand, cv=5).mean() for k in range(1,100)]
# feture reduction (backward elimination)
x_train_winestand_red = feature_reduction (x_train_winestand, y_train_winestand, x_test_winestand) [0]
y12 = [cross_val_score(neighbors.KNeighborsClassifier (n_neighbors=k),
x_train_winestand_red, y_train_winestand, cv=5).mean() for k in range(1,100)]
# show a plot of the mean cross-validation accuracy vs k
print ('without feature selection, best k=', x[y11.index(max(y11))], ', with highest accuracy')
print ('with feature selection, best k=', x[y12.index(max(y12))], ', with highest accuracy')
plt.plot(x,y11,label="without feature selection")
plt.plot(x,y12,label="with feature selection")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Training data (with Standardization)")
###Output
without feature selection, best k= 19 , with highest accuracy
with feature selection, best k= 5 , with highest accuracy
###Markdown
5.3 Present your results
###Code
# If we only consider how the models perform on training data, the best model is the one without feature selection.
x_test_winestand_red = feature_reduction(x_train_winestand, y_train_winestand, x_test_winestand) [1]
# model without feature selection
knn1 = neighbors.KNeighborsClassifier (n_neighbors=19)
knn1.fit(x_train_winestand, y_train_winestand)
print ("accuracy of model without feature selection: ", accuracy_score(y_test_winestand, knn1.predict(x_test_winestand)))
# model with feature selection
knn2 = neighbors.KNeighborsClassifier (n_neighbors=5)
knn2.fit(x_train_winestand_red, y_train_winestand)
print ("accuracy of model with feature selection: ", accuracy_score(y_test_winestand, knn2.predict(x_test_winestand_red)))
# However, sometimes a model with low training error may have high testing error.
# Next, consider how the model performs on the testing set.
y13, y14 = [], []
for k in range(1,100):
knn = neighbors.KNeighborsClassifier(n_neighbors=k)
knn.fit(x_train_winestand, y_train_winestand)
y13.append(accuracy_score(y_test_winestand, knn.predict(x_test_winestand)))
knn1 = neighbors.KNeighborsClassifier(n_neighbors=k)
knn1.fit(x_train_winestand_red, y_train_winestand)
y14.append(accuracy_score(y_test_winestand, knn1.predict(x_test_winestand_red)))
# FS: feature selection
plt.plot(x,y13,label="test data without FS")
plt.plot(x,y14,label="test data with FS")
plt.plot(x,y11,'--',label="train data without FS")
plt.plot(x,y12,'--',label="train data with FS")
plt.legend()
plt.xlabel("k")
plt.ylabel("accuracy")
plt.title ("Standardized training and testing data")
# After considering the performance on testing data, the best model is still the one with standardization and
# without feature selection. The model with feature selection may be too simple (underfitting).
###Output
_____no_output_____
###Markdown
Doing Data Science - Tutorial on K-Nearest Neighbors

- The k-NN algorithm is a nonparametric approach to classification.
- k-NN is an algorithm that can be used when you have a bunch of objects that have been classified or labeled in some way, and other similar objects that haven't been classified or labeled yet, and you want a way to automatically label them.
- As a classification algorithm, it can be applied where 'linear-regression-with-a-threshold' cannot, such as when the labels do not take a continuous value scale like a credit score.
- The intuition behind k-NN is to consider the most similar other items defined in terms of their attributes, look at their labels, and give the unassigned item the majority vote. If there's a tie, you randomly select among the labels that have tied for first.

**Two decisions to be made**
1. How do you define similarity or closeness?
2. How many neighbors should vote? This value is *k*.

**k-NN process overview**
1. Decide on your similarity or distance metric.
2. Split the original labeled dataset into training and test data.
3. Pick an evaluation metric. (Misclassification rate is a good one.)
4. Run k-NN a few times, changing *k* and checking the evaluation measure.
5. Optimize *k* by picking the one with the best evaluation measure.
6. Once you've chosen *k*, use the same training set and now create a new test set with people's ages and incomes that you have no labels for, and want to predict.

**Notable similarity metrics**
- Euclidean distance
- Cosine similarity
- Jaccard distance or similarity (Tanimoto)
- Mahalanobis distance
- Hamming distance
- Manhattan distance

In classification, a balance has to be struck between being *sensitive* to the reality of the data (true positive or recall) and being *specific* (true negative) to the classification with respect to the categories of the data.

- Code tutorial from [here](https://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/)
- Data: [Iris dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data)
###Code
import csv
import random
import math
import operator
###Output
_____no_output_____
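###Markdown
As a small illustration (not part of the original tutorial), the cell below computes a few of the similarity metrics listed above with `scipy.spatial.distance` on two toy vectors, to make concrete what each one measures:
###Code
from scipy.spatial import distance

a = [1.0, 0.0, 2.0, 3.0]
b = [2.0, 1.0, 0.0, 3.0]

print("Euclidean:", distance.euclidean(a, b))
print("Manhattan:", distance.cityblock(a, b))
print("Cosine distance:", distance.cosine(a, b))                  # 1 - cosine similarity
print("Hamming:", distance.hamming([1, 0, 1, 1], [1, 1, 0, 1]))   # fraction of differing positions
print("Jaccard:", distance.jaccard([1, 0, 1, 1], [1, 1, 0, 1]))   # on boolean vectors
###Output
_____no_output_____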
###Markdown
**1. Handle Data**
###Code
def loadDataset(filename, split, trainingSet, testSet):
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
# shuffle the dataset
for i in range(5):
random.shuffle(dataset)
for x in range(len(dataset)-1):
for y in range(4):
dataset[x][y]= float(dataset[x][y])
if random.random() < split:
trainingSet.append(dataset[x])
else:
testSet.append(dataset[x])
trainingSet, testSet = [], []
loadDataset("iris.data",0.66,trainingSet,testSet)
###Output
_____no_output_____
###Markdown
**2. Similarity**
###Code
def euclideanDistance(instance1, instance2, length):
distance = 0
for x in range(length):
distance += pow((instance1[x]-instance2[x]),2)
return math.sqrt(distance)
# distance test
data1 = [2,2,2,'a']
data2 = [4,4,4,'b']
distance = euclideanDistance(data1, data2, 3)
print("Distance:",str(distance))
###Output
Distance: 3.4641016151377544
###Markdown
**3. Neighbors** The `getNeighbors()` function returns the *k* most similar neighbors from the training set for a given test instance, using the already defined `euclideanDistance()` function.
###Code
def getNeighbors(trainingSet, testInstance, k):
distances = []
length = len(testInstance)-1
for x in range(len(trainingSet)):
dist = euclideanDistance(testInstance, trainingSet[x], length)
distances.append((trainingSet[x], dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(k):
neighbors.append(distances[x][0])
return neighbors
# neighbors test
trainSet = [[2,2,2,'a'],[4,4,4,'b']]
testInstance = [5,5,5]
k=1
neighbors = getNeighbors(trainingSet, testInstance, 1)
print(neighbors)
###Output
[[5.2, 4.1, 1.5, 0.1, 'Iris-setosa']]
###Markdown
**4. Response** Once we have located the most similar neighbors for a test instance, the next task is to devise a predicted response based on those neighbors. We can do this by allowing each neighbor to vote for its class attribute, and taking the majority vote as the prediction.
###Code
def getResponseMajorityVote(neighbors):
classVotes = {}
for x in range(len(neighbors)):
response = neighbors[x][-1]
if response in classVotes:
classVotes[response] += 1
else:
classVotes[response] = 1
sortedVotes = sorted(classVotes.items(),key=operator.itemgetter(1),reverse=True)
return sortedVotes[0][0]
# majority vote test
neighbors = [[1,1,1,'a'],[2,2,2,'a'],[3,3,3,'b']]
response = getResponseMajorityVote(neighbors)
print(response)
###Output
a
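###Markdown
The same majority vote can be written a bit more compactly with `collections.Counter`. This is only an equivalent alternative to the function above, not part of the original tutorial; note that ties are broken by insertion order rather than at random:
###Code
from collections import Counter

def getResponseCounter(neighbors):
    # count the class label stored in the last position of each neighbor
    votes = Counter(neighbor[-1] for neighbor in neighbors)
    return votes.most_common(1)[0][0]

print(getResponseCounter([[1,1,1,'a'],[2,2,2,'a'],[3,3,3,'b']]))  # 'a'
###Output
_____no_output_____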
###Markdown
**5. Accuracy** We have all the pieces of the kNN algorithm in place. An important remaining concern is how to evaluate the accuracy of predictions. Below, the `getPredictionAccuracy()` function sums the total correct predictions and returns the accuracy as a percentage of correct classifications.
###Code
def getPredictionAccuracy(testSet, predictions):
correct = 0
for x in range(len(testSet)):
if testSet[x][-1] == predictions[x]:
correct += 1
# print("num of predictions:",len(predictions),"number correct:",correct)
return (correct/float(len(testSet))) * 100.0
# accuracy test
testSet = [[1,1,1,'a'],[2,2,2,'a'],[3,3,3,'b']]
predictions = ['a','a','a']
accuracy = getPredictionAccuracy(testSet, predictions)
print(accuracy)
###Output
66.66666666666666
###Markdown
**6. Main - putting it all together**
###Code
def main():
# prepare data
trainingSet = []
testSet = []
split = 0.67
loadDataset("iris.data",split,trainingSet,testSet)
print("Training set:",len(trainingSet))
print("Test set:",len(testSet))
#generate predictions
predictions = []
k = 1
for x in range(len(testSet)):
neighbors = getNeighbors(trainingSet, testSet[x], k)
result = getResponseMajorityVote(neighbors)
predictions.append(result)
print("> predicted=" + repr(result) + ", actual=" + repr(testSet[x][-1]))
accuracy = getPredictionAccuracy(testSet, predictions)
print("Accuracy:",accuracy)
main()
###Output
Training set: 99
Test set: 50
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-versicolor', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-virginica', actual='Iris-versicolor'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-virginica', actual='Iris-versicolor'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-virginica', actual='Iris-versicolor'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-virginica', actual='Iris-virginica'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-versicolor', actual='Iris-versicolor'
> predicted='Iris-setosa', actual='Iris-setosa'
> predicted='Iris-setosa', actual='Iris-setosa'
Accuracy: 92.0
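###Markdown
The process overview earlier recommends running k-NN a few times with different *k* and keeping the best evaluation measure, while `main()` above fixes k=1. A small follow-up sketch that reuses the functions defined in this tutorial (it re-splits the data randomly, so the exact numbers will vary from run to run):
###Code
trainingSet, testSet = [], []
loadDataset("iris.data", 0.67, trainingSet, testSet)

for k in [1, 3, 5, 7, 9]:
    predictions = [getResponseMajorityVote(getNeighbors(trainingSet, testInstance, k))
                   for testInstance in testSet]
    print("k =", k, "accuracy =", getPredictionAccuracy(testSet, predictions))
###Output
_____no_output_____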
###Markdown
kNN is a simple concept: it defines some distance between the items in your dataset and finds the K closest items. You can then use those items to predict some property of a test item by having them vote on it. As an example, let's look at a movie prediction system. Let's try to guess the rating of a movie by looking at the 10 movies that are closest to it in terms of genres and popularity. In this project, we will load every rating in the dataset into a pandas DataFrame.
###Code
import pandas as pd
import numpy as np
r_cols = ['user id', 'movie_id', 'rating']
ratings = pd.read_csv('C:/Users/Hamsini Sankaran/Desktop/DataScience/DataScience-Python3/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
###Output
_____no_output_____
###Markdown
Grouping everything by movie ID, we compute the total number of ratings (each movie's popularity) and the average rating of every movie.
###Code
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()
#The raw number of ratings isnt very useful for computing distances between movies , so we will create a new DataFrame that contains the normalized number of ratings.So, a value of 0 means nobody rated it and a value of 1 will mean it is the most popular movie here
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
#now let's get the genre information from the u.item file . The way this works is there are 19 fields, each corresponding to a specific genre - a value of 0 means , it is not in the genre and a value of 1 means that is in that genre. A movie may have more than one genre associated with it . Each is put into a big python dictionary called movieDict. Every entry contains the movie name, list of genres, normalized popularity score, the average rating of the movie
movieDict = {}
with open('C:/Users/Hamsini Sankaran/Desktop/DataScience/DataScience-Python3/ml-100k/u.item') as f:
temp = ''
for line in f:
        fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, np.array(list(genres)), movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
movieDict[1]
from scipy import spatial
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
ComputeDistance(movieDict[2], movieDict[4])
#The higher the distance, the less similar the movies are
print (movieDict[2])
print (movieDict[4])
import operator
def getNeighbors(movieID, K):
distance = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distance.append((movie, dist))
distance.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distance[x][0])
return neighbors
K = 5
avgRating = 0
neighbors = getNeighbors(1,K)
for neighbor in neighbors:
avgRating += movieDict[neighbor][3]
print (movieDict[neighbor][0] + " " + str(movieDict[neighbor][3]))
avgRating /= float(K)
avgRating
movieDict[1]
###Output
_____no_output_____
###Markdown
KNN SETTINGS
###Code
import random
import numpy as np
from collections import deque
from sklearn.metrics.pairwise import manhattan_distances
from sklearn.preprocessing import MinMaxScaler
from random import randint
class TradingAction(object):
ETH = 0
XRP = 1
LTC = 2
XLM = 3
USD = 4
BTC = 5
class TradingEnv:
    def __init__(self):
        # Actions: 0. eth, 1. xrp, 2. ltc, 3. xlm, 4. usd, 5. btc
        pass
    def reset(self):
        pass
def step(self, action, date):
reward = 0
if action == TradingAction.ETH:
reward = eth_reward[(eth_reward['T'] == date)]['Reward']
if action == TradingAction.XRP:
reward = xrp_reward[(xrp_reward['T'] == date)]['Reward']
if action == TradingAction.LTC:
reward = ltc_reward[(ltc_reward['T'] == date)]['Reward']
if action == TradingAction.XLM:
reward = xlm_reward[(xlm_reward['T'] == date)]['Reward']
if action == TradingAction.USD:
reward = usd_reward[(usd_reward['T'] == date)]['Reward']
if action == TradingAction.BTC:
reward = 0 # Do nothing action
return reward
#Split as Train and Test data
row_count = merged.shape[0]
split_point = int(row_count - 60)
train_data, test_data = merged[:split_point], merged[split_point:]
test_data.head()
#Preprocess Data
scaler = MinMaxScaler()
scaled_cols = ['O_eth','C_eth','H_eth','L_eth','V_eth','BV_eth','O_xrp','C_xrp','H_xrp','L_xrp','V_xrp','BV_xrp','O_ltc','C_ltc','H_ltc','L_ltc','V_ltc','BV_ltc','O_xlm','C_xlm','H_xlm','L_xlm','V_xlm','BV_xlm','O','C','H','L','V','BV']
scaler.fit(train_data[scaled_cols])
train_data.iloc[:][scaled_cols] = scaler.transform(train_data[scaled_cols])
#Use same scaler to transform test data
test_data.iloc[:][scaled_cols] = scaler.transform(test_data[scaled_cols])
train_data.head()
#Action selection
def select_act(env, date):
r0 = env.step(0, date)
r1 = env.step(1, date)
r2 = env.step(2, date)
r3 = env.step(3, date)
r4 = env.step(4, date)
r5 = env.step(5, date)
rewards = np.asarray([r0.item(),r1.item(),r2.item(),r3.item(),r4.item(),r5])
return rewards
#KNN
observation_cols = ['O_eth','C_eth','H_eth','L_eth','V_eth','BV_eth','O_xrp','C_xrp','H_xrp','L_xrp','V_xrp','BV_xrp','O_ltc','C_ltc','H_ltc','L_ltc','V_ltc','BV_ltc','O_xlm','C_xlm','H_xlm','L_xlm','V_xlm','BV_xlm','O','C','H','L','V','BV']
state_size = len(observation_cols)
action_size = 6 #Actions : 0. eth, 1. xrp, 2. ltc, 3. xlm, 4. usd, 5. btc
env = TradingEnv()
test_reward = 0
reward_list = []
for idx in range(len(test_data)-1):
state = test_data.iloc[idx][observation_cols]
#state = np.reshape([state], [1, state_size])
distances = manhattan_distances(train_data[observation_cols], [state])
most_similar_index = distances.argmin()
date = train_data.iloc[most_similar_index]['T']
act_vals = select_act(env, date)
action = np.argmax(act_vals)
reward = env.step(action, test_data.iloc[idx+1]['T'])
if isinstance(reward, int) == False:
reward = reward.item()
test_reward = test_reward + reward
reward_list.append(reward)
print("Test_reward: {}" .format(test_reward))
# take every neighbour whose scaled distance is within 10% of the closest one and average their rewards
mscaler = MinMaxScaler()
test_reward = 0
pred_reward = 0
reward_list = []
for idx in range(len(test_data)-1):
state = test_data.iloc[idx][observation_cols]
#state = np.reshape([state], [1, state_size])
distances = manhattan_distances(train_data[observation_cols], [state])
distances = mscaler.fit_transform(distances)
most_similar_index = distances.argmin()
mask = (distances[most_similar_index] + 0.1 > distances)
dates = train_data[mask]['T']
total_array = np.asarray([0.0,0.0,0.0,0.0,0.0,0.0])
for date in dates:
act_vals = select_act(env, date)
total_array += act_vals
total_array /= len(dates)
pred_r = np.max(total_array)
action = np.argmax(total_array)
reward = env.step(action, test_data.iloc[idx+1]['T'])
if isinstance(reward, int) == False:
reward = reward.item()
test_reward = test_reward + reward
reward_list.append(reward)
pred_reward += pred_r
print("Test_reward: {}, Expected_reward: {}" .format(test_reward, pred_reward))
#Plot rewards
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(np.arange(0,len(test_data)-1,1),reward_list, c='b')
plt.title('Model test reward')
plt.ylabel('Return %')
plt.xlabel('Days')
plt.show()
index = -1
print(merged.iloc[index]['T'])
state = merged.iloc[index][observation_cols]
distances = manhattan_distances(train_data[observation_cols], [state])
most_similar_index = distances.argmin()
date = train_data.iloc[most_similar_index]['T']
act_vals = select_act(env, date)
action = np.argmax(act_vals)
print("Predicted rewards",act_vals)
print("Best action",action)
#Actions : 0. eth, 1. xrp, 2. ltc, 3. xlm, 4. usd, 5. btc
index = -6
print(merged.iloc[index]['T'])
state = merged.iloc[index][observation_cols]
distances = manhattan_distances(train_data[observation_cols], [state])
distances = mscaler.fit_transform(distances)
most_similar_index = distances.argmin()
mask = (distances[most_similar_index] + 0.05 > distances)
dates = train_data[mask]['T']
total_array = np.asarray([0.0,0.0,0.0,0.0,0.0,0.0])
for date in dates:
act_vals = select_act(env, date)
total_array += act_vals
total_array /= len(dates)
pred_r = np.max(total_array)
action = np.argmax(total_array)
print("Predicted rewards",total_array)
print("Predicted best reward",pred_r)
print("Best action",action)
#Actions : 0. eth, 1. xrp, 2. ltc, 3. xlm, 4. usd, 5. btc
index = -5
print(eth_reward.iloc[index]['T'])
r1 = eth_reward.iloc[index]['Reward'].item()
r2 = xrp_reward.iloc[index]['Reward'].item()
r3 = ltc_reward.iloc[index]['Reward'].item()
r4 = xlm_reward.iloc[index]['Reward'].item()
r5 = usd_reward.iloc[index]['Reward'].item()
print("eth: {}, xrp: {}, ltc: {}, xlm: {}, usd: {}, btc: {}" .format(r1, r2, r3, r4, r5, 0))
###Output
2019-05-13
eth: -3.07079807693027, xrp: -1.17298713130621, ltc: -2.05560936161441, xlm: -3.8831492696829395, usd: -4.81129072663069, btc: 0
###Markdown
We have missing data, so we need to clean it. From analyzing the data: if the type is a movie and the number of episodes is unknown, then we can put 1. OVAs (Original Video Animations) are generally one- or two-episode-long animes, so I've decided to fill their unknown numbers of episodes with 1 as well. For all the other animes with an unknown number of episodes, I've filled the unknown values with the median.
###Code
anime.loc[(anime["type"]=="OVA") & (anime["episodes"]=="Unknown"),"episodes"] = "1"
anime.loc[(anime["type"] == "Movie") & (anime["episodes"] == "Unknown")] = "1"
anime["episodes"] = anime["episodes"].map(lambda x:np.nan if x=="Unknown" else x)
anime["episodes"].fillna(anime["episodes"].median(),inplace = True)
anime["rating"] = anime["rating"].astype(float)
anime["rating"].fillna(anime["rating"].median(),inplace = True)
anime_features = pd.concat([anime["genre"].str.get_dummies(sep=","),
pd.get_dummies(anime[["type"]]),
anime[["rating"]],anime["episodes"]],axis=1)
# you can see the features by using anime_features.columns
#I used MinMaxScaler from scikit-learn as it scales the values from 0–1.
min_max_scaler = MinMaxScaler()
anime_features = min_max_scaler.fit_transform(anime_features)
np.round(anime_features,2)
# number 2 in round means two decimal points
###Output
_____no_output_____
###Markdown
The scaling function (MinMaxScaler) returns a numpy array containing the features. Then we fit the KNN model from scikit-learn to the data and calculate the nearest neighbors and their distances for every item. In this case I've used the unsupervised NearestNeighbors method for implementing neighbor searches.
###Code
nbrs = NearestNeighbors(n_neighbors=20, algorithm='ball_tree').fit(anime_features)
distances, indices = nbrs.kneighbors(anime_features)
# Returns the index of the anime if (given the full name)
def get_index_from_name(name):
return anime[anime["name"]==name].index.tolist()[0]
all_anime_names = list(anime.name.values)
# Prints the top K similar animes after querying
def print_similar_animes(query=None):
if query:
found_id = get_index_from_name(query)
        for id in indices[found_id][1:]:
            print(anime.iloc[id]["name"])
print("Start of KNN Recommendation")
pred=print_similar_animes(query="Naruto")
###Output
Start of KNN Recommendation
Naruto: Shippuuden
Katekyo Hitman Reborn!
Dragon Ball Z
Dragon Ball Kai
Bleach
Dragon Ball Kai (2014)
Shijou Saikyou no Deshi Kenichi
Rekka no Honoo
Sakigake!! Otokojuku
Medaka Box Abnormal
Kenyuu Densetsu Yaiba
Ben-To
Boruto: Naruto the Movie - Naruto ga Hokage ni Natta Hi
Kurokami The Animation
Boruto: Naruto the Movie
Naruto x UT
Naruto: Shippuuden Movie 4 - The Lost Tower
Naruto: Shippuuden Movie 3 - Hi no Ishi wo Tsugu Mono
Virtua Fighter
###Markdown
loading another dataset
###Code
r_cols = ['user_id', 'item_id', 'rating']
ratings = pd.read_csv('my-data/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
###Output
_____no_output_____
###Markdown
Now, we'll group everything by movie ID(item_id), and compute the total number of ratings (each movie's popularity) and the average rating for every movie. The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
###Code
movieProperties = ratings.groupby('item_id').agg({'rating': [np.size, np.mean]})
print(movieProperties.head())
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
###Output
rating
size mean
item_id
1 452 3.878319
2 131 3.206107
3 90 3.033333
4 209 3.550239
5 86 3.302326
###Markdown
Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it. Then, we'll put together everything into one big Python dictionary called movieDict. Each entry will contain the movie name, list of genre values, the normalized popularity score, and the average rating for each movie.
###Code
movieDict = {}
with open('my-data/u.item') as f:
temp = ''
for line in f:
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, genres, movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
# For example, here's the record we end up with for movie ID 1, (Toy Story)
movieDict[1]
# you can change the number of movieDict[num]
###Output
_____no_output_____
###Markdown
Now, let's create a function that computes the (distance) between two movies based on how similar their genres are, and how similar their popularity is.
###Code
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.cosine(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
# For example, here we compute the distance between two movies (movie id 1 and movie id 4)
print(ComputeDistance(movieDict[1], movieDict[4]))
# you can compute any other movies by changing the movieDict[number]
print(movieDict[1])
print(movieDict[4])
###Output
1.08419243986
('Toy Story (1995)', [0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 0.77491408934707906, 3.8783185840707963)
('Get Shorty (1995)', [0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 0.35738831615120276, 3.5502392344497609)
###Markdown
Now, let's compute the distance between a given test movie (Toy Story, in this example) and all of the movies in our data set, then sort them by distance and print out the K nearest neighbors.
###Code
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distances[x][0])
return neighbors
K = 10
avgRating=0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
print (movieDict[neighbor][0])
# we can print the average rating also by using the print bellow
#print movieDict[neighbor][0] + " " + str(movieDict[neighbor][3])
avgRating /= float(K)
###Output
Liar Liar (1997)
Aladdin (1992)
Willy Wonka and the Chocolate Factory (1971)
Monty Python and the Holy Grail (1974)
Full Monty, The (1997)
George of the Jungle (1997)
Beavis and Butt-head Do America (1996)
Birdcage, The (1996)
Home Alone (1990)
Aladdin and the King of Thieves (1996)
###Markdown
K-nearest neighbors (KNN)
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split,KFold
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,\
recall_score,roc_curve,auc
#import expectation_reflection as ER
#from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from function import split_train_test,make_data_balance
np.random.seed(1)
###Output
_____no_output_____
###Markdown
First of all, the processed data are imported.
###Code
#data_list = ['1paradox']
#data_list = np.loadtxt('data_list.txt',dtype='str')
data_list = np.loadtxt('data_list_30sets.txt',dtype='str')
#data_list = ['9coag']
print(data_list)
def read_data(data_id):
data_name = data_list[data_id]
print('data_name:',data_name)
Xy = np.loadtxt('../classification_data/%s/data_processed_median.dat'%data_name)
X = Xy[:,:-1]
y = Xy[:,-1]
#print(np.unique(y,return_counts=True))
X,y = make_data_balance(X,y)
print(np.unique(y,return_counts=True))
X, y = shuffle(X, y, random_state=1)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state = 1)
sc = MinMaxScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
return X_train,X_test,y_train,y_test
def measure_performance(X_train,X_test,y_train,y_test):
model = KNeighborsClassifier(algorithm='auto')
n_neighbors = [3,5,7,9,11,13,15,17]
if len(y_train) <= 10:
n_neighbors = [2,3,4,5,6,7]
weights = ['uniform','distance']
leaf_size = np.linspace(1,10,num=10)
# Create hyperparameter options
hyper_parameters = dict(n_neighbors=n_neighbors,
weights=weights,
leaf_size=leaf_size)
# Create grid search using cross validation
clf = GridSearchCV(model, hyper_parameters, cv=4, iid='deprecated')
# Fit grid search
best_model = clf.fit(X_train, y_train)
# View best hyperparameters
#print('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])
#print('Best C:', best_model.best_estimator_.get_params()['C'])
#print('Best alpha:', best_model.best_estimator_.get_params()['alpha'])
#print('Best l1_ratio:', best_model.best_estimator_.get_params()['l1_ratio'])
# best hyper parameters
print('best_hyper_parameters:',best_model.best_params_)
# performance:
y_test_pred = best_model.best_estimator_.predict(X_test)
acc = accuracy_score(y_test,y_test_pred)
#print('Accuracy:', acc)
p_test_pred = best_model.best_estimator_.predict_proba(X_test) # prob of [0,1]
p_test_pred = p_test_pred[:,1] # prob of 1
fp,tp,thresholds = roc_curve(y_test, p_test_pred, drop_intermediate=False)
roc_auc = auc(fp,tp)
#print('AUC:', roc_auc)
precision = precision_score(y_test,y_test_pred)
#print('Precision:',precision)
recall = recall_score(y_test,y_test_pred)
#print('Recall:',recall)
f1_score = 2*precision*recall/(precision+recall)
return acc,roc_auc,precision,recall,f1_score
n_data = len(data_list)
roc_auc = np.zeros(n_data) ; acc = np.zeros(n_data)
precision = np.zeros(n_data) ; recall = np.zeros(n_data)
f1_score = np.zeros(n_data)
#data_id = 0
for data_id in range(n_data):
X_train,X_test,y_train,y_test = read_data(data_id)
acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id] =\
measure_performance(X_train,X_test,y_train,y_test)
print(data_id,acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id],f1_score[data_id])
print('acc_mean:',acc.mean())
print('roc_mean:',roc_auc.mean())
print('precision:',precision.mean())
print('recall:',recall.mean())
print('f1_score:',f1_score.mean())
np.savetxt('result_KNN_median.dat',(roc_auc,acc,precision,recall,f1_score),fmt='%f')
###Output
_____no_output_____
###Markdown
Importing all the important libraries
###Code
import numpy as np # linear algebra
from numpy import nan
import pandas as pd # read dataframes
import matplotlib.pyplot as plt # visualization
import seaborn as sns # statistical visualizations
import sklearn
%matplotlib inline
#importing label encoder
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
#libraries to handle imbalanced data
from imblearn.combine import SMOTETomek
from imblearn.under_sampling import NearMiss
#libraries to split data into test and train
from sklearn.model_selection import train_test_split
#library to implement KNN
from sklearn.neighbors import KNeighborsClassifier
#Evaluation libraries
from sklearn.metrics import classification_report,confusion_matrix
from sklearn.model_selection import cross_val_score
# importing the dataset
df = pd.read_csv('adult.csv')
###Output
_____no_output_____
###Markdown
Data Dictionary
1. Categorical Attributes
 - Individual work category - workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
 - Individual's highest education degree - education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
 - Individual marital status - marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
 - Individual's occupation - occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
 - Individual's relation in a family - relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
 - Race of individual - race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
 - Sex of individual - sex: Female, Male.
 - Individual's native country - native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
2. Continuous Attributes
 - Age of an individual - age: continuous.
 - fnlwgt: final weight, continuous. The weights on the CPS files are controlled to independent estimates of the civilian noninstitutional population of the US; these are prepared monthly by the Population Division at the Census Bureau.
 - capital-gain: continuous.
 - capital-loss: continuous.
 - Individual's working hours per week - hours-per-week: continuous.
Exploring the data set
###Code
# exploring the dataframe
df.head(5)
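# Illustrative aside (not in the original notebook): the data dictionary above separates
# categorical and continuous attributes; the same split can be read off the column dtypes.
categorical_cols = df.select_dtypes(include='object').columns.tolist()
continuous_cols = df.select_dtypes(include='number').columns.tolist()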
# income is our target variable, hence mapping the income classes into binary (0 & 1)
df['income'] = df['income'].map({'<=50K': 0, '>50K': 1, '<=50K.': 0, '>50K.': 1})
df.head()
###Output
_____no_output_____
###Markdown
Data cleaning
###Code
# we can observe that some missing values are encoded as '?'
# we can replace '?' with NaN
df=df.replace("?",nan)
df.isnull().sum()
# % missing values
round(100*(df.isnull().sum()/len(df.index)), 2)
df["occupation"].unique()
df["workclass"].unique()
df["native-country"].unique()
# we can use the mode to fill the missing values, as the missing percentage is very low
df['native-country'].fillna(df['native-country'].mode()[0], inplace=True)
df['workclass'].fillna(df['workclass'].mode()[0], inplace=True)
df['occupation'].fillna(df['occupation'].mode()[0], inplace=True)
# % missing values
round(100*(df.isnull().sum()/len(df.index)), 2)
###Output
_____no_output_____
###Markdown
Summary
###Code
df.info()
# statistical summary
df.describe()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 48842 entries, 0 to 48841
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 age 48842 non-null int64
1 workclass 48842 non-null object
2 fnlwgt 48842 non-null int64
3 education 48842 non-null object
4 educational-num 48842 non-null int64
5 marital-status 48842 non-null object
6 occupation 48842 non-null object
7 relationship 48842 non-null object
8 race 48842 non-null object
9 gender 48842 non-null object
10 capital-gain 48842 non-null int64
11 capital-loss 48842 non-null int64
12 hours-per-week 48842 non-null int64
13 native-country 48842 non-null object
14 income 48842 non-null int64
dtypes: int64(7), object(8)
memory usage: 5.6+ MB
###Markdown
Exploratory data analysis
###Code
sns.pairplot(df)
df['age'].hist(figsize=(8,8))
plt.show()
###Output
_____no_output_____
###Markdown
- age is not evenly distributed; there are some outliers in the age groups above 70 and below 20
###Code
df['workclass'].hist(figsize=(26,10))
plt.show()
###Output
_____no_output_____
###Markdown
- Most of the people work in the private sector
###Code
df['hours-per-week'].hist(figsize=(8,8))
plt.show()
###Output
_____no_output_____
###Markdown
- Most people work 30-40 hours per week. However, there are outliers: some people work 80-100 hours and some work less than 20
###Code
fig = plt.figure(figsize=(10,10))
sns.boxplot(x="income", y="age", data=df)
plt.show()
###Output
_____no_output_____
###Markdown
- for income >50K the age group is 35-52 years
- for income <=50K the age group is 25-45 years
###Code
fig = plt.figure(figsize=(12,12))
ax = sns.countplot(x="workclass", hue="income", data=df).set_title("workclass vs count")
###Output
_____no_output_____
###Markdown
- people earning less than 50K outnumber those earning more than 50K
###Code
sns.catplot(y="education", hue="income", kind="count",
palette="pastel", edgecolor=".7",
data=df);
###Output
_____no_output_____
###Markdown
- most people have an education level of HS-grad (high school)
###Code
sns.catplot(y="marital-status", hue="gender", col="income",
data=df, kind="count",
height=4, aspect=.7);
###Output
_____no_output_____
###Markdown
- The Married-civ-spouse marital status group contains the most people with income above 50K
###Code
sns.countplot(y="occupation", hue="income",
data=df);
###Output
_____no_output_____
###Markdown
- Most of the people with income above 50K have either Prof-specialty or Exec-managerial as their occupation
###Code
plt.figure(figsize=(20,7))
sns.catplot(y="race", hue="income", kind="count",col="gender", data=df);
###Output
_____no_output_____
###Markdown
- males with race White account for most of the people with income above 50K
###Code
# plotting heatmap for checking correlation
sns.heatmap(df.corr())
###Output
_____no_output_____
###Markdown
Data processing
###Code
# educational-num and fnlwgt are not important for our analysis, so we can remove them
df=df.drop(['educational-num','fnlwgt'],axis=1)
# removing outliers
# the min. and max. age show there are outliers; similarly there are outliers in hours-per-week
# keeping the age interval (20, 60) and the hours-per-week interval (20, 80)
df=df[(df["age"] < 60)]
df=df[(df["age"] > 20)]
df=df[(df["hours-per-week"] < 80)]
df=df[(df["hours-per-week"] > 20)]
df.describe()
###Output
_____no_output_____
###Markdown
Labelling the data
###Code
#label encoder
df = df.apply(le.fit_transform)
X=df.drop(["income"],axis=1)
y=df["income"]
# checking for data imbalance
df["income"].value_counts()
# using SMOTETomek (combined over- and under-sampling) to handle the imbalanced data
smk = SMOTETomek(random_state=42)
X_res,y_res=smk.fit_sample(X,y)
print(X_res.shape,y_res.shape)
df.head(10)
###Output
_____no_output_____
###Markdown
Split Train and Test data
###Code
#splitting the data into test and train for evaluation
# taking the test data as 30% and train data as 70%
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
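# note: the split above uses the original (imbalanced) X and y; the resampled
# X_res, y_res produced by SMOTETomek are not used by the models below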
###Output
_____no_output_____
###Markdown
Implementing KNN
###Code
#implementing basic knn for k=1
knn = KNeighborsClassifier(n_neighbors=1)
#applying knn on training data
knn.fit(X_train,y_train)
#predicting on test data
pred = knn.predict(X_test)
###Output
_____no_output_____
###Markdown
Prediction and validation
###Code
# checking the confusion matrix
print(confusion_matrix(y_test,pred))
# evaluation parameters
print(classification_report(y_test,pred))
###Output
precision recall f1-score support
0 0.86 0.84 0.85 8513
1 0.58 0.62 0.60 3073
accuracy 0.78 11586
macro avg 0.72 0.73 0.72 11586
weighted avg 0.79 0.78 0.78 11586
###Markdown
Choosing value of K
###Code
# finding the appropriate value of K using cross validation
accuracy_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
score=cross_val_score(knn,X,df['income'],cv=10)
accuracy_rate.append(score.mean())
# accuracy vs K_value for identifying the appropriate value of K
plt.figure(figsize=(10,6))
plt.plot(range(1,40),accuracy_rate,color='blue', marker='.',
markerfacecolor='red', markersize=10)
plt.title('accuracy_rate vs. K Value')
plt.xlabel('K')
plt.ylabel('accuracy_rate')
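# Illustrative addition (not in the original notebook): instead of reading the best k off
# the plot, we can pick it programmatically from the cross-validated accuracies.
best_k = int(np.argmax(accuracy_rate)) + 1  # +1 because k started at 1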
###Output
_____no_output_____
###Markdown
- Highest value of accuracy is at k = 18
###Code
# implementing KNN with value of K as 18
knn = KNeighborsClassifier(n_neighbors=18)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=18')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
###Output
WITH K=18
[[7714 799]
[1414 1659]]
precision recall f1-score support
0 0.85 0.91 0.87 8513
1 0.67 0.54 0.60 3073
accuracy 0.81 11586
macro avg 0.76 0.72 0.74 11586
weighted avg 0.80 0.81 0.80 11586
###Markdown
This notebook runs k-NN over 4 problem sets across 5 trials. Table 2 and Table 3 values are recorded at each iteration of the for loop.
Datasets
ADULT
###Code
import csv
import string
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
ADULT_data = pd.read_csv('adult.data.csv', names = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation',
'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week',
'native-country', 'target_income'
])
ADULT_data['target_income'] = ADULT_data['target_income'].str.strip()
ADULT_data['target_income'] = ADULT_data.target_income.map( {'<=50K':0 , '>50K':1} )
ADULT_one_hot_data = pd.get_dummies(ADULT_data,
columns = ['workclass', 'education', 'marital-status',
'occupation', 'relationship', 'race', 'sex', 'native-country'],
prefix = ['workclass', 'education', 'marital-status',
'occupation', 'relationship', 'race', 'sex', 'native-country'] )
ADULT_one_hot_data = ADULT_one_hot_data.drop(['workclass_ ?',
'occupation_ ?', 'native-country_ ?'], axis=1)
ADULT_one_hot_data[['age', 'fnlwgt', 'education-num', 'capital-gain',
'capital-loss', 'hours-per-week']] = StandardScaler().fit_transform(ADULT_one_hot_data[['age', 'fnlwgt', 'education-num', 'capital-gain',
'capital-loss', 'hours-per-week']])
ADULT_one_hot_data
###Output
_____no_output_____
###Markdown
Balance of dataset
###Code
positive_labels = ADULT_data['target_income'].value_counts()[1]/ ADULT_data['target_income'].count() * 100
negative_labels = ADULT_data['target_income'].value_counts()[0]/ ADULT_data['target_income'].count() * 100
print("% of negative labels:", negative_labels)
print("% of positive labels:", positive_labels)
###Output
% of negative labels: 75.91904425539757
% of positive labels: 24.080955744602438
###Markdown
Unbalanced dataset
COV_type data
###Code
COV_type_data = pd.read_csv('covtype.data.gz', header = None)
cols = [c for c in COV_type_data.columns]
cols[-1] = 'forest_cover'
COV_type_data.columns = cols
largest_class = COV_type_data['forest_cover'].value_counts().idxmax()
COV_type_data.loc[COV_type_data['forest_cover'] != largest_class, 'forest_cover'] = 0
COV_type_data.loc[COV_type_data['forest_cover'] == largest_class, 'forest_cover'] = 1
COV_type_data.iloc[:, :-1] = StandardScaler().fit_transform(COV_type_data.iloc[:, :-1])
COV_type_data
###Output
_____no_output_____
###Markdown
Balance of dataset
Treat the largest class as the positive class; the rest are negative.
###Code
# positive_labels = len(COV_type_data[COV_type_data['Forest cover'] == 7])/len(COV_type_data['Forest cover']) * 100
positive_labels = COV_type_data['forest_cover'].value_counts().max()/len(COV_type_data['forest_cover']) * 100
negative_labels = len(COV_type_data[COV_type_data['forest_cover'] != COV_type_data['forest_cover'].value_counts().idxmax()])/len(COV_type_data['forest_cover']) * 100
print("% of negative labels:", negative_labels)
print("% of positive labels:", positive_labels)
###Output
% of negative labels: 48.75992234239568
% of positive labels: 51.240077657604324
###Markdown
LETTER
###Code
LETTER_p1 = pd.read_csv('letter-recognition.data', header = None)
cols = [c for c in LETTER_p1.columns]
cols[0] = 'letter'
LETTER_p1.columns = cols
LETTER_p1.loc[:, LETTER_p1.columns != 'letter'] = StandardScaler().fit_transform(LETTER_p1.loc[:, LETTER_p1.columns != 'letter'])
LETTER_p1
###Output
_____no_output_____
###Markdown
Letter.p1 - treat O as positive class, rest as negative
Unbalanced dataset
###Code
O_list = ['O']
LETTER_p1.loc[~LETTER_p1['letter'].isin(O_list), 'letter'] = 0
LETTER_p1.loc[LETTER_p1['letter'].isin(O_list), 'letter'] = 1
LETTER_p1['letter'].value_counts()
positive_labels = len(LETTER_p1[LETTER_p1['letter'] == 1])/len(LETTER_p1['letter']) * 100
negative_labels = len(LETTER_p1[LETTER_p1['letter'] == 0])/len(LETTER_p1['letter']) * 100
print("% of negative labels:", negative_labels)
print("% of positive labels:", positive_labels)
###Output
% of negative labels: 96.235
% of positive labels: 3.765
###Markdown
Letter.p2 - treat A-M as positive class, rest as negative
###Code
LETTER_p2 = pd.read_csv('letter-recognition.data', header = None)
cols = [c for c in LETTER_p2.columns]
cols[0] = 'letter'
LETTER_p2.columns = cols
LETTER_p2.loc[:, LETTER_p2.columns != 'letter'] = StandardScaler().fit_transform(LETTER_p2.loc[:, LETTER_p2.columns != 'letter'])
pos_alphabet_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M']
neg_alphabet_list = sorted(list(set(string.ascii_uppercase) - set(pos_alphabet_list)))
LETTER_p2.loc[LETTER_p2['letter'].isin(pos_alphabet_list), 'letter'] = 1
LETTER_p2.loc[LETTER_p2['letter'].isin(neg_alphabet_list), 'letter'] = 0
LETTER_p1["letter"] = LETTER_p1["letter"].astype(str).astype(int)
LETTER_p2["letter"] = LETTER_p2["letter"].astype(str).astype(int)
LETTER_p2
###Output
_____no_output_____
###Markdown
Well-balanced dataset
###Code
pos_alphabet_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M']
neg_alphabet_list = sorted(list(set(string.ascii_uppercase) - set(pos_alphabet_list)))
positive_labels = len(LETTER_p2[LETTER_p2['letter'] == 1])/len(LETTER_p2['letter']) * 100
negative_labels = len(LETTER_p2[LETTER_p2['letter'] == 0])/len(LETTER_p2['letter']) * 100
print("% of negative labels:", negative_labels)
print("% of positive labels:", positive_labels)
###Output
% of negative labels: 50.3
% of positive labels: 49.7
###Markdown
Experiment - KNN over 4 datasets over 5 trials
###Code
def split_data(data, column):
Y = data[column]
X = data.drop([column], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, Y, train_size=5000)
return X_train, X_test, y_train, y_test
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import f1_score, accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
accuracy_metric = []
f1_score_metric = []
roc_auc_score_metric = []
ADULT_metric = []
COV_type_metric = []
LETTER_p1_metric = []
LETTER_p2_metric = []
datalist = [COV_type_data, ADULT_one_hot_data, LETTER_p1, LETTER_p2]
for ind, data in enumerate(datalist):
for i in range(5):
print('Start of trial', i+1)
# COV_type data
if ind == 0:
print('At COV_type_data')
dataset = 'COV_type_data'
X_train, X_test, y_train, y_test = split_data(data, 'forest_cover')
# ADULT_data
elif ind == 1:
print('At ADULT_data')
dataset = 'ADULT_data'
X_train, X_test, y_train, y_test = split_data(data, 'target_income')
# LETTER.p1 data
if ind == 2:
print('At LETTER_p1')
dataset = 'LETTER_p1'
X_train, X_test, y_train, y_test = split_data(data, 'letter')
# LETTER.p2 data
if ind == 3:
print('At LETTER_p2')
dataset = 'LETTER_p2'
X_train, X_test, y_train, y_test = split_data(data, 'letter')
pipe = Pipeline([('classifier', KNeighborsClassifier())
])
search_space = [
{ 'classifier': [KNeighborsClassifier(p = 2)],
'classifier__n_neighbors': list(range(1,102,4)),
'classifier__metric': ['euclidean'],
'classifier__weights': ['uniform', 'distance']
}]
# Create grid search
clf = GridSearchCV(pipe, search_space, cv=StratifiedKFold(n_splits=5),
scoring=['accuracy', 'roc_auc', 'f1'], refit='accuracy',
verbose=0, n_jobs = -1)
# Fit grid search
best_model = clf.fit(X_train, y_train)
# Get best hyperparameters for accuracy, roc_auc score, f1_score
best_acc_param = best_model.cv_results_['params'][ np.argmin(best_model.cv_results_['rank_test_accuracy'])]
best_auc_param = best_model.cv_results_['params'][np.argmin(best_model.cv_results_['rank_test_roc_auc'])]
best_f1_param = best_model.cv_results_['params'][np.argmin(best_model.cv_results_['rank_test_f1'])]
# Train 3 models using the 5000 samples and each of the 3 best parameter settings (one model per metric)
# Tuned for accuracy
acc_model = best_acc_param['classifier'].fit(X_train, y_train)
# Tuned for roc-auc score
auc_model = best_auc_param['classifier'].fit(X_train, y_train)
# Tuned for f1 score
f1_model = best_f1_param['classifier'].fit(X_train, y_train)
# fit a classifier using that best param on the training set,
# predict the training set, and record the corresponding training set metric for the appendix tables
# On Training data
y_pred_acc_tr = acc_model.predict(X_train)
y_pred_auc_tr = auc_model.predict(X_train)
y_pred_f1_tr = f1_model.predict(X_train)
print('Trial ', i+1, ' raw training scores for', dataset)
# Raw train accuracy score
print(accuracy_score(y_train, y_pred_acc_tr))
# Raw train roc_auc score
print(roc_auc_score(y_train, y_pred_auc_tr))
# Raw train f1_score
print(f1_score(y_train, y_pred_f1_tr))
# On Test data
y_pred_acc = acc_model.predict(X_test)
y_pred_auc = auc_model.predict(X_test)
y_pred_f1 = f1_model.predict(X_test)
print('Trial ', i+1, ' raw test scores for', dataset)
# Raw test accuracy score
print(accuracy_score(y_test, y_pred_acc))
# Raw test roc_auc score
print(roc_auc_score(y_test, y_pred_auc))
# Raw test f1_score
print(f1_score(y_test, y_pred_f1))
# Append raw test scores to list to generate Table 2 values
accuracy_metric.append(accuracy_score(y_test, y_pred_acc))
roc_auc_score_metric.append(roc_auc_score(y_test, y_pred_auc))
f1_score_metric.append(f1_score(y_test, y_pred_f1))
# For Table 3
if ind == 0:
COV_type_metric.extend([accuracy_score(y_test, y_pred_acc), roc_auc_score(y_test, y_pred_auc),
f1_score(y_test, y_pred_f1)])
elif ind == 1:
ADULT_metric.extend([accuracy_score(y_test, y_pred_acc), roc_auc_score(y_test, y_pred_auc),
f1_score(y_test, y_pred_f1)])
elif ind == 2:
LETTER_p1_metric.extend([accuracy_score(y_test, y_pred_acc), roc_auc_score(y_test, y_pred_auc),
f1_score(y_test, y_pred_f1)])
elif ind == 3:
LETTER_p2_metric.extend([accuracy_score(y_test, y_pred_acc), roc_auc_score(y_test, y_pred_auc),
f1_score(y_test, y_pred_f1)])
print("End of Trial", i+1)
print('------------------------------------------')
print()
###Output
Start of trial 1
At COV_type_data
Trial 1 raw training scores for COV_type_data
1.0
1.0
1.0
Trial 1 raw test scores for COV_type_data
0.7773657493246668
0.7776871484457607
0.7759436434666537
End of Trial 1
------------------------------------------
Start of trial 2
At COV_type_data
Trial 2 raw training scores for COV_type_data
1.0
1.0
1.0
Trial 2 raw test scores for COV_type_data
0.7752616264938925
0.7752553645210009
0.7708221947618323
End of Trial 2
------------------------------------------
Start of trial 3
At COV_type_data
Trial 3 raw training scores for COV_type_data
1.0
1.0
1.0
Trial 3 raw test scores for COV_type_data
0.7866433338194343
0.7869815425421214
0.7853779845481091
End of Trial 3
------------------------------------------
Start of trial 4
At COV_type_data
Trial 4 raw training scores for COV_type_data
1.0
1.0
1.0
Trial 4 raw test scores for COV_type_data
0.7746661527884836
0.7754884432741864
0.7776831914824376
End of Trial 4
------------------------------------------
Start of trial 5
At COV_type_data
Trial 5 raw training scores for COV_type_data
1.0
1.0
1.0
Trial 5 raw test scores for COV_type_data
0.7766747914973994
0.7773724385257351
0.7785958933581179
End of Trial 5
------------------------------------------
Start of trial 1
At ADULT_data
Trial 1 raw training scores for ADULT_data
1.0
1.0
1.0
Trial 1 raw test scores for ADULT_data
0.8377780196654693
0.7423299210968046
0.6227322588811071
End of Trial 1
------------------------------------------
Start of trial 2
At ADULT_data
Trial 2 raw training scores for ADULT_data
1.0
1.0
1.0
Trial 2 raw test scores for ADULT_data
0.8377054533580058
0.7439800499855918
0.6255963840294635
End of Trial 2
------------------------------------------
Start of trial 3
At ADULT_data
Trial 3 raw training scores for ADULT_data
0.838
0.7463818691923787
0.632486388384755
Trial 3 raw test scores for ADULT_data
0.8369797902833714
0.7435189713183991
0.6239223235958817
End of Trial 3
------------------------------------------
Start of trial 4
At ADULT_data
Trial 4 raw training scores for ADULT_data
1.0
1.0
1.0
Trial 4 raw test scores for ADULT_data
0.8361452777475418
0.7421940897545529
0.623603933988998
End of Trial 4
------------------------------------------
Start of trial 5
At ADULT_data
Trial 5 raw training scores for ADULT_data
1.0
1.0
1.0
Trial 5 raw test scores for ADULT_data
0.8348028010594681
0.7370038986558887
0.615813011560206
End of Trial 5
------------------------------------------
Start of trial 1
At LETTER_p1
Trial 1 raw training scores for LETTER_p1
1.0
1.0
1.0
Trial 1 raw test scores for LETTER_p1
0.9893333333333333
0.9359170166730304
0.8646362098138749
End of Trial 1
------------------------------------------
Start of trial 2
At LETTER_p1
Trial 2 raw training scores for LETTER_p1
1.0
1.0
1.0
Trial 2 raw test scores for LETTER_p1
0.9900666666666667
0.9549933110680774
0.8723221936589546
End of Trial 2
------------------------------------------
Start of trial 3
At LETTER_p1
Trial 3 raw training scores for LETTER_p1
1.0
1.0
1.0
Trial 3 raw test scores for LETTER_p1
0.9898666666666667
0.9507199928598129
0.8735440931780367
End of Trial 3
------------------------------------------
Start of trial 4
At LETTER_p1
Trial 4 raw training scores for LETTER_p1
1.0
1.0
1.0
Trial 4 raw test scores for LETTER_p1
0.9913333333333333
0.9323411783247382
0.8826714801444042
End of Trial 4
------------------------------------------
Start of trial 5
At LETTER_p1
Trial 5 raw training scores for LETTER_p1
1.0
1.0
1.0
Trial 5 raw test scores for LETTER_p1
0.9888666666666667
0.9237303792098314
0.8480436760691538
End of Trial 5
------------------------------------------
Start of trial 1
At LETTER_p2
Trial 1 raw training scores for LETTER_p2
1.0
1.0
1.0
Trial 1 raw test scores for LETTER_p2
0.9556
0.9556005262294175
0.9556414013587319
End of Trial 1
------------------------------------------
Start of trial 2
At LETTER_p2
Trial 2 raw training scores for LETTER_p2
1.0
1.0
1.0
Trial 2 raw test scores for LETTER_p2
0.9488666666666666
0.9488722922252363
0.9487813021702838
End of Trial 2
------------------------------------------
Start of trial 3
At LETTER_p2
Trial 3 raw training scores for LETTER_p2
1.0
1.0
1.0
Trial 3 raw test scores for LETTER_p2
0.9537333333333333
0.9537293026414745
0.9532974427994616
End of Trial 3
------------------------------------------
Start of trial 4
At LETTER_p2
Trial 4 raw training scores for LETTER_p2
1.0
1.0
1.0
Trial 4 raw test scores for LETTER_p2
0.9532666666666667
0.9532509464739012
0.9527213866594726
End of Trial 4
------------------------------------------
Start of trial 5
At LETTER_p2
Trial 5 raw training scores for LETTER_p2
1.0
1.0
1.0
Trial 5 raw test scores for LETTER_p2
0.9518
0.9518224963519245
0.9517259798357482
End of Trial 5
------------------------------------------
###Markdown
For Table 2
###Code
print('Accuracy metric values across all datasets, across 5 trials, for KNN:')
print(accuracy_metric)
print()
print()
print('F-score metric values across all datasets, across 5 trials, for KNN:')
print(f1_score_metric)
print()
print('ROC_AUC metric values across all datasets, across 5 trials, for KNN:')
print(roc_auc_score_metric)
print()
print('Average scores for each metric: ')
print('ACC:', sum(accuracy_metric)/len(accuracy_metric))
print('FSC:', sum(f1_score_metric)/len(f1_score_metric))
print('ROC_AUC:', sum(roc_auc_score_metric)/len(roc_auc_score_metric))
with open('Table_2_p_test', 'a') as f:
# using csv.writer method from CSV package
write = csv.writer(f)
write.writerow(accuracy_metric)
write.writerow(f1_score_metric)
write.writerow(roc_auc_score_metric)
###Output
_____no_output_____
###Markdown
For Table 3
###Code
print('COV_type')
print('Metric values across 5 trials, for KNN:')
print(COV_type_metric)
print()
print('ADULT')
print('Metric values across 5 trials, for KNN:')
print(ADULT_metric)
print()
print('LETTER.p1')
print('Metric values across 5 trials, for KNN:')
print(LETTER_p1_metric)
print()
print('LETTER.p2')
print('Metric values across 5 trials, for KNN:')
print(LETTER_p2_metric)
###Output
COV_type
Metric values across 5 trials, for KNN:
[0.7773657493246668, 0.7776871484457607, 0.7759436434666537, 0.7752616264938925, 0.7752553645210009, 0.7708221947618323, 0.7866433338194343, 0.7869815425421214, 0.7853779845481091, 0.7746661527884836, 0.7754884432741864, 0.7776831914824376, 0.7766747914973994, 0.7773724385257351, 0.7785958933581179]
ADULT
Metric values across 5 trials, for KNN:
[0.8377780196654693, 0.7423299210968046, 0.6227322588811071, 0.8377054533580058, 0.7439800499855918, 0.6255963840294635, 0.8369797902833714, 0.7435189713183991, 0.6239223235958817, 0.8361452777475418, 0.7421940897545529, 0.623603933988998, 0.8348028010594681, 0.7370038986558887, 0.615813011560206]
LETTER.p1
Metric values across 5 trials, for KNN:
[0.9893333333333333, 0.9359170166730304, 0.8646362098138749, 0.9900666666666667, 0.9549933110680774, 0.8723221936589546, 0.9898666666666667, 0.9507199928598129, 0.8735440931780367, 0.9913333333333333, 0.9323411783247382, 0.8826714801444042, 0.9888666666666667, 0.9237303792098314, 0.8480436760691538]
LETTER.p2
Metric values across 5 trials, for KNN:
[0.9556, 0.9556005262294175, 0.9556414013587319, 0.9488666666666666, 0.9488722922252363, 0.9487813021702838, 0.9537333333333333, 0.9537293026414745, 0.9532974427994616, 0.9532666666666667, 0.9532509464739012, 0.9527213866594726, 0.9518, 0.9518224963519245, 0.9517259798357482]
###Markdown
Write the 15 values for each metric to a .csv to do p-test comparisons.
###Code
with open('Table_3_p_test', 'a') as f:
# using csv.writer method from CSV package
write = csv.writer(f)
write.writerow(COV_type_metric)
write.writerow(ADULT_metric)
write.writerow(LETTER_p1_metric)
write.writerow(LETTER_p2_metric)
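# Hedged sketch (not part of the original analysis): the rows written above can be compared
# with a paired t-test. For example, comparing the 15 LETTER.p1 values against the 15
# LETTER.p2 values pairs them trial by trial and metric by metric.
from scipy.stats import ttest_rel
t_stat, p_value = ttest_rel(LETTER_p1_metric, LETTER_p2_metric)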
print('Average metric scores for each dataset across 5 trials: ')
print()
print('COV_type:', sum(COV_type_metric)/len(COV_type_metric))
print()
print('ADULT:', sum(ADULT_metric)/len(ADULT_metric))
print()
print('LETTER.p1:', sum(LETTER_p1_metric)/len(LETTER_p1_metric))
print()
print('LETTER.p2:', sum(LETTER_p2_metric)/len(LETTER_p2_metric))
###Output
Average metric scores for each dataset across 5 trials:
COV_type: 0.778121299923322
ADULT: 0.7336070789987166
LETTER.p1: 0.9325590798444385
LETTER.p2: 0.9525806495608212
###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, neighbors
from matplotlib.colors import ListedColormap
def knn_comparison(data, n_neighbors = 15):
'''
This function finds k-NN and plots the data.
'''
X = data[:, :2]
y = data[:,2]
# grid cell size
h = .02
cmap_light = ListedColormap(['#FFAAAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#0000FF'])
# the core classifier: k-NN
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X, y)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
# we create a mesh grid (x_min,y_min) to (x_max y_max) with 0.02 grid spaces
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# we predict the value (either 0 or 1) of each element in the grid
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# xx.ravel() will give a flatten array
# np.c_ : Translates slice objects to concatenation along the second axis.
# > np.c_[np.array([1,2,3]), np.array([4,5,6])]
# > array([[1, 4],
# [2, 5],
# [3, 6]]) (source: np.c_ documentation)
    # convert the output back to the xx shape (we need it to plot the decision boundary)
Z = Z.reshape(xx.shape)
# pcolormesh will plot the (xx,yy) grid with colors according to the values of Z
    # this shows the decision boundary regions
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
    # scatter plot of the given points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
    # defining the scale on both axes
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# set the title
plt.title('K value = '+str(n_neighbors))
plt.show()
###Output
_____no_output_____
###Markdown
Meshgrid explanation
![title](demo_data/meshgrid_image.png)
Please see the StackOverflow explanation of numpy meshgrid for more detail.
###Code
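# Illustrative aside (not from the original notebook): np.meshgrid builds the coordinate
# matrices used for the decision-boundary grid in knn_comparison above. For a tiny grid:
grid_x, grid_y = np.meshgrid(np.arange(0, 3), np.arange(0, 2))
# grid_x is [[0, 1, 2], [0, 1, 2]] and grid_y is [[0, 0, 0], [1, 1, 1]], so
# np.c_[grid_x.ravel(), grid_y.ravel()] lists every (x, y) point of the grid.
grid_points = np.c_[grid_x.ravel(), grid_y.ravel()]  # shape (6, 2)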
data = np.genfromtxt('6.overlap.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data,15)
knn_comparison(data, 30)
knn_comparison(data, 50)
data = np.genfromtxt('1.ushape.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data,15)
knn_comparison(data,30)
data = np.genfromtxt('2.concerticcir1.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data,15)
knn_comparison(data,30)
data = np.genfromtxt('3.concertriccir2.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data, 15)
data = np.genfromtxt('4.linearsep.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data)
data = np.genfromtxt('5.outlier.csv', delimiter=',')
knn_comparison(data,1)
knn_comparison(data,5)
knn_comparison(data)
data = np.genfromtxt('7.xor.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data)
data = np.genfromtxt('8.twospirals.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data)
data = np.genfromtxt('9.random.csv', delimiter=',')
knn_comparison(data, 1)
knn_comparison(data, 5)
knn_comparison(data)
###Output
_____no_output_____
###Markdown
data_urls = ["""https://cl.lingfil.uu.se/~frewa417/english_past_tense.arff""", """https://cl.lingfil.uu.se/~frewa417/german_plural.arff"""]
filenames = [url.split("/")[-1] for url in data_urls]
import urllib.request
for url, fn in zip(data_urls, filenames):
    urllib.request.urlretrieve(url, fn)
###Code
from scipy.io.arff import loadarff
loaded_data_files = [loadarff(fn) for fn in filenames]
import numpy as np
D = dict()
for data in loaded_data_files:
data_points = data[0]
field_names = data[1].names()
assert field_names[0] == 'frequency'
assert field_names[-1] == 'class'
X = list()
y = list()
for point in data_points:
v = [field_names[i]+"_"+point[i].decode("utf-8") for i in range(1, len(point)-1)]
X.extend([v]*int(point[0]))
assert len(v) == len(X[0])
u = [point[-1].decode("utf-8")]
y.extend([u]*int(point[0]))
assert len(u) == len(y[0])
assert len(X) == np.sum(np.asarray([point[0] for point in data_points]))
X_orig = np.asarray(X)
y_orig = np.asarray(y).ravel()
D[data[1].name] = tuple([X_orig, y_orig])
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
y1 = label_encoder.fit_transform(D['plural'][1])
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
y2 = label_encoder.fit_transform(D['past-tense'][1])
from sklearn.preprocessing import OneHotEncoder
feature_encoder = OneHotEncoder()
X1 = feature_encoder.fit_transform(D['plural'][0])
X1_names = feature_encoder.get_feature_names()
from sklearn.preprocessing import OneHotEncoder
feature_encoder = OneHotEncoder()
X2 = feature_encoder.fit_transform(D['past-tense'][0])
X2_names = feature_encoder.get_feature_names()
I1 = np.random.uniform(0, 1, size=X1.shape[0]) < .1
X1 = X1[I1, :]
y1 = y1[I1]
print("X1:", X1.shape, ", y1:", y1.shape)
I2 = np.random.uniform(0, 1, size=X2.shape[0]) < .1
X2 = X2[I2, :]
y2 = y2[I2]
print("X2:", X2.shape, ", y2:", y2.shape)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
knn = KNeighborsClassifier(n_neighbors=4)
from sklearn.model_selection import train_test_split
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, train_size=.7)
knn.fit(X1_train, y1_train)
y1_testp=knn.predict(X1_test)
y1_trainp=knn.predict(X1_train)
print(classification_report(y1_test, y1_testp, target_names=None))
print(classification_report(y1_train, y1_trainp, target_names=None))
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, train_size=.7)
knn = KNeighborsClassifier(n_neighbors=4)
knn.fit(X2_train, y2_train)
y2_testp=knn.predict(X2_test)
y2_trainp=knn.predict(X2_train)
print(classification_report(y2_test, y2_testp, target_names=None))
print(classification_report(y2_train, y2_trainp, target_names=None))
from sklearn.model_selection import cross_validate
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
k_range = range(1, 20)
k_scores=[]
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X1_train, y1_train, cv=5, scoring='accuracy')
k_scores.append(scores.mean())
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-validated accuracy')
plt.show()
k_range = range(1, 20)
k_scores=[]
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X2_test, y2_test, cv=5, scoring='accuracy')
k_scores.append(scores.mean())
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-validated accuracy')
plt.show()
from sklearn.feature_selection import VarianceThreshold
# drop near-constant one-hot features (features with the same value in more than 80% of samples)
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
X1_reduced = sel.fit_transform(X1)
sel.get_support(indices=False)
knn = KNeighborsClassifier(n_neighbors=4)
scores = cross_val_score(knn, X1_train, y1_train, cv=5, scoring='accuracy')
k_scores = [scores.mean()]
plt.plot([4], k_scores, 'o')
plt.xlabel('k=4')
plt.ylabel('Cross-validated accuracy')
plt.show()
import numpy as np
D = dict()
for data in loaded_data_files:
data_points = data[0]
field_names = data[1].names()
assert field_names[0] == 'frequency'
assert field_names[-1] == 'class'
X = list()
y = list()
for point in data_points:
v = [field_names[i]+"_"+point[i].decode("utf-8") for i in range(1, len(point)-1)]
X.extend([v]*int(point[0]))
assert len(v) == len(X[0])
u = [point[-1].decode("utf-8")]
y.extend([u]*int(point[0]))
assert len(u) == len(y[0])
assert len(X) == np.sum(np.asarray([point[0] for point in data_points]))
X_orig = np.asarray(X)
y_orig = np.asarray(y).ravel()
D[data[1].name] = tuple([X_orig, y_orig])
X1 = np.delete(D['plural'][0], [1,2,3], axis=1)
y1 = D['plural'][1]  # the labels are one-dimensional, so there are no columns to delete
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
y1 = label_encoder.fit_transform(y1)
from sklearn.preprocessing import OneHotEncoder
feature_encoder = OneHotEncoder()
X1 = feature_encoder.fit_transform(X1)
X1_names = feature_encoder.get_feature_names()
I1 = np.random.uniform(0, 1, size=X1.shape[0]) < .1
X1 = X1[I1, :]
y1 = y1[I1]
print("X1:", X1.shape, ", y1:", y1.shape)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
knn = KNeighborsClassifier(n_neighbors=4)
from sklearn.model_selection import train_test_split
X1_train, X1_test, y1_train, y1_test = train_test_split(X1, y1, train_size=.7)
knn.fit(X1_train, y1_train)
y1_testp=knn.predict(X1_test)
y1_trainp=knn.predict(X1_train)
print(classification_report(y1_test, y1_testp, target_names=None))
print(classification_report(y1_train, y1_trainp, target_names=None))
###Output
_____no_output_____
###Markdown
An Introduction to the KNN classifier
This is an example of using Amazon SageMaker. SageMaker allows one to build an ML pipeline easily: building, training and deploying ML models is less cumbersome with SageMaker. In this example, I will be using one of Amazon's built-in algorithms (KNN). The purpose of the notebook is to explain the usage of SageMaker, not the modeling aspect. The data used here is the Iris dataset, and the problem is framed as multi-class classification. Amazon SageMaker's k-NN algorithm is index-based: it builds an index over (a sample of) the training data, and at inference time it looks up the k closest training points to a query and predicts by majority vote among their labels.
Libraries used
###Code
import boto3
import re
import pickle
import gzip
import numpy as np
import urllib.request
import json
import os
import io
import sagemaker
import pandas as pd
from sagemaker.predictor import csv_serializer, json_deserializer
from sagemaker.amazon.amazon_estimator import get_image_uri
import sagemaker.amazon.common as smac
from sagemaker import get_execution_role
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
This notebook assumes that you have an AWS account, an IAM user set up, and that you are using an Amazon SageMaker notebook instance. For further reference please see https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks.html
Permissions and environment variables
This notebook was created and tested on an ml.t2.medium notebook instance. Let's start by specifying:
1. The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance, training, and hosting.
2. The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these.
###Code
bucket = 'test-karan-02'
prefix = 'sagemaker_demo_knn'
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Data ingestion
Next, we read the dataset from an online URL into memory for preprocessing prior to training. This processing could be done in situ by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training.
###Code
! wget https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
! wget https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names
iris_df = pd.read_csv("iris.data", header = None)
iris_df.columns = ['sepal_length', 'sepal_width','petal_length','petal_width', 'class']
###Output
_____no_output_____
###Markdown
Data inspection
Once the dataset is imported, it's typical as part of the machine learning process to inspect the data, understand the distributions, and determine what type(s) of preprocessing might be needed.
###Code
iris_df.head()
import seaborn as sns
sns.countplot(x = "class",data=iris_df)
###Output
_____no_output_____
###Markdown
Data conversion
Since algorithms have particular input and output requirements, converting the dataset is also part of the process that a data scientist goes through prior to initiating training. In this particular case, the Amazon SageMaker implementation of k-NN accepts recordIO-wrapped protobuf, whereas the data we have today is a pandas DataFrame in memory. Most of the conversion effort is handled by the Amazon SageMaker Python SDK, imported as sagemaker below.
###Code
iris_df['class'] = pd.Categorical(iris_df['class'])
iris_df['code_class'] = iris_df['class'].cat.codes
vectors = iris_df.iloc[:,:4].values.astype('float32')
labels = iris_df.iloc[:,5].values.astype('float32')
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, vectors, labels)
buf.seek(0)
###Output
_____no_output_____
###Markdown
Upload training data
Now that we've created our recordIO-wrapped protobuf, we'll need to upload it to S3, so that Amazon SageMaker training can use it.
###Code
key = 'recordio-pb-data'
boto3.resource('s3').Bucket(bucket).Object(os.path.join(prefix, key)).upload_fileobj(buf)
s3_train_data = 's3://{}/{}/{}'.format(bucket, prefix, key)
print('uploaded training data location: {}'.format(s3_train_data))
###Output
uploaded training data location: s3://test-karan-02/sagemaker_demo_knn/recordio-pb-data
###Markdown
Let's also set up an output S3 location for the model artifact that will be produced as the result of training.
###Code
output_location = 's3://{}/{}/output'.format(bucket, prefix)
print('training artifacts will be uploaded to: {}'.format(output_location))
###Output
training artifacts will be uploaded to: s3://test-karan-02/sagemaker_demo_knn/output
###Markdown
Training the k-NN model
Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Again, we'll use the Amazon SageMaker Python SDK to kick off training and monitor status until it is completed. Despite the dataset being small, provisioning hardware and loading the algorithm container take some time upfront.
###Code
container = get_image_uri(boto3.Session().region_name, 'knn')
###Output
_____no_output_____
###Markdown
Next we'll kick off the base estimator, making sure to pass in the necessary hyperparameters. Notice:
1. feature_dim is set to 4, which is the number of feature columns.
2. predictor_type is set to 'classifier'.
3. k is set to 5; in practice it should be tuned.
###Code
sess = sagemaker.Session()
knn = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
output_path=output_location,
sagemaker_session=sess)
knn.set_hyperparameters(
k = 5,
predictor_type= "classifier",
sample_size = 10,
feature_dim= 4)
knn.fit({'train': s3_train_data})
###Output
_____no_output_____
###Markdown
Sample output
1. 2020-02-26 21:07:45 Starting - Starting the training job...
2. 2020-02-26 21:07:46 Starting - Launching requested ML instances......
3. 2020-02-26 21:08:53 Starting - Preparing the instances for training......
4. 2020-02-26 21:09:48 Downloading - Downloading input data...
5. 2020-02-26 21:10:45 Training - Training image download completed. Training in progress..Docker entrypoint called with argument(s): train
6. 2020-02-26 21:28:11 Uploading - Uploading generated training model
7. 2020-02-26 21:28:11 Completed - Training job completed
Set up hosting for the model
Now that we've trained our model, we can deploy it behind an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model dynamically.
###Code
knn_predictor = knn.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge')
###Output
---------------!
###Markdown
Validate the model for use
Finally, we can validate the model for use. We can pass HTTP POST requests to the endpoint to get back predictions. To make this easier, we'll again use the Amazon SageMaker Python SDK and specify how to serialize requests and deserialize responses in the way this algorithm expects.
###Code
knn_predictor.content_type = 'text/csv'
knn_predictor.serializer = csv_serializer
knn_predictor.deserializer = json_deserializer
result = knn_predictor.predict(iris_df.iloc[30,:4])
print(result)
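# Hedged sketch (assumed usage, not from the original notebook): with the CSV serializer the
# same endpoint can score several rows in one request; each row should come back as one
# entry in the returned 'predictions' list.
batch_result = knn_predictor.predict(iris_df.iloc[:5, :4].values)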
###Output
{'predictions': [{'predicted_label': 0.0}]}
###Markdown
OK, a single prediction works. We see that for one record our endpoint returned some JSON containing the predictions, including the predicted_label. In this case, predicted_label is one of the values {0, 1, 2}, representing the three Iris classes. Finally, we delete the endpoint so that we are not billed for an instance we no longer need.
###Code
sagemaker.Session().delete_endpoint(knn_predictor.endpoint)
###Output
_____no_output_____
###Markdown
Chapter 3: The k-Nearest Neighbor Method (k近邻法)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pprint
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from collections import Counter
%matplotlib inline
###Output
_____no_output_____
###Markdown
Hand-written KNN
A k-nearest-neighbor model is determined mainly by the distance metric, the choice of k, and the classification decision rule.
Distance metric (Euclidean distance)
![](knn.png)
**distance**: $ L_{p}\left(x_{i}, x_{j}\right)=\left(\sum_{l=1}^{n}\left|x_{i}^{(l)}-x_{j}^{(l)}\right|^{p}\right)^{\frac{1}{p}}$
###Code
def distance(x, y, p=2):  # p=1 gives the Manhattan distance, p=2 the Euclidean distance; as p goes to infinity it becomes the maximum coordinate-wise difference
    """Compute the L_p distance between points.
    input:
        x: N*M matrix.
        y: 1*M matrix.
        p: type of distance.
    output:
        N*1 matrix of distances between x and y.
    """
try:
dis = np.power(np.sum(np.power(np.abs((x - y)), p), 1), 1/p)
except:
dis = np.power(np.sum(np.power(np.abs((x - y)), p)), 1/p)
return dis
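# Quick illustrative check of the metric (values worked out by hand): for the points
# (1, 1) and (4, 5), the Manhattan distance (p=1) is 7.0 and the Euclidean distance (p=2) is 5.0.
d_p1 = distance(np.array([1., 1.]), np.array([4., 5.]), p=1)  # 7.0
d_p2 = distance(np.array([1., 1.]), np.array([4., 5.]), p=2)  # 5.0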
###Output
_____no_output_____
###Markdown
Note: the nearest neighbor found under different distance metrics can be different!
###Code
# here we use the classic iris dataset
iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df['label'] = iris.target
df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
df.head(100)
df.describe()
# plot the first two classes
plt.scatter(df[:50]['sepal length'], df[:50]['sepal width'], label='0')
plt.scatter(df[50:100]['sepal length'], df[50:100]['sepal width'], label='1')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend()
# X, y
data = np.array(df.iloc[:100, [0, 1, -1]])
X, y = data[:,:-1], data[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
class KNN:
"""
    Brute-force KNN: predict by scanning all training points.
"""
def __init__(self, X_train, y_train, n_neighbors=1, p=2):
"""
n_neighbors: k
p: type of distance
"""
self.k = n_neighbors
self.p = p
self.X_train = X_train
self.y_train = y_train
def predict(self, X):
diss = distance(self.X_train, X, self.p)
diss_idx = np.argsort(diss) # return sorted index
top_k_idx = diss_idx[:self.k]
top_k_diss = diss[top_k_idx]
top_k_points = self.X_train[top_k_idx]
top_k_diss = diss[top_k_idx]
top_k_y = self.y_train[top_k_idx]
counter = Counter(top_k_y)
label = counter.most_common()[0][0]
return label, top_k_points, top_k_diss
def score(self, X_test, y_test):
right_count = 0
for X, y in zip(X_test, y_test):
label = self.predict(X)[0]
if label == y:
right_count += 1
return right_count / len(X_test)
clf = KNN(X_train, y_train) #train
clf.score(X_test, y_test)  # evaluate on the test set
# test on a single point
test_point = [6, 2.7]
clf.predict(test_point)
plt.scatter(df[:50]['sepal length'], df[:50]['sepal width'], label='0')
plt.scatter(df[50:100]['sepal length'], df[50:100]['sepal width'], label='1')
plt.plot(test_point[0], test_point[1], 'bo', label='test_point')
plt.xlabel('sepal length')
plt.ylabel('sepal width')
plt.legend()
###Output
_____no_output_____
###Markdown
The classification result looks reasonable.
scikit-learn's KNN
###Code
from sklearn.neighbors import KNeighborsClassifier
clf_sk = KNeighborsClassifier()
clf_sk.fit(X_train, y_train)
clf_sk.score(X_test, y_test)
clf_sk.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
kd-tree
Building the kd-tree
###Code
# Algorithm: balanced kd-tree
class KdTree:
"""
build kdtree recursively along axis, split on median point.
k: k dimensions
    method: alternate/variance (cycle through the axes in turn, or split on the axis with the largest variance)
"""
def __init__(self, k=2, method='alternate'):
self.k = k
self.method = method
def build(self, points, depth=0):
n = len(points)
if n <= 0:
return None
if self.method == 'alternate':
axis = depth % self.k
elif self.method == 'variance':
axis = np.argmax(np.var(points, axis=0), axis=0)
sorted_points = sorted(points, key=lambda point: point[axis])
return {
'point': sorted_points[n // 2],
'left': self.build(sorted_points[:n//2], depth+1),
'right': self.build(sorted_points[n//2+1:], depth+1)
}
###Output
_____no_output_____
###Markdown
Example 3.2
###Code
data = np.array([[2,3],[5,4],[9,6],[4,7],[8,1],[7,2]])
kd1 = KdTree(k=2, method='alternate')
tree1 = kd1.build(data)
kd2 = KdTree(k=2, method='variance')
tree2 = kd2.build(data)
# friendly print
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(tree1) # matches Figure 3.4 of 《统计学习方法》 (Statistical Learning Methods)
pp.pprint(tree2) # both splitting strategies give the same tree on this dataset
###Output
_____no_output_____
###Markdown
Searching the kd-tree
###Code
class SearchKdTree:
"""
查找最近点
"""
def __init__(self, k=2):
self.k = k
def __closer_distance(self, pivot, p1, p2):
if p1 is None:
return p2
if p2 is None:
return p1
d1 = distance(pivot, p1)
d2 = distance(pivot, p2)
if d1 < d2:
return p1
else:
return p2
def fit(self, root, point, depth=0):
if root is None:
return None
axis = depth % self.k
next_branch = None
opposite_branch = None
if point[axis] < root['point'][axis]:
next_branch = root['left']
opposite_branch = root['right']
else:
next_branch = root['right']
opposite_branch = root['left']
best = self.__closer_distance(point,
self.fit(next_branch,
point,
depth+1),
root['point'])
if distance(point, best) > abs(point[axis] - root['point'][axis]):
best = self.__closer_distance(point,
self.fit(opposite_branch,
point,
depth+1),
best)
return best
# test
point = [3.,4.5]
search = SearchKdTree()
best = search.fit(tree1, point, depth=0)
print(best)
# brute-force computation for comparison
def force(points, point):
dis = np.power(np.sum(np.power(np.abs((points - point)), 2), 1), 1/2)
idx = np.argmin(dis, axis=0)
return points[idx]
print(force(data, point))
###Output
_____no_output_____
###Markdown
Compared with brute-force search, which has to check every data point to find the one closest to the target (O(n) complexity), the kd-tree avoids examining the points one by one, so its efficiency is higher.
Let's compare the computation time needed by brute force and by the kd-tree.
###Code
from time import time
# create a random dataset
N = 500000
K = 5
points = np.random.randint(15, size=(N, K))
points.shape
# build a kd-tree
kd_tree = KdTree(k=K, method='alternate')
tree = kd_tree.build(points)
# generate a test point
test_point = np.random.randint(10, size=(K))
t_point = [8.,5.,1.,2.,2.]
# nearest-neighbour search with the kd-tree
start = time()
seah = SearchKdTree()
best = seah.fit(tree, t_point, depth=0)
end = time()
dist = distance(t_point, best)
print('best point:{}, distance:{}, time cost:{}'.format(best, dist, end - start))
# brute-force timing
start = time()
best = force(points, t_point)
end = time()
dist = distance(t_point, best)
print('best point:{}, distance:{}, time cost:{}'.format(best, dist, end - start))
###Output
_____no_output_____
###Markdown
Context and Content
A company active in Big Data and Data Science wants to hire data scientists from among the people who successfully pass courses the company runs. Many people sign up for the training. The company wants to know which of these candidates really want to work for the company after training and which are only looking for new employment, because this helps reduce cost and time and improves the quality and planning of the courses and the categorisation of candidates. Information related to demographics, education and experience is available from the candidates' signup and enrollment.
This dataset is also designed to understand the factors that lead a person to leave their current job, which is useful for HR research. With a model that uses the candidates' current credentials, demographics and experience, you will predict the probability that a candidate is looking for a new job or will work for the company, and interpret which factors affect the employee's decision.
The data is divided into train and test sets. The target isn't included in the test set, but the test target values are available for related tasks. A sample submission corresponding to the enrollee_id of the test set is provided too, with columns: enrollee_id, target.
Note: the dataset is imbalanced. Most features are categorical (nominal, ordinal, binary), some with high cardinality. Missing-value imputation can be part of your pipeline as well.
Features
- enrollee_id: Unique ID for candidate
- city: City code
- city_development_index: Development index of the city (scaled)
- gender: Gender of candidate
- relevent_experience: Relevant experience of candidate
- enrolled_university: Type of university course enrolled in, if any
- education_level: Education level of candidate
- major_discipline: Education major discipline of candidate
- experience: Candidate's total experience in years
- company_size: Number of employees in the current employer's company
- company_type: Type of current employer
- last_new_job: Difference in years between previous job and current job
- training_hours: Training hours completed
- target: 0 – Not looking for a job change, 1 – Looking for a job change
###Code
# imports used throughout this notebook
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.preprocessing import OneHotEncoder, StandardScaler
df= pd.read_csv('C:/Users/Fabian/Documents/dh/contenido/ds_blend_students_2020/TP3/data/aug_train.csv')
#C:/Users/Administrador.000/Documents/DH/Contenidook/ds_blend_students_2020/Desafio3/aug_train
df.shape
df.info()
df.target.value_counts(normalize=True)
df.gender.value_counts(normalize=True)
df.head()
df.relevent_experience.value_counts()
df.experience.value_counts()
df.training_hours.value_counts()
df.enrolled_university.value_counts()
df.education_level.value_counts()
df.major_discipline.value_counts()
df.company_type.value_counts()
df.company_size.value_counts()
df.last_new_job.value_counts()
df.city.value_counts()
df.city_development_index.value_counts()
#The development index is an attribute of the city, so the city code can be dropped
check_corr=df.groupby("city")["city_development_index"].nunique()
#check_corr.mean()
check_corr
sns.pairplot(df)
#no duplicates (by enrollee_id)
duplicated = df.duplicated(subset=["enrollee_id"])
any(duplicated)
df_clean=df.drop(["enrollee_id"],axis=1)
#no duplicate rows
duplicated1 = df_clean.duplicated()
any(duplicated1)
duplicated1.sum()
df_unique = df_clean.drop_duplicates(keep="first")
df_unique.shape
df_unique = df_unique.dropna(subset=["enrolled_university","education_level","experience","last_new_job"],axis = 0)
df_unique.shape
CT=df_unique.company_type.fillna("Not defined")
CS=df_unique.company_size.fillna("Not defined")
Sex=df_unique.gender.fillna("Not defined")
MD= df_unique.major_discipline.fillna("STEM")
df_unique["gender"]=Sex
df_unique['major_discipline']=MD
df_unique['company_size']=CS
df_unique['company_type']=CT
df_unique.isnull().sum()/df_unique.shape[0]
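# Sketch: the manual fillna calls above could equivalently be written with sklearn's
# SimpleImputer (illustrative only; the columns below were already filled above)
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='constant', fill_value='Not defined')
imputed_demo = pd.DataFrame(
    imputer.fit_transform(df_unique[['gender', 'company_size', 'company_type']]),
    columns=['gender', 'company_size', 'company_type'])
imputed_demo.head()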
df_unique.target.value_counts(normalize=True)
#still to do: one-hot encoding, train/test split, scaling
#Split into train/test
X = df_unique.drop('target', axis = 1)
Y = df_unique[['target']]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, random_state = 1237,stratify=Y)
#One-hot encode the categorical columns (create dummies)
categorical_columns = [col for col in df_unique.columns if df[col].dtypes == 'object']
encoder_categories = []
for col in categorical_columns:
col_categories = df_unique[col].unique()
encoder_categories.append(col_categories)
encoder = OneHotEncoder(categories = encoder_categories, sparse=False)
encoder = encoder.fit(X_train[categorical_columns])
X_train_encoded = encoder.transform(X_train[categorical_columns])
X_train_categorical = pd.DataFrame(X_train_encoded, columns = encoder.get_feature_names(categorical_columns))
#X_train_categorical.sample(5)
X_test_encoded = encoder.transform(X_test[categorical_columns])
X_test_categorical = pd.DataFrame(X_test_encoded, columns = encoder.get_feature_names(categorical_columns))
X_test_categorical.head()
#Scale the non-categorical (numeric) columns
non_categorical_columns = [col for col in X_train.columns if col not in categorical_columns]
non_categorical_columns
std_sclr = StandardScaler()
std_sclr_trained = std_sclr.fit(X_train[non_categorical_columns])
X_train_numerical = std_sclr_trained.transform(X_train[non_categorical_columns])
X_train_numerical_scaled = pd.DataFrame(X_train_numerical, columns = non_categorical_columns)
#X_train_numerical_scaled.head()
X_test_numerical = std_sclr_trained.transform(X_test[non_categorical_columns])
X_test_numerical_scaled = pd.DataFrame(X_test_numerical, columns = non_categorical_columns)
X_test_numerical_scaled.head()
#Join the categorical and numeric columns back together
X_train_transf = pd.concat([X_train_categorical, X_train_numerical_scaled], axis = 1)
print(X_train_categorical.shape)
print(X_train_numerical_scaled.shape)
print(X_train_transf.shape)
#Join the categorical and numeric columns back together
X_test_transf = pd.concat([X_test_categorical, X_test_numerical_scaled], axis = 1)
print(X_test_categorical.shape)
print(X_test_numerical_scaled.shape)
print(X_test_transf.shape)
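# Sketch: the one-hot encoding, scaling and concatenation steps above can also be expressed
# as a single ColumnTransformer. This is illustrative only (not part of the original workflow)
# and assumes the categorical_columns and non_categorical_columns lists defined above.
from sklearn.compose import ColumnTransformer
preprocess = ColumnTransformer([
    ('onehot', OneHotEncoder(handle_unknown='ignore', sparse=False), categorical_columns),
    ('scale', StandardScaler(), non_categorical_columns)])
X_train_alt = preprocess.fit_transform(X_train)
X_test_alt = preprocess.transform(X_test)
print(X_train_alt.shape, X_test_alt.shape)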
Total=12575+5390 # quick check: number of rows in train plus test
Total
Y_train=np.ravel(Y_train)
Y_test=np.ravel(Y_test)
# Import the KNeighborsClassifier class from the neighbors module
from sklearn.neighbors import KNeighborsClassifier
# Instantiate the model, choosing k via the n_neighbors argument
knn = KNeighborsClassifier(n_neighbors=5, weights= 'distance')
# Fit to the training data
knn.fit(X_train_transf, Y_train);
# Predict labels for the test data
y_pred = knn.predict(X_test_transf)
from sklearn.metrics import accuracy_score
accuracy_score(Y_test, y_pred)
# We want to plot the cross-validation scores as a function of the
# n_neighbors hyperparameter, so we build a list of dictionaries that
# can easily be turned into a DataFrame afterwards.
# We try every integer from 1 to 20 as a candidate value of n_neighbors.
# Define the cross-validation strategy
from sklearn.model_selection import cross_val_score, KFold
kf = KFold(n_splits=5, shuffle=True, random_state=12)
scores_para_df = []
for i in range(1, 21):
    # On each iteration, instantiate the model with a different hyperparameter value
    model = KNeighborsClassifier(n_neighbors=i)
    # cross_val_score returns an array of 5 results,
    # one per fold that CV created automatically
    cv_scores = cross_val_score(model, X_train_transf, Y_train, cv=kf)
    # For each value of n_neighbors, build a dictionary with that value
    # and the mean and standard deviation of the scores
    dict_row_score = {'score_medio':np.mean(cv_scores),
                      'score_std':np.std(cv_scores), 'n_neighbors':i}
    # Append each one to the list of dictionaries
    scores_para_df.append(dict_row_score)
# Build the DataFrame from the list of dictionaries
df_scores = pd.DataFrame(scores_para_df)
df_scores.head()
# Identify the maximum mean score
df_scores.loc[df_scores.score_medio == df_scores.score_medio.max()]
# Instantiate the model, choosing k via the n_neighbors argument
knn = KNeighborsClassifier(n_neighbors=19, weights= 'distance')
# Fit to the training data
knn.fit(X_train_transf, Y_train);
# Predict labels for the test data
y_pred = knn.predict(X_test_transf)
from sklearn.metrics import accuracy_score
accuracy_score(Y_test, y_pred)
# Compute the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y_test, y_pred)
cm
# Plot the confusion matrix for easier inspection
sns.heatmap(cm, annot=True,fmt="d")
plt.ylabel('Actual labels')
plt.xlabel('Predicted labels');
from sklearn.metrics import recall_score
print(recall_score(Y_test, y_pred))
confusion=confusion_matrix(Y_test, y_pred)
TP = confusion[1, 1]
TN = confusion[0, 0]
FP = confusion[0, 1]
FN = confusion[1, 0]
specificity = TN / (TN + FP)
print(specificity)
from sklearn.metrics import precision_score
print(precision_score(Y_test, y_pred))
from sklearn.metrics import f1_score
print(f1_score(Y_test,y_pred))
from sklearn.metrics import roc_curve
y_pred_proba = knn.predict_proba(X_test_transf)
fpr_log,tpr_log,thr_log = roc_curve(Y_test, y_pred_proba[:,1])
df = pd.DataFrame(dict(fpr=fpr_log, tpr=tpr_log, thr = thr_log))
plt.axis([0, 1.01, 0, 1.01])
plt.xlabel('1 - Specificty')
plt.ylabel('TPR / Sensitivity')
plt.title('ROC Curve')
plt.plot(df['fpr'],df['tpr'])
plt.plot(np.arange(0,1, step =0.01), np.arange(0,1, step =0.01))
plt.show()
from sklearn.metrics import auc
print('AUC=', auc(fpr_log, tpr_log))
df_scores.columns
neig=df_scores.n_neighbors
scores=df_scores.score_medio
knn_range=range(1, 21)
plt.plot(neig, scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy');
from sklearn.model_selection import GridSearchCV
k_range = list(range(1, 31))
weight_options = ['uniform', 'distance']
param_grid = dict(n_neighbors=k_range, weights=weight_options)
print(param_grid)
folds=StratifiedKFold(n_splits=10, random_state=19, shuffle=True)
grid = GridSearchCV(knn, param_grid, cv=folds, scoring='average_precision')
grid.fit(X_train_transf, Y_train)
print(grid.best_estimator_)
print(grid.best_score_)
print(grid.best_params_)
# Instantiate the model, choosing k via the n_neighbors argument
knn = KNeighborsClassifier(n_neighbors=30, weights= 'uniform')
# Fit to the training data
knn.fit(X_train_transf, Y_train);
# Predict labels for the test data
y_pred_30 = knn.predict(X_test_transf)
from sklearn.metrics import accuracy_score
accuracy_score(Y_test, y_pred_30)
print(recall_score(Y_test, y_pred_30))
print(precision_score(Y_test, y_pred_30))
print(f1_score(Y_test,y_pred_30))
# Instantiate the model, choosing k via the n_neighbors argument
knn = KNeighborsClassifier(n_neighbors=150, weights= 'uniform')
# Fit to the training data
knn.fit(X_train_transf, Y_train);
# Predict labels for the test data
y_pred_150 = knn.predict(X_test_transf)
from sklearn.metrics import accuracy_score
print(accuracy_score(Y_test, y_pred_150))
print(recall_score(Y_test, y_pred_150))
print(precision_score(Y_test, y_pred_150))
print(f1_score(Y_test,y_pred_150))
###Output
0.7929499072356215
0.491307634164777
0.5946935041171089
0.5380794701986755
###Markdown
KNN: Importing libraries
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import KFold, cross_val_score
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import KFold
plt.style.use('ggplot')
pd.set_option('display.max_columns', 500)
# prints accuracy and the classification report for the test set (relies on the global y_teste)
def report_teste(predictions, alg_name):
    print('Resultados para o classificador {0}:'.format(alg_name))
    print(classification_report(y_teste, predictions),
    print ("Acurácia para o treino é ", accuracy_score(y_teste,predictions)))
# prints accuracy and the classification report for the training set (relies on the global y_treino)
def report_treino(predictions, alg_name):
    print('Resultados para o classificador {0}:'.format(alg_name))
    print(classification_report(y_treino, predictions),
    print ("Acurácia para o treino é ", accuracy_score(y_treino,predictions)))
###Output
_____no_output_____
###Markdown
Data
###Code
dataset = pd.read_csv('C:\\Users\\Fabiel Fernando\\Desktop\\PROVA\\classificacao_Q4.csv')
#Check for missing values
#dataset.apply(lambda x: x.isnull().sum())
dataset.head(5)
print("Dimensão dos nossos dados:\n",
dataset.shape)
#print("Tipo de variáveis:\n",
# dataset.dtypes)
###Output
Dimensão dos nossos dados:
(1500, 101)
###Markdown
Class percentages of the response variable
###Code
resposta = dataset['target']
count = pd.DataFrame(resposta.value_counts())
percent = pd.DataFrame(resposta.value_counts(normalize = True)*100)
table = pd.concat([count, percent], axis = 1)
table.columns = ['# target', '% target']
table
#Descriptive statistics for some variables
#dataset.describe()
###Output
_____no_output_____
###Markdown
Train and Test Split
###Code
feature_space = dataset.iloc[:, dataset.columns != 'target']
feature_class = dataset.iloc[:, dataset.columns == 'target']
X_treino, X_teste, y_treino, y_teste = train_test_split(feature_space,
feature_class,
test_size = 0.30,
random_state = 42)
# Flatten the target arrays to avoid future warning messages
y_treino = y_treino.values.ravel()
y_teste = y_teste.values.ravel()
###Output
_____no_output_____
###Markdown
Fitting KNN
###Code
classifier = KNeighborsClassifier(n_neighbors=5,
weights='uniform',
algorithm='auto',
leaf_size=30,
p=2,
metric='minkowski',
metric_params=None,
n_jobs=1)
classifier.fit(X_treino, y_treino)
###Output
_____no_output_____
###Markdown
Classifier accuracy
###Code
pred_test = classifier.predict(X_teste)
pred_train = classifier.predict(X_treino)
###Output
_____no_output_____
###Markdown
Metrics for the training set
###Code
report_treino(pred_train,'KNN')
###Output
Resultados para o classificador KNN:
Acurácia para o treino é 0.7285714285714285
precision recall f1-score support
0.0 0.71 0.74 0.72 103
1.0 0.64 0.82 0.72 98
2.0 0.69 0.64 0.66 111
3.0 0.89 0.79 0.84 105
4.0 0.77 0.74 0.75 104
5.0 0.67 0.55 0.60 97
6.0 0.65 0.64 0.65 98
7.0 0.76 0.73 0.75 111
8.0 0.84 0.76 0.80 106
9.0 0.70 0.85 0.77 117
avg / total 0.73 0.73 0.73 1050
None
###Markdown
Metrics for the test set
###Code
report_teste(pred_test,'KNN')
###Output
Resultados para o classificador KNN:
Acurácia para o treino é 0.5977777777777777
precision recall f1-score support
0.0 0.63 0.62 0.62 47
1.0 0.58 0.69 0.63 51
2.0 0.41 0.50 0.45 42
3.0 0.75 0.57 0.65 47
4.0 0.72 0.61 0.66 46
5.0 0.63 0.53 0.57 51
6.0 0.60 0.49 0.54 51
7.0 0.56 0.56 0.56 36
8.0 0.70 0.66 0.68 47
9.0 0.49 0.81 0.61 32
avg / total 0.61 0.60 0.60 450
None
###Markdown
Tuning the classifier with Grid Search
###Code
fit_knn = KNeighborsClassifier()
np.random.seed(42)
cv_kfold = KFold(10, shuffle = False)
param_grid = {"n_neighbors": range(1, 50),
"weights": ["uniform", "distance"],
"metric": ["euclidean", "manhattan"]} #"chebyshev", "minkowski"
cv_knn = GridSearchCV(fit_knn,
cv = cv_kfold,
param_grid = param_grid,
n_jobs = 3)
cv_knn.fit(X_treino, y_treino)
cv_knn.best_params_
fit_knn.set_params(n_neighbors = 7,
metric = 'manhattan',
weights = 'distance')
fit_knn.fit(X_treino, y_treino)
###Output
_____no_output_____
###Markdown
Training set results. Note: with `weights='distance'`, each training point is its own nearest neighbour, so a perfect training score is expected.
###Code
pred_train2 = fit_knn.predict(X_treino)
report_treino(pred_train2, 'KNN com Grid Search')
###Output
Resultados para o classificador KNN com Grid Search:
Acurácia para o treino é 1.0
precision recall f1-score support
0.0 1.00 1.00 1.00 103
1.0 1.00 1.00 1.00 98
2.0 1.00 1.00 1.00 111
3.0 1.00 1.00 1.00 105
4.0 1.00 1.00 1.00 104
5.0 1.00 1.00 1.00 97
6.0 1.00 1.00 1.00 98
7.0 1.00 1.00 1.00 111
8.0 1.00 1.00 1.00 106
9.0 1.00 1.00 1.00 117
avg / total 1.00 1.00 1.00 1050
None
###Markdown
Test set results
###Code
predictions_fit_knn = fit_knn.predict(X_teste)
report_teste(predictions_fit_knn, 'KNN com Grid Search')
predictions_knn = fit_knn.predict(X_teste)
print(confusion_matrix(y_teste, predictions_knn))
accuracy_knn = fit_knn.score(X_teste, y_teste)
print("Aqui está a nossa precisão média no conjunto de testes: {0:.3f}".format(accuracy_knn))
test_error_rate_knn = 1 - accuracy_knn
print("A taxa de erro de teste para o nosso modelo é: {0: .3f}" .format(test_error_rate_knn))
###Output
A taxa de erro de teste para o nosso modelo é: 0.347
###Markdown
ROC Curve (computed one-vs-rest for class 1, since this is a multi-class problem)
###Code
predictions_prob = fit_knn.predict_proba(X_teste)[:, 1]
fpr2, tpr2, _ = roc_curve(y_teste,
predictions_prob,
pos_label = 1)
auc_knn = auc(fpr2, tpr2)
plt.figure()
lw = 2
plt.plot(fpr2, tpr2, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % auc_knn)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic ')
plt.legend(loc="lower right")
plt.show()
report_teste(predictions_knn, 'KNN')
###Output
Resultados para o classificador KNN:
Acurácia para o treino é 0.6533333333333333
precision recall f1-score support
0.0 0.79 0.64 0.71 47
1.0 0.62 0.63 0.62 51
2.0 0.52 0.52 0.52 42
3.0 0.79 0.66 0.72 47
4.0 0.68 0.61 0.64 46
5.0 0.70 0.61 0.65 51
6.0 0.71 0.59 0.65 51
7.0 0.60 0.75 0.67 36
8.0 0.67 0.79 0.73 47
9.0 0.50 0.81 0.62 32
avg / total 0.67 0.65 0.65 450
None
###Markdown
K-fold Cross-Validation
###Code
X = dataset.iloc[:, 0:100].values
y = dataset['target'].astype('category')
from sklearn import model_selection
kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=42)
model = KNeighborsClassifier()
scoring = 'accuracy'
results = model_selection.cross_val_score(model, X, y, cv=kfold, scoring=scoring)
results.mean(), results.std()
###Output
_____no_output_____
###Markdown
LOOCV
###Code
model = KNeighborsClassifier()
accuracies = cross_val_score(model, X=X, y=y, cv=LeaveOneOut())
accuracies.mean()
###Output
_____no_output_____
###Markdown
Repeat CV
###Code
from sklearn.model_selection import RepeatedKFold
cv_repeat = RepeatedKFold(n_splits=6, n_repeats=3, random_state=42)
model = KNeighborsClassifier()
accuracies = cross_val_score(model, X=X, y=y, cv=cv_repeat)
accuracies.mean()
###Output
_____no_output_____
###Markdown
Using the first k observations for training and the remainder for testing
###Code
X_treino = dataset.iloc[0:499, 0:99].values
y_treino = dataset.iloc[0:499, 100].values
X_teste = dataset.iloc[500:1500, 0:99].values
y_teste = dataset.iloc[500:1500, 100].values
clf = KNeighborsClassifier(n_neighbors=5,
weights='uniform',
algorithm='auto',
leaf_size=30,
p=2,
metric='minkowski',
metric_params=None,
n_jobs=1)
clf.fit(X_treino, y_treino)
###Output
_____no_output_____
###Markdown
Classifier accuracy (KNN)
###Code
pred_teste = clf.predict(X_teste)
pred_treino = clf.predict(X_treino)
###Output
_____no_output_____
###Markdown
Training metrics
###Code
report_treino(pred_treino, 'KNN')
###Output
Resultados para o classificador KNN:
Acurácia para o treino é 0.751503006012024
precision recall f1-score support
0.0 0.73 0.80 0.77 55
1.0 0.62 0.85 0.72 48
2.0 0.61 0.56 0.58 45
3.0 0.87 0.78 0.82 58
4.0 0.81 0.71 0.76 48
5.0 0.88 0.63 0.73 46
6.0 0.67 0.64 0.65 47
7.0 0.81 0.69 0.74 51
8.0 0.86 0.88 0.87 41
9.0 0.75 0.93 0.83 60
avg / total 0.76 0.75 0.75 499
None
###Markdown
Test metrics
###Code
report_teste(pred_teste, 'KNN')
###Output
Resultados para o classificador KNN:
Acurácia para o treino é 0.562
precision recall f1-score support
0.0 0.49 0.77 0.60 95
1.0 0.46 0.58 0.52 101
2.0 0.51 0.49 0.50 108
3.0 0.72 0.45 0.55 94
4.0 0.63 0.52 0.57 102
5.0 0.68 0.37 0.48 102
6.0 0.49 0.46 0.48 102
7.0 0.60 0.53 0.56 96
8.0 0.76 0.60 0.67 111
9.0 0.52 0.89 0.66 89
avg / total 0.59 0.56 0.56 1000
None
###Markdown
KNN Algorithm and implementation using heart.csv file
###Code
# Initialization
import pandas as pd
import numpy as np
heart = pd.read_csv("heart.csv");
heart.head(5)
# Manhattan-style (L1) distance from targetRow to every row in `rows`, returned with the row index
def calculateDistance(targetRow, rows, columns):
result = []
for i in range(0, len(rows)):
sumArr = []
for column in columns:
sumArr.append(abs(targetRow[column] - rows.iloc[i][column]))
result.append({
"sum": np.sum(sumArr),
"indice": i
})
return result
h_train = heart.drop(columns=["target"])
h_train.head(5)
# for i in range(0, len(h_train)):
# print(sorted(calculateDistance(h_train.iloc[i], h_train, h_train.columns), key=lambda k: k["sum"])[0:k])
def getKNNTargets(test, train, k = 3):
target = []
for i in range(0, len(test)):
        # compute distances to every training row, then keep the k smallest
srtArr = sorted(calculateDistance(test.iloc[i], train, test.columns), key=lambda k: k["sum"])[:k]
indices = list(map(lambda x: x["indice"], srtArr)) # first k labels
kTargetLabels = [train.iloc[x]["target"] for x in indices];
# We take mode cause our target label is in discrete form (classification)
mode = max(set(kTargetLabels), key=kTargetLabels.count)
# pick index of the most frequent (mode) label
labelIndex = kTargetLabels.index(mode)
print(f"Mode: {mode}")
# print(kTargetLabels)
res = {
"row": i,
"target": train.iloc[indices[labelIndex]]["target"] # use the label of most frequent target label
}
print(res)
target.append(res)
return target
testRange = int(len(heart)/2)
KNN = getKNNTargets(heart.drop(columns=["target"]).iloc[:testRange], heart.iloc[testRange:])
KNNResultWTarger = heart.iloc[testRange:]
correct = 0
for i in range(0, len(KNN)):
if KNN[i]["target"] == KNNResultWTarger.iloc[i]["target"]:
correct += 1
accuracy = correct/len(KNN)
from math import ceil
print(f"Accuracy: {(ceil(accuracy*100))}%")
###Output
Accuracy: 85%
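###Markdown
The distance used above is computed on unscaled features, so columns with large numeric ranges dominate the neighbour search. Below is a minimal sketch of a min-max scaling pass that could be applied first (an illustrative addition, assuming the same `heart` DataFrame and purely numeric feature columns).
###Code
# min-max scale every feature column to [0, 1] before the neighbour search (sketch only)
features = heart.drop(columns=["target"])
scaled = (features - features.min()) / (features.max() - features.min())
scaled["target"] = heart["target"]
scaled.head(5)
# the search above could then be re-run as:
# KNN = getKNNTargets(scaled.drop(columns=["target"]).iloc[:testRange], scaled.iloc[testRange:])
###Output
_____no_output_____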
###Markdown
18bce084Kaushal JaniPractical 5
###Code
import sklearn
from sklearn import neighbors,datasets,metrics
from sklearn.metrics import mean_absolute_error,mean_squared_error
import numpy as np
X,Y=datasets.load_iris(return_X_y=True) #loading iris dataset
xtrain = X[range(0,150,2),:]
ytrain = Y[range(0,150,2)]
xtest = X[range(1,150,2),:]
ytest = Y[range(1,150,2)] # labels for the odd-indexed rows used as the test set
k=3
x=int(input(" enter limit value of k for knn classification ")) # enter stopping value of k
print()
if x<=xtrain.shape[0]:
if x%2==0:
k=x-1
else:
k=x
for i in range(3,k+2,2):
clf=neighbors.KNeighborsClassifier(i,'uniform')
clf.fit(xtrain,ytrain)
ypred=clf.predict(xtest)
print("For K = ",i)
print("accuracy is",metrics.accuracy_score(ytest,ypred))
print("MAE is ",metrics.mean_absolute_error(ytest,ypred))
print("MSE is ",metrics.mean_squared_error(ytest,ypred))
print()
###Output
enter limit value of k for knn classification 17
For K = 3
accuracy is 0.96
MAE is 0.04
MSE is 0.04
For K = 5
accuracy is 0.9866666666666667
MAE is 0.013333333333333334
MSE is 0.013333333333333334
For K = 7
accuracy is 0.9866666666666667
MAE is 0.013333333333333334
MSE is 0.013333333333333334
For K = 9
accuracy is 0.9866666666666667
MAE is 0.013333333333333334
MSE is 0.013333333333333334
For K = 11
accuracy is 0.92
MAE is 0.08
MSE is 0.08
For K = 13
accuracy is 0.9466666666666667
MAE is 0.05333333333333334
MSE is 0.05333333333333334
For K = 15
accuracy is 0.92
MAE is 0.08
MSE is 0.08
For K = 17
accuracy is 0.92
MAE is 0.08
MSE is 0.08
###Markdown
Custom Implementation of KNN with cosine similarity as distance metric
###Code
X,Y=datasets.load_iris(return_X_y=True)
xtrain=np.append(np.append(X[0:44,:],X[50:94,:],axis=0),X[100:144,:],axis=0)
ytrain=np.append(np.append(Y[0:44],Y[50:94],axis=0),Y[100:144],axis=0)
xtest=np.append(np.append(X[45:50,:],X[95:100,:],axis=0),X[145:150,:],axis=0)
ytest=np.append(np.append(Y[45:50],Y[95:100],axis=0),Y[145:150],axis=0)
#print(ytest.shape)
print(xtrain.shape)
# cosine similarity between two feature vectors
def sim(test,row):
    return test @ row/(np.linalg.norm(test) * np.linalg.norm(row))
def custom_knn(k):
    print("For {0}nn".format(k))
    result=[]
    li=[]   # stores [cosine similarity, class label] pairs for every training point
    temp={} # accumulates the similarity of the k nearest neighbours, per class label
    for i in range(0,k):
        temp[i]=0
    #print(temp)
    t=temp.copy()
    for i in xtest:
        li=list()
        count=0
        temp=t.copy()
        for j in range(0, len(xtrain)):
            li.append([sim(i,xtrain[j,:]),ytrain[j]]) # cosine similarity of the test point to each training point
        li.sort(reverse=True)
        for s in range(0,k): # accumulate the similarities of the k nearest neighbours, per class
            temp[li[s][1]]=li[s][0]+temp[li[s][1]]
        result.append([i,max(temp,key=temp.get)])
    ypred=[]
    for j in result:
        ypred.append(j[1])
    print("accuracy",metrics.accuracy_score(ytest,ypred))
    print("MAE",metrics.mean_absolute_error(ytest,ypred))
    print("MSE",metrics.mean_squared_error(ytest,ypred))
    print("Actual class",ytest)
    print("Predicted class",ypred)
    for j in result:
        print("example is ",j[0],"predicted class is",j[1])
    print("--------------------------------------------------------------------")
    print()
############################################################
k=3
x=int(input(" enter limit value of k for knn classification ")) # enter stopping value of k
print()
if x<=xtrain.shape[0]:
if x%2==0:
k=x-1
else:
k=x
for i in range(3,k+2,2):
custom_knn(i)
###Output
enter limit value of k for knn classification 5
For 3nn
accuracy 1.0
MAE 0.0
MSE 0.0
Actual class [0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
Predicted class [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
example is [4.8 3. 1.4 0.3] predicted class is 0
example is [5.1 3.8 1.6 0.2] predicted class is 0
example is [4.6 3.2 1.4 0.2] predicted class is 0
example is [5.3 3.7 1.5 0.2] predicted class is 0
example is [5. 3.3 1.4 0.2] predicted class is 0
example is [5.7 3. 4.2 1.2] predicted class is 1
example is [5.7 2.9 4.2 1.3] predicted class is 1
example is [6.2 2.9 4.3 1.3] predicted class is 1
example is [5.1 2.5 3. 1.1] predicted class is 1
example is [5.7 2.8 4.1 1.3] predicted class is 1
example is [6.7 3. 5.2 2.3] predicted class is 2
example is [6.3 2.5 5. 1.9] predicted class is 2
example is [6.5 3. 5.2 2. ] predicted class is 2
example is [6.2 3.4 5.4 2.3] predicted class is 2
example is [5.9 3. 5.1 1.8] predicted class is 2
--------------------------------------------------------------------
For 5nn
accuracy 1.0
MAE 0.0
MSE 0.0
Actual class [0 0 0 0 0 1 1 1 1 1 2 2 2 2 2]
Predicted class [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
example is [4.8 3. 1.4 0.3] predicted class is 0
example is [5.1 3.8 1.6 0.2] predicted class is 0
example is [4.6 3.2 1.4 0.2] predicted class is 0
example is [5.3 3.7 1.5 0.2] predicted class is 0
example is [5. 3.3 1.4 0.2] predicted class is 0
example is [5.7 3. 4.2 1.2] predicted class is 1
example is [5.7 2.9 4.2 1.3] predicted class is 1
example is [6.2 2.9 4.3 1.3] predicted class is 1
example is [5.1 2.5 3. 1.1] predicted class is 1
example is [5.7 2.8 4.1 1.3] predicted class is 1
example is [6.7 3. 5.2 2.3] predicted class is 2
example is [6.3 2.5 5. 1.9] predicted class is 2
example is [6.5 3. 5.2 2. ] predicted class is 2
example is [6.2 3.4 5.4 2.3] predicted class is 2
example is [5.9 3. 5.1 1.8] predicted class is 2
--------------------------------------------------------------------
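###Markdown
A quick worked example of the similarity-weighted vote used in `custom_knn` above (toy numbers, not taken from the iris data): if the 3 most similar training points have cosine similarities 0.99, 0.98 and 0.97 and belong to classes 1, 1 and 2, then class 1 accumulates 0.99 + 0.98 = 1.97 while class 2 accumulates 0.97, so class 1 wins the vote.
###Code
# toy illustration of the per-class similarity accumulation (made-up values)
votes = {1: 0.0, 2: 0.0}
for sim_val, label in [(0.99, 1), (0.98, 1), (0.97, 2)]:
    votes[label] += sim_val
print(max(votes, key=votes.get)) # -> 1
###Output
_____no_output_____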
###Markdown
KNN implementation for Breast Cancer Classification
**Mamello Maseko (fill me :)) and Sandile Shongwe (1236067)**
###Code
import pandas as pd
import numpy as np
from sklearn.preprocessing import normalize
%xmode plain
df = pd.read_csv('data.csv')
df = df.loc[:, ~df.columns.str.contains('^Unnamed')] #drop unnamed column
df_xtrain = df.drop(['id','diagnosis'], axis=1)
x_train = df_xtrain.values
# Note: Makes M = 1, B=0
df['diagnosis'] = np.unique(df['diagnosis'], return_index=True, return_inverse=True)[2]
y_train = df['diagnosis'].values
x_train, y_train;
# #normalizing the data
# x_train = normalize(x_train, norm='l1', axis=1)
# print(x_train[0])
# mean = x_train.mean(axis=1)
# std = x_train.std(axis = 1)
# x_train = (x_train-mean[:,np.newaxis]) / std[:, np.newaxis]
#both normalization techniques seem to lower accuracy of the classifier
#splitting data set into training and testing sets using a 70 - 30 split
length = len(x_train)
x_test = x_train[int(np.floor(length*0.7)+1): length , :]
y_test = y_train[int(np.floor(length*0.7)+1): length]
x_train = x_train[0:int(np.floor(length*0.7))+1, :]
y_train = y_train[0:int(np.floor(length*0.7))+1]
y_train[y_train == 0] = -1;
y_test[y_test == 0] = -1;
##KNN using Euclidean distance
def KNN_E(x_train, y_train, query_point, K):
dist = np.sqrt(np.sum((x_train - query_point)**2, axis = 1))
idx = np.argsort(dist)
s = np.sum(y_train[idx[np.arange(K)]])
if(s > 0):
return 1
else:
return -1
def KNN_M(x_train, y_train, query_point, K):
    dist = np.sum(np.abs(x_train - query_point), axis = 1)  # Manhattan (L1) distance
idx = np.argsort(dist)
s = np.sum(y_train[idx[np.arange(K)]])
if(s > 0.0):
return 1
else:
return -1
# KNN training using Euclidean Distance
def KNN_Elearn(x_train, y_train, x_test, y_test):
error = 100000000
min_k = 1000
for K in range(1, 250):
tmp = error
diff_error = KNN_error_e(K, x_train, y_train, x_train, y_train) - KNN_error_e(K, x_train, y_train,x_test, y_test)
error = min(error, abs(diff_error))
if tmp != error:
min_k = K
return min_k
def KNN_error_e(K, x_train, y_train, x_query, y_query):
h = np.zeros((x_query.shape[0]))
for i in range(x_query.shape[0]):
h[i] = KNN_E(x_train, y_train, x_query[i,:], K)
e = np.sum(h != y_query*1.0)/(y_query.shape[0]*1.0)
return e
# KNN training using Manhattan Distance
def KNN_Mlearn(x_train, y_train, x_test, y_test):
error = 100000000
min_k = 1000
for K in range(1, 250):
tmp = error
diff_error = KNN_error_m(K, x_train, y_train, x_train, y_train) - KNN_error_m(K, x_train, y_train,x_test, y_test)
error = min(error, abs(diff_error))
if tmp != error:
min_k = K
return min_k
def KNN_error_m(K, x_train, y_train, x_query, y_query):
h = np.zeros((x_query.shape[0]))
for i in range(x_query.shape[0]):
h[i] = KNN_M(x_train, y_train, x_query[i,:], K)
e = np.sum(h != y_query*1.0)/(y_query.shape[0]*1.0)
return e
opt_k_e = KNN_Elearn(x_train, y_train, x_test, y_test)
opt_k_m = KNN_Mlearn(x_train, y_train, x_test, y_test)
out_ye = [KNN_E(x_train, y_train, x_test[i], opt_k_e) for i in range(170)]
out_ym = [KNN_M(x_train, y_train, x_test[i], opt_k_m) for i in range(170)]
acc = np.sum(out_ye == y_test*1.0)/y_test.shape[0];
print('The accuracy using Euclidean Distance: {:.2f}'.format(acc*100))
acc = np.sum(out_ym == y_test*1.0)/y_test.shape[0];
print('The accuracy using Manhattan Distance: {:.2f}'.format(acc*100))
###Output
The accuracy using Euclidean Distance: 94.12
The accuracy using Manhattan Distance: 95.29
###Markdown
Introduction

To understand the steps to follow in a classification problem, we first need to know how the KNN (K-Nearest Neighbors) algorithm works.

How the algorithm works

Given a dataset, it is possible to establish patterns for each class; with that, we can use the Euclidean distance to check whether an element $*$ is more like $O$ than $X$, or vice versa, and thus group these elements into those categories. In general, this algorithm looks quite similar to clustering algorithms such as K-Means, which also groups the elements of the dataset around K instances. The main difference between the two is how the groups are assigned and how the methods are used: K-Means is an *unsupervised* machine learning algorithm, while KNN is a *supervised* one.

Let's start by importing some well-known libraries and packages.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import neighbors, metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
data = pd.read_csv('car_.data') #Read the .data file
data = data.dropna(how = 'all') #Drop rows whose values are all missing
data.head() #Show the dataset
###Output
_____no_output_____
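###Markdown
Before handing the work over to scikit-learn, here is a minimal, self-contained sketch of the neighbour-vote idea described in the introduction (toy points, not taken from the car dataset):
###Code
# compute Euclidean distances from a query point to a few toy training points,
# take the k closest and let their labels vote (illustrative sketch only)
import numpy as np
from collections import Counter
train_pts = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.9]])
train_lbl = np.array(['O', 'O', 'X', 'X'])
query = np.array([1.1, 1.0])
dists = np.sqrt(((train_pts - query) ** 2).sum(axis=1))
k = 3
nearest = train_lbl[np.argsort(dists)[:k]]
print(Counter(nearest).most_common(1)[0][0]) # -> 'O'
###Output
_____no_output_____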
###Markdown
An important step when you want to split your attributes into classes is to convert those classes from strings to integers; this process is called encoding. So we will encode the class column, which will be our Y (output) column.
###Code
data_by = data["class"] #Variable that will hold the "class" column
data_by.head(10)
#Split the class column into two parts: an encoded column and the original categories
data_by_encoded, data_categories = data_by.factorize()
data_by_encoded #Show the encoded column
data_categories #Show the categories
data.info() #It is important to check the dtypes of your dataset; here every attribute is an object.
#That is inconvenient when you want histograms or other graphical analysis of the dataset.
#However, that is not the purpose of this notebook.
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1728 entries, 0 to 1727
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 buying 1728 non-null object
1 maint 1728 non-null object
2 doors 1728 non-null object
3 persons 1728 non-null object
4 lug_boot 1728 non-null object
5 safety 1728 non-null object
6 class 1728 non-null object
dtypes: object(7)
memory usage: 94.6+ KB
###Markdown
Now let's encode several columns at the same time, a very handy step for our KNN model.
###Code
X = data[['buying', 'maint','safety']].values
X
#Convert the columns to numeric values with scikit-learn's LabelEncoder()
#X columns
Le = LabelEncoder()
for i in range (len(X[0])):
X[:,i] = Le.fit_transform(X[:,i])
print(X)
###Output
[[3 3 1]
[3 3 2]
[3 3 0]
...
[1 1 1]
[1 1 2]
[1 1 0]]
###Markdown
Now we can finally feed our attributes to a KNN classifier. To do that we import KNN and then split our dataset into train and test parts, for both X (attributes) and Y (output). If you feel a bit lost in this part, I recommend the earlier notebook on Linear Regression: https://github.com/IuryChagas25/Machine-Learning-Prediction-Heart-Attacks
###Code
knn = neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform')
X_train, X_test, y_train, y_test = train_test_split(X,data_by_encoded, test_size = 0.2) #data_by_encoded is our Y
knn.fit(X_train,y_train) #knn.fit fits the classifier to the training data X and Y
prediction = knn.predict(X_test) #knn.predict uses X_test (20% of our X data) to make predictions
accuracy = metrics.accuracy_score(y_test,prediction) #The true output (y_test) is compared against the prediction,
#which produces an accuracy score
print('Prediction: \n',prediction) #Show the predicted output for the whole test set
print('Accuracy: \n',accuracy) #Show the resulting accuracy
#Visualising the KNN classifier's predictions; the predicted points cluster around a few values,
#so the model is reasonably precise in its estimates, reaching around 71% accuracy
plt.plot(X_test,y_test,'blue')
plt.plot(X_test,prediction,'ro')
###Output
_____no_output_____
###Markdown
Finally, let's pick an index from our dataset and compare its actual value with the KNN prediction.
###Code
aux = 2
print('Valor Atual: ',data_by_encoded[aux])
print('Valor Previsto: ',knn.predict(X)[aux])
###Output
Valor Atual: 0
Valor Previsto: 0
###Markdown
Trivial KNN example with cross-validation (CV)
###Code
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
import numpy as np
#prepare the dataset
X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.]])
y = np.array([0, 1, 2, 3,0, 1, 2, 3,0, 1, 2, 3,0, 1, 2, 3])
neigh = KNeighborsRegressor(n_neighbors=2,n_jobs=-1)
#prepare the list of scores
scores = []
#split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3 ) # setting random_state here would make the split reproducible
#train on the training set and check the score on the test set
reg=neigh.fit(X_train, y_train)
print (X_test)
#prediccion=reg.predict(X_test)
#print (prediccion)
print (y_test)
print (reg.score(X_test,y_test))
###Output
[[-1. -1.]
[ 2. 2.]
[ 0. 0.]
[ 1. 1.]
[-1. -1.]]
[2 3 0 1 2]
1.0
###Markdown
Running CV several times on the previous example
###Code
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
import numpy as np
#prepare the dataset
X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.]])
y = np.array([0, 1, 2, 3,0, 1, 2, 3,0, 1, 2, 3,0, 1, 2, 3])
neigh = KNeighborsRegressor(n_neighbors=2,n_jobs=-1)
#prepare the list of scores
scores = []
for i in range(0,10): # repeat the experiment several times
    #split into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5 ) # setting random_state here would make the split reproducible
    #fit on the training set and record the score (note: this is the training-set score)
    reg=neigh.fit(X_train, y_train)
    scores.append(reg.score(X_train,y_train))
print(scores)
print(np.mean(scores))
###Output
[0.86842105263157898, 0.8666666666666667, 1.0, 0.89873417721518989, 0.96825396825396826, 0.75, 0.91578947368421049, 1.0, 0.97894736842105268, 1.0]
0.924681270687
###Markdown
Plot of the scores for different values of K
###Code
from sklearn.model_selection import validation_curve
import matplotlib.pyplot as plt
#prepare the dataset
X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.],[0., 0.], [1., 1.], [-1., -1.], [2., 2.]])
y = np.array([0, 1, 2, 3,0, 1, 2, 3,0, 1, 2, 3,0, 1, 2, 3])
param_range=range(1,5)
train_scores, test_scores = validation_curve(
KNeighborsRegressor(), X, y, param_name="n_neighbors",param_range=param_range,
cv=2, n_jobs=-1)
train_scores_mean = np.mean(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.title("Validation Curve with KNN")
plt.xlabel("K")
plt.ylabel("Score")
plt.plot(param_range, train_scores_mean, label="Training score",
color="darkorange")
plt.plot(param_range, test_scores_mean, label="Cross-validation score",
color="navy")
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
Training vs CV score
###Code
from sklearn.model_selection import validation_curve
import matplotlib.pyplot as plt
from sklearn.preprocessing import Imputer
from sklearn.neighbors import KNeighborsRegressor
#prepare the dataset
X = list(zip(properati['dist_a_subte'],properati['dist_a_subte']))
y = properati['price_per_m2']
param_range=range(1,10,2)
train_scores, test_scores = validation_curve(
KNeighborsRegressor(), X, y, param_name="n_neighbors",param_range=param_range,
cv=2,scoring="r2" ,n_jobs=-1)
train_scores_mean = np.mean(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.title("Validation Curve with KNN")
plt.xlabel("K")
plt.ylabel("Score")
plt.plot(param_range, train_scores_mean, label="Training score",
color="darkorange",marker="o")
plt.plot(param_range, test_scores_mean, label="Cross-validation score",
color="navy",marker="o")
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
I tried scaling the data as well, and it still does not work
###Code
%%notify
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
#prepare the dataset
X = list(zip(properati['surface_total_in_m2'],\
    properati['surface_covered_in_m2'],properati["property_type"],properati['state_name'],properati['place_name']))
y = properati['price_aprox_usd']
neigh = KNeighborsRegressor(n_jobs=-1)
n_neighbors = np.arange(10,200,10)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2)
param_grid = {"n_neighbors":n_neighbors}
search = GridSearchCV(neigh, param_grid=param_grid ,cv=5) #refit leaves the estimator fitted with the best hyperparameters
start = time()
search.fit(X_train, y_train)
print("GridSearchCV duro %.2f segundos para %d candidatos a hiper-parametros."
% (time() - start, len(search.cv_results_['params'])))
print("")
score.report_single(search.cv_results_)
###Output
GridSearchCV duro 269.70 segundos para 19 candidatos a hiper-parametros.
Puesto: 1
Promedio training score: 0.551 (std: 0.054)
Promedio validation score: 0.552 (std: 0.130)
Promedio fit time: 0.198s
Hiper-parametros: {'n_neighbors': 20}
Puesto: 2
Promedio training score: 0.532 (std: 0.053)
Promedio validation score: 0.549 (std: 0.131)
Promedio fit time: 0.198s
Hiper-parametros: {'n_neighbors': 30}
Puesto: 3
Promedio training score: 0.596 (std: 0.052)
Promedio validation score: 0.546 (std: 0.134)
Promedio fit time: 0.192s
Hiper-parametros: {'n_neighbors': 10}
Puesto: 4
Promedio training score: 0.519 (std: 0.051)
Promedio validation score: 0.543 (std: 0.128)
Promedio fit time: 0.191s
Hiper-parametros: {'n_neighbors': 40}
Puesto: 5
Promedio training score: 0.510 (std: 0.052)
Promedio validation score: 0.540 (std: 0.125)
Promedio fit time: 0.190s
Hiper-parametros: {'n_neighbors': 50}
Puesto: 6
Promedio training score: 0.504 (std: 0.051)
Promedio validation score: 0.537 (std: 0.121)
Promedio fit time: 0.190s
Hiper-parametros: {'n_neighbors': 60}
Puesto: 7
Promedio training score: 0.500 (std: 0.050)
Promedio validation score: 0.534 (std: 0.119)
Promedio fit time: 0.184s
Hiper-parametros: {'n_neighbors': 70}
Puesto: 8
Promedio training score: 0.497 (std: 0.049)
Promedio validation score: 0.530 (std: 0.115)
Promedio fit time: 0.190s
Hiper-parametros: {'n_neighbors': 80}
Puesto: 9
Promedio training score: 0.495 (std: 0.048)
Promedio validation score: 0.527 (std: 0.115)
Promedio fit time: 0.176s
Hiper-parametros: {'n_neighbors': 90}
Puesto: 10
Promedio training score: 0.492 (std: 0.048)
Promedio validation score: 0.526 (std: 0.114)
Promedio fit time: 0.174s
Hiper-parametros: {'n_neighbors': 100}
###Markdown
K-Nearest Neighbors (KNN)KNN is an example of memory based learning (or instance based learning). Instead of training a classifier you simply memorize all of the data and find the K closest examples to the training data. You need some kind of distance metric (this is a hyperparameter). You choose the distance metric based on your application, by default people use Euclidean distance. $$p(y=c \mid x, \mathcal{D}, K) = \frac{1}{K} \sum_{i \in N_K(x,\mathcal{D})} \mathbb{I}(y_i=c)$$where $N_K(x,\mathcal{D})$ are the indices of the K nearest points to N in $\mathcal{D}$ (e.g. $i=\{44, 61, 2\}$), and $\mathbb{I}(e)$ is the indicator function defined as $$ \mathbb{I}(e) = \begin{cases} 1 & \text{if $e$ is true} \\ 0 & \text{if $e$ is false} \end{cases}$$
###Code
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
np.random.seed(42)
%matplotlib inline
X, y = make_blobs(centers=4, n_samples=1000)
print(f'shape of dataset: {X.shape}')
fig = plt.figure(figsize=(8,6))
plt.scatter(X[:,0], X[:,1], c=y)
plt.title("dataset with 4 clusters")
plt.xlabel("first feature")
plt.ylabel("second feature")
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y)
class KNN():
def __init__(self, distance_metric='euclidean'):
assert distance_metric in ['euclidean']
self.distance_metric = distance_metric
def fit(self, X, y):
self.data = X
self.labels = y
def closest_k_distances(self, X, k):
# make all arrays n_examples x n_dimensions (i.e. 2d arrays)
if X.ndim == 1:
X = np.expand_dims(X, axis=0)
n_samples, n_dimensions = X.shape
if self.distance_metric == 'euclidean':
distances = [np.sqrt(np.sum(np.square(self.data - X[i]), axis=1)) for i in range(n_samples)]
# find the k closest points
N_k_list = np.argsort(distances)[:, :k]
return N_k_list
def predict(self, X, k=1):
# find the indices of the k-nearest points N_k
N_k_list = self.closest_k_distances(X, k)
p_list = []
for N_k in N_k_list:
# calculate the predictive distribution over the labels
p = {}
count = 0
for c in set(self.labels):
p_c = np.sum([self.labels[i] == c for i in N_k]) / float(k)
p[str(c)] = p_c
p_list.append(p)
return p_list
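# Quick sanity check of the predictive distribution formula on a tiny 1-D toy set
# (illustrative only, separate from the blobs dataset used below). With K=3 the three
# nearest labels to the query are [0, 0, 1], so we expect roughly {'0': 0.67, '1': 0.33}.
toy_clf = KNN()
toy_clf.fit(np.array([[0.0], [0.1], [0.2], [5.0], [5.1]]), np.array([0, 0, 1, 1, 1]))
print(toy_clf.predict(np.array([[0.05]]), k=3))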
clf = KNN()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test, 100)
accuracy = []
for prediction,label in zip(predictions, y_test):
key_max = max(prediction.keys(), key=(lambda k: prediction[k]))
accuracy.append(int(key_max) == label)
accuracy = np.sum(accuracy) / float(len(accuracy))
print('test accuracy = {}'.format(accuracy))
predictions
y_test[:5]
accuracy
###Output
_____no_output_____
###Markdown
Read CSV and basic data cleaning
###Code
exoplanet = pd.read_csv('Resources/exoplanet_data.csv')
# Drop the null columns where all values are null
exoplanet = exoplanet.dropna(axis='columns', how='all')
# Drop the null rows
exoplanet = exoplanet.dropna()
exoplanet
###Output
_____no_output_____
###Markdown
Select X and y Values
###Code
#assign all columns except koi_disposition to X, koi_disposition to y
X = exoplanet.drop(columns = 'koi_disposition')
y = exoplanet['koi_disposition']
print(X.shape, y.shape)
###Output
(6991, 40) (6991,)
###Markdown
Train Test Split
###Code
#train, test, split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
###Output
_____no_output_____
###Markdown
Pre-processing
###Code
#fit scaled data with MinMax Scaler
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
#transform the scaled data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
#Encode Labels
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
label_encoder.fit(y_train)
encoded_y_train = label_encoder.transform(y_train)
encoded_y_test = label_encoder.transform(y_test)
#One-hot encoding
from keras.utils import to_categorical
y_train_categorical = to_categorical(encoded_y_train)
y_test_categorical = to_categorical(encoded_y_test)
y_train_categorical
###Output
_____no_output_____
###Markdown
Train the model
###Code
## Create a KNN model and fit it to the scaled training data
from sklearn.neighbors import KNeighborsClassifier
# Loop through different k values to see which has the highest accuracy
train_scores = []
test_scores = []
for k in range(1, 20, 2):
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train_scaled, y_train_categorical)
train_score = knn.score(X_train_scaled, y_train_categorical)
test_score = knn.score(X_test_scaled, y_test_categorical)
train_scores.append(train_score)
test_scores.append(test_score)
print(f"k: {k}, Train/Test Score: {train_score:.3f}/{test_score:.3f}")
#plot KNN train and test data
plt.plot(range(1, 20, 2), train_scores, marker='o')
plt.plot(range(1, 20, 2), test_scores, marker="x")
plt.xlabel("k neighbors")
plt.ylabel("Testing accuracy Score")
plt.show()
#Select best K value to fit and score data - visually K=7 appears to be at elbow
knn = KNeighborsClassifier(n_neighbors=7)
knn.fit(X_train_scaled, y_train_categorical)
#print train and test scores
print('k=7 Train Acc: %.3f' % knn.score(X_train_scaled, y_train_categorical))
print('k=7 Test Acc: %.3f' % knn.score(X_test_scaled, y_test_categorical))
###Output
k=7 Train Acc: 0.866
k=7 Test Acc: 0.823
###Markdown
Hyperparameter Tuning
###Code
# Create the GridSearch estimator along with a parameter object containing the values to adjust
from sklearn.model_selection import GridSearchCV
param_grid = {'n_neighbors': [1, 3, 5, 7, 9 , 11, 13, 15, 17, 19],
'weights': ['uniform', 'distance'],
'metric': ['euclidean', 'manhattan']}
grid = GridSearchCV(knn, param_grid, verbose=3)
grid.get_params().keys()
# Fit the model using the grid search
grid.fit(X_train_scaled, y_train_categorical)
# List the best parameters for this dataset
print(grid.best_params_)
print(grid.best_score_)
# Make predictions with the hypertuned model
predictions = grid.predict(X_train_scaled)
print('Train Acc: %.3f' % grid.score(X_train_scaled, y_train_categorical))
print('Test Acc: %.3f' % grid.score(X_test_scaled, y_test_categorical))
import joblib
filename = 's_heavner_knn.sav'
joblib.dump(knn, filename)
###Output
_____no_output_____
###Markdown
Load and clean the data
###Code
filename = path.join(".", "data", "exoplanet_data.csv")
df = pd.read_csv(filename)
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
df.head()
# Use the seven most important features identified in the random forest model
target = df['koi_disposition']
data = df[['koi_fpflag_co', 'koi_fpflag_nt', 'koi_fpflag_ss', 'koi_model_snr', 'koi_prad', 'koi_prad_err2', 'koi_duration_err2']]
data.head()
###Output
_____no_output_____
###Markdown
Split and scale the data
###Code
# Split the data into train/test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, target, train_size=0.8, random_state=12)
# Scale the data
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler().fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Find the best K
###Code
train_scores = []
test_scores = []
for k in range(1, 18, 2):
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train_scaled, y_train)
train_score = knn.score(X_train_scaled, y_train)
test_score = knn.score(X_test_scaled, y_test)
train_scores.append(train_score)
test_scores.append(test_score)
print(f"k: {k}, Train/Test Score: {train_score:.3f}/{test_score:.3f}")
knn = KNeighborsClassifier(n_neighbors=17)
knn.fit(X_train_scaled, y_train)
print('k=17 Test Acc: %.3f' % knn.score(X_test_scaled, y_test))
###Output
k=17 Test Acc: 0.866
###Markdown
Tune the model with GridSearchCV
###Code
# https://medium.com/@erikgreenj/k-neighbors-classifier-with-gridsearchcv-basics-3c445ddeb657
from sklearn.model_selection import GridSearchCV
param_grid = {'n_neighbors': [3,9,31,51],
'weights': ['uniform', 'distance'],
'metric': ['euclidean', 'manhattan']}
# grid = GridSearchCV(KNeighborsClassifier(), param_grid, verbose=1)
grid = GridSearchCV(knn, param_grid, verbose=1)
# note: the grid search and the predictions below use the unscaled features, unlike the k-scan above
gs = grid.fit(X_train, y_train)
gs.best_score_
gs.best_estimator_
gs.best_params_
predictions = gs.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
###Output
precision recall f1-score support
CANDIDATE 0.74 0.66 0.70 339
CONFIRMED 0.68 0.81 0.74 363
FALSE POSITIVE 0.93 0.88 0.91 697
accuracy 0.81 1399
macro avg 0.78 0.78 0.78 1399
weighted avg 0.82 0.81 0.81 1399
###Markdown
name
###Code
name
df = pd.concat([df,name],axis=1)
df.head()
df.drop(['species'],axis = 1,inplace = True)
df.head()
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
x = df[['sepal_length','sepal_width','petal_length','petal_width']]
y = df[['Iris-setosa','Iris-versicolor','Iris-virginica']]
x_train,x_test,y_train,y_test = train_test_split(x,y,random_state =99 , test_size = 0.3)
import math
math.sqrt(len(y_test))
classifier = KNeighborsClassifier(n_neighbors = 7 , p=2,metric = 'euclidean')
classifier.fit(x_train,y_train)
predictions = classifier.predict(x_test)
predictions
accuracy_score(y_test,predictions)*100
###Output
_____no_output_____
###Markdown
###Code
#Importing
import random
from scipy.spatial import distance
def dist(x , y):
# To Calculate the spatial distance between given two points
return distance.euclidean(x, y)
def closest(row):
# Returns the index of the least distance in the row
best_dist = row[0]
best_index = 0
for i in range(1 ,len(row)):
if row[i] < best_dist:
best_dist = row[i]
best_index = i
return best_index
# Classifier
class KNN():
rows_in_distances = []
distances =[]
labels = []
last_Y=[]
def init(self):
pass
def fit(self , Train_X , Train_Y):
# To train the model , literally saves all the training data.
self.X = Train_X
self.Y = Train_Y
def predict(self , Test_X):
# Returns the predictions of the model/classifier
for i in range(len(Test_X)):
for j in range(len(self.X)):
self.rows_in_distances.append( dist ( Test_X[i] , self.X[j] ) )
self.labels.append(closest(self.rows_in_distances))
self.rows_in_distances = []
for i in self.labels:
self.last_Y.append(self.Y[i])
return self.last_Y
#Pipeline
import numpy as np
classifier = KNN()
predictions = []
X_Train = [1, 3, 4, 8, 6, 5]
Y_Train = [0, 0, 0, 1, 1, 1]
X_Test = [2, 7, 5]
Y_Test = [0, 1 , 1]
classifier.fit(X_Train , Y_Train)
predictions = classifier.predict(X_Test)
print("Predictions are :",predictions)
print("Labels are:",Y_Test)
###Output
Predictions are : [0, 1, 1]
Labels are: [0, 1, 1]
Values are: [0, 0, 0, 1, 1, 1]
###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
df = pd.read_csv("/content/drive/MyDrive/dos_dataset/clean_2.csv")
df.info()
df=df.drop(' Source IP',axis=1)
df=df.drop(' Flow Duration',axis=1)
#df=df.drop(' Total Fwd Packets',axis=1)
#df=df.drop(' Total Backward Packets',axis=1)
df=df.drop(' Total Length of Bwd Packets',axis=1)
df=df.drop(' Fwd Packet Length Std',axis=1)
df=df.drop(' Flow IAT Max',axis=1)
df=df.drop(' Flow IAT Min',axis=1)
#df=df.drop('Fwd IAT Total',axis=1)
df=df.drop(' Fwd IAT Max',axis=1)
df=df.drop(' Fwd IAT Min',axis=1)
df=df.drop('Bwd IAT Total',axis=1)
df=df.drop(' Bwd IAT Mean',axis=1)
df=df.drop(' Bwd IAT Std',axis=1)
df=df.drop(' Bwd IAT Max',axis=1)
df=df.drop(' Bwd IAT Min',axis=1)
#df=df.drop(' Fwd Header Length',axis=1)
df=df.drop(' Bwd Header Length',axis=1)
df=df.drop(' Bwd Packets/s',axis=1)
df=df.drop(' SYN Flag Count',axis=1)
df=df.drop(' Down/Up Ratio',axis=1)
df=df.drop(' Fwd Header Length.1',axis=1)
df=df.drop('Subflow Fwd Packets',axis=1)
df=df.drop(' Subflow Bwd Packets',axis=1)
df=df.drop(' Subflow Bwd Bytes',axis=1)
df=df.drop(' act_data_pkt_fwd',axis=1)
df=df.drop(' min_seg_size_forward',axis=1)
df=df.drop('Active Mean',axis=1)
df=df.drop(' Active Std',axis=1)
df=df.drop(' Active Max',axis=1)
df=df.drop(' Active Min',axis=1)
df=df.drop('Idle Mean',axis=1)
df=df.drop(' Idle Max',axis=1)
df=df.drop(' Idle Min',axis=1)
df = df.drop(' Packet Length Std',axis=1)
df = df.drop('Flow Bytes/s',axis=1)
df = df.drop(' Flow Packets/s',axis=1)
df=df.drop('Unnamed: 0',axis=1)
df=df.drop('Unnamed: 0.1',axis=1)
df=df.drop('Unnamed: 0.1.1',axis=1)
df=df.drop(' Source Port',axis=1)
df=df.drop(' Destination IP',axis=1)
df=df.drop(' Destination Port',axis=1)
df.info()
#df=df.drop('Unnamed: 0.1',axis=1)
x=df.iloc[:,df.columns != 'Label']
y=df.iloc[:,-1]
print("x\n",x.info())
y = pd.DataFrame(y)
print('y\n',y.info())
#normalized_df=(df-df.mean())/df.std()
#normalized_x=normalized_df.iloc[:,normalized_df.columns != 'Label']
#y=pd.DataFrame(y)
df.head()
normalized_df=(df-df.mean())/df.std()
y=pd.DataFrame(y)
normalized_x=normalized_df.iloc[:,normalized_df.columns != 'Label']
normalized_df.describe()
normalized_x.info()
y.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1245798 entries, 0 to 1245797
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Label 1245798 non-null int64
dtypes: int64(1)
memory usage: 9.5 MB
###Markdown
KNN Classifier
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
models = [LogisticRegression(), KNeighborsClassifier(n_neighbors=3),MLPClassifier(alpha=0.005),DecisionTreeClassifier()]
classifiers = ["LR", "KNN","MLP","DecisionTree"]
scores = []
#checks whether there are NaN values in the normalizex_x dataframe
#np.any(np.isnan(normalized_x))
#fills NaN values with the mean value
#normalized_x = normalized_x.fillna(X.mean())
from sklearn.model_selection import train_test_split
import numpy as np
X_train, X_test, y_train, y_test = train_test_split(normalized_x, y, test_size=0.25, random_state=42)
y_train = np.array(y_train)
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=8)
model.fit(X_train, y_train)
from sklearn import metrics
y_pred = model.predict(X_test)
print("Accuracy =", metrics.accuracy_score(y_test, y_pred))
#from sklearn import metrics
from sklearn.metrics import f1_score
print('K Nearest Neighbour Classifier')
print('Accuracy = ', metrics.accuracy_score(y_test, y_pred)*100)
print("Confusion Matrix =\n", metrics.confusion_matrix(y_test, y_pred, labels=None,
sample_weight=None))
print("Recall =", metrics.recall_score(y_test, y_pred, labels=None,
pos_label=1, average='weighted',
sample_weight=None))
print("Classification Report =\n", metrics.classification_report(y_test, y_pred,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False))
print("F1 Score = ",f1_score(y_test, y_pred, average='macro'))
for model in models:
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
score = accuracy_score(y_test, y_pred)*100
scores.append(score)
print("Accuracy of the model is: ", score)
conf_matrix = confusion_matrix(y_test,y_pred)
report = classification_report(y_test,y_pred)
print("Confusion Matrix:\n",conf_matrix)
print("Report:\n",report)
print("\n==============***===============")
###Output
_____no_output_____
###Markdown
Naive Bayes
###Code
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X_train, y_train)
pred = model.predict(X_test)
from sklearn import metrics
from sklearn.metrics import f1_score
print('Naive Bayes')
print('Accuracy = ', metrics.accuracy_score(y_test, pred)*100)
print("Confusion Matrix =\n", metrics.confusion_matrix(y_test, y_pred, labels=None,
sample_weight=None))
print("Recall =", metrics.recall_score(y_test, y_pred, labels=None,
pos_label=1, average='weighted',
sample_weight=None))
print("Classification Report =\n", metrics.classification_report(y_test, y_pred,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False))
print("F1 Score = ",f1_score(y_test, y_pred, average='macro'))
###Output
_____no_output_____
###Markdown
KNN (K-Nearest-Neighbors) KNN is a simple concept: define some distance metric between the items in your dataset, and find the K closest items. You can then use those items to predict some property of a test item, by having them somehow "vote" on it.As an example, let's look at the MovieLens data. We'll try to guess the rating of a movie by looking at the 10 movies that are closest to it in terms of genres and popularity.To start, we'll load up every rating in the data set into a Pandas DataFrame:
###Code
import pandas as pd
r_cols = ['user_id', 'movie_id', 'rating']
ratings = pd.read_csv('C:/Users/Lucian-PC/Desktop/DataScience/DataScience-Python3/ml-100k/u.data', sep='\t', names=r_cols, usecols=range(3))
ratings.head()
###Output
_____no_output_____
###Markdown
Now, we'll group everything by movie ID, and compute the total number of ratings (each movie's popularity) and the average rating for every movie:
###Code
import numpy as np
movieProperties = ratings.groupby('movie_id').agg({'rating': [np.size, np.mean]})
movieProperties.head()
###Output
_____no_output_____
###Markdown
The raw number of ratings isn't very useful for computing distances between movies, so we'll create a new DataFrame that contains the normalized number of ratings. So, a value of 0 means nobody rated it, and a value of 1 will mean it's the most popular movie there is.
###Code
movieNumRatings = pd.DataFrame(movieProperties['rating']['size'])
movieNormalizedNumRatings = movieNumRatings.apply(lambda x: (x - np.min(x)) / (np.max(x) - np.min(x)))
movieNormalizedNumRatings.head()
###Output
_____no_output_____
###Markdown
Now, let's get the genre information from the u.item file. The way this works is there are 19 fields, each corresponding to a specific genre - a value of '0' means it is not in that genre, and '1' means it is in that genre. A movie may have more than one genre associated with it.While we're at it, we'll put together everything into one big Python dictionary called movieDict. Each entry will contain the movie name, list of genre values, the normalized popularity score, and the average rating for each movie:
###Code
movieDict = {}
with open(r'C:/Users/Lucian-PC/Desktop/DataScience/DataScience-Python3/ml-100k/u.item', encoding="ISO-8859-1") as f:
    temp = ''
    for line in f:
        # the u.item file is Latin-1 encoded, hence the encoding argument above
fields = line.rstrip('\n').split('|')
movieID = int(fields[0])
name = fields[1]
genres = fields[5:25]
genres = map(int, genres)
movieDict[movieID] = (name, np.array(list(genres)), movieNormalizedNumRatings.loc[movieID].get('size'), movieProperties.loc[movieID].rating.get('mean'))
###Output
_____no_output_____
###Markdown
For example, here's the record we end up with for movie ID 1, "Toy Story":
###Code
print(movieDict[1])
###Output
('Toy Story (1995)', array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 0.77358490566037741, 3.8783185840707963)
###Markdown
Now let's define a function that computes the "distance" between two movies based on how similar their genres are, and how similar their popularity is. Just to make sure it works, we'll compute the distance between movie ID's 2 and 4:
###Code
from scipy import spatial
def ComputeDistance(a, b):
genresA = a[1]
genresB = b[1]
genreDistance = spatial.distance.chebyshev(genresA, genresB)
popularityA = a[2]
popularityB = b[2]
popularityDistance = abs(popularityA - popularityB)
return genreDistance + popularityDistance
ComputeDistance(movieDict[2], movieDict[4])
###Output
_____no_output_____
###Markdown
Remember the higher the distance, the less similar the movies are. Let's check what movies 2 and 4 actually are - and confirm they're not really all that similar:
###Code
print(movieDict[2])
print(movieDict[4])
###Output
('GoldenEye (1995)', array([0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0]), 0.22298456260720412, 3.2061068702290076)
('Get Shorty (1995)', array([0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 0.35677530017152659, 3.5502392344497609)
###Markdown
Now, we just need a little code to compute the distance between some given test movie (Toy Story, in this example) and all of the movies in our data set. We then sort those by distance and print out the K nearest neighbors:
###Code
import operator
def getNeighbors(movieID, K):
distances = []
for movie in movieDict:
if (movie != movieID):
dist = ComputeDistance(movieDict[movieID], movieDict[movie])
distances.append((movie, dist))
distances.sort(key=operator.itemgetter(1))
neighbors = []
for x in range(K):
neighbors.append(distances[x][0])
return neighbors
K = 20
avgRating = 0
neighbors = getNeighbors(1, K)
for neighbor in neighbors:
avgRating += movieDict[neighbor][3]
print (movieDict[neighbor][0] + " " + str(movieDict[neighbor][3]))
avgRating /= K
###Output
Aladdin and the King of Thieves (1996) 2.84615384615
Air Force One (1997) 3.63109048724
Independence Day (ID4) (1996) 3.43822843823
Scream (1996) 3.44142259414
English Patient, The (1996) 3.65696465696
Raiders of the Lost Ark (1981) 4.25238095238
Liar Liar (1997) 3.15670103093
Godfather, The (1972) 4.28329297821
Return of the Jedi (1983) 4.00788954635
Fargo (1996) 4.15551181102
Contact (1997) 3.80353634578
Pulp Fiction (1994) 4.06091370558
Twelve Monkeys (1995) 3.79846938776
Silence of the Lambs, The (1991) 4.28974358974
Jerry Maguire (1996) 3.7109375
Rock, The (1996) 3.69312169312
Empire Strikes Back, The (1980) 4.20652173913
Star Trek: First Contact (1996) 3.6602739726
Back to the Future (1985) 3.83428571429
Titanic (1997) 4.24571428571
###Markdown
While we were at it, we computed the average rating of the K = 20 nearest neighbors to Toy Story:
###Code
avgRating
###Output
_____no_output_____
###Markdown
How does this compare to Toy Story's actual average rating?
###Code
movieDict[1]
###Output
_____no_output_____
###Markdown
Not too bad! Activity Our choice of 20 for K was arbitrary - what effect do different K values have on the results? Our distance metric was also somewhat arbitrary - we just took the Chebyshev distance between the genre vectors and added it to the difference between the normalized popularity scores. Can you improve on that? (One possible starting point is sketched a couple of cells below.)
###Code
# Note: ComputeDistance above uses scipy's Chebyshev metric for the genre vectors.
###Output
_____no_output_____
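###Markdown
As a hedged starting point for the activity above (this cell is not part of the original notebook), the sketch below defines an alternative genre metric using scipy's cosine distance and sweeps a few K values with the existing `getNeighbors`/`movieDict` helpers. `ComputeDistanceCosine` is purely illustrative and is not wired into `getNeighbors`.
###Code
from scipy import spatial

def ComputeDistanceCosine(a, b):
    # Alternative metric: cosine distance between the genre vectors plus the popularity difference.
    genreDistance = spatial.distance.cosine(a[1], b[1])
    popularityDistance = abs(a[2] - b[2])
    return genreDistance + popularityDistance

# Sweep a few K values with the original metric and compare the neighbour-average rating
# against Toy Story's true average rating, movieDict[1][3].
for k_test in (5, 10, 20, 40):
    neighbors_k = getNeighbors(1, k_test)
    avg = sum(movieDict[n][3] for n in neighbors_k) / k_test
    print(k_test, avg, movieDict[1][3])
###Output
_____no_output_____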
###Markdown
Implementation and application of the KNN algorithm for prediction. Function to split the dataset into data used for training and data that will be used to make the predictions
###Code
from sklearn import datasets
import pandas as pd
import random
import numpy as np
import operator
import math
import matplotlib.pyplot as plt
import pylab as pl
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report, confusion_matrix
def init(data, target, split):
x_treino = []
x_test = []
y_treino = []
y_test = []
for i in range(data.shape[0]):
if random.random() < split:
x_treino.append(data[i])
y_treino.append(target[i])
else:
x_test.append(data[i])
y_test.append(target[i])
return x_treino, x_test, y_treino, y_test
###Output
_____no_output_____
###Markdown
Metric used to compute the distance between two instances
###Code
def distancia_euclidiana(instanceA, instanceB):
ans = 0
for i in range(len(instanceA)):
ans += (instanceA[i] - instanceB[i]) ** 2
return math.sqrt(ans)
###Output
_____no_output_____
###Markdown
Function that returns the K nearest neighbors of an instance
###Code
def get_nearest_neighbors(x_treino, y_treino, instance_test, k):
distancias = []
for i in range(len(x_treino)):
dist = distancia_euclidiana(x_treino[i], instance_test)
distancias.append((dist, y_treino[i]))
distancias.sort(key=operator.itemgetter(0))
neighbors = []
for i in range(k):
neighbors.append(distancias[i][1])
return neighbors
def target_k_neighbors(vizinhos):
ans = {}
for i in vizinhos:
if i in ans:
ans[i] += 1
else:
ans[i] = 1
qtd = 0
for i, j in ans.items():
if qtd < j:
qtd = j
best = i
return best
###Output
_____no_output_____
###Markdown
Error rate of a prediction
###Code
def getPrecision(instanceA, instanceB):
erros = 0
for i in range(len(instanceA)):
if instanceA[i] != instanceB[i]:
erros += 1
return erros / len(instanceA)
###Output
_____no_output_____
###Markdown
KNN function. This function returns a vector indicating the prediction made for each test case
###Code
def knn(x_treino, y_treino, x_test, K):
# scaler = StandardScaler()
# scaler.fit(x_treino)
# x_treino = scaler.transform(x_treino)
# x_test = scaler.transform(x_test)
y_pred = []
for i in range(len(x_test)):
vizinhos = get_nearest_neighbors(x_treino, y_treino, x_test[i], K)
ans = target_k_neighbors(vizinhos)
y_pred.append(ans)
return y_pred
###Output
_____no_output_____
###Markdown
Experiments using the IRIS dataset
###Code
K = 5
iris = datasets.load_iris()
x_treino, x_test, y_treino, y_test = init(iris.data, iris.target, 0.6)
y_pred = knn(x_treino, y_treino, x_test, K)
###Output
_____no_output_____
###Markdown
Results of the predictions using approximately 60% of the dataset for training and with parameter K = 5
###Code
for i in range(len(y_pred)):
    print('Original:' + iris.target_names[y_test[i]] + ' Predicted:' + iris.target_names[y_pred[i]])
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
[[18 0 0]
[ 0 16 1]
[ 0 1 20]]
precision recall f1-score support
0 1.00 1.00 1.00 18
1 0.94 0.94 0.94 17
2 0.95 0.95 0.95 21
micro avg 0.96 0.96 0.96 56
macro avg 0.96 0.96 0.96 56
weighted avg 0.96 0.96 0.96 56
###Markdown
Result obtained with approximately 60% of the data for training and parameter K = 5
###Code
error = []
for i in range(1,30):
y_pred = knn(x_treino, y_treino, x_test, i)
ans = getPrecision(y_test, y_pred)
error.append(ans)
plt.figure(figsize=(12, 6))
plt.plot(range(1, 30), error, color='red', linestyle='dashed', marker='o', markerfacecolor='blue', markersize=10)
plt.title('Error Rate K Value')
plt.xlabel('K Value')
plt.ylabel('Mean Error')
plt.show()
###Output
_____no_output_____
###Markdown
The plot above shows the mean error obtained by varying the parameter K between 1 and 30. Experiments using the Boston dataset
###Code
boston = datasets.load_boston()  # load the Boston dataset before inspecting its target
plt.hist(boston.target)
plt.xlabel('Median home value (in $1000s)')
plt.ylabel('Frequency')
plt.title('Distribution of the target of the Boston dataset')
###Output
_____no_output_____
###Markdown
From the histogram above, we can see that the target values of the Boston dataset are real values distributed between 5 and 50, which is a very large range for making predictions with KNN. Therefore, we will split these data into 4 classes. The first class will be the instances with values in the interval (0, 12.5], the second class the instances with values in (12.5, 25.0], the third class (25.0, 37.5], and finally the last class will contain the instances with values in the interval (37.5, 50].
###Code
target = []
classes = [12.5, 25.0, 37.5, 50.0]
for i in range(len(boston.target)):
for j in range(len(classes)):
if boston.target[i] <= classes[j]:
target.append(j)
break
K = 5
boston = datasets.load_boston()
x_treino, x_test, y_treino, y_test = init(boston.data, target, 0.7)
y_pred = knn(x_treino, y_treino, x_test, K)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
[[ 9 6 0 0]
[ 8 84 8 0]
[ 0 13 14 0]
[ 0 9 4 1]]
precision recall f1-score support
0 0.53 0.60 0.56 15
1 0.75 0.84 0.79 100
2 0.54 0.52 0.53 27
3 1.00 0.07 0.13 14
micro avg 0.69 0.69 0.69 156
macro avg 0.70 0.51 0.50 156
weighted avg 0.71 0.69 0.67 156
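###Markdown
As a side note (this cell is not part of the original experiment), the class construction above can also be written in a vectorized way with NumPy. The sketch below assumes the same bin edges and should reproduce the labels built by the loop earlier.
###Code
import numpy as np
# Equivalent, vectorized binning of the Boston target into the 4 classes defined above.
# right=True makes the upper edge of each interval inclusive, matching the `<=` used in the loop.
target_vec = np.digitize(boston.target, bins=[12.5, 25.0, 37.5], right=True)
print(np.array_equal(target_vec, np.array(target)))  # expected: True
###Output
_____no_output_____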
###Markdown
Using all 13 features, 70% of the data for training, and parameter K = 5, the algorithm reached an accuracy of almost 70% when predicting the classes of the median home values. The next step in our experiments is to use only the features that have a linear relationship with the target of the dataset.
###Code
plt.scatter(boston.data[:,0], boston.target, color='black')
plt.xlabel(boston.feature_names[0])
plt.ylabel('Price')
plt.scatter(boston.data[:,2], boston.target, color='black')
plt.xlabel(boston.feature_names[2])
plt.ylabel('Price')
plt.scatter(boston.data[:,4], boston.target, color='black')
plt.xlabel(boston.feature_names[4])
plt.ylabel('Price')
###Output
_____no_output_____
###Markdown
Prediction error rate using 60% of the data for training and with parameter K = 5
###Code
plt.scatter(boston.data[:,5], boston.target, color='black')
plt.xlabel(boston.feature_names[5])
plt.ylabel('Price')
plt.scatter(boston.data[:,7], boston.target, color='black')
plt.xlabel(boston.feature_names[7])
plt.ylabel('Price')
plt.scatter(boston.data[:,10], boston.target, color='black')
plt.xlabel(boston.feature_names[10])
plt.ylabel('Price')
plt.scatter(boston.data[:,2], boston.data[:,4], color='black')
plt.xlabel(boston.feature_names[2])
plt.ylabel(boston.feature_names[4])
###Output
_____no_output_____
###Markdown
Analyzing the scatter plots above, we can see that the crime rate (CRIM), the average number of rooms per dwelling (RM), and the weighted distance of the homes to five Boston employment centers (DIS) are the factors with the highest correlation with home prices, despite some outliers. Therefore, we will now use only these 3 features to predict the classes.
###Code
caracteristicas = [0, 5, 7]
dataset = np.zeros((len(boston.data), len(caracteristicas)))
for i in range(len(boston.data)):
for j in range(len(caracteristicas)):
dataset[i][j] = boston.data[i][caracteristicas[j]];
K = 5
boston = datasets.load_boston()
x_treino, x_test, y_treino, y_test = init(dataset, target, 0.7)
y_pred = knn(x_treino, y_treino, x_test, K)
print(getPrecision(y_pred, y_test))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
###Output
[[ 6 4 0 0]
[ 2 59 1 1]
[ 0 6 18 1]
[ 0 1 0 7]]
precision recall f1-score support
0 0.75 0.60 0.67 10
1 0.84 0.94 0.89 63
2 0.95 0.72 0.82 25
3 0.78 0.88 0.82 8
micro avg 0.85 0.85 0.85 106
macro avg 0.83 0.78 0.80 106
weighted avg 0.85 0.85 0.85 106
###Markdown
Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set. Print both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
###Code
import sklearn
import pandas as pd
from sklearn.datasets import load_iris
iris=load_iris() #load the data
iris.keys()
print(iris.keys())
df=pd.DataFrame(iris['data'])
print(df)
print(iris)
print(iris['target_names'])
print(iris['feature_names'])
iris['target']
len(iris['target'])
###Output
_____no_output_____
###Markdown
Note: Now we need a target and data so that we can train the model. We have to predict the class from the features we have. With this logic, our target is the classes (0, 1, 2) and the data is in df.
###Code
X=df
y=iris['target']
###Output
_____no_output_____
###Markdown
Splitting Data The data is split so that we can train the model on one part and test it on the remaining data to check how well our model performs. To do this we have an inbuilt function in sklearn.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
Note: It will split off 33% of our data as testing data; the remaining data is our training data. KNN Classifier and Training of the Model
###Code
from sklearn.neighbors import KNeighborsClassifier
knn=KNeighborsClassifier(n_neighbors=3)
###Output
_____no_output_____
###Markdown
Note: It implements the concept of KNN. Here we have taken the number of neighbors K = 3. First, it will calculate the distance from the test point to all the training points and then select the three points with the lowest distances. The test data point is then classified into the class most common among those three (a small from-scratch sketch of this mechanic follows the fit cell below).
###Code
knn.fit(X_train,y_train)
###Output
_____no_output_____
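###Markdown
As an illustration of the mechanics described in the note above (this sketch is not part of the original notebook), the same K = 3 prediction can be reproduced by hand with NumPy: compute the distance from a test point to every training point, take the three smallest, and vote. The test point used here mirrors the `x_new` sample from the demo below.
###Code
import numpy as np
from collections import Counter

x_new_demo = np.array([5, 2.9, 1, 0.2])
# Euclidean distances from the test point to every training point
dists = np.sqrt(((np.asarray(X_train) - x_new_demo) ** 2).sum(axis=1))
# Indices of the 3 closest training points, then a majority vote over their labels
nearest = np.argsort(dists)[:3]
print(Counter(np.asarray(y_train)[nearest]).most_common(1)[0][0])
###Output
_____no_output_____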
###Markdown
Note: Training the model with the feature values (data) and target values (target). Prediction and Accuracy. Demo: Here I want to show the prediction by taking just one data point; we have a data point x_new. Note: As you can see in the confusion matrix computed further below, only one prediction is wrong, and our accuracy is 0.98 (98%).
###Code
import numpy as np
x_new=np.array([[5,2.9,1,0.2]])
prediction=knn.predict(x_new)
iris['target_names'][prediction]
print(prediction)
###Output
[0]
###Markdown
Note: As we can see, our point belongs to class 0 (the setosa class); this demo is just for understanding
###Code
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
y_pred=knn.predict(X_test)
cm=confusion_matrix(y_test,y_pred)
print(cm)
print(" correct predicition",accuracy_score(y_test,y_pred))
print(" worng predicition",(1-accuracy_score(y_test,y_pred)))
###Output
[[19 0 0]
[ 0 15 0]
[ 0 1 15]]
correct prediction 0.98
wrong prediction 0.020000000000000018
###Markdown
KNN This notebook contains an implementation of the K-nearest neighbor algorithm using NumPy, with some visualization functionality. Imports, setup, and function definitions
###Code
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict
from sklearn import datasets
# set the default figure size
from IPython.core.pylabtools import figsize
figsize(14, 7)
# fix the random seed
seed = 1004
np.random.seed(seed)
def generate_syntheatic_data(size, n_classes=2, plot=True):
'''
    A function to generate synthetic data to test the learning algorithm.
size: number of samples
'''
redish = '#d73027'
orangeish = '#fc8d59'
blueish = '#4575b4'
colormap = np.array([redish,blueish,orangeish])
X, Y = datasets.make_classification(size, 2, 2, 0, n_classes=n_classes ,random_state=seed, n_clusters_per_class=1, class_sep=1)
if plot:
figure = plt.figure(figsize=(10, 5))
scatter = plt.scatter(X[:, 0], X[:, 1], c=colormap[Y])
return X, Y
n_classes = 3
X, Y = generate_syntheatic_data(500, plot=True, n_classes=n_classes)
def split_valid(X, Y, split_ratio=0.1):
'''
split_valid(X, Y, split_ratio=0.1)
    This function splits the data into a training set and a validation set.
X: inputs features.
Y: target outputs.
    split_ratio: fraction of the dataset assigned to the validation set.
return x_train, y_train, x_valid, y_valid
'''
data_size = X.shape[0]
valid_length = int(data_size * split_ratio)
# shuffle the data before splitting
inds = np.random.choice(range(data_size), data_size, replace=False)
X = X[inds]
Y = Y[inds]
x_valid = X[: valid_length]
y_valid = Y[: valid_length]
x_train = X[valid_length: ]
y_train = Y[valid_length: ]
return x_train, y_train, x_valid, y_valid
# visualize the predictions
# the code is taken from https://www.tvhahn.com/posts/beautiful-plots-decision-boundary/
def visualize(model, x, y):
# define the mesh
x0 = x[:, 0]
x1 = x[:, 1]
PAD = 1.0
x0_min, x0_max = np.round(x0.min())-PAD, np.round(x0.max()+PAD)
x1_min, x1_max = np.round(x1.min())-PAD, np.round(x1.max()+PAD)
# create the mesh points with step size H
H = 0.1 # mesh stepsize
x0_axis_range = np.arange(x0_min,x0_max, H)
x1_axis_range = np.arange(x1_min,x1_max, H)
# create the mesh-grid
xx0, xx1 = np.meshgrid(x0_axis_range, x1_axis_range)
# change the shape of the meshgrid to the same as the data input
xx = np.reshape(np.stack((xx0.ravel(),xx1.ravel()),axis=1),(-1,2))
preds, probs = model.predict(xx, n_classes=n_classes)
# the size of each probability dot
yy_size = np.max(probs, axis=1)
PROB_DOT_SCALE = 40 # modifier to scale the probability dots
PROB_DOT_SCALE_POWER = 3 # exponential used to increase/decrease size of prob dots
TRUE_DOT_SIZE = 50 # size of the true labels
# make figure
plt.style.use('seaborn-whitegrid') # set style because it looks nice
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8,6), dpi=150)
redish = '#d73027'
orangeish = '#fc8d59'
yellowish = '#fee090'
blueish = '#4575b4'
colormap = np.array([redish,blueish,orangeish])
ax.scatter(xx[:,0], xx[:,1], c=colormap[preds], alpha=0.4,
s=PROB_DOT_SCALE*yy_size**PROB_DOT_SCALE_POWER, linewidths=0,)
###Output
_____no_output_____
###Markdown
Model definition, training, and results visualization
###Code
class KNN:
'''
KNN(k)
This class implements K-nearest neighbor algorithm using NumPy.
    k: Number of nearest neighbours to consider.
'''
def __init__(self, k=1):
self.k = k
def fit(self, x, y):
'''
fitting a KNN is just to memorize the data
'''
self.x = x
self.y = y
def predict(self, x, n_classes=2):
assert not (self.x is None and self.y is None), "No data has been passed to the fit function"
data_size = self.x.shape[0]
num_tests = 1 if len(x.shape) == 1 else x.shape[0]
preds = []
preds_probs = []
for j in range(num_tests):
distances = defaultdict(lambda : 0)
for i in range(data_size):
dis = self.distance(x[j], self.x[i])
# print(self.y)
distances[dis] = self.y[i]
sorted_dis = sorted(distances)
top_k = []
for i in range(self.k):
top_k.append(distances[sorted_dis[i]])
top_k = np.array(top_k)
classes_probs = []
for i in range(n_classes):
classes_probs.extend([np.sum(top_k == i) / self.k])
preds_probs.append(classes_probs)
pred = np.argmax(np.bincount(top_k))
preds.append(pred)
preds = np.array(preds)
preds_probs = np.array(preds_probs)
return preds, preds_probs
def distance(self, x1, x2):
return (np.sum((x1 - x2)**2))**0.5
def accuracy(self, x, y, n_classes):
preds, probs = self.predict(x, n_classes)
return 100 * (np.sum(preds == y) / len(y))
# create a KNN model.
model = KNN(1)
# save the data.
model.fit(X, Y)
# fit and calculate the model accuracy.
model.accuracy(X, Y, n_classes)
# Visualize the results
visualize(model, X, Y)
###Output
_____no_output_____
###Markdown
Finding value of k
###Code
# imports added for completeness (they may already be loaded earlier in this notebook)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

x_axis = []
y_axis = []
for i in range(1,26,2):
    clf = KNeighborsClassifier(n_neighbors = i)
    score = cross_val_score(clf, X_train, Y_train)
x_axis.append(i)
y_axis.append(score.mean())
import matplotlib.pyplot as plt
plt.plot(x_axis, y_axis)
plt.show()
## have to find out which k value is better, 7 or 9
###Output
_____no_output_____
###Markdown
cross_val_score
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn import datasets  # imported here for completeness
iris = datasets.load_iris()
clf1 = LinearRegression()
cross_val_score(clf1, iris.data, iris.target, cv = KFold(n_splits=3, shuffle=True, random_state=0))
###Output
_____no_output_____
###Markdown
Algo implementation
###Code
from collections import Counter             # used by predict_one below
from sklearn.metrics import accuracy_score  # used to score the predictions below

def train(x,y): # doesn't do anything: KNN simply memorizes the training data
    return
def predict_one(x_train,y_train,x_row,k):
distances = list()
for j in range(len(x_train)):
distances.append([((x_train[j,:] - x_row) ** 2).sum(),j])
distances = sorted(distances)
target = list()
for i in range(k):
target.append(y_train[distances[i][1]])
return Counter(target).most_common(1)[0][0]
def predict(x_train,y_train,x_data,k):
predictions = list()
for x in x_data:
predictions.append(predict_one(x_train,y_train,x,k))
return predictions
y_pred = predict(X_train,Y_train,X_test,7)
accuracy_score(Y_test,y_pred)
# use of Counter
a = [1,0,1,1,1,1,0, 2]
Counter(a).most_common(1)[0][0]
###Output
_____no_output_____
###Markdown
Image segmentation with k-NN
###Code
import cv2; #import OpenCV – computer vision functions
import numpy as np; #handle arrays/matrices
import matplotlib.pyplot as plt; #for plotting graphs and showing images
import random;
import math;
k=7; #k= 7
orgimg=cv2.imread('pyramid2.jpeg')#training image
img=cv2.cvtColor(orgimg,cv2.COLOR_BGR2RGB);
orgimg_label=cv2.imread('pyramid2_label.jpeg')#training labels
img_label=cv2.cvtColor(orgimg_label,cv2.COLOR_BGR2RGB);
org_test_img=cv2.imread('pyramid1.jpeg')#test image
img_test=cv2.cvtColor(org_test_img,cv2.COLOR_BGR2RGB);
plt.subplot(131);plt.imshow(img);plt.title('training');plt.axis('off')
plt.subplot(132);plt.imshow(img_label);plt.title('labels');plt.axis('off');
plt.subplot(133);plt.imshow(img_test);plt.title('testing');plt.axis('off');
plt.show()
#load the training data
width=img.shape[1];height=img.shape[0];
No_training_samples=100;
training_data=np.zeros([No_training_samples,3]);
training_label=np.zeros(No_training_samples);
for i in range(No_training_samples):
rx=int(random.random()*width);
ry=int(random.random()*height);
training_data[i]=img[ry,rx];
training_label[i]=0;
if (img_label[ry,rx,0]>200): training_label[i]=0;
elif(img_label[ry,rx,1]>200):training_label[i]=1;
else:training_label[i]=2;
def distance(v1, v2):#Euclidean distance between 2 vectors
dist=0.0;
for i in range(len(v1)):
dist += ((v1[i] - v2[i])**2);
return math.sqrt(dist);
def firstvariable(listitem):
return listitem[0];#sort the list based on the 1st variable
def find_nearest_k(training,label,no_training,testdata,k,no_classes):
distlist = list();
for i in range(no_training):
dist=distance(testdata,training[i]);
distlist.append([dist,label[i]]);#add both the dist and the label
distlist.sort(key=firstvariable);#sort the distance list
classvote=np.zeros(no_classes);
    for i in range(k): #find the k-nearest neighbours
classvote[int(distlist[i][1])]+=1;#get the votes for each class
#find the class with the majority of votes
maxclass=-99999; result=0;
for i in range(no_classes):
if (classvote[i]>maxclass):
maxclass=classvote[i];
result=i;
return result;
def kNNSegmentation():#segment the image based on k-NN algorithm
resultimg=img_test.copy();
for y in range(height):
for x in range(width):
label=find_nearest_k(training_data,training_label,No_training_samples,img_test[y,x],k,3);
if (label==0):
resultimg[y,x,0]=255;resultimg[y,x,1]=0;resultimg[y,x,2]=0;
elif (label==1):
resultimg[y,x,1]=255;resultimg[y,x,0]=0;resultimg[y,x,2]=0;
else: resultimg[y,x,2]=255;resultimg[y,x,1]=0;resultimg[y,x,0]=0;
return resultimg;
resultimg=kNNSegmentation();
plt.imshow(img_test);plt.title('origin');plt.axis('off');plt.show();
plt.imshow(resultimg);plt.title('segmented');plt.axis('off');plt.show()
###Output
_____no_output_____
###Markdown
***Import Required Libraries***
###Code
import numpy as np
import struct
from sklearn.decomposition import PCA
from keras.datasets import mnist
###Output
Using TensorFlow backend.
###Markdown
***Load data*** Although we can find the MNIST dataset on Yann LeCun's official site, I chose a more convenient way: loading the dataset from Keras. Also, from the code below, we can see that the MNIST database contains 60,000 training and 10,000 testing images of $28\times28$ greyscale pixels.
###Code
(train_data_ori, train_label), (test_data_ori, test_label) = mnist.load_data()
print ("mnist data loaded")
print ("original training data shape:",train_data_ori.shape)
print ("original testing data shape:",test_data_ori.shape)
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
mnist data loaded
original training data shape: (60000, 28, 28)
original testing data shape: (10000, 28, 28)
###Markdown
For the convenience of training, we linearize each image from $28\times28$ into an array of size $1\times784$, so that the training and test datasets are converted into 2-dimensional arrays of size $60000\times784$ and $10000\times784$, respectively.
###Code
train_data=train_data_ori.reshape(60000,784)
test_data=test_data_ori.reshape(10000,784)
print ("training data shape after reshape:",train_data.shape)
print ("testing data shape after reshape:",test_data.shape)
###Output
training data shape after reshape: (60000, 784)
testing data shape after reshape: (10000, 784)
###Markdown
***Dimension Reduction using PCA*** For this case, the pixels of the image will be the features used to build our predictive model. Running KNN on them means calculating norms in a 784-dimensional space, which is far from easy or efficient. Intuitively, we can perform some dimension reduction before applying KNN and calculating those norms, so that we become more efficient. The dimension reduction technique used here is PCA, as mentioned in the lecture. I don't dig deep into PCA here, and use the APIs from sklearn instead. I reduce the feature space from 784 dimensions to 100 dimensions. Talk is cheap, here's the code.
###Code
pca = PCA(n_components = 100)
pca.fit(train_data) #fit PCA with training data instead of the whole dataset
train_data_pca = pca.transform(train_data)
test_data_pca = pca.transform(test_data)
print("PCA completed with 100 components")
print ("training data shape after PCA:",train_data_pca.shape)
print ("testing data shape after PCA:",test_data_pca.shape)
###Output
PCA completed with 100 components
training data shape after PCA: (60000, 100)
testing data shape after PCA: (10000, 100)
###Markdown
From the result above, we can know that the training and test datasets become two vectors of size $60000\times100$ and $10000\times100$, respectively. At this point, the datasets are ready. ***Code up KNN*** Here's the code to K Nearest Neighbor clustering algorithm. This function takes in the image to cluster, training dataset, training labels, the value of K and the sort of norm to calculate distance(*i.e.* the value of $p$ in $l_p$ norm). Under the most circumstance, we use Euclidean norm to calculate distace, thus $p=2$. This function returns the class most common among the test data's K nearest neighbors, where K is the parameter mentioned above.
###Code
def KNN(test_data1,train_data_pca,train_label,k,p):
subMat = train_data_pca - np.tile(test_data1,(60000,1))
subMat = np.abs(subMat)
distance = subMat**p
distance = np.sum(distance,axis=1)
distance = distance**(1.0/p)
distanceIndex = np.argsort(distance)
classCount = [0,0,0,0,0,0,0,0,0,0]
for i in range(k):
label = train_label[distanceIndex[i]]
classCount[label] = classCount[label] + 1
return np.argmax(classCount)
###Output
_____no_output_____
###Markdown
***Define the test function*** This function takes in the value of K and the value of $p$ in the $l_p$ norm mentioned above, and returns the accuracy of the KNN classifier on the test dataset, along with the confusion matrix.
###Code
def test(k,p):
print("testing with K= %d and lp norm p=%d"%(k,p))
m,n = np.shape(test_data_pca)
correctCount = 0
M = np.zeros((10,10),int)
for i in range(m):
test_data1 = test_data_pca[i,:]
predict_label = KNN(test_data1,train_data_pca,train_label, k, p)
true_label = test_label[i]
M[true_label][predict_label] += 1
# print("predict:%d,true:%d" % (predict_label,true_label))
if true_label == predict_label:
correctCount += 1
print("The accuracy is: %f" % (float(correctCount)/m))
print("Confusion matrix:",M)
###Output
_____no_output_____
###Markdown
***Test result*** Here's the accuracy of the KNN classifier with K=3 and the Euclidean norm, along with the confusion matrix.
###Code
test(3,2)
###Output
testing with K= 3 and lp norm p=2
The accuracy is: 0.973500
Confusion matrix: [[ 974 1 1 0 0 1 2 1 0 0]
[ 0 1131 3 0 0 0 1 0 0 0]
[ 7 4 1004 1 1 0 0 13 2 0]
[ 1 1 4 979 1 9 0 7 5 3]
[ 2 5 0 0 949 0 4 3 0 19]
[ 4 1 0 10 2 865 3 1 2 4]
[ 4 3 0 0 2 4 945 0 0 0]
[ 0 17 6 0 2 0 0 996 0 7]
[ 5 1 4 17 5 10 5 3 921 3]
[ 5 5 2 8 8 2 1 6 1 971]]
###Markdown
**KNN**
###Code
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from sklearn import datasets
iris=datasets.load_iris()
X=iris.data
y=iris.target
from sklearn.model_selection import train_test_split
X_train, X_test,y_train,y_test=train_test_split(X,y,test_size=20)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train=scaler.transform(X_train)
X_test=scaler.transform(X_test)
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(y_test,y_pred))
print(classification_report(y_test,y_pred))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 6
1 0.67 0.40 0.50 5
2 0.73 0.89 0.80 9
accuracy 0.80 20
macro avg 0.80 0.76 0.77 20
weighted avg 0.79 0.80 0.78 20
###Markdown
Cross validation
###Code
seed= 1000
np.random.seed(seed)
X_train, X_test, y_train, y_test = train_test_split(x_standardized_features,y, test_size=0.30)
len(X_train)
###Output
_____no_output_____
###Markdown
Applying KNN model
###Code
from sklearn.neighbors import KNeighborsClassifier
np.random.seed(seed)
KNN = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
KNN.fit(X_train,y_train)
pred = KNN.predict(X_test)
###Output
_____no_output_____
###Markdown
Prediction and evaluation
###Code
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,pred))
print(classification_report(y_test,pred))
###Output
precision recall f1-score support
0 0.93 0.91 0.92 151
1 0.91 0.93 0.92 149
accuracy 0.92 300
macro avg 0.92 0.92 0.92 300
weighted avg 0.92 0.92 0.92 300
###Markdown
Choosing K
###Code
error_rate = []
for i in range(1,50):
KNN = KNeighborsClassifier(n_neighbors=i,metric='euclidean')
KNN.fit(X_train,y_train)
pred_i = KNN.predict(X_test)
error_rate.append(np.mean(pred_i != y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,50),error_rate,color='blue', linestyle='dashed', marker='o',
markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
KNN = KNeighborsClassifier(n_neighbors=1,metric='euclidean')
KNN.fit(X_train,y_train)
pred = KNN.predict(X_test)
print('WITH K=1')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
from sklearn.metrics import classification_report,confusion_matrix
np.random.seed(seed)
KNN = KNeighborsClassifier(n_neighbors=5,metric='euclidean')
KNN.fit(X_train,y_train)
y_pred = KNN.predict(X_test)
print('\n')
print(confusion_matrix(y_test,y_pred))
print('\n')
print(classification_report(y_test,y_pred))
from sklearn import metrics
Scores = []
for k in range(1, 51):
KNN = KNeighborsClassifier(n_neighbors=k,metric='euclidean')
KNN.fit(X_train, y_train)
y_pred = KNN.predict(X_test)
Scores.append(metrics.accuracy_score(y_test, y_pred))
Scores
plt.figure(figsize=(10,8))
plt.plot(range(1, 51), Scores)
plt.xlabel('K Values')
plt.ylabel('Testing Accuracy')
plt.title('K Determination Using KNN', fontsize=20)
###Output
_____no_output_____
###Markdown
K-Nearest Neighbors In the same way that we can classify models as _supervised_ or _unsupervised_, we can classify models as **parametric** or **non-parametric**. Parametric models have a fixed number of parameters and are in general faster, but they make stronger assumptions about the nature of the data and its distribution. Non-parametric models, on the other hand, have a number of variables that grows with the amount of data.

Here we will look at an example of a non-parametric model: a classifier called __K-Nearest Neighbors__ (KNN). Its algorithm is quite simple: compare the new data point $X$ to be classified with the **K** 'closest' points (we still need to define what that means) and assign the most likely class (the class of the majority among the K points compared). Formalizing:

$$p(y=c|x,\mathcal{D},K) = \frac{1}{K}\sum_{i \in N_{K}(x,\mathcal{D})} \mathbb{I}(y_{i}=c)$$

where $N_{K}(x,\mathcal{D})$ gives the indices of the K nearest neighbors of $x$, and $\mathbb{I}$ is the **indicator function**:

$$\mathbb{I}(e)=\left\{ \begin{array}{ll} 1 & \text{if $e$ is true}\\ 0 & \text{if $e$ is false} \end{array} \right.$$

Thus KNN effectively **partitions** the feature space with a granularity of **K**. If K=1, the model has zero training error (since we just return the original training points), but it has very little explanatory value when used. As K grows, the partitions of the space become smoother, until at K=N it becomes a classifier that always guesses the majority class of the data. The choice of K places us somewhere between the minimum and the maximum of generalization.

No Free Lunch Theorem Coined by Wolpert (1996), it states that there is no single model that gives optimal results for every kind of problem. A set of assumptions that works well for one problem may not work well for another (or with other data). Thus, different models are created in response to different real-world problems and data, and different algorithms can be used to train each model, which in turn will show different performance along the **speed-accuracy-precision** dimensions.

Code 1) To compute the proximity between points, I need a distance metric. There are several: Jaccard, City-Block, Cosine... To start, we can use the Euclidean one: for two _feature_ vectors $X=(x_1,x_2,...,x_n)$ and $Y=(y_1,y_2,...,y_n)$

$$ d=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+...+(x_n-y_n)^2}$$

NOTE: Nominal values will not work well with this choice of metric... how can we solve that? (dummy coding, Jaccard; a short one-hot encoding sketch follows the distance-function cell below)
###Code
import math
#####################################################################################
# d = sqrt((a1-b1)^2 + (a2-b2)^2+(a3-b3)^2+(a4-b4)^2)
#####################################################################################
def euclidean_dist(data1, data2):
    # turn [a1,a2,...,an] and [b1,b2,...,bn] into (a1,b1),(a2,b2),...,(an,bn)
    points = zip(data1, data2)
    # squared differences, dimension by dimension
    diffs_squared_distance = [pow(a - b, 2) for (a, b) in points if a is not None and b is not None]
    # return the square root of the sum
    return math.sqrt(sum(diffs_squared_distance))
###Output
_____no_output_____
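###Markdown
As a hedged aside on the dummy-coding question raised above (this cell is not part of the original material): nominal attributes can be expanded into 0/1 indicator columns before applying the Euclidean distance. The sketch below uses pandas.get_dummies on a small made-up column, purely for illustration.
###Code
import pandas as pd

# Hypothetical nominal feature: one-hot ("dummy") encode it so a Euclidean distance makes sense.
nominal = pd.DataFrame({'genre': ['action', 'drama', 'comedy', 'action']})
print(pd.get_dummies(nominal, columns=['genre']))
###Output
_____no_output_____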
###Markdown
2) Now we will create a function that computes the distance between each item in the training set and a new item to be judged
###Code
from operator import itemgetter
#####################################################################################
# look at the neighbours one by one and keep the k closest
#####################################################################################
def get_neighbours(training_set, test_instance, k):
    # compute the distance from the item being judged to every other point
    # distances has the form: [(item1,d1), (item2,d2), ...]
    # where d1 is the distance between item1 and test_instance,
    # d2 is the distance between item2 and test_instance ...and so on
    distances = [_get_tuple_distance(training_instance, test_instance) for training_instance in training_set]
    # sort the list of items from the smallest distance to the largest
    sorted_distances = sorted(distances, key=itemgetter(1))
    # keep only the items (not the distances) once sorted
    sorted_training_instances = [x[0] for x in sorted_distances]
    # return the first k items of the sorted list
    return sorted_training_instances[:k]
#####################################################################################
# apply my distance function between two items.
#####################################################################################
def _get_tuple_distance(training_instance, test_instance):
    return (training_instance, euclidean_dist(test_instance, training_instance[0]))
###Output
_____no_output_____
###Markdown
3) Once we have the nearest neighbours, we need to count the class of each one to know what answer to give
###Code
from collections import Counter
def get_majority_vote(neighbours):
    # I assume here that neighbours has the format
    # [(item, class), (item, class), ...]
classes = [neighbour[1] for neighbour in neighbours]
count = Counter(classes)
return count.most_common()[0][0]
###Output
_____no_output_____
###Markdown
Now let's play with it...
###Code
#############################################################
def run(dataset, item, K):
    neighbours = get_neighbours(dataset, item, K)
    guess = get_majority_vote(neighbours)
    print('I think this guy likes:', guess)
#############################################################
def generate_random_data():
    # note: this helper fills the global `data` list with random items (its call is commented out in main below)
    for _ in range(30):
        item = []
        for i in range(10):
            item.append(random.randint(1,5))
        bucket = ''
        cl = random.randint(0,8)
        if cl < 3: bucket = 'action'
        elif cl < 6: bucket = 'drama'
        else: bucket = 'comedy'
        data.append( (item,bucket) )
    print(data)
#############################################################
import random
if __name__ == '__main__':
# generate_random_data()
K = 5
data = [([4, 2, 5, 3, 3, 3, 5, 3, 4, 2], 'action'),
([5, 3, 4, 2, 2, 5, 5, 2, 3, 2], 'comedy'),
([2, 5, 3, 3, 4, 4, 5, 1, 3, 5], 'action'),
([1, 3, 3, 5, 3, 1, 2, 5, 1, 3], 'action'),
([5, 3, 2, 4, 3, 1, 4, 3, 3, 4], 'drama'),
([5, 5, 1, 3, 1, 3, 3, 4, 3, 3], 'action'),
([1, 2, 3, 3, 2, 3, 2, 3, 5, 4], 'drama'),
([3, 5, 1, 3, 4, 1, 4, 2, 3, 4], 'drama'),
([1, 1, 1, 2, 1, 3, 3, 4, 5, 1], 'comedy'),
([5, 3, 4, 2, 5, 2, 4, 1, 3, 2], 'comedy'),
([4, 2, 3, 5, 1, 3, 1, 5, 3, 5], 'drama'),
([1, 2, 3, 1, 3, 2, 4, 4, 4, 5], 'drama'),
([3, 2, 1, 1, 2, 3, 1, 4, 2, 4], 'comedy'),
([4, 5, 5, 3, 5, 3, 5, 1, 3, 4], 'drama'),
([4, 4, 3, 3, 3, 2, 1, 5, 3, 4], 'comedy'),
([4, 1, 2, 5, 4, 4, 5, 4, 1, 4], 'comedy'),
([2, 2, 1, 3, 1, 5, 1, 3, 5, 1], 'comedy'),
([2, 3, 1, 1, 2, 5, 2, 2, 4, 2], 'comedy'),
([5, 2, 2, 4, 5, 3, 4, 5, 4, 2], 'comedy'),
([1, 1, 4, 4, 2, 2, 4, 4, 3, 1], 'comedy'),
([3, 3, 2, 2, 5, 1, 5, 3, 5, 2], 'comedy'),
([5, 4, 1, 2, 1, 5, 1, 5, 1, 5], 'comedy'),
([4, 1, 5, 5, 1, 3, 1, 5, 4, 1], 'comedy'),
([3, 4, 2, 1, 1, 2, 5, 4, 3, 5], 'action'),
([4, 5, 2, 1, 1, 1, 1, 2, 2, 2], 'drama'),
([3, 3, 1, 5, 1, 1, 5, 2, 1, 2], 'action'),
([1, 5, 2, 4, 1, 2, 1, 2, 3, 2], 'drama'),
([5, 3, 3, 5, 1, 3, 1, 2, 1, 3], 'drama'),
([1, 1, 4, 4, 4, 5, 2, 2, 1, 5], 'action'),
([3, 1, 5, 2, 1, 1, 5, 1, 5, 1], 'drama'),
([4, 2, 3, 4, 3, 2, 5, 4, 1, 3], 'comedy'),
([3, 2, 5, 3, 2, 4, 2, 2, 5, 4], 'drama'),
([1, 3, 1, 2, 5, 4, 2, 4, 4, 3], 'action'),
([4, 3, 4, 5, 1, 2, 2, 1, 1, 2], 'drama'),
([3, 3, 3, 1, 4, 3, 5, 2, 4, 5], 'action'),
([2, 5, 1, 2, 3, 3, 1, 3, 5, 1], 'action'),
([2, 4, 2, 1, 4, 2, 2, 4, 1, 1], 'action'),
([3, 2, 3, 3, 3, 3, 4, 2, 2, 1], 'comedy'),
([2, 5, 1, 5, 2, 5, 1, 1, 4, 5], 'action'),
([5, 2, 4, 1, 2, 5, 5, 3, 3, 4], 'comedy'),
([3, 5, 1, 3, 3, 5, 2, 1, 3, 1], 'action'),
([4, 1, 4, 1, 5, 2, 3, 5, 5, 3], 'drama'),
([3, 4, 2, 2, 4, 2, 1, 4, 1, 5], 'drama'),
([3, 3, 5, 3, 3, 3, 3, 4, 1, 4], 'comedy'),
([2, 3, 2, 1, 3, 1, 3, 2, 1, 4], 'comedy'),
([3, 5, 1, 1, 2, 4, 1, 5, 1, 2], 'comedy'),
([2, 2, 4, 1, 3, 4, 2, 3, 3, 5], 'comedy'),
([5, 3, 4, 5, 1, 5, 2, 4, 1, 1], 'drama'),
([4, 2, 5, 2, 3, 1, 2, 3, 2, 2], 'action'),
([1, 3, 3, 5, 3, 3, 2, 5, 4, 2], 'drama'),
([3, 4, 2, 1, 4, 2, 1, 4, 1, 3], 'drama'),
([3, 1, 3, 4, 5, 5, 5, 2, 1, 3], 'drama'),
([4, 4, 4, 2, 2, 1, 1, 2, 2, 1], 'action'),
([1, 3, 3, 4, 4, 4, 3, 5, 1, 2], 'drama'),
([3, 3, 3, 3, 2, 2, 1, 5, 5, 4], 'comedy'),
([2, 5, 4, 2, 4, 1, 2, 4, 1, 5], 'drama'),
([3, 1, 1, 1, 5, 1, 2, 3, 1, 1], 'action'),
([1, 3, 4, 3, 3, 2, 1, 4, 3, 5], 'action'),
([3, 2, 3, 1, 4, 5, 4, 3, 5, 2], 'action'),
([5, 1, 3, 2, 3, 2, 4, 3, 4, 2], 'action')
]
run(data, [5,5,5,1,1,1,5,1,1,1], K)
###Output
I think this guy likes: action
|
experiments/experiments_qNNC.ipynb | ###Markdown
Set the specification for the model
###Code
# Change version to change the qnnc model
version = 1
# Change to change the dataset
dataset = "iris01"
# model required variable
tot_qubit = 2
output_shape = 2
###Output
_____no_output_____
###Markdown
Training of the model. We train the qNNC model starting from the given starting points.
###Code
model_name = f"qNNC_v{version}"
#function that returns the best parameters for a given model
starting_points = get_params(model_name, dataset, "starting_points", "../results/training/file_result.txt")
###Output
_____no_output_____
###Markdown
Obtain the dataset. The dataset is processed through a PCA in order to use only two highly descriptive features.
###Code
X_train, X_test, Y_train, Y_test = get_dataset(dataset)
###Output
[0.90539269 0.07445563]
98.0% of total variance is explained by 2 principal components
###Markdown
Set the optimizer and the quantum instance
###Code
# imports assumed from qiskit / qiskit-aer (not shown in the original cell); `max_iter` and `seed`
# are assumed to be defined in the project's configuration, which is not shown here.
from qiskit import Aer
from qiskit.utils import QuantumInstance
from qiskit.algorithms.optimizers import COBYLA

optimizer = COBYLA(maxiter=max_iter, tol=0.01, disp=False)
qinstance = QuantumInstance(Aer.get_backend('aer_simulator'), seed_simulator=seed, seed_transpiler=seed, shots=1024)
qinstance.backend.set_option("seed_simulator", seed)
###Output
_____no_output_____
###Markdown
The model Build the model with the chosen parameters.
###Code
# imports assumed from qiskit / qiskit-machine-learning (not shown in the original cell)
from qiskit import QuantumCircuit
from qiskit_machine_learning.neural_networks import CircuitQNN
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier

feature_map, ansatz = get_qNNC(1)
interpreter = parity
qc = QuantumCircuit(tot_qubit)
qc.append(feature_map, range(tot_qubit))
qc.append(ansatz, range(tot_qubit))
objective_func_vals = []
def callback_values(weights, obj_func_eval):
objective_func_vals.append(obj_func_eval)
circuit_qnn = CircuitQNN(circuit=qc,
input_params=feature_map.parameters,
weight_params=ansatz.parameters,
interpret=interpreter,
output_shape=output_shape,
quantum_instance=qinstance)
circuit_classifier = NeuralNetworkClassifier(neural_network=circuit_qnn,
optimizer=optimizer,
callback=callback_values,
warm_start=True,
initial_point = starting_points )
###Output
_____no_output_____
###Markdown
Training
###Code
circuit_classifier.fit(X_train, Y_train)
train_score = circuit_classifier.score(X_train, Y_train)
test_score = circuit_classifier.score(X_test, Y_test)
print(train_score)
print(test_score)
ending_points = circuit_classifier._fit_result[0]
get_params(model_name, dataset, "ending_points", "../results/training/file_result.txt")
ending_points
###Output
_____no_output_____ |
code/ch05/ch05.ipynb | ###Markdown
[Sebastian Raschka](http://sebastianraschka.com), 2015https://github.com/rasbt/python-machine-learning-book Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
###Output
_____no_output_____
###Markdown
Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1-–-separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2-–-separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note** Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as we remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.

My initial typo reflects a common mistake: some people do *not* re-use these parameters from the model training/building and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.

Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):

- train_1: 10 cm -> class_2
- train_2: 20 cm -> class_2
- train_3: 30 cm -> class_1

mean: 20, std.: 8.2

After standardization, the transformed feature values are

- train_std_1: -1.21 -> class_2
- train_std_2: 0 -> class_2
- train_std_3: 1.21 -> class_1

Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:

- new_4: 5 cm -> class ?
- new_5: 6 cm -> class ?
- new_6: 7 cm -> class ?

If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get similar values as before in the training set, and our classifier would classify samples 4 and 5 as class_2 but (probably incorrectly) sample 6 as class_1:

- new_std_4: -1.21 -> class_2
- new_std_5: 0 -> class_2
- new_std_6: 1.21 -> class_1

However, if we use the parameters from the "training set standardization," we get the values:

- new_std_4: -1.83 -> class_2
- new_std_5: -1.71 -> class_2
- new_std_6: -1.59 -> class_2

The values 5 cm, 6 cm, and 7 cm are all lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are lower than every standardized feature in the training set.

---
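The cell below is a small illustrative sketch (it is not part of the original notebook): it reproduces this toy example with `StandardScaler`, fitting on the three training lengths only and re-using those parameters for the new samples.
###Code
# Minimal sketch of the note above (added for illustration, not original notebook code):
# fit the scaler on the training data only, then re-use it for new, unseen data.
from sklearn.preprocessing import StandardScaler

train_lengths = [[10.], [20.], [30.]]
new_lengths = [[5.], [6.], [7.]]

sc_demo = StandardScaler().fit(train_lengths)
print(sc_demo.transform(new_lengths))               # re-uses the training mean/std: approx. -1.83, -1.71, -1.59
print(StandardScaler().fit_transform(new_lengths))  # "from scratch" on the new data: approx. -1.22, 0, 1.22
###Output
_____no_output_____
###Markdown
Eigendecomposition of the covariance matrix.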
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.)
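The next cell is a small illustrative sketch (not part of the original notebook) showing the `eigh` variant on the same covariance matrix.
###Code
# Illustrative sketch: the same decomposition with np.linalg.eigh, which is intended for
# symmetric/Hermitian matrices and returns real eigenvalues (in ascending order).
eigen_vals_h, eigen_vecs_h = np.linalg.eigh(cov_mat)
print(eigen_vals_h)
###Output
_____no_output_____
###Markdown
Total and explained variance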
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
%matplotlib inline
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train==l, 0],
X_train_pca[y_train==l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:,0], X_train_pca[:,1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1,4):
mean_vecs.append(np.mean(X_train_std[y_train==label], axis=0))
print('MV %s: %s\n' %(label, mean_vecs[label-1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label,mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
    for row in X_train_std[y_train == label]:  # use the standardized training data, consistent with mean_vecs
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row-mv).dot((row-mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label,mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train==label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i,mean_vec in enumerate(mean_vecs):
    n = X_train[y_train == i + 1, :].shape[0]  # number of training samples in class i+1
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` is designed to decompose nonsymmetric square matrices and may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0707 -0.3778]
[ 0.0359 -0.2223]
[-0.0263 -0.3813]
[ 0.1875 0.2955]
[-0.0033 0.0143]
[ 0.2328 0.0151]
[-0.7719 0.2149]
[-0.0803 0.0726]
[ 0.0896 0.1767]
[ 0.1815 -0.2909]
[-0.0631 0.2376]
[-0.3794 0.0867]
[-0.3355 -0.586 ]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train==l, 0],
X_train_lda[y_train==l, 1],
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
# Note: the `sklearn.lda` module was removed in later scikit-learn releases; the
# modern equivalent is:
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.lda import LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N,N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_spca[y==0, 0], X_spca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y==1, 0], X_spca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_spca[y==0, 0], X_spca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y==1, 0], X_spca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y==0, 0], np.zeros((500,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y==1, 0], np.zeros((500,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((500,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((500,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N,N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:,-i] for i in range(1,n_components+1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1,n_components+1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new-row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y==0, 0], np.zeros((50)),
color='red', marker='^',alpha=0.5)
plt.scatter(alphas[y==1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black', label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green', label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y==0, 0], X_skernpca[y==0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y==1, 0], X_skernpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
###Output
Sebastian Raschka
last updated: 2016-07-26
CPython 3.5.1
IPython 5.0.0
numpy 1.11.1
scipy 0.17.1
matplotlib 1.5.1
scikit-learn 0.17.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*

Overview

- [Unsupervised dimensionality reduction via principal component analysis](#Unsupervised-dimensionality-reduction-via-principal-component-analysis)
  - [Total and explained variance](#Total-and-explained-variance)
  - [Feature transformation](#Feature-transformation)
  - [Principal component analysis in scikit-learn](#Principal-component-analysis-in-scikit-learn)
- [Supervised data compression via linear discriminant analysis](#Supervised-data-compression-via-linear-discriminant-analysis)
  - [Computing the scatter matrices](#Computing-the-scatter-matrices)
  - [Selecting linear discriminants for the new feature subspace](#Selecting-linear-discriminants-for-the-new-feature-subspace)
  - [Projecting samples onto the new feature space](#Projecting-samples-onto-the-new-feature-space)
  - [LDA via scikit-learn](#LDA-via-scikit-learn)
- [Using kernel principal component analysis for nonlinear mappings](#Using-kernel-principal-component-analysis-for-nonlinear-mappings)
  - [Kernel functions and the kernel trick](#Kernel-functions-and-the-kernel-trick)
  - [Implementing a kernel principal component analysis in Python](#Implementing-a-kernel-principal-component-analysis-in-Python)
  - [Example 1 – separating half-moon shapes](#Example-1-–-separating-half-moon-shapes)
  - [Example 2 – separating concentric circles](#Example-2-–-separating-concentric-circles)
  - [Projecting new data points](#Projecting-new-data-points)
  - [Kernel principal component analysis in scikit-learn](#Kernel-principal-component-analysis-in-scikit-learn)
- [Summary](#Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
# Note: the `sklearn.cross_validation` module was removed in later scikit-learn
# releases; the modern import is:
# from sklearn.model_selection import train_test_split
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---
**Note**

Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you remember from Chapter 3, the correct way is to re-use the parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.

My initial typo reflects a common mistake: some people do *not* re-use these parameters from the model training/building step and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.

Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):

- train_1: 10 cm -> class_2
- train_2: 20 cm -> class_2
- train_3: 30 cm -> class_1

mean: 20, std.: 8.2

After standardization, the transformed feature values are

- train_std_1: -1.21 -> class_2
- train_std_2: 0 -> class_2
- train_std_3: 1.21 -> class_1

Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:

- new_4: 5 cm -> class ?
- new_5: 6 cm -> class ?
- new_6: 7 cm -> class ?

If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we get similar values as before for the training set, and the classifier would then classify new_4 and new_5 as class_2 but (probably incorrectly) assign new_6 to class_1:

- new_std_4: -1.21 -> class_2
- new_std_5: 0 -> class_2
- new_std_6: 1.21 -> class_1

However, if we re-use the parameters from the "training set standardization," we get the values:

- new_std_4: -1.84 -> class_2
- new_std_5: -1.71 -> class_2
- new_std_6: -1.59 -> class_2

The raw values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously, so it only makes sense that their standardized features end up lower than every standardized feature in the training set.

---

Eigendecomposition of the covariance matrix.
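Before running the eigendecomposition, here is a minimal, self-contained sketch of the right and the wrong way to scale new data (added for illustration; the toy numbers mirror the example in the note above):

```python
from sklearn.preprocessing import StandardScaler

train = [[10.], [20.], [30.]]   # toy 1-D training set (lengths in cm)
new = [[5.], [6.], [7.]]        # new, unseen measurements

sc_toy = StandardScaler().fit(train)
print(sc_toy.transform(new).ravel())                 # correct: approx. [-1.84 -1.71 -1.59]
print(StandardScaler().fit_transform(new).ravel())   # wrong:   approx. [-1.22  0.    1.22]
```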
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
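# Added cross-check (a sketch, not part of the original cell): for a symmetric
# matrix such as cov_mat, np.linalg.eigh is the numerically preferable solver;
# it always returns real eigenvalues, sorted in ascending order.
eigh_vals, eigh_vecs = np.linalg.eigh(cov_mat)
assert np.allclose(np.sort(eigh_vals), np.sort(eigen_vals.real))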
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` is designed to decompose nonsymmetric square matrices and may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note**

Depending on which version of NumPy and LAPACK you are using, you may obtain the matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:

```
[[ 0.14669811  0.50417079]
 [-0.24224554  0.24216889]
 [-0.02993442  0.28698484]
 [-0.25519002 -0.06468718]
 [ 0.12079772  0.22995385]
 [ 0.38934455  0.09363991]
 [ 0.42326486  0.01088622]
 [-0.30634956  0.01870216]
 [ 0.30572219  0.03040352]
 [-0.09869191  0.54527081]
```

Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have

$$\Sigma v = \lambda v,$$

where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since

$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
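As a quick numerical illustration (an added sketch, not part of the original notebook), the following verifies on a small symmetric matrix that $v$ and $-v$ satisfy the same eigenvalue equation:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                   # small symmetric example matrix
lam, V = np.linalg.eigh(A)                   # eigenvalues 1 and 3
v = V[:, 0]                                  # an eigenvector for lambda = 1
print(np.allclose(A @ v, lam[0] * v))        # True
print(np.allclose(A @ (-v), lam[0] * (-v)))  # True: -v is an eigenvector, too
```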
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
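For reference (formula added here), the next cell computes $S_W = \sum_{i=1}^{c} S_i$ with $S_i = \sum_{\mathbf{x} \in D_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^T$, where $\mathbf{m}_i$ is the mean vector of class $i$ from the previous cell.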
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: use the covariance matrix, which is the scatter matrix scaled by $1/(n_i - 1)$, since the classes are not equally distributed and the raw scatter matrices of the larger classes would otherwise dominate the sum:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
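For reference (formula added here), the next cell computes $S_B = \sum_{i=1}^{c} n_i (\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^T$, where $\mathbf{m}$ is the overall mean of the standardized training data and $n_i$ is the number of training samples in class $i$.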
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
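As a brief reminder (added here), the discriminant directions are the vectors $\mathbf{w}$ that maximize the ratio $J(\mathbf{w}) = \frac{\mathbf{w}^T S_B \mathbf{w}}{\mathbf{w}^T S_W \mathbf{w}}$; the maximizers are the leading eigenvectors of $S_W^{-1} S_B$, which is why the next cell performs exactly this eigendecomposition.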
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` is designed to decompose nonsymmetric square matrices and may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
# Note: the `sklearn.lda` module was removed in later scikit-learn releases; the
# modern equivalent is:
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.lda import LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
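In outline (summary added here), the function below (i) builds the RBF kernel matrix $K_{ij} = \exp(-\gamma \lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2)$, (ii) centers it in feature space via $K' = K - \mathbf{1}_N K - K \mathbf{1}_N + \mathbf{1}_N K \mathbf{1}_N$, where $\mathbf{1}_N$ is an $N \times N$ matrix with all entries equal to $1/N$, and (iii) returns the eigenvectors of $K'$ belonging to the largest eigenvalues as the projected samples.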
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
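    # Added note: the next line computes, for each component j,
    # sum_i alphas[i, j] * k(x_new, x_i) divided by the corresponding
    # eigenvalue lambdas[j] (the normalization used in this chapter).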
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[-1]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='some point [1.8713, 0.0093]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
/Users/Sebastian/miniconda3/lib/python3.5/site-packages/ipykernel/__main__.py:1: VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 99 but corresponding boolean dimension is 100
if __name__ == '__main__':
/Users/Sebastian/miniconda3/lib/python3.5/site-packages/ipykernel/__main__.py:3: VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 99 but corresponding boolean dimension is 100
app.launch_new_instance()
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2018-07-02
numpy 1.14.5
scipy 1.1.0
matplotlib 2.2.2
sklearn 0.19.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.*

Overview

- [Unsupervised dimensionality reduction via principal component analysis](#Unsupervised-dimensionality-reduction-via-principal-component-analysis)
  - [The main steps behind principal component analysis](#The-main-steps-behind-principal-component-analysis)
  - [Extracting the principal components step-by-step](#Extracting-the-principal-components-step-by-step)
  - [Total and explained variance](#Total-and-explained-variance)
  - [Feature transformation](#Feature-transformation)
  - [Principal component analysis in scikit-learn](#Principal-component-analysis-in-scikit-learn)
- [Supervised data compression via linear discriminant analysis](#Supervised-data-compression-via-linear-discriminant-analysis)
  - [Principal component analysis versus linear discriminant analysis](#Principal-component-analysis-versus-linear-discriminant-analysis)
  - [The inner workings of linear discriminant analysis](#The-inner-workings-of-linear-discriminant-analysis)
  - [Computing the scatter matrices](#Computing-the-scatter-matrices)
  - [Selecting linear discriminants for the new feature subspace](#Selecting-linear-discriminants-for-the-new-feature-subspace)
  - [Projecting samples onto the new feature space](#Projecting-samples-onto-the-new-feature-space)
  - [LDA via scikit-learn](#LDA-via-scikit-learn)
- [Using kernel principal component analysis for nonlinear mappings](#Using-kernel-principal-component-analysis-for-nonlinear-mappings)
  - [Kernel functions and the kernel trick](#Kernel-functions-and-the-kernel-trick)
  - [Implementing a kernel principal component analysis in Python](#Implementing-a-kernel-principal-component-analysis-in-Python)
  - [Example 1 – separating half-moon shapes](#Example-1:-Separating-half-moon-shapes)
  - [Example 2 – separating concentric circles](#Example-2:-Separating-concentric-circles)
  - [Projecting new data points](#Projecting-new-data-points)
  - [Kernel principal component analysis in scikit-learn](#Kernel-principal-component-analysis-in-scikit-learn)
- [Summary](#Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis The main steps behind principal component analysis
###Code
Image(filename='images/05_01.png', width=400)
###Output
_____no_output_____
###Markdown
Extracting the principal components step-by-step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# if the Wine dataset is temporarily unavailable from the
# UCI machine learning repository, un-comment the following line
# of code to load the dataset from a local path:
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---
**Note**

Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you remember from Chapter 3, the correct way is to re-use the parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.

My initial typo reflects a common mistake: some people do *not* re-use these parameters from the model training/building step and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.

Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):

- train_1: 10 cm -> class_2
- train_2: 20 cm -> class_2
- train_3: 30 cm -> class_1

mean: 20, std.: 8.2

After standardization, the transformed feature values are

- train_std_1: -1.21 -> class_2
- train_std_2: 0 -> class_2
- train_std_3: 1.21 -> class_1

Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:

- new_4: 5 cm -> class ?
- new_5: 6 cm -> class ?
- new_6: 7 cm -> class ?

If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we get similar values as before for the training set, and the classifier would then classify new_4 and new_5 as class_2 but (probably incorrectly) assign new_6 to class_1:

- new_std_4: -1.21 -> class_2
- new_std_5: 0 -> class_2
- new_std_6: 1.21 -> class_1

However, if we re-use the parameters from the "training set standardization," we get the values:

- new_std_4: -1.84 -> class_2
- new_std_5: -1.71 -> class_2
- new_std_6: -1.59 -> class_2

The raw values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously, so it only makes sense that their standardized features end up lower than every standardized feature in the training set.

---

Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` is designed to decompose nonsymmetric square matrices and may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_02.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.13724218 0.50303478]
[ 0.24724326 0.16487119]
[-0.02545159 0.24456476]
[ 0.20694508 -0.11352904]
[-0.15436582 0.28974518]
[-0.39376952 0.05080104]
[-0.41735106 -0.02287338]
[ 0.30572896 0.09048885]
[-0.30668347 0.00835233]
[ 0.07554066 0.54977581]
[-0.32613263 -0.20716433]
[-0.36861022 -0.24902536]
[-0.29669651 0.38022942]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue,then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma \cdot (-v) = -\Sigma v = -\lambda v = \lambda \cdot (-v).$$
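If reproducible signs are desired (an added aside, not from the book), one simple convention is to flip each eigenvector so that its largest-magnitude entry is positive; this only changes signs, so the projected data are unchanged up to a mirror flip of the affected axes:

```python
import numpy as np

def align_signs(w):
    """Flip each column of w so that its largest-magnitude entry is positive."""
    w = w.copy()
    for j in range(w.shape[1]):
        if w[np.argmax(np.abs(w[:, j])), j] < 0:
            w[:, j] *= -1
    return w

# usage (assumes the projection matrix `w` from the previous cell):
# w = align_signs(w)
```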
###Code
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_03.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **NOTE** The following four code cells have been added in addition to the content of the book, to illustrate how to replicate the results of our own PCA implementation in scikit-learn:
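In addition (an added sketch, not from the book; it assumes the projection matrix `w` and `X_train_std` from the previous cells are still in memory), one way to confirm the equivalence directly is to compare `pca.components_` with the manually constructed eigenvectors, which should agree up to sign:

```python
import numpy as np
from sklearn.decomposition import PCA

pca_check = PCA(n_components=2).fit(X_train_std)
for j in range(2):
    match = (np.allclose(pca_check.components_[j], w[:, j]) or
             np.allclose(pca_check.components_[j], -w[:, j]))
    print('PC %d matches the manual eigenvector up to sign: %s' % (j + 1, match))
```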
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_04.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_05.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Principal component analysis versus linear discriminant analysis
###Code
Image(filename='images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
The inner workings of linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: use the covariance matrix, which is the scatter matrix scaled by $1/(n_i - 1)$, since the classes are not equally distributed and the raw scatter matrices of the larger classes would otherwise dominate the sum:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` is designed to decompose nonsymmetric square matrices and may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in descending order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_07.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.1481 -0.4092]
[ 0.0908 -0.1577]
[-0.0168 -0.3537]
[ 0.1484 0.3223]
[-0.0163 -0.0817]
[ 0.1913 0.0842]
[-0.7338 0.2823]
[-0.075 -0.0102]
[ 0.0018 0.0907]
[ 0.294 -0.2152]
[-0.0328 0.2747]
[-0.3547 -0.0124]
[-0.3915 -0.5958]]
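###Markdown
A small optional check (not in the book): for $c$ classes the between-class scatter has rank at most $c - 1$, so with three wine classes only the first two eigenvalues are non-zero (up to floating-point noise), which is why $W$ keeps only the two leading eigenvectors. A sketch, assuming `eigen_vals` from above.
###Code
import numpy as np

# Count eigenvalues that are not numerically zero; at most c - 1 = 2 for 3 classes.
tol = 1e-10 * np.abs(eigen_vals.real).max()
n_informative = int(np.sum(np.abs(eigen_vals.real) > tol))
# n_informative should be 2 here.
###Output
_____no_output_____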
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
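# Note (added): the factor of -1 in the scatter call below only mirrors one axis
# of the plot (presumably to match the orientation of the book figure).
# Eigenvectors are defined only up to sign, so this does not change the result.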
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('images/05_08.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_09.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_10.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
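    # (Added note) This centering is the kernel-matrix form of mean removal:
    # the implicitly mapped features are not guaranteed to be zero-mean in the
    # kernel feature space, and PCA assumes centered data.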
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, i]
for i in range(n_components)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('images/05_12.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_13.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_14.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('images/05_15.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_16.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_17.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, i]
for i in range(n_components)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('images/05_18.png', dpi=300)
plt.show()
###Output
_____no_output_____
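###Markdown
A small usage sketch (not in the book): `project_x` maps one point at a time, so a batch of new points can be projected with a simple comprehension. It assumes `X`, `alphas`, `lambdas`, and `project_x` from the cells above.
###Code
import numpy as np

# Project a handful of points as if they were new, unseen data.
new_points = X[:5]
projections = np.array([project_x(p, X, gamma=15, alphas=alphas, lambdas=lambdas)
                        for p in new_points])
# `projections` has shape (5, 1): one value per point for the single component kept.
###Output
_____no_output_____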
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('images/05_19.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch05.ipynb --output ch05.py
###Output
[NbConvertApp] Converting notebook ch05.ipynb to script
[NbConvertApp] Writing 27741 bytes to ch05.py
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2016-09-29
CPython 3.5.2
IPython 5.1.0
numpy 1.11.1
scipy 0.18.1
matplotlib 1.5.1
sklearn 0.18
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1-–-separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2-–-separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
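###Markdown
A small toy sketch (not part of the original notebook) of the point made in the note below: "new" measurements must be scaled with the training parameters, not re-standardized from scratch. It uses the 10/20/30 cm example discussed in the note.
###Code
import numpy as np
from sklearn.preprocessing import StandardScaler

train_lengths = np.array([[10.], [20.], [30.]])   # toy training feature (cm)
new_lengths = np.array([[5.], [6.], [7.]])        # toy "new, unseen" data (cm)

sc_toy = StandardScaler().fit(train_lengths)      # learn mean/std on training data only
correct = sc_toy.transform(new_lengths)           # re-use the training parameters
from_scratch = StandardScaler().fit_transform(new_lengths)  # common mistake
# `correct` places all new points far below the training range (around -1.6 to -1.8),
# while `from_scratch` maps them back onto the -1.2 ... 1.2 scale of the training
# data, which can change the predicted class.
###Output
_____no_output_____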
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. However, as you remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data. My initial typo reflects a common mistake: some people are *not* re-using these parameters from the model training/building and standardize the new data "from scratch." Here's a simple example to explain why this is a problem. Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1mean: 20, std.: 8.2After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ?If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and the classifier would assign only samples 4 and 5 to class_2 (and, probably incorrectly, sample 6 to class_1):- new_std_4: -1.21 -> class_2- new_std_5: 0 -> class_2- new_std_6: 1.21 -> class_1However, if we use the parameters from the "training set standardization," we'd get the values:- new_std_4: -1.83 -> class_2- new_std_5: -1.71 -> class_2- new_std_6: -1.59 -> class_2The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
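###Markdown
A brief aside (not in the original notebook): since the covariance matrix is symmetric, `numpy.linalg.eigh` can be used instead of `numpy.linalg.eig`; the note below discusses why this is preferable. The next cell is a minimal sketch comparing the two on `cov_mat`.
###Code
import numpy as np

# eigh is designed for symmetric (Hermitian) matrices: it always returns real
# eigenvalues and gives them back in ascending order.
eigen_vals_h, eigen_vecs_h = np.linalg.eigh(cov_mat)
# Same spectrum as np.linalg.eig above, up to ordering and floating-point error.
assert np.allclose(np.sort(eigen_vals_h), np.sort(eigen_vals.real))
###Output
_____no_output_____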
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). `numpy.linalg.eigh` always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:```[[ 0.14669811 0.50417079][-0.24224554 0.24216889][-0.02993442 0.28698484][-0.25519002 -0.06468718][ 0.12079772 0.22995385][ 0.38934455 0.09363991][ 0.42326486 0.01088622][-0.30634956 0.01870216][ 0.30572219 0.03040352][-0.09869191 0.54527081]```Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
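###Markdown
A small numerical illustration of the statement above (not part of the original notebook), assuming `cov_mat`, `eigen_vals`, and `eigen_vecs` from the earlier eigendecomposition cell.
###Code
import numpy as np

# Both v and -v satisfy the eigenvector equation for the same eigenvalue,
# so a sign flip in the columns of W is harmless.
v, lam = eigen_vecs[:, 0], eigen_vals[0]
assert np.allclose(cov_mat.dot(v), lam * v)
assert np.allclose(cov_mat.dot(-v), lam * (-v))
###Output
_____no_output_____
###Markdown
With $W$ in hand, the standardized training data can be projected onto the two leading principal components: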
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to solve the eigenvalue problem. >>> eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B)) Note that, unlike the covariance matrix in the PCA section, the matrix $S_W^{-1}S_B$ is in general not symmetric, so [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html), which is designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix) and always returns real eigenvalues, cannot be applied to it directly. It could, however, be applied to the equivalent generalized symmetric problem $S_B v = \lambda S_W v$ (for example, via `scipy.linalg.eigh(S_B, S_W)`), which is numerically more stable; the less stable `np.linalg.eig` can decompose nonsymmetric square matrices but may return eigenvalues with small complex components in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
            label='original projection of point X[-1]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
            label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
            label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='some point [1.8713, 0.0093]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
            label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
###Output
Sebastian Raschka
last updated: 2016-03-25
CPython 3.5.1
IPython 4.0.3
numpy 1.10.4
scipy 0.17.0
matplotlib 1.5.1
scikit-learn 0.17.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1-–-separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2-–-separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. However, as you remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data. My initial typo reflects a common mistake: some people are *not* re-using these parameters from the model training/building and standardize the new data "from scratch." Here's a simple example to explain why this is a problem. Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1mean: 20, std.: 8.2After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ?If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and the classifier would assign only samples 4 and 5 to class_2 (and, probably incorrectly, sample 6 to class_1):- new_std_4: -1.21 -> class_2- new_std_5: 0 -> class_2- new_std_6: 1.21 -> class_1However, if we use the parameters from the "training set standardization," we'd get the values:- new_std_4: -1.83 -> class_2- new_std_5: -1.71 -> class_2- new_std_6: -1.59 -> class_2The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.33051429 0.08414846 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). `numpy.linalg.eigh` always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.14669811 0.50417079]
[ 0.24224554 0.24216889]
[ 0.02993442 0.28698484]
[ 0.25519002 -0.06468718]
[-0.12079772 0.22995385]
[-0.38934455 0.09363991]
[-0.42326486 0.01088622]
[ 0.30634956 0.01870216]
[-0.30572219 0.03040352]
[ 0.09869191 0.54527081]
[-0.30032535 -0.27924322]
[-0.36821154 -0.174365 ]
[-0.29259713 0.36315461]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:```[[ 0.14669811 0.50417079][-0.24224554 0.24216889][-0.02993442 0.28698484][-0.25519002 -0.06468718][ 0.12079772 0.22995385][ 0.38934455 0.09363991][ 0.42326486 0.01088622][-0.30634956 0.01870216][ 0.30572219 0.03040352][-0.09869191 0.54527081]```Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to solve the eigenvalue problem. >>> eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B)) Note that, unlike the covariance matrix in the PCA section, the matrix $S_W^{-1}S_B$ is in general not symmetric, so [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html), which is designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix) and always returns real eigenvalues, cannot be applied to it directly. It could, however, be applied to the equivalent generalized symmetric problem $S_B v = \lambda S_W v$ (for example, via `scipy.linalg.eigh(S_B, S_W)`), which is numerically more stable; the less stable `np.linalg.eig` can decompose nonsymmetric square matrices but may return eigenvalues with small complex components in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.0662 -0.3797]
[-0.0386 -0.2206]
[ 0.0217 -0.3816]
[-0.184 0.3018]
[ 0.0034 0.0141]
[-0.2326 0.0234]
[ 0.7747 0.1869]
[ 0.0811 0.0696]
[-0.0875 0.1796]
[-0.185 -0.284 ]
[ 0.066 0.2349]
[ 0.3805 0.073 ]
[ 0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
# sklearn.lda was removed in scikit-learn 0.19; use the discriminant_analysis module instead
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
    # use a list comprehension -- newer NumPy versions reject generators here
    X_pc = np.column_stack([eigvecs[:, -i]
                            for i in range(1, n_components + 1)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
    # use a list comprehension -- newer NumPy versions reject generators here
    alphas = np.column_stack([eigvecs[:, -i]
                              for i in range(1, n_components + 1)])
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Chapter 5. Compressing Data via Dimensionality Reduction **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it on Google Colab (colab.research.google.com) through the links below.** View in the Jupyter notebook viewer / Run on Google Colab `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the following cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,scipy,matplotlib,sklearn
###Output
last updated: 2019-12-29
numpy 1.16.3
scipy 1.4.1
matplotlib 3.0.3
sklearn 0.22
###Markdown
Unsupervised dimensionality reduction via principal component analysis Extracting the principal components step by step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# If you cannot download the Wine dataset from the UCI Machine Learning Repository,
# uncomment the following line and load the dataset from a local path.
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Projection matrix W:\n', w)
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **Note** The next four cells are not in the book. They were added to reproduce the results of the preceding PCA implementation with scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
    # set up the marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
    # plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
    # plot the class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap.colors[idx],
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training a logistic regression classifier using the first two principal components.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13  # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: use the covariance matrix, since the classes are not equally distributed:
###Code
print('Class label distribution: %s'
      % np.bincount(y_train)[1:])
d = 13  # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T, bias=True)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
                                                     S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
mean_overall = mean_overall.reshape(d, 1)  # make a column vector
d = 13  # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
    mean_vec = mean_vec.reshape(d, 1)  # make a column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
Sort the eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples.
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low.
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues.
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.1484 -0.4093]
[ 0.091 -0.1583]
[-0.0168 -0.3536]
[ 0.1487 0.322 ]
[-0.0165 -0.0813]
[ 0.1912 0.0841]
[-0.7333 0.2828]
[-0.0751 -0.0099]
[ 0.002 0.0902]
[ 0.2953 -0.2168]
[-0.0327 0.274 ]
[-0.3539 -0.0133]
[-0.3918 -0.5954]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Translator's note
###Code
y_uniq, y_count = np.unique(y_train, return_counts=True)
priors = y_count / X_train_std.shape[0]
priors
###Output
_____no_output_____
###Markdown
$\sigma_{jk} = \frac{1}{n} \sum_{i=1}^n (x_j^{(i)}-\mu_j)(x_k^{(i)}-\mu_k)$$m = \sum_{i=1}^c \frac{n_i}{n} m_i$$S_W = \sum_{i=1}^c \frac{n_i}{n} S_i = \sum_{i=1}^c \frac{n_i}{n} \Sigma_i$
###Code
s_w = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, label in enumerate(y_uniq):
    # Use bias=True to get the covariance matrix normalized by n (instead of n - 1).
s_w += priors[i] * np.cov(X_train_std[y_train == label].T, bias=True)
###Output
_____no_output_____
###Markdown
$ S_B = S_T-S_W = \sum_{i=1}^{c}\frac{n_i}{n}(m_i-m)(m_i-m)^T $
###Code
s_b = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, mean_vec in enumerate(mean_vecs):
n = X_train_std[y_train == i + 1].shape[0]
mean_vec = mean_vec.reshape(-1, 1)
s_b += priors[i] * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
import scipy
ei_val, ei_vec = scipy.linalg.eigh(s_b, s_w)
ei_vec = ei_vec[:, np.argsort(ei_val)[::-1]]
ei_vec /= np.linalg.norm(ei_vec, axis=0)
lda_eigen = LDA(solver='eigen')
lda_eigen.fit(X_train_std, y_train)
# The within-class scatter matrix is stored in the covariance_ attribute.
np.allclose(s_w, lda_eigen.covariance_)
Sb = np.cov(X_train_std.T, bias=True) - lda_eigen.covariance_
np.allclose(Sb, s_b)
np.allclose(lda_eigen.scalings_[:, :2], ei_vec[:, :2])
np.allclose(lda_eigen.transform(X_test_std), np.dot(X_test_std, ei_vec[:, :2]))
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
    RBF kernel PCA implementation.
    Parameters
    ------------
    X: {NumPy ndarray}, shape = [n_samples, n_features]
    gamma: float
        Tuning parameter of the RBF kernel
    n_components: int
        Number of principal components to return
    Returns
    ------------
    X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
        Projected dataset
"""
    # Calculate pairwise squared Euclidean distances in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
    # Convert the pairwise distances into a square symmetric matrix.
mat_sq_dists = squareform(sq_dists)
    # Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
    # Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
    # Obtain the eigenvalues and eigenvectors of the centered kernel matrix.
    # scipy.linalg.eigh returns them in ascending order.
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Collect the top k eigenvectors (the projected samples).
X_pc = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
/home/haesun/anaconda3/envs/python-ml/lib/python3.7/site-packages/ipykernel_launcher.py:33: DeprecationWarning: scipy.exp is deprecated and will be removed in SciPy 2.0.0, use numpy.exp instead
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
/home/haesun/anaconda3/envs/python-ml/lib/python3.7/site-packages/ipykernel_launcher.py:33: DeprecationWarning: scipy.exp is deprecated and will be removed in SciPy 2.0.0, use numpy.exp instead
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
    RBF kernel PCA implementation.
    Parameters
    ------------
    X: {NumPy ndarray}, shape = [n_samples, n_features]
    gamma: float
        Tuning parameter of the RBF kernel
    n_components: int
        Number of principal components to return
    Returns
    ------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
        Projected dataset
    lambdas: list
        Eigenvalues
"""
    # Calculate pairwise squared Euclidean distances in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
    # Convert the pairwise distances into a square symmetric matrix.
mat_sq_dists = squareform(sq_dists)
    # Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
    # Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
    # Obtain the eigenvalues and eigenvectors of the centered kernel matrix.
    # scipy.linalg.eigh returns them in ascending order.
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Collect the top k eigenvectors (the projected samples).
alphas = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
    # Collect the corresponding eigenvalues.
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25]  # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" data point
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015-2017 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2017-03-10
numpy 1.12.0
scipy 0.18.1
matplotlib 2.0.0
sklearn 0.18.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis](Unsupervised-dimensionality-reduction-via-principal-component-analysis) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1:-Separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2:-Separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note: If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data). Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note** Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as we remember from Chapter 3, the correct way is to re-use the parameters from the training set for any kind of transformation -- the test set should basically stand for "new, unseen" data. My initial typo reflects a common mistake: some people do *not* re-use these parameters from model training/building and standardize the new data "from scratch." Here's a simple example to explain why this is a problem. Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1 mean: 20, std.: 8.2 After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1 Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ? If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean on the new data, we would get similar values as before in the training set, and the classifier would classify samples 4 and 5 as class_2 and (most likely incorrectly) sample 6 as class_1:- new_std_4: -1.21 -> class_2- new_std_5: 0 -> class_2- new_std_6: 1.21 -> class_1 However, if we use the parameters from the "training set standardization," we'd get the values:- new_std_4: -1.83 -> class_2- new_std_5: -1.71 -> class_2- new_std_6: -1.59 -> class_2 The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
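Before moving on to the eigendecomposition, here is a minimal sketch of the two standardization approaches on the toy "length" numbers from the note above (the printed values are approximate):
```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[10.0], [20.0], [30.0]])  # training lengths in cm
new = np.array([[5.0], [6.0], [7.0]])       # "new, unseen" lengths in cm

sc = StandardScaler()
train_std = sc.fit_transform(train)         # fit mean/std on the training data only

# Correct: re-use the training parameters for the new data
new_std_correct = sc.transform(new)

# Incorrect: re-fitting on the new data re-centers it around its own mean
new_std_wrong = StandardScaler().fit_transform(new)

print(new_std_correct.ravel())  # approx. [-1.84 -1.71 -1.59], below everything seen in training
print(new_std_wrong.ravel())    # approx. [-1.22  0.    1.22], hiding how small these lengths are
```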
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` can also decompose nonsymmetric square matrices, so you may find that it returns complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note** Depending on which version of NumPy and LAPACK you are using, you may obtain the matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:```[[ 0.14669811 0.50417079][-0.24224554 0.24216889][-0.02993442 0.28698484][-0.25519002 -0.06468718][ 0.12079772 0.22995385][ 0.38934455 0.09363991][ 0.42326486 0.01088622][-0.30634956 0.01870216][ 0.30572219 0.03040352][-0.09869191 0.54527081]```Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
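A quick numerical way to convince yourself of this sign ambiguity is to check the eigenvector equation for both $v$ and $-v$ (a small sketch on a random covariance matrix, not the Wine data):
```python
import numpy as np

rng = np.random.RandomState(0)
cov = np.cov(rng.randn(50, 3).T)   # any symmetric matrix works for this check
lam, vecs = np.linalg.eigh(cov)

v = vecs[:, 0]
# Both v and -v satisfy the eigenvector equation for the same eigenvalue,
# so either sign is an equally valid principal axis.
print(np.allclose(cov @ v, lam[0] * v))        # True
print(np.allclose(cov @ (-v), lam[0] * (-v)))  # True
```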
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: use the covariance matrix, since the classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig` can also decompose nonsymmetric square matrices, so you may find that it returns complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
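As a side note before sorting: since $S_W$ and $S_B$ are both symmetric, `scipy.linalg.eigh` can also solve the generalized symmetric eigenvalue problem $S_B w = \lambda S_W w$ directly, without forming $S_W^{-1}S_B$. A small sketch, assuming the `S_W` and `S_B` computed above:
```python
from scipy.linalg import eigh

# Solve S_B w = lambda * S_W w; eigh returns the eigenvalues in ascending order,
# so reverse them to get the leading discriminants first.
gen_vals, gen_vecs = eigh(S_B, S_W)
gen_vals, gen_vecs = gen_vals[::-1], gen_vecs[:, ::-1]

# The two leading columns span (up to sign and scaling) the same subspace
# as the matrix W built from np.linalg.eig above.
print(gen_vals[:2])
```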
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from numpy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.linalg.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
    # use a list comprehension -- newer NumPy versions reject generators here
    X_pc = np.column_stack([eigvecs[:, -i]
                            for i in range(1, n_components + 1)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
    # use a list comprehension -- newer NumPy versions reject generators here
    alphas = np.column_stack([eigvecs[:, -i]
                              for i in range(1, n_components + 1)])
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
            label='original projection of point X[-1]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
            label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
            label='remapped held-out point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
            label='projection of X[-1] from the previous fit', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
            label='remapped held-out point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](http://sebastianraschka.com/)[Li-Yi Wei](http://liyiwei.org/)https://github.com/1iyiwei/pyml[MIT License](https://github.com/1iyiwei/pyml/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality ReductionPrinciple component analysis (PCA)* unsupervisedLinear discriminant analysis (LDA)* supervisedKernel PCA* non-linear mapping Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,scipy,matplotlib,sklearn
###Output
last updated: 2016-10-12
CPython 3.5.2
IPython 4.2.0
numpy 1.11.1
scipy 0.17.1
matplotlib 1.5.1
sklearn 0.18
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview- [Unsupervised dimensionality reduction via principal component analysis](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1-–-separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2-–-separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis PCA is a common way to reduce dimensionality of a given dataset.It can also be considered as a unsupervised learning method.Given the input data matrix $\mathbf{X}$ Goal: find a transformation matrix $\mathbf{W}$ that will project each row $\mathbf{x}$ of $\mathbf{X}$ into a lower dimensional vector $\mathbf{z}$ so that the variances of the projected components are maximized:$$\mathbf{z} = \mathbf{x} \mathbf{W}$$$\mathbf{X}$: size $n \times d$ where $n$ is the number of data samples and $d$ is the input data dimensionality $\mathbf{W}$: size $d \times k$$\mathbf{z}$ and $\mathbf{x}$ are both row vectors with dimensionality $k$ and $d$, usually $k << d$.In this 2D example, we want to project the dataset into 1D. We will pick the first principle component (PC1) as it maximizes variance among projected samples. With $\mathbf{W}$, we can project the entire input $\mathbf{X}$ into a lower dimensional space data set as:$$\mathbf{Z} = \mathbf{X} \mathbf{W}$$<!--The total projected dataset $\mathbf{Z}$ can be computed from the total original dataset $\mathbf{X}$:$$\mathbf{Z} = \mathbf{X} \mathbf{W}$$-->$\mathbf{Z}$: size $n \times k$We can also recover an approximated version $\mathbf{X'}$ of $\mathbf{X}$ from $\mathbf{Z}$ and $\mathbf{W}$ as:$$\mathbf{X'} = \mathbf{Z} \mathbf{W}^T$$It can be shown that $\mathbf{X'}$ is the best approximation of $\mathbf{X}$, i.e. minimizing$$||\mathbf{X'} - \mathbf{X}||^2$$ Algorithm$$\mathbf{Z} = \mathbf{X} \mathbf{W}$$$\mathbf{W}$ can be computed from $\mathbf{X}$ as follows.First, compute the $d \times d$ covariance matrix $\Sigma$ from the columns (i.e. features) of $\mathbf{X}$:$$\begin{align}\Sigma_{ij} = \frac{1}{n} \left(\mathbf{x_{(i)}} - \mu_i\right)^T \left(\mathbf{x_{(j)}} - \mu_j\right) \end{align}$$, where * $\Sigma_{ij}$ is the $(i, j)$th component of $\Sigma$.* $\mathbf{x_{(i)}}$ is the $i^{th}$ column/feature of $\mathbf{X}$ and $\mu_i$ its mean (a scalar). Alternatively we can compute $\Sigma$ by summing the covariance matrices of each individual sample $x^{(i)}$ (rows of $\mathbf{X}$):$$\begin{align}\Sigma = \frac{1}{n} \sum_i \left(\mathbf{x^{(i)}} - \mu\right)^T \left( \mathbf{x^{(i)} - \mu}\right)\end{align}$$, where $\mu$ is the (vector) mean of all rows of $\mathbf{X}$. We then compute the eigen-values/vectors of $\Sigma$.Recall $\mathbf{v}$ is an eigen-vector of a matrix $\Sigma$ with eigen-value $\lambda$ if$$\lambda \mathbf{v} = \Sigma \mathbf{v}$$That is, an eigen-vector remains itself after transforming by the matrix. $\mathbf{W}$ can be constructed by horizontally stacking (as columns) the eigen-vectors of $\Sigma$ with the $k$ largest eigen-values (which we assume are all non-negative) as columns.These columns are called the principle components, and thus the name principle component analysis (PCA). MathLet's try to find the first principle component $\mathbf{w_1}$ so that when the input vector $\mathbf{x}$ is projected into $\mathbf{z}$ its variance is maximized:$$\mathbf{z} = \mathbf{w_1}^T \mathbf{x}$$$\mathbf{x}$ differnt rows of the matrix $\mathbf{X}$ verticalized as columns. 
Consider $\mathbf{x}$ as a random vector that can take values from $\mathbf{X}$:<!--(Machine learning can be understood via a probabilistic approach from ground up, but I prefer the non-probabilistic approach to reduce potential confusion for beginners.)-->$$\begin{align}Var(\mathbf{z}) &= E\left( \left(\mathbf{w_1}^T (\mathbf{x} - \mu)\right)^2 \right)\\&= E\left( \mathbf{w_1}^T (\mathbf{x} - \mu) (\mathbf{x} - \mu)^T \mathbf{w_1} \right)\\&= \mathbf{w_1}^T \Sigma \mathbf{w_1}\end{align}$$ We want to find $\mathbf{w_1}$ to maximize $Var(\mathbf{z})$ subject to the unit vector constraint $|\mathbf{w_1}| = 1$.Using Lagrangian multiplier we want to maximize:$$\mathbf{w_1}^T \Sigma \mathbf{w_1} - \alpha(\mathbf{w_1}^T\mathbf{w_1}-1)$$Take derivative of the above with $\mathbf{w_1}$ and set it to zero we have:$$\Sigma \mathbf{w_1} = \alpha \mathbf{w_1}$$ And thus$$\mathbf{w_1}^T \Sigma \mathbf{w_1} = \alpha \mathbf{w_1}^T \mathbf{w_1} = \alpha$$Which means we want to maximize $\alpha$, and thus it should be the largest eigen-value of $\Sigma$ and $\mathbf{w_1}$ the corresponding eigen-vector.We can continue the same trick to find the rest of the principle components by making sure each new one is orthogonal to all existing ones. Code example for the math aboveUse the wine data set as it has 13 features for dimensionality reduction
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/1iyiwei/pyml/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
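# Hedged illustration (not from the book; see also the note that follows): re-using
# the fitted training scaler on new data matters. Standardizing new samples with a
# freshly fitted scaler (wrong) makes them look like the training range, while the
# already-fitted scaler (right) reveals that they lie far below it.
demo_sc = StandardScaler().fit([[10.0], [20.0], [30.0]])              # "training" lengths
print(demo_sc.transform([[5.0], [6.0], [7.0]]).ravel())               # right: all far below the training range
print(StandardScaler().fit_transform([[5.0], [6.0], [7.0]]).ravel())  # wrong: ~[-1.22, 0, 1.22] again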
###Output
_____no_output_____
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. However, as remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.My initial typo reflects a common mistake is that some people are *not* re-using these parameters from the model training/building and standardize the new data "from scratch." Here's simple example to explain why this is a problem.Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1mean: 20, std.: 8.2After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ?If we look at the "unstandardized "length" values in our training datast, it is intuitive to say that all of these samples are likely belonging to class_2. However, if we standardize these by re-computing standard deviation and and mean you would get similar values as before in the training set and your classifier would (probably incorrectly) classify samples 4 and 5 as class 2.- new_std_4: -1.21 -> class 2- new_std_5: 0 -> class 2- new_std_6: 1.21 -> class 1However, if we use the parameters from your "training set standardization," we'd get the values:- sample5: -18.37 -> class 2- sample6: -17.15 -> class 2- sample7: -15.92 -> class 2The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix:
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
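# Hedged aside (not from the book): for a symmetric matrix such as cov_mat,
# np.linalg.eigh is the preferred routine -- it is numerically more stable and
# always returns real eigenvalues in ascending order, as a note further below
# also points out.
eigen_vals_h, eigen_vecs_h = np.linalg.eigh(cov_mat)
print('\nEigenvalues via eigh (ascending) \n%s' % eigen_vals_h)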
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermetian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; whereas the numerically less stable `np.linalg.eig` can decompose nonsymmetric square matrices, you may find that it returns complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
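# Hedged sanity check (not from the book): the variance of the data projected onto
# the leading eigenvector should equal the largest eigenvalue (np.cov and np.var
# must use the same ddof=1 convention for the two numbers to match).
top = np.argmax(eigen_vals.real)
proj_pc1 = X_train_std.dot(eigen_vecs[:, top].real)
print(np.var(proj_pc1, ddof=1), eigen_vals.real[top])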
###Output
_____no_output_____
###Markdown
Feature transformationNow let's apply PCA ...
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:```[[ 0.14669811 0.50417079][-0.24224554 0.24216889][-0.02993442 0.28698484][-0.25519002 -0.06468718][ 0.12079772 0.22995385][ 0.38934455 0.09363991][ 0.42326486 0.01088622][-0.30634956 0.01870216][ 0.30572219 0.03040352][-0.09869191 0.54527081]```Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue,then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Notice the nicely formed clusters, even though PCA does not consider class labels (unsupervised).
###Code
X_train_std[0].dot(w)
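# Hedged sanity check (not from the book): the earlier markdown states that
# X' = Z W^T is the best 2-component reconstruction of X; the mean squared
# reconstruction error left over should be small but nonzero.
scores = X_train_std.dot(w)   # n x 2 projected data (Z)
X_approx = scores.dot(w.T)    # n x 13 reconstruction (X')
print('mean squared reconstruction error with 2 PCs:', np.mean((X_train_std - X_approx) ** 2))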
###Output
_____no_output_____
###Markdown
What happens if we use the last two eigen-vectors?
###Code
w_tail = np.hstack((eigen_pairs[-1][1][:, np.newaxis],
eigen_pairs[-2][1][:, np.newaxis]))
print('Matrix W (tail end):\n', w_tail)
X_train_pca = X_train_std.dot(w_tail)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC -1')
plt.ylabel('PC -2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Notice the badly formed clusters! Principal component analysis in scikit-learn. PCA is actually part of scikit-learn, so we can use it directly instead of going through the code above.
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
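# Hedged aside (not from the book): a quick quantitative companion to the
# decision-region plots -- accuracy of the classifier on the 2-component test data.
print('test accuracy with 2 principal components: %.3f' % lr.score(X_test_pca, y_test))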
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysisPCA* unsupervised (no class information)* project data into dimensions that maximize variance/spreadLDA* supervised (with class information)* project data into dimensions to (1) maximize inter-class spread, (2) minimize intra-class spread. ExerciseUnder what circumstances would PCA and LDA produce very different results? Provide some intuitive examples. AlgorithmSimilar to PCA, given a data matrix $\mathbf{X}$, we want to calculate a projection matrix $\mathbf{W}$ which separates the projected vectors as much as possible.Unlike PCA which is unsupervised, for LDA we have the class information.Thus, the goal is to spread out different classes while cluster each individual classes via projection $\mathbf{W}$.Assume we have $K$ classes, each with center $\mu_{i}$ as computed from the mean of all $n_i$ samples within class $i$.$\mu$ is the mean of all samples across all classes.Below, each $\mu$ and $\mathbf{x}$ is a row vector, and the transpose $T$ is a column vector.We first compute the between-class scatter matrix:$$\mathbf{S_B} = \sum_{i=1}^K n_i (\mu_i - \mu)^T (\mu_i - \mu)$$And the within-class scatter matrix:$$\begin{align}\mathbf{S_i} & = \sum_{\mathbf{x} \in C_i} (\mathbf{x} - \mu_i)^T (\mathbf{x} - \mu_i) \\\mathbf{S_W} & = \sum_{i=1}^K \mathbf{S_i}\end{align}$$Note: these scatter matrices are very similar to the covariance matrices except for scaling constants. We then perform eigen decomposition of $$\mathbf{S_W}^{-1}\mathbf{S_B}$$And construct $\mathbf{W}$ from the first $k$ eigen-vectors with the largest eigen-values.This step is similar to PCA, except that we use the above matrix instead of $\Sigma$, the covariance matrix of all input data $\mathbf{X}$.Intuitively, since we want to maximize the spread with $\mathbf{S_B}$ and minimize the spread with $\mathbf{S_W}$, we want to perform the eigen decomposition via $\mathbf{S_W}^{-1}\mathbf{S_B}$. MathBelow, we first discuss how to compute such inter and intra class spreads, followed by how to optimize $\mathbf{W}$. The between/inter-class spread can be computed as the scatter/covariance of the projected class centers weighted by the class sizes:$$\sum_{i=1}^K n_i \left(\mathbf{W}^T (\mu_i - \mu) \right)^2 = \mathbf{W}^T \left( \sum_{i=1}^K n_i (\mu_i - \mu)^T (\mu_i - \mu) \right) \mathbf{W} = \mathbf{W}^T \mathbf{S_B}\mathbf{W}$$ The within/intra-class spread of each projected class $i$ can be computed analogously:$$\sum_{\mathbf{x} \in C_i} \left(\mathbf{W}^T (\mathbf{x}-\mu_i)\right)^2 = \mathbf{W}^T \mathbf{S_i} \mathbf{W}$$And thus the total within/intra-class spread is:$$\sum_{i=1}^K \mathbf{W}^T \mathbf{S_i} \mathbf{W} = \mathbf{W}^T \mathbf{S_W} \mathbf{W}$$ The goal of maximize/minimize inter/intra-class spread can be formulated as maximizing the ratio of determinants:$$J(\mathbf{W}) = \frac{\left|\mathbf{W}^T \mathbf{S_B} \mathbf{W}\right|}{\left|\mathbf{W}^T \mathbf{S_W} \mathbf{W}\right|}$$Recall that the determinant of a matrix is the product of its eigen-values.Linear algebra can show that constructing $\mathbf{W}$ from the largest eigen-vectors of $\mathbf{S_W}^{-1}\mathbf{S_B}$ can maximize $J(\mathbf{W})$ above. Code exampleComputing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
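# Hedged alternative (not from the book): since S_B and S_W are symmetric and S_W
# is positive definite here, the same discriminants can be obtained from the
# generalized symmetric eigenproblem S_B v = lambda S_W v, avoiding the explicit
# matrix inverse.
from scipy.linalg import eigh as gen_eigh
gen_vals = gen_eigh(S_B, S_W, eigvals_only=True)
print(gen_vals[::-1][:2])                           # largest generalized eigenvalues
print(sorted(eigen_vals.real, reverse=True)[:2])    # should match the values above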
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermetian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; whereas the numerically less stable `np.linalg.eig` can decompose nonsymmetric square matrices, you may find that it returns complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
#from sklearn.lda import LDA # deprecated
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappingsPCA/LDA problematic for non-linearly separable dataIdea:1. elevate the dimension of the input data (similar to kernel SVM)2. reduce the dimension (similar to PCA)Projected data becomes linearly separableThus:* the projected data can then be handled by linear classifiers* why it make sense to increase dimension before reduce itNote: PCA is unsupervised, but it matters whether the projected data is suitable for further classification. AlgorithmLet $\mathbf{X}$ be the usual matrix of the input data set, with size $n \times d$, where $n$ is the number of data vectors and $d$ is their dimensionality.Similar to kernel SVM, we want to elevate each data vector into a higher $k$-dimensional space via a function $\phi$ (usually $k >> d$).Specifically, we denote $\phi(\mathbf{X})$ as the matrix for which the ith row is $\phi(X^{(i)})$. Ordinary PCA performs eigen analysis of the covariance matrix of $\mathbf{X}$:$$\Sigma = \frac{1}{n} \mathbf{X}^T \mathbf{X} $$Kernel PCA performs eigen analysis of the elevated covariance matrix:$$\Sigma = \frac{1}{n} \phi(\mathbf{X})^T \phi(\mathbf{X})$$ Now $\Sigma$ is of size $k \times k$, which is very large and thus expensive to compute.Fortunately, all we need to know is to1. compute its eigen vectors2. project all input vectors into the lower dimensional space formed by the selected eigen vectors (with largest eigen values similar to traditional PCA)That is, we actually never need to know the eigen vectors explicitly, only their dot products with the input vectors.This is where the kernel trick comes in, by replacing high dimensional dot products with fast kernel evaluations. Specifically, we just need to compute $$\mathbf{K} = \phi(\mathbf{X}) \phi(\mathbf{X})^T$$, a $n \times n$ matrix, much smaller than $\Sigma$, via kernel trick.The projection of $\phi(\mathbf{X})$ into $m$-dimension can be found from the $m$ largest eigen-vectors of $\mathbf{K}$. Math$$\Sigma = \frac{1}{n} \phi(\mathbf{X})^T \phi(\mathbf{X})$$Let $\mathbf{v}$ be an eigen vector of $\Sigma$ with eigen value $\lambda$:$$\Sigma \mathbf{v} = \lambda \mathbf{v}$$And for the elevated data matrix $\phi(\mathbf{X})$, we just need to know its projection with $\mathbf{v}$:$$\mathbf{a} = \phi(\mathbf{X}) \mathbf{v}$$ Note that$$\begin{align}\mathbf{a} &= \phi(\mathbf{X}) \mathbf{v} \\&= \frac{1}{\lambda} \phi(\mathbf{X}) \Sigma \mathbf{v} \\&= \frac{1}{\lambda n} \phi(\mathbf{X}) \phi(\mathbf{X})^T \phi(\mathbf{X}) \mathbf{v} \\&= \frac{1}{\lambda n} \phi(\mathbf{X}) \phi(\mathbf{X})^T \mathbf{a}\end{align}$$ If we denote$$\mathbf{K} = \phi(\mathbf{X}) \phi(\mathbf{X})^T$$we have$$\lambda \mathbf{a} = \frac{\mathbf{K}}{n} \mathbf{a}$$Note: $\Sigma$ and $\frac{\mathbf{K}}{n}$ have the same eigen values Thus, $\mathbf{a}$ can be computed as an eigen vector of $\frac{\mathbf{K}}{n}$, where $\mathbf{K}$, the similarity (kernel) matrix, has size $n \times n$ is thus much smaller than $\Sigma$ with size $k \times k$.Furthermore, each entry of $\mathbf{K}$ can be computed via fast kernel evaluation$$\mathbf{K}_{ij} = \mathbf{k}\left(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\right)$$instead of the original dot product between two $k$-dimensional vectors $$\mathbf{K}_{ij} = \phi\left(\mathbf{x^{(i)}})^T \phi(\mathbf{x^{(j)}}\right)$$This so called kernel trick, of approximating high dimensional dot products with fast kernel evaluatioin, shows up again, after what we have seen in the kernel SVM part. 
Mean shiftRecall that in (ordinary) PCA, each entry of $\Sigma$ is a covariance:$$\Sigma_{ij} = \frac{1}{n} (\mathbf{x_{(i)}} - \mathbf{\mu_i})^T (\mathbf{x_{(j)}} - \mathbf{\mu_j})$$, where $\mathbf{\mu}$ is the mean of all $\mathbf{x}$, i.e. the rows of $\mathbf{X}$.For kernel PCA, we need to perform a similar mean shift for $\mathbf{K}$.Specifically, since $\mathbf{K}$ is the covariance matrix of $\phi(\mathbf{x})$, we have$$\mathbf{\mu} = \frac{1}{n} \sum_{k=1}^n \phi(x^{(k)})$$And each entry of the mean-shifted $\mathbf{K'}$ is:$$\begin{align}\mathbf{K'}_{ij} & = \left(\phi(\mathbf{x^{(i)}}) - \mathbf{\mu}\right) \left(\phi(\mathbf{x^{(j)}}) - \mathbf{\mu}\right)^T \\& = \phi(\mathbf{x^{(i)}}) \phi(\mathbf{x^{(j)}})^T - \mathbf{\mu} \phi(\mathbf{x^{(j)}})^T - \phi(\mathbf{x^{(i)}}) \mathbf{\mu}^T + \mathbf{\mu} \mathbf{\mu}^T \\& = \mathbf{k}(\mathbf{x^{(i)}}, \mathbf{x^{(j)}}) - \frac{1}{n} \sum_{i=1}^n \mathbf{k}(\mathbf{x^{(i)}}, \mathbf{x^{(j)}}) - \frac{1}{n} \sum_{j=1}^n \mathbf{k}(\mathbf{x^{(i)}}, \mathbf{x^{(j)}}) \\&+ \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n \mathbf{k}(\mathbf{x^{(i)}}, \mathbf{x^{(j)}})\end{align}$$Coalescing all entries $\mathbf{K'_{ij}}$ into $\mathbf{K'}$ we have$$\mathbf{K'} = \mathbf{K} - \mathbf{1_n} \mathbf{K} - \mathbf{K} \mathbf{1_n} + \mathbf{1_n} \mathbf{K} \mathbf{1_n}$$where $\mathbf{1_n}$ is a matrix of the same size as $\mathbf{K}$ with all entries equal to $\frac{1}{n}$. New data setIn the above we perform kernel PCA for a given dataset $\mathbf{X}$.How about a new dataset, such as a test data $\mathbf{X'}$, which is not part of $\mathbf{X}$?In ordinary PCA, we can simply project $\mathbf{X'}$ through $\mathbf{W}$, the matrix whose columns are the (selected) eigen-vectors of $\Sigma$:$$\mathbf{X'} \mathbf{W}$$ However, for kernel PCA, we only compute the eigen-vectors of the (mean-shifted) kernel matrix $\mathbf{K}$, not the original covariance matrix $\Sigma$.Fortunately, we can accomplish our goal via smart math tricks, as follows.First, let's express each eigen-vector $\mathbf{v}$ of $\Sigma$ via the eigen-vectors $\mathbf{A}$ of $\mathbf{K}$.Recall$$\begin{align}\mathbf{v} &= \frac{1}{\lambda} \Sigma \mathbf{v} \\&= \frac{1}{n \lambda} \phi(\mathbf{X})^T \phi(\mathbf{X}) \mathbf{v} \\&= \frac{1}{n \lambda} \phi(\mathbf{X})^T \mathbf{a}\\&=\frac{1}{n \lambda} \sum_{i=1}^n \mathbf{a^{(i)}} \phi(\mathbf{x}^{(i)})\end{align}$$ Thus, to project a new sample $\mathbf{x'}$ with an eigen vector $\mathbf{v}$, we can use the kernel trick again with the already computed $\mathbf{a}$ vectors:$$\begin{align}\phi(\mathbf{x'})^T \mathbf{v} &=\frac{1}{n \lambda} \sum_{i=1}^n \mathbf{a}^{(i)} \phi(\mathbf{x'})^T \phi(\mathbf{x}^{(i)})\\&=\frac{1}{n \lambda} \sum_{i=1}^n \mathbf{a^{(i)}} \mathbf{k}(\mathbf{x'}, \mathbf{x^{(i)}})\end{align}$$ Implementing a kernel principal component analysis in PythonCode the math above ...
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp  # scipy.exp was only a re-export of numpy.exp and is deprecated/removed in recent SciPy
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack([eigvecs[:, -i] for i in range(1, n_components + 1)])  # list, not generator, for newer NumPy
return X_pc
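# Hedged sanity check (not from the book): the centering step above,
# K' = K - 1_n K - K 1_n + 1_n K 1_n, is the same as H K H with the centering
# matrix H = I - 1_n, which is easy to verify on a tiny example.
rng = np.random.RandomState(0)
X_tiny = rng.randn(5, 2)
K_tiny = np.exp(-15 * squareform(pdist(X_tiny, 'sqeuclidean')))
ones_5 = np.ones((5, 5)) / 5
H = np.eye(5) - ones_5
K_centered = K_tiny - ones_5.dot(K_tiny) - K_tiny.dot(ones_5) + ones_5.dot(K_tiny).dot(ones_5)
print(np.allclose(K_centered, H.dot(K_tiny).dot(H)))  # expect True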
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points. Note that the code below computes eigenvalues of $\mathbf{K}$ instead of $\frac{\mathbf{K}}{n}$, so the eigenvalues are scaled by a factor of $n$.
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp  # scipy.exp was only a re-export of numpy.exp and is deprecated/removed in recent SciPy
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack([eigvecs[:, -i] for i in range(1, n_components + 1)])  # list, not generator, for newer NumPy
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
select_new = -1
x_new = X[select_new]
x_new
x_proj = alphas[select_new] # original projection
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
# should be the same
print(x_proj)
print(x_reproj)
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[-1]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
X_1, y_1 = X[:-1, :], y[:-1]
alphas, lambdas = rbf_kernel_pca(X_1, gamma=15, n_components=1)
# projection of the "new" datapoint
x_new = X[-1, :]
x_reproj = project_x(x_new, X_1, gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y_1 == 0, 0], np.zeros((np.sum(y_1 == 0))),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y_1 == 1, 0], np.zeros((np.sum(y_1 == 1))),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
label='projection of held-out point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn. Kernel PCA is part of the scikit-learn library and can be used directly.
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
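# Hedged aside (not from the book): with fit_inverse_transform=True, KernelPCA also
# learns an approximate pre-image map, so projected points can be mapped back to
# the original input space.
kpca_inv = KernelPCA(n_components=2, kernel='rbf', gamma=15,
                     fit_inverse_transform=True)
X_back = kpca_inv.inverse_transform(kpca_inv.fit_transform(X))
print('mean pre-image reconstruction error:', np.mean((X - X_back) ** 2))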
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2018-07-02
numpy 1.14.5
scipy 1.1.0
matplotlib 2.2.2
sklearn 0.19.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [The main steps behind principal component analysis](The-main-steps-behind-principal-component-analysis) - [Extracting the principal components step-by-step](Extracting-the-principal-components-step-by-step) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Principal component analysis versus linear discriminant analysis](Principal-component-analysis-versus-linear-discriminant-analysis) - [The inner workings of linear discriminant analysis](The-inner-workings-of-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1:-Separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2:-Separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis The main steps behind principal component analysis
###Code
Image(filename='images/05_01.png', width=400)
###Output
_____no_output_____
###Markdown
Extracting the principal components step-by-step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# if the Wine dataset is temporarily unavailable from the
# UCI machine learning repository, un-comment the following line
# of code to load the dataset from a local path:
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to the training set. However, as remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.My initial typo reflects a common mistake is that some people are *not* re-using these parameters from the model training/building and standardize the new data "from scratch." Here's simple example to explain why this is a problem.Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1mean: 20, std.: 8.2After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ?If we look at the "unstandardized "length" values in our training datast, it is intuitive to say that all of these samples are likely belonging to class_2. However, if we standardize these by re-computing standard deviation and and mean you would get similar values as before in the training set and your classifier would (probably incorrectly) classify samples 4 and 5 as class 2.- new_std_4: -1.21 -> class 2- new_std_5: 0 -> class 2- new_std_6: 1.21 -> class 1However, if we use the parameters from your "training set standardization," we'd get the values:- sample5: -18.37 -> class 2- sample6: -17.15 -> class 2- sample7: -15.92 -> class 2The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermetian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; whereas the numerically less stable `np.linalg.eig` can decompose nonsymmetric square matrices, you may find that it returns complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_02.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.13724218 0.50303478]
[ 0.24724326 0.16487119]
[-0.02545159 0.24456476]
[ 0.20694508 -0.11352904]
[-0.15436582 0.28974518]
[-0.39376952 0.05080104]
[-0.41735106 -0.02287338]
[ 0.30572896 0.09048885]
[-0.30668347 0.00835233]
[ 0.07554066 0.54977581]
[-0.32613263 -0.20716433]
[-0.36861022 -0.24902536]
[-0.29669651 0.38022942]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue,then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma \cdot (-v) = -\Sigma v = -\lambda v = \lambda \cdot (-v).$$
###Code
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_03.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **NOTE** The following four code cells have been added in addition to the content of the book, to illustrate how to replicate the results from our own PCA implementation in scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
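# Hedged aside (not from the book): PCA also accepts a float for n_components
# (with svd_solver='full'), keeping just enough components to reach that fraction
# of explained variance -- a handy alternative to eyeballing the scree plot.
pca_95 = PCA(n_components=0.95, svd_solver='full')
pca_95.fit(X_train_std)
print(pca_95.n_components_, 'components explain at least 95% of the variance')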
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_04.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_05.png', dpi=300)
plt.show()
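# Hedged aside (not from the book): scaler, PCA, and classifier can be chained in a
# scikit-learn Pipeline so the exact same preprocessing is applied to the test data.
from sklearn.pipeline import make_pipeline
pca_lr = make_pipeline(StandardScaler(), PCA(n_components=2), LogisticRegression())
pca_lr.fit(X_train, y_train)
print('pipeline test accuracy: %.3f' % pca_lr.score(X_test, y_test))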
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Principal component analysis versus linear discriminant analysis
###Code
Image(filename='images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
The inner workings of linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues, whereas the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) A minimal sketch of the generalized `eigh` route is included after the next code cell's output. Sort eigenvectors in descending order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_07.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.1481 -0.4092]
[ 0.0908 -0.1577]
[-0.0168 -0.3537]
[ 0.1484 0.3223]
[-0.0163 -0.0817]
[ 0.1913 0.0842]
[-0.7338 0.2823]
[-0.075 -0.0102]
[ 0.0018 0.0907]
[ 0.294 -0.2152]
[-0.0328 0.2747]
[-0.3547 -0.0124]
[-0.3915 -0.5958]]
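###Markdown
A minimal sketch of the `eigh`-based route mentioned in the note above (added here for illustration; `S_W` and `S_B` are the scatter matrices computed earlier). `scipy.linalg.eigh(S_B, S_W)` solves the generalized symmetric eigenvalue problem directly and always returns real eigenvalues, in ascending order:
###Code
# Hedged sketch: a generalized symmetric eigensolver instead of applying
# np.linalg.eig to the (generally nonsymmetric) product inv(S_W).dot(S_B).
from scipy.linalg import eigh
gen_vals, gen_vecs = eigh(S_B, S_W)  # solves S_B v = lambda * S_W v
gen_vals, gen_vecs = gen_vals[::-1], gen_vecs[:, ::-1]  # descending order
print('Two largest generalized eigenvalues:', gen_vals[:2])
###Output
_____no_output_____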
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('images/05_08.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_09.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_10.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# Collect the top k eigenvectors (projected samples)
    X_pc = np.column_stack([eigvecs[:, i]
                            for i in range(n_components)])
return X_pc
###Output
_____no_output_____
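###Markdown
For reference, the centering step in the function above implements (restating the code in math, not additional book content) $K' = K - 1_n K - K 1_n + 1_n K 1_n$, where $1_n$ is an $n \times n$ matrix in which every entry equals $1/n$.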
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('images/05_12.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_13.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_14.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('images/05_15.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_16.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_17.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
alphas: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# Collect the top k eigenvectors (projected samples)
    alphas = np.column_stack([eigvecs[:, i]
                              for i in range(n_components)])
# Collect the corresponding eigenvalues
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('images/05_18.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('images/05_19.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch05.ipynb --output ch05.py
###Output
[NbConvertApp] Converting notebook ch05.ipynb to script
[NbConvertApp] Writing 27741 bytes to ch05.py
###Markdown
Chapter 5. Compressing Data via Dimensionality Reduction **You can view this notebook in the Jupyter Notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.** View in Jupyter Notebook viewer / Run in Google Colab. `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the next cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,scipy,matplotlib,sklearn
###Output
last updated: 2020-05-22
numpy 1.18.4
scipy 1.4.1
matplotlib 3.2.1
sklearn 0.23.1
###Markdown
Unsupervised dimensionality reduction via principal component analysis Extracting the principal components step by step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# If the Wine dataset is temporarily unavailable from the UCI machine learning repository,
# uncomment the following line and load the dataset from a local path:
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Eigendecomposition of the covariance matrix
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Projection matrix W:\n', w)
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **Note** The following four cells are not part of the book; they were added to show how to reproduce the results of the preceding PCA implementation with scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
    # setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
    # plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
    # plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap.colors[idx],
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training a logistic regression classifier using the first two principal components.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13  # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Because the classes are not uniformly distributed, it is better to use the covariance matrix:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13  # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T, bias=True)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
mean_overall = mean_overall.reshape(d, 1)  # make a column vector
d = 13  # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
    mean_vec = mean_vec.reshape(d, 1)  # make a column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
Sort the eigenvectors in descending order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples.
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low.
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues.
print('Eigenvalues in descending order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.1484 -0.4093]
[ 0.091 -0.1583]
[-0.0168 -0.3536]
[ 0.1487 0.322 ]
[-0.0165 -0.0813]
[ 0.1912 0.0841]
[-0.7333 0.2828]
[-0.0751 -0.0099]
[ 0.002 0.0902]
[ 0.2953 -0.2168]
[-0.0327 0.274 ]
[-0.3539 -0.0133]
[-0.3918 -0.5954]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Translator's note
###Code
y_uniq, y_count = np.unique(y_train, return_counts=True)
priors = y_count / X_train_std.shape[0]
priors
###Output
_____no_output_____
###Markdown
$\sigma_{jk} = \frac{1}{n} \sum_{i=1}^n (x_j^{(i)}-\mu_j)(x_k^{(i)}-\mu_k)$$m = \sum_{i=1}^c \frac{n_i}{n} m_i$$S_W = \sum_{i=1}^c \frac{n_i}{n} S_i = \sum_{i=1}^c \frac{n_i}{n} \Sigma_i$
###Code
s_w = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, label in enumerate(y_uniq):
    # Use bias=True to get the covariance matrix normalized by 1/n.
s_w += priors[i] * np.cov(X_train_std[y_train == label].T, bias=True)
###Output
_____no_output_____
###Markdown
$ S_B = S_T-S_W = \sum_{i=1}^{c}\frac{n_i}{n}(m_i-m)(m_i-m)^T $
###Code
s_b = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, mean_vec in enumerate(mean_vecs):
n = X_train_std[y_train == i + 1].shape[0]
mean_vec = mean_vec.reshape(-1, 1)
s_b += priors[i] * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
import scipy
ei_val, ei_vec = scipy.linalg.eigh(s_b, s_w)
ei_vec = ei_vec[:, np.argsort(ei_val)[::-1]]
lda_eigen = LDA(solver='eigen')
lda_eigen.fit(X_train_std, y_train)
# The within-class scatter matrix is stored in the covariance_ attribute.
np.allclose(s_w, lda_eigen.covariance_)
Sb = np.cov(X_train_std.T, bias=True) - lda_eigen.covariance_
np.allclose(Sb, s_b)
np.allclose(lda_eigen.scalings_[:, :2], ei_vec[:, :2])
np.allclose(lda_eigen.transform(X_test_std), np.dot(X_test_std, ei_vec[:, :2]))
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
    RBF kernel PCA implementation.
    Parameters
    ------------
    X: {NumPy ndarray}, shape = [n_samples, n_features]
    gamma: float
        Tuning parameter of the RBF kernel
    n_components: int
        Number of principal components to return
    Returns
    ------------
    X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
        Projected dataset
"""
    # Calculate pairwise squared Euclidean distances in the MxN dimensional dataset.
    sq_dists = pdist(X, 'sqeuclidean')
    # Convert pairwise distances into a square symmetric matrix.
    mat_sq_dists = squareform(sq_dists)
    # Compute the kernel matrix.
    K = exp(-gamma * mat_sq_dists)
    # Center the kernel matrix.
    N = K.shape[0]
    one_n = np.ones((N, N)) / N
    K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
    # Obtain the eigenvalues and eigenvectors of the centered kernel matrix;
    # scipy.linalg.eigh returns them in ascending order.
    eigvals, eigvecs = eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Collect the top k eigenvectors (the projected samples).
X_pc = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
    RBF kernel PCA implementation.
    Parameters
    ------------
    X: {NumPy ndarray}, shape = [n_samples, n_features]
    gamma: float
        Tuning parameter of the RBF kernel
    n_components: int
        Number of principal components to return
    Returns
    ------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
        Projected dataset
    lambdas: list
        Eigenvalues
"""
    # Calculate pairwise squared Euclidean distances in the MxN dimensional dataset.
    sq_dists = pdist(X, 'sqeuclidean')
    # Convert pairwise distances into a square symmetric matrix.
    mat_sq_dists = squareform(sq_dists)
    # Compute the kernel matrix.
    K = exp(-gamma * mat_sq_dists)
    # Center the kernel matrix.
    N = K.shape[0]
    one_n = np.ones((N, N)) / N
    K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
    # Obtain the eigenvalues and eigenvectors of the centered kernel matrix;
    # scipy.linalg.eigh returns them in ascending order.
    eigvals, eigvecs = eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Collect the top k eigenvectors (the projected samples).
alphas = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
    # Collect the corresponding eigenvalues.
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25]  # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# Project the "new" data point.
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Chapter 5. Compressing Data via Dimensionality Reduction **You can view this notebook in the Jupyter Notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.** View in Jupyter Notebook viewer / Run in Google Colab. `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the next cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,scipy,matplotlib,sklearn
###Output
last updated: 2019-05-27
numpy 1.16.3
scipy 1.2.1
matplotlib 3.0.3
sklearn 0.21.1
###Markdown
Unsupervised dimensionality reduction via principal component analysis Extracting the principal components step by step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# If the Wine dataset is temporarily unavailable from the UCI machine learning repository,
# uncomment the following line and load the dataset from a local path:
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Eigendecomposition of the covariance matrix
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Projection matrix W:\n', w)
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **Note** The following four cells are not part of the book; they were added to show how to reproduce the results of the preceding PCA implementation with scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
    # setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
    # plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
    # plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap.colors[idx],
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training a logistic regression classifier using the first two principal components.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13  # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Because the classes are not uniformly distributed, it is better to use the covariance matrix:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13  # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T, bias=True)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
mean_overall = mean_overall.reshape(d, 1)  # make a column vector
d = 13  # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
    mean_vec = mean_vec.reshape(d, 1)  # make a column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
Sort the eigenvectors in descending order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples.
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low.
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues.
print('Eigenvalues in descending order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.1484 -0.4093]
[ 0.091 -0.1583]
[-0.0168 -0.3536]
[ 0.1487 0.322 ]
[-0.0165 -0.0813]
[ 0.1912 0.0841]
[-0.7333 0.2828]
[-0.0751 -0.0099]
[ 0.002 0.0902]
[ 0.2953 -0.2168]
[-0.0327 0.274 ]
[-0.3539 -0.0133]
[-0.3918 -0.5954]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Translator's note
###Code
y_uniq, y_count = np.unique(y_train, return_counts=True)
priors = y_count / X_train_std.shape[0]
priors
###Output
_____no_output_____
###Markdown
$\sigma_{jk} = \frac{1}{n} \sum_{i=1}^n (x_j^{(i)}-\mu_j)(x_k^{(i)}-\mu_k)$$m = \sum_{i=1}^c \frac{n_i}{n} m_i$$S_W = \sum_{i=1}^c \frac{n_i}{n} S_i = \sum_{i=1}^c \frac{n_i}{n} \Sigma_i$
###Code
s_w = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, label in enumerate(y_uniq):
    # Use bias=True to get the covariance matrix normalized by 1/n.
s_w += priors[i] * np.cov(X_train_std[y_train == label].T, bias=True)
###Output
_____no_output_____
###Markdown
$ S_B = S_T-S_W = \sum_{i=1}^{c}\frac{n_i}{n}(m_i-m)(m_i-m)^T $
###Code
s_b = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, mean_vec in enumerate(mean_vecs):
n = X_train_std[y_train == i + 1].shape[0]
mean_vec = mean_vec.reshape(-1, 1)
s_b += priors[i] * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
import scipy
ei_val, ei_vec = scipy.linalg.eigh(s_b, s_w)
ei_vec = ei_vec[:, np.argsort(ei_val)[::-1]]
ei_vec /= np.linalg.norm(ei_vec, axis=0)
lda_eigen = LDA(solver='eigen')
lda_eigen.fit(X_train_std, y_train)
# The within-class scatter matrix is stored in the covariance_ attribute.
np.allclose(s_w, lda_eigen.covariance_)
Sb = np.cov(X_train_std.T, bias=True) - lda_eigen.covariance_
np.allclose(Sb, s_b)
np.allclose(lda_eigen.scalings_[:, :2], ei_vec[:, :2])
np.allclose(lda_eigen.transform(X_test_std), np.dot(X_test_std, ei_vec[:, :2]))
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
    RBF kernel PCA implementation.
    Parameters
    ------------
    X: {NumPy ndarray}, shape = [n_samples, n_features]
    gamma: float
        Tuning parameter of the RBF kernel
    n_components: int
        Number of principal components to return
    Returns
    ------------
    X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
        Projected dataset
"""
    # Calculate pairwise squared Euclidean distances in the MxN dimensional dataset.
    sq_dists = pdist(X, 'sqeuclidean')
    # Convert pairwise distances into a square symmetric matrix.
    mat_sq_dists = squareform(sq_dists)
    # Compute the kernel matrix.
    K = exp(-gamma * mat_sq_dists)
    # Center the kernel matrix.
    N = K.shape[0]
    one_n = np.ones((N, N)) / N
    K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
    # Obtain the eigenvalues and eigenvectors of the centered kernel matrix;
    # scipy.linalg.eigh returns them in ascending order.
    eigvals, eigvecs = eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Collect the top k eigenvectors (the projected samples).
X_pc = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
    RBF kernel PCA implementation.
    Parameters
    ------------
    X: {NumPy ndarray}, shape = [n_samples, n_features]
    gamma: float
        Tuning parameter of the RBF kernel
    n_components: int
        Number of principal components to return
    Returns
    ------------
    alphas: {NumPy ndarray}, shape = [n_samples, k_features]
        Projected dataset
    lambdas: list
        Eigenvalues
"""
    # Calculate pairwise squared Euclidean distances in the MxN dimensional dataset.
    sq_dists = pdist(X, 'sqeuclidean')
    # Convert pairwise distances into a square symmetric matrix.
    mat_sq_dists = squareform(sq_dists)
    # Compute the kernel matrix.
    K = exp(-gamma * mat_sq_dists)
    # Center the kernel matrix.
    N = K.shape[0]
    one_n = np.ones((N, N)) / N
    K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
    # Obtain the eigenvalues and eigenvectors of the centered kernel matrix;
    # scipy.linalg.eigh returns them in ascending order.
    eigvals, eigvecs = eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Collect the top k eigenvectors (the projected samples).
alphas = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
    # Collect the corresponding eigenvalues.
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25]  # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# Project the "new" data point.
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
*Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-editionCode License: [MIT License](https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2017-09-03
numpy 1.12.1
scipy 0.19.1
matplotlib 2.0.2
sklearn 0.19.0
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [The main steps behind principal component analysis](The-main-steps-behind-principal-component-analysis) - [Extracting the principal components step-by-step](Extracting-the-principal-components-step-by-step) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Principal component analysis versus linear discriminant analysis](Principal-component-analysis-versus-linear-discriminant-analysis) - [The inner workings of linear discriminant analysis](The-inner-workings-of-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1:-Separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2:-Separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis The main steps behind principal component analysis
###Code
Image(filename='images/05_01.png', width=400)
###Output
_____no_output_____
###Markdown
Extracting the principal components step-by-step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# if the Wine dataset is temporarily unavailable from the
# UCI machine learning repository, un-comment the following line
# of code to load the dataset from a local path:
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note** Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you may remember from Chapter 3, the correct way is to re-use the parameters from the training set for any kind of transformation; the test set should basically stand for "new, unseen" data. My initial typo reflects a common mistake: some people do *not* re-use these parameters from model training/building and standardize the new data "from scratch." Here's a simple example to explain why this is a problem. Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):
- train_1: 10 cm -> class_2
- train_2: 20 cm -> class_2
- train_3: 30 cm -> class_1
mean: 20, std.: 8.2
After standardization, the transformed feature values are
- train_std_1: -1.21 -> class_2
- train_std_2: 0 -> class_2
- train_std_3: 1.21 -> class_1
Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:
- new_4: 5 cm -> class ?
- new_5: 6 cm -> class ?
- new_6: 7 cm -> class ?
If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these new samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and the classifier would (incorrectly) assign new_6 to class_1:
- new_std_4: -1.21 -> class_2
- new_std_5: 0 -> class_2
- new_std_6: 1.21 -> class_1
However, if we re-use the parameters from our "training set standardization," we'd get the values:
- new_std_4: -1.83 -> class_2
- new_std_5: -1.71 -> class_2
- new_std_6: -1.59 -> class_2
The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously, so it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set. (A tiny runnable sketch of this point follows the next code cell's output.)--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
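As a minimal sketch of the `numpy.linalg.eigh` alternative mentioned in the note above (re-using `cov_mat` from the cell above; this does not change the results used below):
```
# eigh is intended for symmetric (Hermitian) matrices and always returns real eigenvalues
eigen_vals_h, eigen_vecs_h = np.linalg.eigh(cov_mat)
# eigh returns the eigenvalues in ascending order; flip them for a descending view
eigen_vals_h, eigen_vecs_h = eigen_vals_h[::-1], eigen_vecs_h[:, ::-1]
```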
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_02.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.13724218 0.50303478]
[ 0.24724326 0.16487119]
[-0.02545159 0.24456476]
[ 0.20694508 -0.11352904]
[-0.15436582 0.28974518]
[-0.39376952 0.05080104]
[-0.41735106 -0.02287338]
[ 0.30572896 0.09048885]
[-0.30668347 0.00835233]
[ 0.07554066 0.54977581]
[-0.32613263 -0.20716433]
[-0.36861022 -0.24902536]
[-0.29669651 0.38022942]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue,then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma \cdot (-v) = -\Sigma v = -\lambda v = \lambda \cdot (-v).$$
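A minimal numeric check of this argument, re-using `cov_mat`, `eigen_vals`, and `eigen_vecs` from the cells above:
```
# If v is an eigenvector of cov_mat with eigenvalue lam, then so is -v:
v, lam = eigen_vecs[:, 0], eigen_vals[0]
print(np.allclose(cov_mat.dot(-v), lam * (-v)))  # True (up to floating-point tolerance)
```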
###Code
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_03.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **NOTE**: The following four code cells have been added in addition to the content of the book, to illustrate how to replicate the results from our own PCA implementation in scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_04.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_05.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Principal component analysis versus linear discriminant analysis
###Code
Image(filename='images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
The inner workings of linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
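For reference, the loop below implements the standard within-class scatter definition $$S_W = \sum_{i=1}^{c} S_i, \qquad S_i = \sum_{x \in D_i} (x - m_i)(x - m_i)^T,$$ where $m_i$ is the mean vector of class $i$, $D_i$ the set of training samples in class $i$, and $c = 3$ for the Wine data.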
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
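In other words (with `np.cov`'s default normalization by $n_i - 1$), the scaled class scatter used below is simply the sample covariance matrix of each class: $$\Sigma_i = \frac{1}{n_i - 1} \sum_{x \in D_i} (x - m_i)(x - m_i)^T = \frac{1}{n_i - 1} S_i.$$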
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
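For reference, the loop below implements the standard between-class scatter definition $$S_B = \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T,$$ where $m$ is the overall mean vector and $n_i$ the number of training samples in class $i$.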
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
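As a brief reminder of where this eigenproblem comes from (standard LDA derivation): the directions $w$ that maximize the Fisher criterion $$J(w) = \frac{w^T S_B w}{w^T S_W w}$$ satisfy $S_B w = \lambda S_W w$, i.e. $S_W^{-1} S_B w = \lambda w$, assuming $S_W$ is invertible.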
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in descending order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in descending order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_07.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.1481 -0.4092]
[ 0.0908 -0.1577]
[-0.0168 -0.3537]
[ 0.1484 0.3223]
[-0.0163 -0.0817]
[ 0.1913 0.0842]
[-0.7338 0.2823]
[-0.075 -0.0102]
[ 0.0018 0.0907]
[ 0.294 -0.2152]
[-0.0328 0.2747]
[-0.3547 -0.0124]
[-0.3915 -0.5958]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('images/05_08.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_09.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_10.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
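For reference, the function below uses the RBF kernel $k(x, x') = \exp(-\gamma \lVert x - x' \rVert^2)$ and centers the kernel matrix as $$K' = K - 1_N K - K 1_N + 1_N K 1_N,$$ where $1_N$ is the $N \times N$ matrix in which every entry equals $1/N$ (this is the `one_n` term in the code).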
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, i]
for i in range(n_components)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('images/05_12.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_13.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_14.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('images/05_15.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_16.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('images/05_17.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
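The `project_x` helper defined below maps a new point $x'$ onto the $j$-th kernel principal component via $$x'_{j} = \sum_{i=1}^{N} \frac{a^{(j)}_i}{\lambda_j} \, k(x', x_i),$$ where $a^{(j)}$ and $\lambda_j$ are the $j$-th eigenvector/eigenvalue pair of the centered kernel matrix, which is exactly what `k.dot(alphas / lambdas)` computes.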
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, i]
for i in range(n_components)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('images/05_18.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('images/05_19.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Summary ... ---Readers may ignore the next cell.
###Code
! python ../.convert_notebook_to_script.py --input ch05.ipynb --output ch05.py
###Output
[NbConvertApp] Converting notebook ch05.ipynb to script
[NbConvertApp] Writing 27719 bytes to ch05.py
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2016-09-29
CPython 3.5.2
IPython 5.1.0
numpy 1.11.1
scipy 0.18.1
matplotlib 1.5.1
sklearn 0.18
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1:-Separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2:-Separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you may remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.My initial typo reflects a common mistake: some people do *not* re-use these parameters from the model training/building step and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1mean: 20, std.: 8.2After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ?If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and the classifier would (probably incorrectly) assign new_6 to class_1:- new_std_4: -1.21 -> class 2- new_std_5: 0 -> class 2- new_std_6: 1.21 -> class 1However, if we use the parameters from our "training set standardization," we'd get the values:- new_std_4: -1.83 -> class 2- new_std_5: -1.71 -> class 2- new_std_6: -1.59 -> class 2The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:```[[ 0.14669811 0.50417079][-0.24224554 0.24216889][-0.02993442 0.28698484][-0.25519002 -0.06468718][ 0.12079772 0.22995385][ 0.38934455 0.09363991][ 0.42326486 0.01088622][-0.30634956 0.01870216][ 0.30572219 0.03040352][-0.09869191 0.54527081]```Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have$$\Sigma v = \lambda v,$$where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
label='projection of held-out point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='some point [1.8713, 0.0093]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='projection of held-out point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015-2017 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2017-03-10
numpy 1.12.0
scipy 0.18.1
matplotlib 2.0.0
sklearn 0.18.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1:-Separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2:-Separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you may remember from Chapter 3, the correct way is to re-use parameters from the training set if we are doing any kind of transformation -- the test set should basically stand for "new, unseen" data.My initial typo reflects a common mistake: some people do *not* re-use these parameters from the model training/building step and standardize the new data "from scratch." Here's a simple example to explain why this is a problem.Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):- train_1: 10 cm -> class_2- train_2: 20 cm -> class_2- train_3: 30 cm -> class_1mean: 20, std.: 8.2After standardization, the transformed feature values are- train_std_1: -1.21 -> class_2- train_std_2: 0 -> class_2- train_std_3: 1.21 -> class_1Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:- new_4: 5 cm -> class ?- new_5: 6 cm -> class ?- new_6: 7 cm -> class ?If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we would get values similar to those in the training set, and the classifier would (probably incorrectly) assign new_6 to class_1:- new_std_4: -1.21 -> class 2- new_std_5: 0 -> class 2- new_std_6: 1.21 -> class 1However, if we use the parameters from our "training set standardization," we'd get the values:- new_std_4: -1.83 -> class 2- new_std_5: -1.71 -> class 2- new_std_6: -1.59 -> class 2The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors.    >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)    This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note**
Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:
```
[[ 0.14669811  0.50417079]
[-0.24224554  0.24216889]
[-0.02993442  0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772  0.22995385]
[ 0.38934455  0.09363991]
[ 0.42326486  0.01088622]
[-0.30634956  0.01870216]
[ 0.30572219  0.03040352]
[-0.09869191  0.54527081]
```
Please note that this is not an issue: if $v$ is an eigenvector of a matrix $\Sigma$ with eigenvalue $\lambda$, that is,
$$\Sigma v = \lambda v,$$
then $-v$ is also an eigenvector with the same eigenvalue, since
$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
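A quick numerical check of this statement (a minimal sketch re-using `cov_mat`, `eigen_vals`, and `eigen_vecs` from above):

```
# Minimal sketch: an eigenvector and its negative satisfy the same eigenvalue equation.
v, lam = eigen_vecs[:, 0], eigen_vals[0]
print(np.allclose(cov_mat.dot(v), lam * v))      # True
print(np.allclose(cov_mat.dot(-v), lam * (-v)))  # True as well
```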
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training a logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
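For reference (a standard formulation, stated here because the following cells build exactly these quantities): LDA seeks a projection vector $\mathbf{w}$ that maximizes the Fisher criterion $$J(\mathbf{w}) = \frac{\mathbf{w}^T S_B\, \mathbf{w}}{\mathbf{w}^T S_W\, \mathbf{w}},$$ where $S_W$ is the within-class scatter matrix and $S_B$ the between-class scatter matrix computed below.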
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
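The reasoning, restated for clarity: dividing a class scatter matrix $S_i$ by its sample count turns it into that class's covariance matrix, so summing `np.cov` results is simply a scaled (class-size-normalized) within-class scatter matrix. With NumPy's default (unbiased) estimator the denominator is $n_i - 1$: $$\Sigma_i = \frac{1}{n_i - 1} S_i = \frac{1}{n_i - 1}\sum_{\mathbf{x} \in D_i}\left(\mathbf{x} - \mathbf{m}_i\right)\left(\mathbf{x} - \mathbf{m}_i\right)^T$$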
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
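For reference, this is the generalized eigenvalue problem $$S_B\,\mathbf{v} = \lambda\, S_W\,\mathbf{v},$$ which, assuming $S_W$ is invertible, is equivalent to the standard eigenvalue problem $$S_W^{-1} S_B\,\mathbf{v} = \lambda\,\mathbf{v}$$ solved in the cell below.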
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the matrix $S_W^{-1}S_B$ into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B)) Unlike the covariance matrix in the PCA section, $S_W^{-1}S_B$ is in general not symmetric, so [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html), which is designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix), does not apply directly here. The numerically less stable `np.linalg.eig` can decompose nonsymmetric square matrices, but it may return complex eigenvalues in certain cases, which is why the real parts are taken below. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
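One step in the function below that is easy to gloss over is the centering of the kernel matrix. Because the mapped features are never computed explicitly, they cannot be mean-centered directly; instead the kernel matrix itself is centered via $$K' = K - \mathbf{1}_N K - K\,\mathbf{1}_N + \mathbf{1}_N K\,\mathbf{1}_N,$$ where $\mathbf{1}_N$ is an $N \times N$ matrix in which every entry equals $1/N$ (this is exactly what the `one_n` lines compute).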
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them sorted in ascending order
    eigvals, eigvecs = eigh(K)
    # Collect the top k eigenvectors (projected samples);
    # a list (rather than a generator) avoids issues with newer NumPy versions
    X_pc = np.column_stack([eigvecs[:, -i]
                            for i in range(1, n_components + 1)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
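The idea behind the `project_x` helper defined below, restated for clarity: a new sample $\mathbf{x}'$ is projected onto a kernel principal component via $$\phi(\mathbf{x}')^T \mathbf{v} = \sum_{i} a^{(i)}\,\kappa\!\left(\mathbf{x}', \mathbf{x}^{(i)}\right),$$ and because the stored training projections are the eigenvectors $\mathbf{a}$ of the centered kernel matrix (which satisfy $K\mathbf{a} = \lambda\,\mathbf{a}$), dividing by the eigenvalue $\lambda$ puts a re-projected point on the same scale as those stored projections; this is what `k.dot(alphas / lambdas)` computes.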
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them sorted in ascending order
    eigvals, eigvecs = eigh(K)
    # Collect the top k eigenvectors (projected samples);
    # a list (rather than a generator) avoids issues with newer NumPy versions
    alphas = np.column_stack([eigvecs[:, -i]
                              for i in range(1, n_components + 1)])
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
            label='original projection of point X[-1]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
            label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='some point [1.8713, 0.0093]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
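The cell below fits `KernelPCA` and transforms the training data in one step with `fit_transform`. Once fitted, the same estimator can also project previously unseen samples with its `transform` method, which replaces the manual `project_x` function from the previous section. A minimal sketch (assuming `scikit_kpca` has been fitted as in the cell below and `X_new` is a hypothetical array with the same number of features):

```
# Minimal sketch: project new, unseen samples with an already fitted KernelPCA estimator.
X_new_kpca = scikit_kpca.transform(X_new)  # X_new is a placeholder, shape [n_new_samples, 2]
```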
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Chapter 5. Compressing Data via Dimensionality Reduction **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.** View in the Jupyter notebook viewer / Run in Google Colab `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the following cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,scipy,matplotlib,sklearn
###Output
last updated: 2020-05-22
numpy 1.18.4
scipy 1.4.1
matplotlib 3.2.1
sklearn 0.23.1
###Markdown
Unsupervised dimensionality reduction via principal component analysis Extracting the principal components step by step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# UCI 머신 러닝 저장소에서 Wine 데이터셋을 다운로드할 수 없을 때
# 다음 주석을 해제하고 로컬 경로에서 데이터셋을 적재하세요.
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Split the data into 70% training and 30% test sets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardize the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Eigendecomposition of the covariance matrix
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\n고윳값 \n%s' % eigen_vals)
###Output
고윳값
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# (고윳값, 고유벡터) 튜플의 리스트를 만듭니다
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# 높은 값에서 낮은 값으로 (고윳값, 고유벡터) 튜플을 정렬합니다
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('투영 행렬 W:\n', w)
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **Note** The following four cells are not in the book; they were added to reproduce the results of the PCA implementation above with scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# 마커와 컬러맵을 준비합니다
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# 결정 경계를 그립니다
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# 클래스 샘플을 표시합니다
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap.colors[idx],
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training a logistic regression classifier using the first two principal components.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # 특성의 수
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('클래스 내의 산포 행렬: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
클래스 내의 산포 행렬: 13x13
###Markdown
Since the classes are not equally distributed, it is better to use the covariance matrix:
###Code
print('클래스 레이블 분포: %s'
% np.bincount(y_train)[1:])
d = 13 # 특성의 수
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T, bias=True)
S_W += class_scatter
print('스케일 조정된 클래스 내의 산포 행렬: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
스케일 조정된 클래스 내의 산포 행렬: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
mean_overall = mean_overall.reshape(d, 1) # 열 벡터로 만들기
d = 13 # 특성의 수
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # 열 벡터로 만들기
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('클래스 간의 산포 행렬: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
클래스 간의 산포 행렬: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
Sort the eigenvectors in decreasing order of the eigenvalues:
###Code
# (고윳값, 고유벡터) 튜플의 리스트를 만듭니다.
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# (고윳값, 고유벡터) 튜플을 큰 값에서 작은 값 순서대로 정렬합니다.
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# 고윳값의 역순으로 올바르게 정렬되었는지 확인합니다.
print('내림차순의 고윳값:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('행렬 W:\n', w)
###Output
행렬 W:
[[-0.1484 -0.4093]
[ 0.091 -0.1583]
[-0.0168 -0.3536]
[ 0.1487 0.322 ]
[-0.0165 -0.0813]
[ 0.1912 0.0841]
[-0.7333 0.2828]
[-0.0751 -0.0099]
[ 0.002 0.0902]
[ 0.2953 -0.2168]
[-0.0327 0.274 ]
[-0.3539 -0.0133]
[-0.3918 -0.5954]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Translator's note
###Code
y_uniq, y_count = np.unique(y_train, return_counts=True)
priors = y_count / X_train_std.shape[0]
priors
###Output
_____no_output_____
###Markdown
$\sigma_{jk} = \frac{1}{n} \sum_{i=1}^n (x_j^{(i)}-\mu_j)(x_k^{(i)}-\mu_k)$$m = \sum_{i=1}^c \frac{n_i}{n} m_i$$S_W = \sum_{i=1}^c \frac{n_i}{n} S_i = \sum_{i=1}^c \frac{n_i}{n} \Sigma_i$
###Code
s_w = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, label in enumerate(y_uniq):
# 1/n로 나눈 공분산 행렬을 얻기 위해 bias=True로 지정합니다.
s_w += priors[i] * np.cov(X_train_std[y_train == label].T, bias=True)
###Output
_____no_output_____
###Markdown
$ S_B = S_T-S_W = \sum_{i=1}^{c}\frac{n_i}{n}(m_i-m)(m_i-m)^T $
###Code
s_b = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, mean_vec in enumerate(mean_vecs):
n = X_train_std[y_train == i + 1].shape[0]
mean_vec = mean_vec.reshape(-1, 1)
s_b += priors[i] * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
import scipy
ei_val, ei_vec = scipy.linalg.eigh(s_b, s_w)
ei_vec = ei_vec[:, np.argsort(ei_val)[::-1]]
ei_vec /= np.linalg.norm(ei_vec, axis=0)
lda_eigen = LDA(solver='eigen')
lda_eigen.fit(X_train_std, y_train)
# 클래스 내의 산포 행렬은 covariance_ 속성에 저장되어 있습니다.
np.allclose(s_w, lda_eigen.covariance_)
Sb = np.cov(X_train_std.T, bias=True) - lda_eigen.covariance_
np.allclose(Sb, s_b)
np.allclose(lda_eigen.scalings_[:, :2], ei_vec[:, :2])
np.allclose(lda_eigen.transform(X_test_std), np.dot(X_test_std, ei_vec[:, :2]))
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF 커널 PCA 구현
매개변수
------------
X: {넘파이 ndarray}, shape = [n_samples, n_features]
gamma: float
RBF 커널 튜닝 매개변수
n_components: int
반환할 주성분 개수
반환값
------------
X_pc: {넘파이 ndarray}, shape = [n_samples, k_features]
투영된 데이터셋
"""
# MxN 차원의 데이터셋에서 샘플 간의 유클리디안 거리의 제곱을 계산합니다.
sq_dists = pdist(X, 'sqeuclidean')
# 샘플 간의 거리를 정방 대칭 행렬로 변환합니다.
mat_sq_dists = squareform(sq_dists)
# 커널 행렬을 계산합니다.
K = exp(-gamma * mat_sq_dists)
# 커널 행렬을 중앙에 맞춥니다.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# 중앙에 맞춰진 커널 행렬의 고윳값과 고유벡터를 구합니다.
# scipy.linalg.eigh 함수는 오름차순으로 반환합니다.
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# 최상위 k 개의 고유벡터를 선택합니다(결과값은 투영된 샘플입니다).
X_pc = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from numpy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF 커널 PCA 구현
매개변수
------------
X: {넘파이 ndarray}, shape = [n_samples, n_features]
gamma: float
RBF 커널 튜닝 매개변수
n_components: int
반환할 주성분 개수
Returns
------------
alphas: {넘파이 ndarray}, shape = [n_samples, k_features]
투영된 데이터셋
lambdas: list
고윳값
"""
# MxN 차원의 데이터셋에서 샘플 간의 유클리디안 거리의 제곱을 계산합니다.
sq_dists = pdist(X, 'sqeuclidean')
# 샘플 간의 거리를 정방 대칭 행렬로 변환합니다.
mat_sq_dists = squareform(sq_dists)
# 커널 행렬을 계산합니다.
K = exp(-gamma * mat_sq_dists)
# 커널 행렬을 중앙에 맞춥니다.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# 중앙에 맞춰진 커널 행렬의 고윳값과 고유 벡터를 구합니다.
# scipy.linalg.eigh 함수는 오름차순으로 반환합니다.
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# 최상위 k 개의 고유 벡터를 선택합니다(투영 결과).
alphas = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
# 고유 벡터에 상응하는 고윳값을 선택합니다.
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # 원본 투영
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# 새로운 데이터포인트를 투영합니다.
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Chapter 5. Compressing Data via Dimensionality Reduction **You can view this notebook in the Jupyter notebook viewer (nbviewer.jupyter.org) or run it in Google Colab (colab.research.google.com) via the links below.** View in the Jupyter notebook viewer / Run in Google Colab `watermark` is a utility for printing the Python packages used in a Jupyter notebook. To install the `watermark` package, uncomment the following cell and run it.
###Code
#!pip install watermark
%load_ext watermark
%watermark -u -d -p numpy,scipy,matplotlib,sklearn
###Output
last updated: 2019-04-26
numpy 1.16.3
scipy 1.2.1
matplotlib 3.0.3
sklearn 0.20.3
###Markdown
Unsupervised dimensionality reduction via principal component analysis Extracting the principal components step by step
###Code
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
# UCI 머신 러닝 저장소에서 Wine 데이터셋을 다운로드할 수 없을 때
# 다음 주석을 해제하고 로컬 경로에서 데이터셋을 적재하세요.
# df_wine = pd.read_csv('wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Split the data into 70% training and 30% test sets.
###Code
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
###Output
_____no_output_____
###Markdown
Standardize the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Eigendecomposition of the covariance matrix
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\n고윳값 \n%s' % eigen_vals)
###Output
고윳값
[4.84274532 2.41602459 1.54845825 0.96120438 0.84166161 0.6620634
0.51828472 0.34650377 0.3131368 0.10754642 0.21357215 0.15362835
0.1808613 ]
###Markdown
Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# (고윳값, 고유벡터) 튜플의 리스트를 만듭니다
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# 높은 값에서 낮은 값으로 (고윳값, 고유벡터) 튜플을 정렬합니다
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('투영 행렬 W:\n', w)
X_train_std[0].dot(w)
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn **Note** The following four cells are not in the book; they were added to reproduce the results of the PCA implementation above with scikit-learn:
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# 마커와 컬러맵을 준비합니다
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# 결정 경계를 그립니다
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# 클래스 샘플을 표시합니다
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap.colors[idx],
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training a logistic regression classifier using the first two principal components.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9066 -0.3497 0.3201 -0.7189 0.5056 0.8807 0.9589 -0.5516 0.5416
0.2338 0.5897 0.6563 1.2075]
MV 2: [-0.8749 -0.2848 -0.3735 0.3157 -0.3848 -0.0433 0.0635 -0.0946 0.0703
-0.8286 0.3144 0.3608 -0.7253]
MV 3: [ 0.1992 0.866 0.1682 0.4148 -0.0451 -1.0286 -1.2876 0.8287 -0.7795
0.9649 -1.209 -1.3622 -0.4013]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # 특성의 수
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('클래스 내의 산포 행렬: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
클래스 내의 산포 행렬: 13x13
###Markdown
Since the classes are not equally distributed, it is better to use the covariance matrix:
###Code
print('클래스 레이블 분포: %s'
% np.bincount(y_train)[1:])
d = 13 # 특성의 수
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T, bias=True)
S_W += class_scatter
print('스케일 조정된 클래스 내의 산포 행렬: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
스케일 조정된 클래스 내의 산포 행렬: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
mean_overall = mean_overall.reshape(d, 1) # 열 벡터로 만들기
d = 13 # 특성의 수
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # 열 벡터로 만들기
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('클래스 간의 산포 행렬: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
클래스 간의 산포 행렬: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
Sort the eigenvectors in decreasing order of the eigenvalues:
###Code
# (고윳값, 고유벡터) 튜플의 리스트를 만듭니다.
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# (고윳값, 고유벡터) 튜플을 큰 값에서 작은 값 순서대로 정렬합니다.
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# 고윳값의 역순으로 올바르게 정렬되었는지 확인합니다.
print('내림차순의 고윳값:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('행렬 W:\n', w)
###Output
행렬 W:
[[-0.1484 -0.4093]
[ 0.091 -0.1583]
[-0.0168 -0.3536]
[ 0.1487 0.322 ]
[-0.0165 -0.0813]
[ 0.1912 0.0841]
[-0.7333 0.2828]
[-0.0751 -0.0099]
[ 0.002 0.0902]
[ 0.2953 -0.2168]
[-0.0327 0.274 ]
[-0.3539 -0.0133]
[-0.3918 -0.5954]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0],
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(solver='liblinear', multi_class='auto')
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Translator's note
###Code
y_uniq, y_count = np.unique(y_train, return_counts=True)
priors = y_count / X_train_std.shape[0]
priors
###Output
_____no_output_____
###Markdown
$\sigma_{jk} = \frac{1}{n} \sum_{i=1}^n (x_j^{(i)}-\mu_j)(x_k^{(i)}-\mu_k)$$m = \sum_{i=1}^c \frac{n_i}{n} m_i$$S_W = \sum_{i=1}^c \frac{n_i}{n} S_i = \sum_{i=1}^c \frac{n_i}{n} \Sigma_i$
###Code
s_w = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, label in enumerate(y_uniq):
# 1/n로 나눈 공분산 행렬을 얻기 위해 bias=True로 지정합니다.
s_w += priors[i] * np.cov(X_train_std[y_train == label].T, bias=True)
###Output
_____no_output_____
###Markdown
$ S_B = S_T-S_W = \sum_{i=1}^{c}\frac{n_i}{n}(m_i-m)(m_i-m)^T $
###Code
s_b = np.zeros((X_train_std.shape[1], X_train_std.shape[1]))
for i, mean_vec in enumerate(mean_vecs):
n = X_train_std[y_train == i + 1].shape[0]
mean_vec = mean_vec.reshape(-1, 1)
s_b += priors[i] * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
import scipy
ei_val, ei_vec = scipy.linalg.eigh(s_b, s_w)
ei_vec = ei_vec[:, np.argsort(ei_val)[::-1]]
ei_vec /= np.linalg.norm(ei_vec, axis=0)
lda_eigen = LDA(solver='eigen')
lda_eigen.fit(X_train_std, y_train)
# 클래스 내의 산포 행렬은 covariance_ 속성에 저장되어 있습니다.
np.allclose(s_w, lda_eigen.covariance_)
Sb = np.cov(X_train_std.T, bias=True) - lda_eigen.covariance_
np.allclose(Sb, s_b)
np.allclose(lda_eigen.scalings_[:, :2], ei_vec[:, :2])
np.allclose(lda_eigen.transform(X_test_std), np.dot(X_test_std, ei_vec[:, :2]))
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF 커널 PCA 구현
매개변수
------------
X: {넘파이 ndarray}, shape = [n_samples, n_features]
gamma: float
RBF 커널 튜닝 매개변수
n_components: int
반환할 주성분 개수
반환값
------------
X_pc: {넘파이 ndarray}, shape = [n_samples, k_features]
투영된 데이터셋
"""
# MxN 차원의 데이터셋에서 샘플 간의 유클리디안 거리의 제곱을 계산합니다.
sq_dists = pdist(X, 'sqeuclidean')
# 샘플 간의 거리를 정방 대칭 행렬로 변환합니다.
mat_sq_dists = squareform(sq_dists)
# 커널 행렬을 계산합니다.
K = exp(-gamma * mat_sq_dists)
# 커널 행렬을 중앙에 맞춥니다.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# 중앙에 맞춰진 커널 행렬의 고윳값과 고유벡터를 구합니다.
# scipy.linalg.eigh 함수는 오름차순으로 반환합니다.
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# 최상위 k 개의 고유벡터를 선택합니다(결과값은 투영된 샘플입니다).
X_pc = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF 커널 PCA 구현
매개변수
------------
X: {넘파이 ndarray}, shape = [n_samples, n_features]
gamma: float
RBF 커널 튜닝 매개변수
n_components: int
반환할 주성분 개수
Returns
------------
alphas: {넘파이 ndarray}, shape = [n_samples, k_features]
투영된 데이터셋
lambdas: list
고윳값
"""
# MxN 차원의 데이터셋에서 샘플 간의 유클리디안 거리의 제곱을 계산합니다.
sq_dists = pdist(X, 'sqeuclidean')
# 샘플 간의 거리를 정방 대칭 행렬로 변환합니다.
mat_sq_dists = squareform(sq_dists)
# 커널 행렬을 계산합니다.
K = exp(-gamma * mat_sq_dists)
# 커널 행렬을 중앙에 맞춥니다.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# 중앙에 맞춰진 커널 행렬의 고윳값과 고유 벡터를 구합니다.
# scipy.linalg.eigh 함수는 오름차순으로 반환합니다.
eigvals, eigvecs = eigh(K)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
# 최상위 k 개의 고유 벡터를 선택합니다(투영 결과).
alphas = np.column_stack([eigvecs[:, i]
for i in range(n_components)])
# 고유 벡터에 상응하는 고윳값을 선택합니다.
lambdas = [eigvals[i] for i in range(n_components)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # 원본 투영
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# 새로운 데이터포인트를 투영합니다.
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
###Output
Sebastian Raschka
last updated: 2016-03-25
CPython 3.5.1
IPython 4.0.3
numpy 1.10.4
scipy 0.17.0
matplotlib 1.5.1
scikit-learn 0.17.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1-–-separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2-–-separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---

**Note**

Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you may remember from Chapter 3, the correct way is to re-use the parameters from the training set for any kind of transformation -- the test set should basically stand for "new, unseen" data.

My initial typo reflects a common mistake: some people do *not* re-use these parameters from model training/building and instead standardize the new data "from scratch." Here's a simple example to explain why this is a problem.

Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):

- train_1: 10 cm -> class_2
- train_2: 20 cm -> class_2
- train_3: 30 cm -> class_1

mean: 20, std.: 8.2

After standardization, the transformed feature values are

- train_std_1: -1.21 -> class_2
- train_std_2: 0 -> class_2
- train_std_3: 1.21 -> class_1

Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:

- new_4: 5 cm -> class ?
- new_5: 6 cm -> class ?
- new_6: 7 cm -> class ?

If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these new samples likely belong to class_2. However, if we standardize them by re-computing the mean and standard deviation from the new data alone, we get values similar to those in the training set, and the classifier would (probably incorrectly) assign class_1 to the 7 cm sample:

- new_std_4: -1.21 -> class_2
- new_std_5: 0 -> class_2
- new_std_6: 1.21 -> class_1

However, if we re-use the parameters from the "training set standardization," we get:

- new_std_4: -1.83 -> class_2
- new_std_5: -1.71 -> class_2
- new_std_6: -1.59 -> class_2

The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.

---

Eigendecomposition of the covariance matrix.
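(A quick aside before the eigendecomposition: the sketch below illustrates the standardization point from the note above with the toy "length" numbers. It is not part of the book's code, and the variable names are made up for this example.)

```
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy "length" training set from the note above (values in cm)
X_train_toy = np.array([[10.], [20.], [30.]])
X_new_toy = np.array([[5.], [6.], [7.]])

# Correct: fit the scaler on the training data and re-use its mean/std for new data
sc_toy = StandardScaler().fit(X_train_toy)
print(sc_toy.transform(X_new_toy).ravel())                # roughly [-1.84 -1.71 -1.59]

# Incorrect: re-fitting on the new data re-centers it around its own mean
print(StandardScaler().fit_transform(X_new_toy).ravel())  # roughly [-1.22  0.    1.22]
```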
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.33051429 0.08414846 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
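(As a small aside on the note above, and not part of the book's code: the sketch below applies `numpy.linalg.eigh` to the same covariance matrix; it assumes `cov_mat` from the previous cell.)

```
# Sketch: eigh on the symmetric covariance matrix always yields real eigenvalues,
# returned in ascending order (reversed here for a largest-first view)
eigen_vals_h, eigen_vecs_h = np.linalg.eigh(cov_mat)
print(eigen_vals_h[::-1])
print(np.iscomplexobj(eigen_vals_h))  # False
```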
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.14669811 0.50417079]
[ 0.24224554 0.24216889]
[ 0.02993442 0.28698484]
[ 0.25519002 -0.06468718]
[-0.12079772 0.22995385]
[-0.38934455 0.09363991]
[-0.42326486 0.01088622]
[ 0.30634956 0.01870216]
[-0.30572219 0.03040352]
[ 0.09869191 0.54527081]
[-0.30032535 -0.27924322]
[-0.36821154 -0.174365 ]
[-0.29259713 0.36315461]]
###Markdown
**Note**

Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:

```
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
...
```

Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have

$$\Sigma v = \lambda v,$$

where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since

$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
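As a quick numerical check of this sign-flip argument (a minimal sketch, not part of the book's code; it re-uses `cov_mat`, `eigen_vals`, and `eigen_vecs` from the cells above):

```
# Sketch: v and -v satisfy the same eigenvalue equation for the first eigenpair
v, lam = eigen_vecs[:, 0], eigen_vals[0]
print(np.allclose(cov_mat.dot(v), lam * v))      # True
print(np.allclose(cov_mat.dot(-v), lam * (-v)))  # True
```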
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
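The next cell does this with `np.cov`; the scaling relationship behind it is simply that the covariance matrix equals the scatter matrix divided by $n_i - 1$. A quick sketch to verify this (assumes `X_train_std` and `y_train` from the cells above; not part of the book's code):

```
# Sketch: class covariance matrix == class scatter matrix / (n_i - 1)
X_c = X_train_std[y_train == 1]
X_centered = X_c - X_c.mean(axis=0)
scatter_c = X_centered.T.dot(X_centered)
print(np.allclose(np.cov(X_c.T), scatter_c / (X_c.shape[0] - 1)))  # True
```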
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
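As an aside to the note above (a sketch only, not the book's approach): since $S_W^{-1}S_B$ is generally not symmetric, one can instead solve the equivalent generalized symmetric eigenvalue problem $S_B w = \lambda S_W w$ with `scipy.linalg.eigh`. The sketch assumes the `S_W` and `S_B` matrices computed in the cells above and that `S_W` is positive definite:

```
# Sketch: generalized symmetric eigenproblem S_B w = lambda S_W w
from scipy.linalg import eigh
gen_vals, gen_vecs = eigh(S_B, S_W)                      # real eigenvalues, ascending order
gen_vals, gen_vecs = gen_vals[::-1], gen_vecs[:, ::-1]   # largest first
print(gen_vals[:2])  # the two leading "discriminability" values
```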
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.0662 -0.3797]
[-0.0386 -0.2206]
[ 0.0217 -0.3816]
[-0.184 0.3018]
[ 0.0034 0.0141]
[-0.2326 0.0234]
[ 0.7747 0.1869]
[ 0.0811 0.0696]
[-0.0875 0.1796]
[-0.185 -0.284 ]
[ 0.066 0.2349]
[ 0.3805 0.073 ]
[ 0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.lda import LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
[Sebastian Raschka](http://sebastianraschka.com), 2015https://github.com/rasbt/python-machine-learning-book Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,scipy,matplotlib,scikit-learn
# to install watermark just uncomment the following line:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
###Output
_____no_output_____
###Markdown
Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1-–-separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2-–-separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
from sklearn.cross_validation import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
---

**Note**

Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as you may remember from Chapter 3, the correct way is to re-use the parameters from the training set for any kind of transformation -- the test set should basically stand for "new, unseen" data.

My initial typo reflects a common mistake: some people do *not* re-use these parameters from model training/building and instead standardize the new data "from scratch." Here's a simple example to explain why this is a problem.

Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"):

- train_1: 10 cm -> class_2
- train_2: 20 cm -> class_2
- train_3: 30 cm -> class_1

mean: 20, std.: 8.2

After standardization, the transformed feature values are

- train_std_1: -1.21 -> class_2
- train_std_2: 0 -> class_2
- train_std_3: 1.21 -> class_1

Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify:

- new_4: 5 cm -> class ?
- new_5: 6 cm -> class ?
- new_6: 7 cm -> class ?

If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these new samples likely belong to class_2. However, if we standardize them by re-computing the mean and standard deviation from the new data alone, we get values similar to those in the training set, and the classifier would (probably incorrectly) assign class_1 to the 7 cm sample:

- new_std_4: -1.21 -> class_2
- new_std_5: 0 -> class_2
- new_std_6: 1.21 -> class_1

However, if we re-use the parameters from the "training set standardization," we get:

- new_std_4: -1.83 -> class_2
- new_std_5: -1.71 -> class_2
- new_std_6: -1.59 -> class_2

The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.

---

Eigendecomposition of the covariance matrix.
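In practice, a convenient way to avoid this pitfall (a minimal sketch, not part of the book's code) is to bundle the scaler and the estimator in a scikit-learn `Pipeline`, so the scaling parameters are always learned from the training data only:

```
# Sketch: a Pipeline fits the scaler on the training data and re-uses it for the test data
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([('scaler', StandardScaler()),
                 ('clf', LogisticRegression())])
pipe.fit(X_train, y_train)           # scaler statistics come from X_train only
print(pipe.score(X_test, y_test))    # X_test is transformed with the training-set mean/std
```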
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.33051429 0.08414846 0.29595018 0.16831254 0.21432212
0.2399553 ]
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(reverse=True)
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.14669811 0.50417079]
[ 0.24224554 0.24216889]
[ 0.02993442 0.28698484]
[ 0.25519002 -0.06468718]
[-0.12079772 0.22995385]
[-0.38934455 0.09363991]
[-0.42326486 0.01088622]
[ 0.30634956 0.01870216]
[-0.30572219 0.03040352]
[ 0.09869191 0.54527081]
[-0.30032535 -0.27924322]
[-0.36821154 -0.174365 ]
[-0.29259713 0.36315461]]
###Markdown
**Note**

Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:

```
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
...
```

Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have

$$\Sigma v = \lambda v,$$

where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since

$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
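If reproducible signs matter, for example when comparing against the book's printout, one simple convention (a sketch, not something the book does; scikit-learn applies a similar idea internally) is to flip each eigenvector so that its largest-magnitude entry is positive:

```
# Sketch: enforce a sign convention on the projection matrix w from the cell above
def fix_signs(vecs):
    # Flip each column so that its largest-magnitude component is positive
    max_idx = np.argmax(np.abs(vecs), axis=0)
    signs = np.sign(vecs[max_idx, range(vecs.shape[1])])
    return vecs * signs

w_fixed = fix_signs(w)
print(w_fixed[:3])
```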
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: covariance matrix since classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.0662 -0.3797]
[-0.0386 -0.2206]
[ 0.0217 -0.3816]
[-0.184 0.3018]
[ 0.0034 0.0141]
[-0.2326 0.0234]
[ 0.7747 0.1869]
[ 0.0811 0.0696]
[-0.0875 0.1796]
[-0.185 -0.284 ]
[ 0.066 0.2349]
[ 0.3805 0.073 ]
[ 0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
from sklearn.lda import LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
# numpy.eigh returns them in sorted order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[25]
x_new
x_proj = alphas[25] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='original projection of point X[25]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='remapped point X[25]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) 2015, 2016 [Sebastian Raschka](sebastianraschka.com)https://github.com/rasbt/python-machine-learning-book[MIT License](https://github.com/rasbt/python-machine-learning-book/blob/master/LICENSE.txt) Python Machine Learning - Code Examples Chapter 5 - Compressing Data via Dimensionality Reduction Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -p numpy,scipy,matplotlib,sklearn
###Output
Sebastian Raschka
last updated: 2017-03-06
numpy 1.12.0
scipy 0.18.1
matplotlib 2.0.0
sklearn 0.18.1
###Markdown
*The use of `watermark` is optional. You can install this IPython extension via "`pip install watermark`". For more information, please see: https://github.com/rasbt/watermark.* Overview - [Unsupervised dimensionality reduction via principal component analysis 128](Unsupervised-dimensionality-reduction-via-principal-component-analysis-128) - [Total and explained variance](Total-and-explained-variance) - [Feature transformation](Feature-transformation) - [Principal component analysis in scikit-learn](Principal-component-analysis-in-scikit-learn)- [Supervised data compression via linear discriminant analysis](Supervised-data-compression-via-linear-discriminant-analysis) - [Computing the scatter matrices](Computing-the-scatter-matrices) - [Selecting linear discriminants for the new feature subspace](Selecting-linear-discriminants-for-the-new-feature-subspace) - [Projecting samples onto the new feature space](Projecting-samples-onto-the-new-feature-space) - [LDA via scikit-learn](LDA-via-scikit-learn)- [Using kernel principal component analysis for nonlinear mappings](Using-kernel-principal-component-analysis-for-nonlinear-mappings) - [Kernel functions and the kernel trick](Kernel-functions-and-the-kernel-trick) - [Implementing a kernel principal component analysis in Python](Implementing-a-kernel-principal-component-analysis-in-Python) - [Example 1 – separating half-moon shapes](Example-1:-Separating-half-moon-shapes) - [Example 2 – separating concentric circles](Example-2:-Separating-concentric-circles) - [Projecting new data points](Projecting-new-data-points) - [Kernel principal component analysis in scikit-learn](Kernel-principal-component-analysis-in-scikit-learn)- [Summary](Summary)
###Code
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18+ releases
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
###Output
_____no_output_____
###Markdown
Unsupervised dimensionality reduction via principal component analysis
###Code
Image(filename='./images/05_01.png', width=400)
import pandas as pd
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/wine/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Note:If the link to the Wine dataset provided above does not work for you, you can find a local copy in this repository at [./../datasets/wine/wine.data](./../datasets/wine/wine.data).Or you could fetch it via
###Code
df_wine = pd.read_csv('https://raw.githubusercontent.com/rasbt/python-machine-learning-book/master/code/datasets/wine/wine.data', header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
###Output
_____no_output_____
###Markdown
Splitting the data into 70% training and 30% test subsets.
###Code
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Standardizing the data.
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
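###Markdown
A quick added illustration (not part of the original notebook; the tiny "length" values are made up) of why the scaler fitted on the training data must be re-used on new data rather than re-fitted from scratch; the note below walks through the same point by hand:
###Code
from sklearn.preprocessing import StandardScaler
import numpy as np
X_tr_demo = np.array([[10.], [20.], [30.]])   # toy training "lengths" in cm
X_new_demo = np.array([[5.], [6.], [7.]])     # new, unseen samples
sc_demo = StandardScaler().fit(X_tr_demo)     # learns mean = 20, std ~ 8.2
print(sc_demo.transform(X_new_demo).ravel())            # correct: re-use training parameters -> roughly [-1.8, -1.7, -1.6]
print(StandardScaler().fit_transform(X_new_demo).ravel())  # wrong: re-fitting "from scratch" -> roughly [-1.2, 0., 1.2]
###Output
_____no_output_____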
###Markdown
---**Note**Accidentally, I wrote `X_test_std = sc.fit_transform(X_test)` instead of `X_test_std = sc.transform(X_test)`. In this case, it wouldn't make a big difference since the mean and standard deviation of the test set should be (quite) similar to those of the training set. However, as we remember from Chapter 3, the correct way is to re-use the parameters from the training set whenever we apply any kind of transformation -- the test set should basically stand for "new, unseen" data. My initial typo reflects a common mistake: some people do *not* re-use these parameters from model training/building and instead standardize the new data "from scratch." Here's a simple example to explain why this is a problem. Let's assume we have a simple training set consisting of 3 samples with 1 feature (let's call this feature "length"): - train_1: 10 cm -> class_2 - train_2: 20 cm -> class_2 - train_3: 30 cm -> class_1 (mean: 20, std.: 8.2) After standardization, the transformed feature values are - train_std_1: -1.21 -> class_2 - train_std_2: 0 -> class_2 - train_std_3: 1.21 -> class_1 Next, let's assume our model has learned to classify samples with a standardized length value < 0.6 as class_2 (class_1 otherwise). So far so good. Now, let's say we have 3 unlabeled data points that we want to classify: - new_4: 5 cm -> class ? - new_5: 6 cm -> class ? - new_6: 7 cm -> class ? If we look at the unstandardized "length" values in our training dataset, it is intuitive to say that all of these samples likely belong to class_2. However, if we standardize them by re-computing the standard deviation and mean from the new data, we get values similar to those in the training set, and the classifier would (incorrectly) assign new_6 to class 1: - new_std_4: -1.21 -> class 2 - new_std_5: 0 -> class 2 - new_std_6: 1.21 -> class 1 However, if we re-use the parameters from the training-set standardization, we get the values: - new_std_4: -1.83 -> class 2 - new_std_5: -1.71 -> class 2 - new_std_6: -1.59 -> class 2 The values 5 cm, 6 cm, and 7 cm are much lower than anything we have seen in the training set previously. Thus, it only makes sense that the standardized features of the "new samples" are much lower than every standardized feature in the training set.--- Eigendecomposition of the covariance matrix.
###Code
import numpy as np
cov_mat = np.cov(X_train_std.T)
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
###Output
Eigenvalues
[ 4.8923083 2.46635032 1.42809973 1.01233462 0.84906459 0.60181514
0.52251546 0.08414846 0.33051429 0.29595018 0.16831254 0.21432212
0.2399553 ]
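###Markdown
Since the covariance matrix is symmetric, `np.linalg.eigh` could have been used here as well (see the note below); as a quick added check that is not part of the original code, both routines recover the same spectrum:
###Code
eigh_vals, eigh_vecs = np.linalg.eigh(cov_mat)          # real eigenvalues, in ascending order
print(np.allclose(np.sort(eigen_vals.real), eigh_vals))  # same spectrum as np.linalg.eig -> True
###Output
_____no_output_____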
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function to decompose the symmetric covariance matrix into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(cov_mat) This is not really a "mistake," but probably suboptimal. It would be better to use [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) in such cases, which has been designed for [Hermitian matrices](https://en.wikipedia.org/wiki/Hermitian_matrix). The latter always returns real eigenvalues; the numerically less stable `np.linalg.eig`, which can also decompose nonsymmetric square matrices, may return complex eigenvalues in certain cases. (S.R.) Total and explained variance
###Code
tot = sum(eigen_vals)
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/pca1.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Feature transformation
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
# Note: I added the `key=lambda k: k[0]` in the sort call above
# just like I used it further below in the LDA section.
# This is to avoid problems if there are ties in the eigenvalue
# arrays (i.e., the sorting algorithm will only regard the
# first element of the tuples, now).
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
###Output
Matrix W:
[[ 0.14669811 0.50417079]
[-0.24224554 0.24216889]
[-0.02993442 0.28698484]
[-0.25519002 -0.06468718]
[ 0.12079772 0.22995385]
[ 0.38934455 0.09363991]
[ 0.42326486 0.01088622]
[-0.30634956 0.01870216]
[ 0.30572219 0.03040352]
[-0.09869191 0.54527081]
[ 0.30032535 -0.27924322]
[ 0.36821154 -0.174365 ]
[ 0.29259713 0.36315461]]
###Markdown
**Note**Depending on which version of NumPy and LAPACK you are using, you may obtain the Matrix W with its signs flipped. E.g., the matrix shown in the book was printed as:```[[ 0.14669811 0.50417079][-0.24224554 0.24216889][-0.02993442 0.28698484][-0.25519002 -0.06468718][ 0.12079772 0.22995385][ 0.38934455 0.09363991][ 0.42326486 0.01088622][-0.30634956 0.01870216][ 0.30572219 0.03040352][-0.09869191 0.54527081]```Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have $$\Sigma v = \lambda v,$$ where $\lambda$ is our eigenvalue, then $-v$ is also an eigenvector that has the same eigenvalue, since $$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
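This can also be checked numerically (an added illustration, not part of the original code): both $v$ and $-v$ satisfy the eigenvalue equation for the covariance matrix.
###Code
v = eigen_pairs[0][1]   # first principal eigenvector (sign may differ between LAPACK builds)
lam = eigen_pairs[0][0] # its eigenvalue (non-negative for a covariance matrix, so taking abs() changed nothing)
print(np.allclose(cov_mat.dot(v), lam * v))      # True
print(np.allclose(cov_mat.dot(-v), lam * (-v)))  # -v is an eigenvector for the same eigenvalue -> True
###Output
_____no_output_____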
###Code
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca2.png', dpi=300)
plt.show()
X_train_std[0].dot(w)
###Output
_____no_output_____
###Markdown
Principal component analysis in scikit-learn
###Code
from sklearn.decomposition import PCA
pca = PCA()
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
###Output
_____no_output_____
###Markdown
Training logistic regression classifier using the first 2 principal components.
###Code
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_pca, y_train)
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca3.png', dpi=300)
plt.show()
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/pca4.png', dpi=300)
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
Supervised data compression via linear discriminant analysis
###Code
Image(filename='./images/05_06.png', width=400)
###Output
_____no_output_____
###Markdown
Computing the scatter matrices Calculate the mean vectors for each class:
###Code
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train == label], axis=0))
print('MV %s: %s\n' % (label, mean_vecs[label - 1]))
###Output
MV 1: [ 0.9259 -0.3091 0.2592 -0.7989 0.3039 0.9608 1.0515 -0.6306 0.5354
0.2209 0.4855 0.798 1.2017]
MV 2: [-0.8727 -0.3854 -0.4437 0.2481 -0.2409 -0.1059 0.0187 -0.0164 0.1095
-0.8796 0.4392 0.2776 -0.7016]
MV 3: [ 0.1637 0.8929 0.3249 0.5658 -0.01 -0.9499 -1.228 0.7436 -0.7652
0.979 -1.1698 -1.3007 -0.3912]
###Markdown
Compute the within-class scatter matrix:
###Code
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d)) # scatter matrix for each class
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1) # make column vectors
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter # sum class scatter matrices
print('Within-class scatter matrix: %sx%s' % (S_W.shape[0], S_W.shape[1]))
###Output
Within-class scatter matrix: 13x13
###Markdown
Better: use the covariance matrix (i.e., the scaled scatter matrix) for each class, since the classes are not equally distributed:
###Code
print('Class label distribution: %s'
% np.bincount(y_train)[1:])
d = 13 # number of features
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: %sx%s' % (S_W.shape[0],
S_W.shape[1]))
###Output
Scaled within-class scatter matrix: 13x13
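###Markdown
The "scaling" can be verified directly (an added sanity check, not part of the original code): for a single class, the scatter matrix divided by $N_i - 1$ is exactly the sample covariance matrix returned by `np.cov`.
###Code
lbl = 1
X_c = X_train_std[y_train == lbl]
mv_c = X_c.mean(axis=0).reshape(-1, 1)
S_c = sum((row.reshape(-1, 1) - mv_c).dot((row.reshape(-1, 1) - mv_c).T) for row in X_c)
print(np.allclose(S_c / (X_c.shape[0] - 1), np.cov(X_c.T)))  # True
###Output
_____no_output_____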
###Markdown
Compute the between-class scatter matrix:
###Code
mean_overall = np.mean(X_train_std, axis=0)
d = 13 # number of features
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train == i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1) # make column vector
mean_overall = mean_overall.reshape(d, 1) # make column vector
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: %sx%s' % (S_B.shape[0], S_B.shape[1]))
###Output
Between-class scatter matrix: 13x13
###Markdown
Selecting linear discriminants for the new feature subspace Solve the generalized eigenvalue problem for the matrix $S_W^{-1}S_B$:
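For context (an added note, not from the original text): this eigenproblem comes from maximizing the Fisher criterion $$J(w) = \frac{w^T S_B w}{w^T S_W w},$$ whose stationary points satisfy $S_B w = \lambda S_W w$, i.e. $S_W^{-1} S_B w = \lambda w$ when $S_W$ is invertible; the linear discriminants are the eigenvectors belonging to the largest eigenvalues $\lambda$.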
###Code
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
###Output
_____no_output_____
###Markdown
**Note**: Above, I used the [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html) function, this time to decompose the matrix $S_W^{-1}S_B$ into its eigenvalues and eigenvectors. >>> eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B)) For symmetric ([Hermitian](https://en.wikipedia.org/wiki/Hermitian_matrix)) matrices, [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html) would be preferable because it always returns real eigenvalues. Here, however, $S_W^{-1}S_B$ is not symmetric in general, so the numerically less stable `np.linalg.eig` is used and the real parts of the (possibly complex) eigenvalues are taken below. (S.R.) Sort eigenvectors in decreasing order of the eigenvalues:
###Code
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
print('Eigenvalues in decreasing order:\n')
for eigen_val in eigen_pairs:
print(eigen_val[0])
tot = sum(eigen_vals.real)
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
cum_discr = np.cumsum(discr)
plt.bar(range(1, 14), discr, alpha=0.5, align='center',
label='individual "discriminability"')
plt.step(range(1, 14), cum_discr, where='mid',
label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/lda1.png', dpi=300)
plt.show()
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real,
eigen_pairs[1][1][:, np.newaxis].real))
print('Matrix W:\n', w)
###Output
Matrix W:
[[-0.0662 -0.3797]
[ 0.0386 -0.2206]
[-0.0217 -0.3816]
[ 0.184 0.3018]
[-0.0034 0.0141]
[ 0.2326 0.0234]
[-0.7747 0.1869]
[-0.0811 0.0696]
[ 0.0875 0.1796]
[ 0.185 -0.284 ]
[-0.066 0.2349]
[-0.3805 0.073 ]
[-0.3285 -0.5971]]
###Markdown
Projecting samples onto the new feature space
###Code
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train == l, 0] * (-1),
X_train_lda[y_train == l, 1] * (-1),
c=c, label=l, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower right')
plt.tight_layout()
# plt.savefig('./figures/lda2.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
LDA via scikit-learn
###Code
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr = lr.fit(X_train_lda, y_train)
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda3.png', dpi=300)
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./images/lda4.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Using kernel principal component analysis for nonlinear mappings
###Code
Image(filename='./images/05_11.png', width=500)
###Output
_____no_output_____
###Markdown
Implementing a kernel principal component analysis in Python
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
return X_pc
###Output
_____no_output_____
###Markdown
Example 1: Separating half-moon shapes
###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/half_moon_1.png', dpi=300)
plt.show()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((50, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((50, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/half_moon_2.png', dpi=300)
plt.show()
from matplotlib.ticker import FormatStrFormatter
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1,ncols=2, figsize=(7,3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50,1))+0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50,1))-0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.tight_layout()
# plt.savefig('./figures/half_moon_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Example 2: Separating concentric circles
###Code
from sklearn.datasets import make_circles
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y == 0, 0], X[y == 0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y == 1, 0], X[y == 1, 1], color='blue', marker='o', alpha=0.5)
plt.tight_layout()
# plt.savefig('./figures/circles_1.png', dpi=300)
plt.show()
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y == 0, 0], X_spca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y == 1, 0], X_spca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_spca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_2.png', dpi=300)
plt.show()
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y == 0, 0], X_kpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y == 1, 0], X_kpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
ax[1].scatter(X_kpca[y == 0, 0], np.zeros((500, 1)) + 0.02,
color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y == 1, 0], np.zeros((500, 1)) - 0.02,
color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.tight_layout()
# plt.savefig('./figures/circles_3.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Projecting new data points
###Code
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
"""
RBF kernel PCA implementation.
Parameters
------------
X: {NumPy ndarray}, shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------------
X_pc: {NumPy ndarray}, shape = [n_samples, k_features]
Projected dataset
lambdas: list
Eigenvalues
"""
# Calculate pairwise squared Euclidean distances
# in the MxN dimensional dataset.
sq_dists = pdist(X, 'sqeuclidean')
# Convert pairwise distances into a square matrix.
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix.
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix.
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtaining eigenpairs from the centered kernel matrix
    # scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
alphas = np.column_stack((eigvecs[:, -i]
for i in range(1, n_components + 1)))
# Collect the corresponding eigenvalues
lambdas = [eigvals[-i] for i in range(1, n_components + 1)]
return alphas, lambdas
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X, gamma=15, n_components=1)
x_new = X[-1]
x_new
x_proj = alphas[-1] # original projection
x_proj
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_reproj = project_x(x_new, X, gamma=15, alphas=alphas, lambdas=lambdas)
x_reproj
plt.scatter(alphas[y == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y == 1, 0], np.zeros((50)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
            label='original projection of point X[-1]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
            label='remapped point X[-1]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
X, y = make_moons(n_samples=100, random_state=123)
alphas, lambdas = rbf_kernel_pca(X[:-1, :], gamma=15, n_components=1)
def project_x(x_new, X, gamma, alphas, lambdas):
pair_dist = np.array([np.sum((x_new - row)**2) for row in X])
k = np.exp(-gamma * pair_dist)
return k.dot(alphas / lambdas)
# projection of the "new" datapoint
x_new = X[-1]
x_reproj = project_x(x_new, X[:-1], gamma=15, alphas=alphas, lambdas=lambdas)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.scatter(alphas[y[:-1] == 0, 0], np.zeros((50)),
color='red', marker='^', alpha=0.5)
plt.scatter(alphas[y[:-1] == 1, 0], np.zeros((49)),
color='blue', marker='o', alpha=0.5)
plt.scatter(x_proj, 0, color='black',
label='some point [1.8713, 0.0093]', marker='^', s=100)
plt.scatter(x_reproj, 0, color='green',
label='new point [ 100.0, 100.0]', marker='x', s=500)
plt.legend(scatterpoints=1)
plt.tight_layout()
# plt.savefig('./figures/reproject.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Kernel principal component analysis in scikit-learn
###Code
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y == 0, 0], X_skernpca[y == 0, 1],
color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y == 1, 0], X_skernpca[y == 1, 1],
color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.tight_layout()
# plt.savefig('./figures/scikit_kpca.png', dpi=300)
plt.show()
###Output
_____no_output_____ |
Extract/get_coordinates.ipynb | ###Markdown
Galsim is simulation software that allows you to reproduce astronomical scenes of the deep sky. For this it uses analytical profiles, but also models of galaxies built from real images of the COSMOS fields. The good news is that all of our AGN images are from the COSMOS fields too! Galsim uses a catalog of galaxies and allows the user to draw images for these galaxies on a pixel grid, as well as to draw their PSFs. Be sure to download the catalog here: [https://github.com/GalSim-developers/GalSim/wiki/RealGalaxy%20Data](https://github.com/GalSim-developers/GalSim/wiki/RealGalaxy%20Data) and to install galsim.
###Code
i=2 #Index of files start at 2
ras = []
decs = []
#Stores ra and dec coordinates of HST AGNs
for f in files:
ra, dec = f.split('_')[1:3]
if f.split('_')[-1] == 'sci.fits':
#print(ra, dec)
        ras.append(float(ra))  # use the built-in float (np.float was removed in recent NumPy versions)
        decs.append(float(dec))
i+=1
#Now arrays ras and decs contain the ra-dec coordinates of the centers of all the HST AGN images
#Coordinates of all the galaxies in the galsim COSMOS sample
gal_ra, gal_dec = [], []
for g in galsim_cat:
ra, dec = g[1], g[2]
gal_ra.append(ra)
gal_dec.append(dec)
#Positions of AGN and galsim sources
plt.figure(figsize = (15,15))
plt.title('Position of galsim galaxies and AGNs', fontsize = '30')
plt.plot(np.array(gal_ra), np.array(gal_dec), 'o', label = 'galsim galaxies')
plt.plot(np.array(ras), np.array(decs), 'o', label = 'AGN positions')
plt.xlabel('Ra', fontsize = 20)
plt.ylabel('Dec', fontsize = 20)
plt.legend()
plt.show()
print(i-2)
def galsim_psf_picker(index, catalog):
""" A function that extract the psf of a galsim galaxy
Paramters
---------
index: int
index of the galsim galaxy for which we want to extract the psf
catalog: list
list of galsim object from a galsim catalog
    Returns
-------
psf: array
image of the psf for galsim image at index `index`
"""
gal_cat = galsim.RealGalaxyCatalog(file_name=catalog)
psf = gal_cat.getPSF(index).drawImage(nx=51, ny=51, scale=0.03, method='real_space', offset = (-1,-1))
return psf.array
# Example of how the psf picker works:
# Show the psf for the first galaxy in the galsim catalog:
psf0 = galsim_psf_picker(0, galsim_file)
print(np.where(psf0 == np.max(psf0)))
plt.title('psf')
plt.imshow(psf0, cmap = 'gist_stern')#Use np.log10(psf0) to reveal seemingly hidden features.
plt.axis('off') #Remove the indexation of the x and y axes
plt.show()
###Output
(array([25]), array([25]))
###Markdown
What we want is to extract the HST PSF for each AGN (orange points on the first plot). To do so, we use galsim images. Galsim has a set of PSFs modeled for each galaxy in the COSMOS sample. The catalog `gal_cat` contains a list of galaxies, the coordinates of which are represented in blue in the first plot and stored in the variables `gal_ra` and `gal_dec`. For each AGN, we will find the closest galaxy in the galsim sample and record its index. Then we will use the function `galsim_psf_picker` to extract the psf from this galaxy and use it as the PSF for the corresponding AGN. To do so, we will save the psf as a file with the following name: `'index_ra_dec_HST_psf.fits'`. In the name, ra and dec should be replaced by the values of the coordinates of the psf, and index should be the index of the AGN to which this PSF corresponds. Make sure you understand what every variable contains. Printing the variables that you are not sure about helps. You can save images as fits files using the instructions found here: [https://docs.astropy.org/en/stable/io/fits/creating-a-new-fits-file](https://docs.astropy.org/en/stable/io/fits/creating-a-new-fits-file)
###Code
# Your turn now!
# We need a psf for each AGN galaxy in our sample
# This requires finding for each AGN galaxy the closest galsim galaxy.
# We will start by creating an array of size 2*N (N: the number of galaxies in our sample) that contains the coordinates of the AGN galaxies.
#Note, at the moment, these coordinates are in arrays `ras` and `decs` of size N each
coord = np.array([ras, decs]).T
indAgn = 2
for c in coord:
#c should be a coordinate point of size 2 with the ra,dec coordinates of an AGN galaxy
#Now compute the distance between c and each point of the galsim catalog
d = np.sqrt( ((c[0]-gal_ra)**2)+(c[1]-gal_dec)**2)
# print(c[0], c[1])
# print(gal_ra[:10], gal_dec[:10])
# find the index of the closest galsim galaxy to `c`
ind = np.where(d == np.min(d))[0][0]
    #Extract the psf for the galsim galaxy at index `ind`
psf = galsim_psf_picker(ind, galsim_file)
    #Save the PSF in a fits file whose name starts with the index of the AGN galaxy (be careful, it's not the index of the galsim galaxy)
hdu = pf.PrimaryHDU(psf)
hdul = pf.HDUList([hdu])
#The final name of your files should look something like '2-psf-HST-COSMOS.fits', '3-psf-HST-COSMOS.fits', etc
hdul.writeto(f'HST_psfs/{indAgn}-psf-HST-COSMOS.fits')
indAgn+=1
###Output
_____no_output_____ |
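###Markdown
A side note on the matching step (this cell is an added sketch, not part of the original solution): the per-AGN loop above scales with the product of the two catalogue sizes. The same nearest-neighbour lookup can be done in a single call with a KD-tree; the snippet assumes the `coord`, `gal_ra` and `gal_dec` arrays built above and, like the loop, matches in raw (ra, dec) space without accounting for spherical geometry.
###Code
from scipy.spatial import cKDTree
# Build a KD-tree over the galsim galaxy positions and query all AGN positions at once
gal_coords = np.column_stack([gal_ra, gal_dec])
tree = cKDTree(gal_coords)
dists, nearest_idx = tree.query(coord)  # nearest_idx[i] = index of the closest galsim galaxy to AGN i
print(nearest_idx[:5])
###Output
_____no_output_____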
G2_data_analysis.ipynb | ###Markdown
G2 - Degree of life satisfaction (Grado di soddisfazione della vita)
###Code
# Import libraries for data analysis (Pandas) and Istat data
import os
import pandas as pd
import numpy as np
from IPython.core.display import HTML
import istat
import jsonstat
# cache dir to speed up local analysis
cache_dir = os.path.abspath(os.path.join("..", "tmp/od_la_grande_fuga", "istat_cached"))
istat.cache_dir(cache_dir)
istat.lang(0) # language: Italian
print("cache_dir is '{}'".format(istat.cache_dir()))
# Directory
dir_df = os.path.join(os.path.abspath(''),'stg')
# AREA -> Citizens' opinions and life satisfaction ('Opinioni dei cittadini e soddisfazione per la vita'): 15
istat_area_sodd = istat.area(15)
istat_area_sodd.datasets()
# DATASET -> Life satisfaction ('Soddisfazione per la vita')
istat_dataset_soddisfazione = istat_area_sodd.dataset('DCCV_AVQ_PERSONE')
istat_dataset_soddisfazione
# istat_dataset_soddisfazione.dimensions()
spec = {
#"Territorio":1,
"Tipo dato":1079,
"Misura":3,
"Sesso":3,
"Classe di età":259,
"Titolo di studio":12,
"Condizione e posizione nella professione":12,
"Tempo e frequenza":2186
}
collection = istat_dataset_soddisfazione.getvalues(spec)
ds = collection.dataset(0)
ds
df = ds.to_data_frame('Territorio')
df.reset_index(level=0, inplace=True)
df=df[(df['Territorio']=='Italia') |
(df['Territorio']=='Nord') |
(df['Territorio']=='Sud')]
df.head()
df_filename = r'df_soddisfazione.pkl'
df_fullpath = os.path.join(dir_df, df_filename)
df.to_pickle(df_fullpath)
###Output
_____no_output_____
###Markdown
Computing the dataset: Life Satisfaction | Income | Population
###Code
df_g1_filename = r'df_g1.pkl'
df_g1_fullpath = os.path.join(dir_df, df_g1_filename)
df_g1 = pd.read_pickle(df_g1_fullpath)
result = pd.merge(df_g1, df, on='Territorio')
result.drop(['Speranza di vita alla nascita'], axis=1, inplace=True)
result.rename(columns={'Value': 'Gradio di soddisfazione per la vita'}, inplace=True)
result
result_filename = r'df_g2.pkl'
result_fullpath = os.path.join(dir_df, result_filename)
result.to_pickle(result_fullpath)
###Output
_____no_output_____ |
notebooks/convolutional-neural-networks/mnist-mlp/mnist_mlp_exercise.ipynb | ###Markdown
Multi-Layer Perceptron, MNIST---In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.The process will be broken down into the following steps:>1. Load and visualize the data2. Define a neural network3. Train the model4. Evaluate the performance of our trained model on a test dataset!Before we begin, we have to import the necessary libraries for working with data and PyTorch.
###Code
# import libraries
import torch
import numpy as np
###Output
_____no_output_____
###Markdown
--- Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.This cell will create DataLoaders for each of our datasets.
###Code
# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection
# Run this script to enable the datasets download
# Reference: https://github.com/pytorch/vision/issues/1938
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
###Output
_____no_output_____
###Markdown
Visualize a Batch of Training DataThe first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
# print out the correct label for each image
# .item() gets the value contained in a Tensor
ax.set_title(str(labels[idx].item()))
###Output
<ipython-input-9-731dd270b2c2>:12: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
###Markdown
View an Image in More Detail
###Code
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)The architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.
###Code
import torch.nn as nn
import torch.nn.functional as F
## TODO: Define the NN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
        # fully connected layers (784 -> 560 -> 160 -> 10)
self.fc1 = nn.Linear(28 * 28, 560)
self.fc2 = nn.Linear(560, 160)
self.fc3 = nn.Linear(160, 10)
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.dropout(x)
x = F.log_softmax(self.fc3(x), dim=1)
return x
# initialize the NN
model = Net()
print(model)
###Output
Net(
(fc1): Linear(in_features=784, out_features=560, bias=True)
(fc2): Linear(in_features=560, out_features=160, bias=True)
(fc3): Linear(in_features=160, out_features=10, bias=True)
(dropout): Dropout(p=0.2, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax function to the output layer *and* then calculates the log loss.
###Code
## TODO: Specify loss and optimization functions
# specify loss function
criterion = nn.NLLLoss()
# specify optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
###Output
_____no_output_____
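###Markdown
Because this notebook pairs `log_softmax` in the model with `NLLLoss`, it may help to see that the combination matches `CrossEntropyLoss` applied to raw logits. The cell below is an added illustration on random data, not part of the exercise:
###Code
logits = torch.randn(4, 10)            # fake batch of raw scores
targets = torch.tensor([1, 0, 4, 9])   # fake labels
loss_a = nn.CrossEntropyLoss()(logits, targets)
loss_b = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
print(torch.isclose(loss_a, loss_b))   # tensor(True)
###Output
_____no_output_____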
###Markdown
--- Train the NetworkThe steps for training/learning from a batch of data are described in the comments below:1. Clear the gradients of all optimized variables2. Forward pass: compute predicted outputs by passing inputs to the model3. Calculate the loss4. Backward pass: compute gradient of the loss with respect to model parameters5. Perform a single optimization step (parameter update)6. Update average training lossThe following loop trains for `n_epochs` epochs (set to 3 in the cell below to keep the run short); feel free to change this number. For real training, we suggest somewhere between 20-50 epochs. As you train, take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
###Code
# number of epochs to train the model
n_epochs = 3 # suggest training between 20-50 epochs
model.train() # prep model for training
for epoch in range(n_epochs):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data, target in train_loader:
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*data.size(0)
# print training statistics
# calculate average loss over an epoch
train_loss = train_loss/len(train_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch+1,
train_loss
))
###Output
Epoch: 1 Training Loss: 0.084479
Epoch: 2 Training Loss: 0.066627
Epoch: 3 Training Loss: 0.054749
###Markdown
--- Test the Trained NetworkFinally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy. `model.eval()` will set all the layers in your model to evaluation mode. This affects layers like dropout layers that turn "off" nodes during training with some probability, but should allow every node to be "on" for evaluation!
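The effect is easy to see on a single `Dropout` layer (an added aside, not part of the exercise): in training mode some inputs are zeroed and the rest are rescaled by 1/(1-p), while in evaluation mode the layer is an identity.
###Code
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)
drop.train()
print(drop(x))  # some entries zeroed, the survivors scaled to 2.0
drop.eval()
print(drop(x))  # all ones - dropout is disabled
###Output
_____no_output_____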
###Code
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval() # prep model for *evaluation*
for data, target in test_loader:
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
str(i), 100 * class_correct[i] / class_total[i],
class_correct[i], class_total[i]))
else:
        print('Test Accuracy of %5s: N/A (no training examples)' % (str(i)))  # use the digit itself; `classes` is not defined for MNIST
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test ResultsThis cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
color=("green" if preds[idx]==labels[idx] else "red"))
###Output
_____no_output_____ |
examples/Advanced_Lane_Finding.ipynb | ###Markdown
Advanced Lane Finding ProjectThe goals / steps of this project are the following:* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.* Apply a distortion correction to raw images.* Use color transforms, gradients, etc., to create a thresholded binary image.* Apply a perspective transform to rectify binary image ("birds-eye view").* Detect lane pixels and fit to find the lane boundary.* Determine the curvature of the lane and vehicle position with respect to center.* Warp the detected lane boundaries back onto the original image.* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
from moviepy.editor import VideoFileClip
from IPython.display import HTML
#import Advanced_Lane_Finding
%matplotlib qt
###Output
_____no_output_____
###Markdown
1. Camera CalibrationIn the Advanced_Lane_Finding module, there is a class called **cameraCalibration** that takes the following arguments:1. A list of calibration images (chessboard images) to pass through the cameraCalibration class2. The number of corners in the X coordinate3. The number of corners in the Y coordinate
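Because the `Advanced_Lane_Finding` import is commented out above, that class is not shown in this notebook; the cell below is only a rough, assumed sketch of what such an interface might look like (names and details are illustrative, not taken from the module).
###Code
class CameraCalibrationSketch:
    """Illustrative stand-in for the module's cameraCalibration class (assumed interface)."""
    def __init__(self, image_paths, nx, ny):
        self.image_paths = image_paths  # list of chessboard image file names
        self.nx = nx                    # corners along the X coordinate
        self.ny = ny                    # corners along the Y coordinate
    def calibrate(self):
        # prepare one set of object points for an nx-by-ny chessboard
        objp = np.zeros((self.nx * self.ny, 3), np.float32)
        objp[:, :2] = np.mgrid[0:self.nx, 0:self.ny].T.reshape(-1, 2)
        objpoints, imgpoints = [], []
        for fname in self.image_paths:
            gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, (self.nx, self.ny), None)
            if found:
                objpoints.append(objp)
                imgpoints.append(corners)
        # returns ret, mtx, dist, rvecs, tvecs
        return cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
###Output
_____no_output_____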
###Code
# we can try it with single image
images = glob.glob('camera_cal/calibration*.jpg')
img = cv2.imread('camera_cal/calibration1.jpg')
# Arrays to store object points and image points from all the image
objpoints = [] # 3D points in real world space
imgpoints = [] # 2D points in image plane
# Prepare object points, like (0, 0, 0), (1, 0, 0), (2, 0, 0),..., (7, 5, 0)
objp = np.zeros((9*6, 3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1, 2) # x, y coordinates
for idx, file_name in enumerate(images):
# read in each image
img = cv2.imread(file_name)
# Convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
if ret:
objpoints.append(objp)
imgpoints.append(corners)
#image_size = (img.shape[1],img.shape[0])
ret1, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
objpoints, imgpoints, img.shape[1::-1], None, None)
# undistort an example image (re-read calibration1.jpg, since `img` now holds the last image from the loop
# and the distortion coefficients must stay in `dist`); convert BGR -> RGB for matplotlib
distorted_image = cv2.cvtColor(cv2.imread('camera_cal/calibration1.jpg'), cv2.COLOR_BGR2RGB)
undistorted_image = cv2.undistort(distorted_image, mtx, dist, None, mtx)
# Display both distorted and undistorted images
plt.figure(figsize=(10,5))
plt.subplot(1, 2, 1)
plt.axis('off')
plt.title('Distorted Image')
plt.imshow(distorted_image)
plt.subplot(1, 2, 2)
plt.imshow(undistorted_image)
plt.axis('off')
plt.title('Undistorted Image')
plt.show()
###Output
_____no_output_____
###Markdown
--- Import Relevant Packages
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib qt
###Output
_____no_output_____
###Markdown
Define Helper Functions as Needed
###Code
#Calibrates the camera based on a set of chessboard calibration images
def camera_calibrate(load,save):
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob(load)
    # Step through the list and search for chessboard corners
for index, fname in enumerate (images):
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
cv2.drawChessboardCorners(img, (9,6), corners, ret)
            save_file_name = save + '/corners_found' + str(index + 1) + '.jpg'
cv2.imwrite(save_file_name, img)
return objpoints,imgpoints
#Undistorting an image
# performs the camera calibration, image distortion correction and
# returns the undistorted image
def cal_undistort(img, objpoints, imgpoints):
ret, mtx, dist, rvecs, tvecs =cv2.calibrateCamera(objpoints,imgpoints,img.shape[1::-1], None, None)
undst = cv2.undistort(img, mtx, dist, None, mtx)
return undst
###Output
_____no_output_____
###Markdown
Camera Calibration and Undistorting Image
###Code
load = 'camera_cal/calibration*.jpg'
save = 'camera_cal'
objpoints = []
imgpoints = []
objpoints,imgpoints = camera_calibrate(load,save)
image = mpimg.imread('camera_cal/calibration1.jpg')
undst_image = cal_undistort(image,objpoints,imgpoints)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
ax1.imshow(image)
ax1.set_title('Original Image')
ax2.imshow(undst_image)
ax2.set_title('Undistorted Image')
###Output
_____no_output_____ |
Prakhar_Gurawa_Q_learning_Frozen_Lake.ipynb | ###Markdown
Learning Frozen Lake problem using Q learningIn this work, we will try to learn policy for an agent using a Temporal Difference learning algorithm called **Q-Learning**.We will create an environement of 5*5 size and add few manual hole in it for which the agent tries to reach its destination location which is at bottom right cell of the environment from the start position which is top left cell.Also this work considers frozen lake problem as a determinsitic problem, not considering other varient of this problem known as slippery frozen lake where actions are not deterministic. In determinsitic case like ours, if we move "right" the agent always moves "right" and not to any other direction.**Note:** We wont be using any other library to simulate environment and agents like OpenAI Gym. Rather code from scratch all the classes like Agent, States and Environment.![maxresdefault.jpg](data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAUDBAcHBwcHBwcHBwcHBwcHBwgHBwcHBwcHBwcHBwcHBwcHChALBwgOCQcHDSENDhERExMTBwsWGBYSGBASExIBBQUFCAcIDQkJDxINDQ0SEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEhISEv/AABEIAtAFAAMBIgACEQEDEQH/xAAdAAABBQEBAQEAAAAAAAAAAAADAAIEBQYBBwgJ/8QAWhAAAQMCAwQFBgoFCwIFAwALAgADEgEEEyIyBUJSYgYRI3KCFCExM5KiB0FDUVNhcbLC8GOB0dLiFRZzg5GTobHB4fIkVAg0RKPxJWSzF3TDVYSUJjZFpNP/xAAaAQADAQEBAQAAAAAAAAAAAAAAAQIDBAUG/8QANREAAgIBAwQABAUEAgIDAQEAAAECEQMSITEEE0FRFCJhkQUycYGhQlKx8CPRFeFiwfEkBv/aAAwDAQACEQMRAD8A8jFPouUT6L6I8QcKICZRPFUgY8USiGKeKYBgqiUQRRBQJhhRBQRqiCmILRPBDFPFAmGpVPGqFREFABRqiUQRTxqgCQNUUaoA1TxqqAkDVEpVAAkUaoAMNUUaoA1T6VUjZIGqINUAaog1QJEgaog1QBqiCSAZIFEGqABIo1QxoMNUQUEUQUigwIgoQVRRUgPGqKNUKifRIAw1RBqgiiDVSUgwkiCSCKIKBkgDRgNRhTwqpY0TANFoahiSKJKKKTJYkiAaigSKJKaGmTBNFBxRGyRQqoaLTJoOIwOKENUUSWTRdk0TRQJQgJGElLRSZNGqINVEbNGAlLRaZKoiAo4kihVZtFoP1LvUmhVPokM7ROFc6k6lErGjq7SiSfSiVjOUTqUXaCn0FKwobQU6lE+lF3qSsKGjRdpROpRJMDtE7qXKJ1EDFSi7Si7ROpRACXaJdSdSikBUonUolRdokwO0TqUXBTqIKFSifSi5RPpRSwF1LtKJJyAOdS6l1LqAF1JUonJIA51JdS6kgDnUupJIASS71LnUgDlVzqTkkAcpRdoku0ogBUonVSpRd6kAcXepdS6kANSTupKlEAN6l2lE7qSQA3qS6k7qS6kAN6l1dXOpACXOpd6l2iAG9S6updSAG9SXUndSXUgBvUl1JySAGVom1oiVouVogAfUl1J9KJVonYDOpLqXUqpgcquVXVzqQJjKppIlaJvUmhAq0Ta0RupMNNCYEkIkYkMlSJZGcqgEpDqDWi1RnICaAaklRCMVpEyZFrRNqKkVBcirski1BCMFLOiZBUpCaIJAhkCsqsoRNK9ZGgrybTKgrCrSbVlNZCXjK+oIJgrA20Kra0UyHAg4aWEptG06jKNYLGV9WV3BU6rS4LSnWV2yvqynCwpptLgtpOYaNyJVlAJpWotITjSSyFPGVsE6jSmVaTaim5i0AxoIoDhkjO1UY6ISBuiS1FdrUVFpVNI0u3bHrpDbwxVYUVJfcVc+S3xwMcmQ4+agvup7rqhOmuqEDnlM+eqJ4plE8VidISieNEyifSqoQ+icNE2ifRAD6IgoYoopoTHinimCn0QIeNUQaodE+iBBRJEGqCNEWioAoolKIIVRaVQA8UUUMaogoAeKINUOlE8aIAKNUQaoNEUUDDDVEEkEap41SYiQJIg1Ucaoo1SGSAqiio4VRgqgCQKINUAKogpFEgKooKONUYCSYBRTxQxqiCpAIKIKHRPFBSCiiDVDFPFSMMCJRBBGFSx0OFFFDGifRIpBRJFGqAKKKhjRICqMBKMNUUKqWi0ShqiCo4I4LJlhgqjDVACiKNEhhhqiiSANEYKKGUg4migaAFEUBUMtElskYSUYKIwLNloONU8UMaJ40UFBBRKIY0RBQMfSicNE2ieKhgOpRdpRconUonYC6l2lE6lF3qRYHIrtKLtKJ9KIsDlKLtF2lE6lEmwOdS7Fd6l2iLHRyi7RKlE+lEDOUonLtKLvUlYCon0ouUonUSA6lRKlE7qQAupdSSpRACSXepLqQByi7RJd6kALqS6l2lEkALqS6k5JADK0SpRPSogBvUu0ondSSAOdS71JJIAXUkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEl1JJIA51LnUnLnUgBtaJtU+qbWioTGpLvUuIEcqm1T02tEAcQyoiJlaKkJgSohlRHKiGdFSZLI5igmKl1omVBWmS0QiomQU6rSbVtXqI0kOKaTam0aXDZRrDSQaMptWFNoCJ1I1hoK2gpsFMwkqsJ6xaSCTaZhqdQEzDT1hpIBMoVWlYOAhVbVayXAjCwuYSlRThbRrDSQyZQ6NKeYpkElJhpIdWkMgUx2sVCdrJVF2JqjokIoZ1ku0ouVToATgoVRRjQ1SM2R3QQDBHuDVe9craEWzGUkhPGorr6E+8RKE5Ql1Rx+zmllFcXElDcqSkECGQrZJIxbbIhghECmkKjuUVok+dRTxTBRBXKjvsdSifSiaKeKoQ6ifRNFPFCAcKKKYNE+lEyR4p4odKJ4oALROoh0qnigQUUWiCNUUaqgH0oniminigB4ogoYp4p0AUaogodEQUwH0TxTKJ4qQHjVFFCoiCgbCjVEFBoiD
VArDCihVAGqKNUmMOJIoko41RRSKJA1RRqowogVSAlDVEGqjgSKFVLAkCiCgDVEEkhkgaogoAVRgqpLQYKIoUQQqjDVQxoKNEQKIY1RQqospIIIoggmhVFFS2UkOoCIApDRPFQ5F0OEUUKJgowrNspIcCMKECKCTHQQaooEhjREGihlJBwqjgo4URwUNmiDgjhRAbojBRZNlhBRhQqIoCobGFGiJSiH1J4pXZSH0onRTRRBqmAqCnUFKieKkBUFKlE+icgBlBTqUXepJAC6l3qXaLvUgBvUnUou0onUoqA5Si7Rd6kupSB2lE6lFylE7qQAqUTqUXKJ3UgBUoupLqAF1JJJ3UlYHKLtEkkwF1LvUupIA5RdSSogBLvUkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASVapJIA4m9Sel1J2APqXKonUudSBUD6l3qTq0TUxDaphVT6oZJiY0kM0+qYSpAMquda7Wi5FMkUkqriVE6CzvUuEKMIrpCpbHRFimlRHKicNBTsKI1BXaApNATSpFDYUR8FBMFMkhuITYNIr36IUVPrUVGdqrTM2iKQorYIvUKGTkVRIF4UGtUU6plaKkJ7gjogVDMpXUmHQVadEMHRtMcERT6vCKhXb6qMWyXJIa8YqE+8uPFJRzFdMYHNPJYJ81DcopLgoJAt4qjCTbIp0QiBTcNcJtaakZaCFRtDcBTqihONyQpD0FcaEYqywFwmVWuhaD5fpROGiVKJ9KLNHUdGieNFwaJ40TA7RPGi5SiJSiEB2lESiaNE+lEwHCnUouUTqUQSdpRPGiaNE8UCY6lE8VwU8FQD6J1KrlKJ1EAPGqeNUOlE8VQBhqiUqginigA4p9EEKolKqQsLRPFDpVPpVABaJ1EOlUQUAEFEGqENUSikYQaowkgiiCgA4EiDVAoiDVIokDVFBRhqiCSGBKGqIKjgSKBKB2SAqiDVAGqKNVJVkgCRRJRwqjBVJlEgCRRqo4ogVWbKTJIGjgaiDVEGqhotMnAaKNVCbNSGzWbKslDREGiCBIwVWbLQYKIw0Qm0YFDLQQaIwChCigobKQUBRQFDCqOChstBG6I4D7yGC8/8Ah26fF0f2c23bEP8AKm0cRu0LKRWzI5Xb2PEMo0694qV+JZvccpUjnwq/C7s7YLjlkyP8o7UHVbNlFi0ItPlrw6S6vPh0zfYvBNvfCx0jv3CJzaz1sO6xYRtGhHh7POX2kVf1LEE1qzE4RSccNwiccccLU44Redw6+mta166odAWiSI7d8m96P/CJ0hYLs9tbR7rj2OOXlfoVCFeu9BPhyJwm2dtMtiJRHyy2GMeZ5j0D9daV/UvnG1OOlXNtUiGWn7qiUIv6M6oY1R9y2rzbzbbzLguMuDJs2yk24JbwkjDRfLHwQ/CM/sN5u2f7bZrxdo19CRfKWxfJl17vor9S+prV5t5tt5lwXG3WxcbMdJNkMhL/AGWN1sxSjQWlE+lE0UQUCO0TqLlE4U7JO0FdqC6KfSiRQygJ1BT6LtE7AbQF2gJ6dRFioD1LvUixXKiixDUqJ3Ul1JgKi6kkgBUXUqJJWOjtF1KiSQUKicmpyBiSSSTQmJdokkmISSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAElVJJADKptUStE2KYA6rlaJ9RXOpVZLQIhTCoj1FNqKLEBoK4QqRQFwhTsKIlRXKUUiooZCqsVDmqp5VQhonpMaGGKVE6iadEAOqSC6aVaphJ0JsGRJqdUU6gJ7EkV2iHFTKtptW01MTjZDKiE4KlOUQToUZRy8UVaZLQARTiFNqS7R1VTJsCSC6pBoVaKkQ0V78lEICJW9WhQ6sLaOQzcLKgmUwmlbmyo7jStZCHjKsmUwmVY1bTKtK9Zm4FdgptWFY1bQXBTUxaCHgoRApRqO4apWS0gOGmuUTjcQHCVUTaPloUQU2lE+lFqWOFPomUT6IAeKIKYKeNEIAlKJ1KJtE8UwHUon0TKJ9EEjhRBFMoiCqExwin0ouUT6UQAqUTqJUonUogBCiCuCKfQVQHRon0okNE6lEAdFPFcpROpRSA+idRNGifRAD6VT6VQxRBQA8aog1Q6J4oAINUUaoIooqRhRqijVAFFBFDsLSqIKDRPGqQw4oo1QBqihVJgHGqKNUAaogVUMZIAkZslFFFCqRaJbdUYVEAkcCWbRVkiiIFUEaog1UMtBxqjN1UcEZuizZSJLZKQ2SiBVFElmy7JrZKQ2Sgt1Uga+HvEIrORepE4EUFVXO1ra2GT9yyyPETgj/qs9tb4Udh23/qccuFgSc970LNlKSN2NUZteb2fwxbBcHMVyyXCTJfh61peiHTbZO1nCZsrmT4iRYDgk28QjqJsS9Z1U8/mWbNVb8M1bNN3iKK+NPha2q/tHpHtZ94pCzf3NgwO6zbWDzlsy23w9eGR15nCX2ZakIkJFpb7Rwt3DbzEX9gr4XZcK7Jy7lEr25fuxHUX/AFL7j+bh68RRqSLx49c6XgaTJRSaaRXqRLDJvNm9riEkWlMv3v8AilrN1CnTE2zxKwYLL+fZULdUxjSjUdEYhTGWX8yX0B/4Z+lLjzLmyblyRMjiWxFw/KN9346LwRkdK9D+BN8m9tW0dThF7v4Vnl41eh6E1R9RdaeKEWpOGqZwhV2lE2lU6lUAEonCmCn0SAeKdRMonUQA+i6m0Tk0B2iVVxJACSSSTRIqLtFxKlUyjqSXWkpAScmrtEAOoupqSAHJJJIA6kuUXVRIkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAupNinJIAbFc6k9LqQAytFyoonUkiwAQXCFGXKinYUAiuVFFiuRTsVAoprlEetE2gp2KiLVtcgpdQXCbRqCiJBOoKNhpQRqCiOaaVUc6CozjaaEwRqO7Qi1EXtEpgNLhMq9VENFcTCHgK0q0mECruC0IhUaQjbUtwxQDqqTZLSAVogkKmUZIlzydPWidLIUEB4VZkAqOapTBwK7AJMJsRU52qiOrRSbM2qIbtVGcoSmOUQTBaxZjJEBwEAm1Y1aTasLRTM3jsrqtIZNKyqyuVtST7gu0fJFBT6Cu0on0oukQ0aJ9KLtKJ4igBDRPGiVKIlKJgcpRPGiVBT6UQSc6k4aJ1KJ4igDg0RBolSifSidgKicKVKLtKJiHUTxTaUT6UToBwp9E0aJ9EWA6ifRMpRPFIB40T6UTRTqIAcNE+lE0USiAO0onDRLqThogDtKJ9KLlKJwoAcKeKaNEQUmMcKKCGNEQaJMB9E+iYKJSiQ7H0RBQ6J41SYwooo1QgRAUMaDBVFFAGqKNUhphhRRqgDVFGqktEgTRgJU21ds2ll/5l4WZZhHeJZTaPwo2TciYZceHNGQ4Yy3fCsJ5YrlmkYuXCbPSRJHbJeLXvwtX
fyFowOnM5JwubKPmiqi/+EXbD4kOO2yJEJCTI4ZD3S61zS6mPg2jgyej6DeuGWRk8422PE44I/eWR298Jmy7YSFoivXh3Why/3heZeBXdzc3JSfeefcIpSccIv9kVpnLl4hH88y55dR6OiHRyf5mbfbvwl7Uu5NsueSNlus+sIe9xUWVc2g+9InX33CEs0nnCL/NKxsxIouS1CWX7ysGbBsSWXd9nVHpUivwpc3MUi+8mFTiJW5WwjpUN0Y5lpHJZaxr0RKCJbynbKuHLZ5t5gibfYcF9l1v1jZN7w81P8addECIknj2MXc0ZRIh3SL1ZFy19CjJkOrHBJWz2vp38KDL3RYnGHBb2ntMS2c402XqZN/8AVvt8LRNy6q19FXOr4l4AzEY5d0Y5dIjKKvdtNt+TYkflhw4j9IJYjfdr6aUVYAyiQ5nB3eIVm3aM+ngsc2/ZKAGyYkTekSId2UeHzqutnpRLwp9xdRawYxKRZZbspF3UrEMqziqKl80i3YaEh7TMii2I6VCtiXXjlIWykXFw95OiqomNORLMIkI7paSW4+C+8G22ts14tOMLfN2nZ/iXndo8QuC29verIdJcveXp/wADmzfKdtWAkI4TLmO6W6IsiRCP6zH3aqJ/la9gq59H07cesLvLlKqNV2REXERF7RJ4mt0tjzbJI1RKKKJogmkMlDVPGqjCaIJoYEilU6iAJp9CSALRPQqEuyQA/rS60zrS600A7rXetM611AD0kzrXaVQA5LrSSTsDvWkuJdaQC606lU1JADutdpVM612lUAPou9aZSqdSqBDklxdonYhJJJJgJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQBytE2KekgBsVyCekgAcUortapUqgBtaJhIlU2tEARzBMgpJUTaAnYAYppij1om1oiwIjijmJErKra7EVSlROkqPJSRKMCKnumorpo1NipIASju1R3KqK6riiGwBkgGSMYplW1sjJ2RHaIBqcbSZRhaKRDiyDhrosqwoylVtDyDUCDS3SqyphUQ60U6mPSiKLC7URRiohuJWFHxmKIKGKIK9g4B40TxTBT6JgPon0TKVThqgTCCnUomiiCgTHDREGiaKcKBDhon0ouCnUQB2lE6lEhT6UVAcpRPoK7SifSiAOUonDRdpRPpRAHKUT6USpRPpRAzlKJ1KJUTqIEdGifRNpROpRJgPonimUTqIQD6J40TaJwpgPonimUTxQwCCiUQxTxqpGPFEGqHREFIB4ogoYogoKCCnimCnioY0FFPGqGKeKTGEBY7px018kJy0YEccRzOF8mXD3lM6U9MLSyFxlsse7iQxbLs2SLecL0eb5qLyE8QiJxyREWYi4paiJcHU562idnT9M5u3wc2hdv3LmI6RPOFxfdFRwAtJD+SRev3fzlRatkRZcun2V5blZ68cdcEdpn3ZIwAMe6OZENktX55k+gRLNvSzcUhzCocjRRCWTWYS5ZItGxiQlplq+jLdly/WomMWURzEMRHmH8+ZEFsizOfnlUt2WkWXlEY8XFKQpr17pjmQRFPo2kq8j3JVrdiWUspcS660W7mUM25KFe3DjYlEiVrbgTROuQiUo+FPsX4kTbgiTbkhbIvVkO8y5zUVUF6W8RFxR1KUxtBsZDlcZLUOnukPCVPnUu/Jvqi0TNuvFbMlbesZeFl5st4m5Fl5TGouD/8rM3O0S3ZJ+2bkibEi5RVKDiuPBxZHuWLTpEUiJXVm+MY8yz9vVXVjHeUSZtgLFlxCvGikLjchcHeHL7XEiVbLu8K47cRyybEd4izfd0pxmdE4okdHbJy7fZtmpE8+8It70SzETgjvRASLqp6Yr6k6C9EWNmds4Lw3MSbwicEm2B+TLLTqcerTz1Lz9VSrSnoXgHwK/8A907DkOUbt4ssc0bK5IfqjX519Ok9IiLiIi9rN/sqjDVL6HJ1U3CKivNlmDyILqqaOoovrdxZ5+otRdRQdVWLyKLqnSVZZ0NPFxVwvIgvJUBYi4iC4q4XkQXUqCywoadQ1Bo8ni8lQ7JtDTpKHR1Po6gZKoS7QlHo4n0NAB+tLrQaGnUJABaVXUKS7QkAGSQ5LskAPSTKVTutAHUlylV1AHaLtFyidRAHaJ1FylF1AmJJJJUISSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAOVSpRd6kkANrRNrRP6lytEAMqmlRPrRcrRKwBVoupESGZJgIiQjcSKSCYpoBrpyQCojQSo2tCKI1RTSbUuCaVEtQtJCq2m1bUo6IdaKtQqI1RXK0Ryoh1ommJoFWiGdEUqoZVTQmAOiZFFOqYrI2GRQX0ciUG4NOKJlKj43oiChiiCvZOAeKeKYNU+lU0SOonUTRTxQDH0T6VTKUTqJoQUSRKVQaJ41TANSqdRCpVPpVABaIgoI1TxJABqJ9EKhJ9CToAlE8UKlU+lUxWFonUQ6VTqVUjsJROFDpVPpVABKJ1EMap1KoAfRPFDpVOpVArCCn0Qhqn0qnQWFFPohDVPpVIYSlU8aodKp1KpUAYapwkg0TxqkMOJIg1UcaogkgokDVEpVRqEniaVASRqmXz+Gw+4Opth5we8LZEKaJpPBiNuN/SNuN/3jZD+JZzWw4vc8WpQfzxcXe+tEK3lJFJnDImXB7RsokJahIdQo4jl0r53LOmfVwgtJDC2EfEjUaES/EiUrmXCJc7ZpSBO0TCpIU+tJLhNS7qlsdAG2uFSWm05ttGAeJCYNIi6dSeCdd1biRZijwqpd2hmHSMdIlvfwobIUi46lCvWZZVaEIxEh0kMvaUN6iuDG0UJsRJAcIhVvcNquuQWurYxcQF32jJR3c35+xUwKa+UVDFLgxyPcPb1zK82OBOR3cpZi06cv1+f0eJUTLZSGKurQhjqlwx3YrLIa4edwm0rkWdRE4W9Eso8vKX1VWn6HdB73a3bWBMuMk3ISfe8mi6URFspUzFq9HmVbdWDLkXHGW223HGxIhciTxCUcPBHrzfFUqfF11XtXRC4wLQX3CYaJlgcAdLJOEJNMi8I064Ucw+vqXl9b1rw6YQpuTPR6fBqUpy8cE/4Nfg7Z2SVpfvk+N/5M4Lls6TLjds85Jlwmyb65DUJdXX8RL0SjqyPRPpC5dyYu2SttosFhvNOZRcLdcEvRItVOqtaZvNWqvRcXv9LplDbnyeJ1ksmv5/2LSjqdRxVguoovLp0nHqLIXE4XFXi8ii6lpK1li28iC+q0XU6jqlwKUi1F9EF9VQvJ4vKXArUWwvIgvKpB5FF9RoHqLYXk8XVVA+jC+p0jTLMXUUXlVi8ig8lQ7LOjqKBqsF5GbeSaKLDrSUYHkUXFIBetKhJtKpVRYBJJ1CQqVTqVQAShJ1KoNKp9KoALRdpVDpVOpVABqVXULrT6VR
YqHJLlKrqsQkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSrVKtU2tUmBypJlV0k2qQDCFNrREqmVoqAGSYVEWorlaIABWiXUiFRMqgBhUQyoiFVDOqpEsEVEMkRMMVZLAVJDcqnkKE5VUjNsGVUMqpxEmrVE2DrVMOqNFCdTJZGdJRXVJdqo5CtImUj5Coyu0aVt5Ku0tV6Wo5tBVYadQFaeTJeTJ6haSto2uxVj5Mu+SpqQtJX0FPoKm+TLtLdVYnEiUonUUrydLARYqAUonCi4K7RpUmKhgp9E7DSggZylU+lVygp1KJkjqVTqVTaUTqUQA+lU6lUylE6lEAEpVOpVMpROpRAIfSqfSqHROogbCUJOpVDonUQIJSqfSqFROogAtKp4kgjVO60AHoSfSqBSqdQkqAONU+hIAknUJKig9CTxJAGqfQkUKw9CT6VQBJPoSkdhxJEoSAJJwkhoZA6Q7CZvRlEW345Xd4uEXOIfr+JYm72W8wWG8242W6USJsu6Q9dF6RSqMDi4c/Qxyu7pnoYPxCeNU90eZNbHeLdiJb2Yvu/6o1p0Zu3iHDYIR43iw/FH0/qXpOJ3k6hLFfhkE922by/FZeEZCy6DF8vciPKy3L3iV5YdEdnM6m3Hy3iecL7o9SthJPE10R6TFHiKOeXX5Zea/QyfTLYdhbMNuMMk24T0S7RwhjEi0lX51mHLZvdFbL4QXf+mYHeJ8i9ltZCldK8X8QglkpHsdDOUsdvdldtGWHm4Sy/nzqsbtxJkXN4c0e6SstqFLKh1YkMR3V5y2O2rJlaybHuqC/RTaNxEcxZRjHd73eUV+i0ixSIhqvumVZlRRHKSHKtLIozG0G8yjUorraFvlVcLPFl7upPWYSgOs2Scll7MYyju8PiqrfZlW5FKPYxyjllGOUuX4utB2U0QkTg5S0t5d4t4vsorHY+yyIpCJEJFl8RREeb9S4uo6iMU7dUdWDBJtbGntbQXsBvDHtN7B7QRiJSl6SGlSIaLdbYuHGdlsts5cRuRRGJZiESIuIeoh6vir6aehVfQ/Z+GIuZoi2ThDJyMW+0LKW71SVh0vrFi2YEnN7stQxISEXGy3vMQr5bp8z6jq4xXC3PbywWLC/bNN8Fbo7TY8iezXNo2XkxFqcti9Y2XEI1Jbq6snMHGiWIIiTglqiWnE4TpT00Xkfwc3pbP2hbXZdm2BEJEWUSFwYkObVur2noz0usr6+ftpDCsY9UY5RLL9kI9fzedfT92WDLqj/wCjw8qWSFSX/ZQiaeLqrdq37dpdu2jhZWnCFsy+h1NyLe83m6/jipYFIZDp1L6Dperx5o2tn6PEz9LPFzuiULqKLqhjVOoS6XE57Jwup1HVCoSeJqdI9ROo8ni8oNDThcScRqRYC6n0dVeJoguKXEpSLAXkUHlW0dTxdUuJWotBeRQeVWLqILqhwLUi1F5FB5VQPIwOqHArUWgPIwXCqwdRRcUOJVluFyii+qcXUYHVFFWWtHE6arwdRhdSoLJlDT6GoguIgmkNEmhJ1CUehJ41SYyTQk6lUAKog1QhMMNU+lUGlUQappiY5JJJWISSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAEkkkgBJJJIASSSSAOVTap651KRg60Sin9S5VAxlaLnUn1TUANrRNJOJDKqCQZoJopVQzVADrVCKqeSZ1LRGdjetMKqeSEVUwbBuoBijnVBMlSszbB0bSKgimEaEVFpRnY43lFcJGqKGQqlSIbbI5UTcNSKimkr1E0fMNDbT6AKrRJEA13UY2T8Bd8nUVu4IVIbu0Uws7VhLBRRuEUXBRbDYjUYXcPlUwTFO6hJGoKINGk7AU2gCui2q1BpIVLdLyZWGGu0FGsTiV3kqdS1VjSidQRT1i0lZ5KueTK1w12jKfcDQiqpapeSK3owu0aT7jJ0Ip/JF3yRXFGhTqNp9wNJTeTcq7Rglc0bXcMUaxaCnowu4Ct8IUsFPWGgqcFdwla4C7gJ6xaCqo2u4atKMJ1LdGsNBVUbTqArPyZd8mT1hoKygp1BVl5Ml5MjUhaSuoKfSin0tV2lqlrQ9LIFKJ1FNraEuVtk9SDQyN1p1Ko/kxJYBIsWkGJJ9CTsEl2jJJWOhUJPEk2jRJ2GSLKH0JOoSZQCTqClZO4QST6EhUon0ogozXT0s1sO7FwveEf8lljWn6cesYH9G57zgrOkOVfM/iD/wCV/wC+D6X8OVYkQKBIkcAjl4kWEURugkuBI7rGvCqm/fFrV+eXvK8rTKqfaVnicpCWXxLRIkjAWIP3UOH5/Cj2zcRHUPKWUspEnYXKqJ4K11rUoo2oyzRiOr87yuXG9I72pOYaiQxHmGXFzLHMnp2LhV7ky32P5MPaYZSjLDcFz1zci09eYaf4rV7G2Vhstjl7USiRSw5DpkXF8fWsvflEmNOXtHI6XCkRFp8xCvQ9mnpthIZDh4AiMYxfbEhH7Q83m9Ea0XzP4hgyyS355PV6bLBX7LTYAeTEJOYmG4RC8Q4cW4yIizUr6AHq6vMs3eW93Fy7wizOOC2QywxzFp5fmW0fsRfZtmyzMycJ8S1ETZRiXerLr+rrorgf4cuXLw936l7H/wDnPwhaHmls5bL9F5PL/FvxFRmoLejxU7hxwu2IpS3lY9HNsvWFy3ctRxGyKIl6vMMc3LmW8290UYu5ONxYe4h9WRcw/wCtF5/tXZL9o5hvNxIfZLmEuFexn6Vw2a2OXF1EcnDC7Yu37lxx0niccczFLNm/d6/iUDZ/SC9si7N9wWx1BqHmykiNkQqDeEJS4ljSWxs0esdFOl9tfxZcIWLkoxEsovFwjwktLWhDqXzrQolIfDyx0r2r4P8ApO3tFlth9yNy2MZkQxciOUeYurzda6sPWPHSnuvD9HDn6RSWqHJoqJ1EetuWkhjxJ9Wt7u7vCvS1I87SwNKLtKI9GE7AS1INLA0om1JSDbJNwktaHpA0JFAkSjSeNulrQ1EQVRKVT22EXAU6kPQwYkiUNdoynUZJS2hqLOg4jC4hUaJOoBJNopWShNFA1DEEUJLNpFonAaM2aghVGAlm0UTwJGCqgtkpDZKWholgiigN1RgqoKSCgiimAijRAHaJ1Ehon0omkJs7RJJJWiRJJJJgJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJAHKpqeudSkYyqaiptUWMESGVEatFzqRYqIpChlRSyFDIE7ERKimVFS6gmEKpSFpIh0QiophUFCOop6iXEiEKCTalnXlQiJWpGbiRibQiFSHJIJiS0UjNoDVMKiLUCXKtkqsimBKiA4pRASEbJ
IUhNM+RhNPoaiCSIJr16OKyVQ06hqNQ0+hISHZJFxFF9RKVTutOhWTgfRQfVcNU8SRpQamWY3CKL6qxNEFxGkestBfT8ZVoup9HEnENRYi6n0dVdRxPo4jQGosgdRKOqso4n0dRoDUWVHkSjgqso6ni4jSx6iyoQp/WqTaW0sBsXIyk62z/AHhRVhi+0lQaiX1p1Kqsub8WSZEsovOYYlzEOUf118yLfXrbDLjzhRbaGReHUjgWon0qnUVeF82WGIuDJ1vEbHeIYykP2SFQNsbSJi7sBkItvk4yUi3t37qLQ03dGgon0iqa+23bMOiy85huEIkMtOZzDHNxdasaObv57qKsWrayTSieIqlb29bSu28TNZZn45ojGWWP9irOj3TNi9YeISjcstvOYRZZCOkhl/koc0nVlW+a4NfFdGizfR3pG3d7OK/+jbcxBHKOXvJ3QnbzN7aMkJCLknG4EWYsMv2JLIm0r53HTV/Q0lKLsVHrctiTYkQ9oUW83rCjpHiJMvNoMssPPkQk2yJE4Q5tOrxJ2vZFk2grvWIxkUSLKPMSz1elVsNg5fyEm23sPL3ojq3vqWN+Ejpbh3ezHGCxGm4vkIlqlxfq8yyyZlGNmsYttI9WEBTqNCsx0q6X21hbY7ZC8RMi+22JDEm5CJZuJWd3t22ZZF5xwRFxlt0R4hcIRy/rJV3I778Dr/NFrRgV3ydUe0OkjLDzTeI2QvMPvCch1M7vzef0KNsjpxZXY2WE52l2USa+UZIhKMvZU91XVglZpqW6Xkyis7WtnHsBt9knhlJqXaDHVlUl+5w23HNURl3lad8MNjvky7S2TLm9Ftlx4tLbZOF3RGSIxc4jbbg6XBEhEtWZCbBNHaWq6dqIjIsoiMiLhEUSj3uqs6XXUdnXeYh7FwZDzDGKlydWGxYhaCWYcwlmHmHiTvI1kPgp6TM3No3ZEThP2zAk5LTGRCObwrbUfShkclaLlBJmC+EAY3bY/RsCReIiJZoqZloOmjktov8AKLLfstiX+ZKjcovn+plqyN/U+g6RViQ0K8SabXCmuVSFYo6Ao6VGeBGoSY5UdKdgV5NrrLHDuipBCiUqIqiWQBbT4e9qFShAooeAOnVHize0gSsAAZolm3fDwrQ7GuXivWG2JOXJC42xEuzbi3HELhCmqtPnUbZeyXH3BFtsnCcKIivTejezLbZeG044JXdzlIsvCTmG3y5SrWvx+dbYulWXZrbj/wBGObqVhV3uXlnbYLbbUpYbYiRcRCOYh+2slIFtcFwYiQkJDxDpLmRAcGRCO6US7y9WEFCKjFUkeLObnJylyzotpl5s9t9vDdGQl7Q91SKEiCaHugg6dnl/TTYLtkIl2bjBaXYxj+jc4S+tYa5DN+fzJfRhRcEm3BFxsspCQyEh5hWH6WfB+y/21hFlzeYIuzL+hLd+xedm6V8xPU6fq09pnk9B4U9k4lIcvu+9/qpe0dmPMOEy8yTLg5SEhj73oUQg4hXBTi6kj0Nnwe2fBd0xb2kPkF2823fsj/0xllG5bEY4Ob5UY/rW5wYr5Yo7GJCRCQlIS3pd5ezfBR0+8tIdnX7n/Vx/6Z1wvXiI5m3C46LfDm0Kv6f8f+jjz4dXzLn/ACeig2ii0l1cWVGCq63I4kkCJgVzAUqieIqW2URKWw8K75MpoiiiCnUwohNW6NS3UwARKAlrY9JCFhOowp1BTxFLWx6SB5Mu0tFYUBPoCO4GkraWyXkytaAuwU6x6SqownC0rTDXKsinrDSQBbRgFSKsLmEjUmFCbojt0QRFEGsVAWSm0YaKC5ci2JOEQtiOYiIoiPeIlWN9L9nE82w3ci444LpDHT2WoSLdrX4lEskY8sfJpBXaqiZ6T2RDIXRynByhZSDLLrJUZ/CTYYpiMjbFsYmIF2jxF6seXq+NJ9VjjyxaGzdJLMW3TGy8mZuH3BaxRkI6vj0jHV1KwsukFo+2Trbwk2JRlpkXLSvnVR6rG/ItDLdJVl1thhsmxqY1q46DQxIa5nJR+6juXzQ9damPVRwWq+fS4W771Fffx+0GlkxJBByREPD1InWtFNNWKhySSSqxCSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAkkkkAJJJJACSSSQAklzrXaIASSSSAEkkkgBJJJJMBJda5VJS2Byq5Wq7Wib1JWUcqm1Tq0XFIxlUyqISZUU7ChlaJhUFPISTKtp2JoEVBQyIUYgQyBPUTRHJDrRSKim4arUKiNWibFSqtphAnrFpI1aLlVIqCbhElrDSRDFCJtT8Al2lqjuB27Ph+hJ1CXKAu0BfTHiDxJPoSHFdpRCEGE0+hoFKJ9EwDUJOoaDROogA9DTxJAoh3QiTbguaYlLupvgmyisulH/XlbPEIs4jjIkURKW7+Ja2hrxh19ttwibGQi4MRIspD3S1L0zovtZu7Zy6mxESjpzcPmWHT5dTaZ0ZMdQ1IvqGnVeEYyIRllGXF+8sn0/v3rS2aeZIhi8Mo7w8Pdqo/SvbbblhbXrLglG5ZIhHdj6wYq5ZYxbXozUG6NvRxdN4R1FEd4iyisc90jbudnXbjZCLknG29TZFvN+KiyFl0vuSEmX3icbcZISlqFwcwkJcXX5utRPqIRfsawyd/Q9Wu9tsMMuPOEUW3MNzLmEpcKhXvStlgraQ9m+44yRfRkPqy7tV5Nt7a3lMh1ScxJZtUYuDH0Eq0b1zDbbxCi25iN5iylyrnl1js3j0tq2eq9ONvi9svEZyl5SLZcQuNlqb4vnVEz8INyJDiZmxFvKRaib3penz/Msne3xOsDInMrxEWaQlL8X1qtr+Yrny5pN2jbHgik0z0O/wCm7t3aPE4ItvMvtvMiMsremMt7q9KD046UOXNhYNtvZXWi8pESzE4JRi58aw4FlIeL7yQ8MuZQ80mmn5LWGO30dl1a9JrkbmyfJwiK0EW282XDHUKsOlHSp67G0EYytu0EtXaSkP8AZ6FkaVR2olllGI+1yxURlLiy5Qjafo1PTDpQV68y/plaNiQiWXE1EQ/rSb6a3MbuTjknm2Ym2USbcZyiQl9iyXWns0TlOW+/JMcUaotLbar7eK4Ljknxi4csxSKWYt4kC2v3GyJzEISzCXEQlq/tQIyHLpHh0y/CuUJZlpIsLfabzbZNtvOC2QlIRIolLVlS2ftR62JtxlwmyGQ728MSUASKJdnl3izZf9E0TSaaGq3+pb/zgvcO2Zx3I2zhOM5vVkUs3vKVY9J3WdnXthiEQ3LwuTkWXNJz2lnSoiUHdlES3i0q02S8ceKLK22s9guWwuFguELkN2Q6SQLm4JzDkRdmMRlqjwqJUoyFEoSzat7lqiRillbIiiMojLKMvwqxvdruv2zFs44RDbCQtlIpYcsrfMKpppzctQ7uoklH2J7lhXaThC2MpYbbjebNqzbyZs2+etnWrlvUy4Lg8Ik2UhVfilvRJScVmMRxB4t4fZVafILYtS2485fvbRlhvOPYnZ5d4Sj3axWr2h8JV7c2TzDgi2TwkM29
WYt3h8ywDMcOJCUiLLly5dWb7UFuu8hKUeHyS4Rkqa4N1edO33tils5wpPuE4Lx6XMGUh/t9CXRz4SL+27MiF5vAwW+ISFuLZS+pYWkolFKZZZFy5VblL2Ltx3+p6X0H6d3I3ds3cu4jb1y2LxkQ6Sbwxlw9VR61C270qcG9v2Sdx2HL0nBEik3FuUcPlqsKWYe7HwoR10kjXKqbF2ot2a7oH0rLZ17jxGLjeG8IjHKObw/avWOhvwj2F2ItuuYDzbeI5jZR9ZEREt4uqK+ehquULeLTmThknDZDniU/1PcHLvynt/p+08JZh/wig1qh7MH/AKa0/wD1Zj/8Ipxrypu5NnuQjUUkMr7K7SqYa5WscqVFnTJMlmlwppVXOr3kwH0quJkk4STRLC0FT9jWRPvNt8RRkq2pLb/Bjscr3aNsIiRCJYjkfZEZfXWP+KjLLTFs0grassfhI2nbdFrIbBlsR2le2T122/vdmTbbbY/RiVXCr9jdV5R0z6dvXe0WL22HDFu0JoRLNFxxkhccGW9TEKlEH4dOl38sdIb95spW1k5/JtpwuM2Tjjbjw997GrSvx0EVixciI6v4SyktenU1jim/r9zyM2nJNyf+o98+DfpvZN2Gx7R54Rc8keFw3iiLZW0ezIi1FWn3VW7Q+FFllxwrSLktokRCQlErSLciEuLKXmXirBNjGRDEd4uGSNV8Rcy8Xukup58qjSMOzFybPoSvwsbL7cm8bDYYJ6RDHGIYxbb5usvdU/pb0/srS0ImX2yuXGGX2Q5XnB/VKlJL5tbeGMdPae0kVSKRFLLpl91viTWfI7TJeBeGfULfTzZxOuMNviRNuNty3SxCbGQl6C1e6r3au3LawiV2+LYk82zqkUnPVyEfz1L5IB7LvDpzCWbL3fmT3toOuOSeecIpCUiccLMOUdVfmTj1E64Vg8Hpn0btvp/s3yS0J/yZ8n3mReYKLjjLBEUns2nzCNVTDYbB2jstvaYut7MKRNvRcxGm3BIpSbLziPUPX5uJeCg5GPKiuu9nhkRYZERR3SLTKPol1LF5JN/MkzaGqC2Z6d0k6L3dhHGEXm3MzLrJSEhjL7R8yoW6OCTbglmEpN7pCXeFU20Okd7elJ+5cIhJshjlESbHDGIjpLqFTrDb7ZELd2IxcKOOIxcbIiiJOcQ5lzzhb2R3Y89/mPffgr6deVizs69L/q9LL5aXo6WSLdOgD5q19MV6M05+dPur5cBomSwyll0kOoSEspCXFSvx/EtZ0lur/bTFttG0uSHa2ycr1q2UfLWZYnlLQ+hw+ofOFfT+tLHkcNvH+BdRhv5lyz35s0QHx0y0lHxRlH+zz/Yvnaz+G+9FgRJlty5ERk6Q5XCxO0k2Pqy6vN1fFVVvRr4VLli5xLlsnGXL9+/IhIsQXHLZxllmPX1YVK4dfMt+5taONc0fUFHBGMssiiPMRaUYHRlhy7SOJHew9Ml8oX3wrbafeJ/HERJxh4WBHs23GPVxlp8/pp8asdsfCve4+zrmyecFxjZjDF3iDlffyk8Uft83XzLF5G/BZ9TCidY+yvmLZfw17WbucZ7DfbHEix6tsRKPD5yIYl6VZF8NdyTdyTgkTlzbNt4DeVlh4XikTZb0mY06/nUPK1ymUj6QEU+gLwLpx8OEitv5FEhHBLHx249oUYiI7xUiXn5lXdH/AIctos5rthu5InWScJvsywWRjhiPnpKvprVJ5X/ax7H0hQE6grwzo78PLYtueX2jzjpOPEOBHDEYjgt5q9f9irQ+HraOA5K0tsci7GMsNsYjES+Mi+NT3tuGPY+h6JVqvGX/AIeGMV/DsnibwxwCkIxcw+0xB7/mp1emijbS+HISb/6a0cFwhcEhcIcNvs4tkMfmPzo769MD3HrH8KjbTuxYYJ/dEmx9pwW/xLxPanwyYzezjbYIXWrtt25Ddi0zGIl6CkZF9ii9KfhT8t2YxbMNuNuk/K7lpwRKQi2W91/Os5Z7TSTHR9BVqOnxd4eVQi2i3iNDJvM4425m0kIkX+MV89N/CltPtJRIcB9huPrGxe9WRFvEMfSqPYvSe/tJC2/LEiRE5mKTYkMhLdKsvSh55PdIdH0tbdJLBxtp4nhbbecJlsiyiRDmEZfXTzrMbU+EFiwcu2nO1Jm9JuIxkLWG2Q97rkS8gudsOXOzHm3XBJxi7tCZGXyYs4eUfCqd65cfccccLtHCkXMXF+pR3JyW4Uj0rpj01G5stpW2Yif2i3haSbbtBFsoyH461H3lgwP7yiULeUhtRJLzuWkTwuHM2Ys2rNq7ykNV0qGFNPMpdusuOCmix8oIhbbLSyMR4UZspKC2pbKz2HEsbZwhISlmEhIeUh0qc0+4UpOFmLELMWYuKPEqxiqmMqWkVRp9j7efYB0AjJ0pSLUJRjlVvd9JidcaJsYYZS1SIpDEsqxzNVMZqrWRpUm6IcEbX+c9csW5cRafsyqfYbdbIRnlMiKXCObi+xYZiqmskt11E1wyXjibuyvgekQ+gSj9vMpNXKdXX1rEMOKaw6WmRR4ZZV0x6yXozeI1QHQkRZ+1fISyqwG8LhW8OpVbkPGWCXWo1LhFoa2WVMhxaCJLlKrq0TsQkkkkwEkkkgBJJJIASSSSAEkkkgBJJJIASSSSVgJJcrVCdeiplNIdBklylV3rVJ2ISSDcPC2JGWkRkqum2hiRR0iOXiIt3lWOTPGDplKLfBdJKHYXWK3ieiWn7EA9qANsVzuxkI9fhGn66o78KsNLLNJV1ptNl1s3RLK31z5YjIv8FXbS6UWzDbRVrKrwiQiFfQJcRegVMuqxpW3yNQbLy4dhHmIR/tRRXl130qdfuWCIostviUByyzRzFvdS3W2tu21kTYvlGjgkQ+GKxxdbGTk3slRUsbWxcJLP7X6Q27Q2nU51Hek2NvzYker7NQq+610Qzxm2o70ZtUOSUa5uBbaJ6ugGycr3RGSI2cqU+setX3FdBQVc6lxR7e6E3Hmx+SqIl3iGScpJVfkKJSXUuJIEJcqurlVLKGkmp9U2tFm2NHEyqempWAytE2oolUwktQ0hlRTKin1TapaitI2tE2op9VyqTmGkHFcinpVSeQekHBcgn9aVSFTrHoG0BOiKZVwUyrilzGonxj/J5Jh2kYyIRlpkUZd3iWkN22xG28ZsSfEiaGQ9rHVh8RLzv4Ydriyzs5y2c1PEUo5ZM6hlulSo+hfTPqaPAULaXt0aHyJN8iVG98I9sOGWHIcNlxwcvFF4ZD9XnotBtbpFbWz7EnB8mvbYnbZ+QxF4Rlhl9vo61a6pFSwNf4BVtF3yX3so97hXjW0OlV2/dt3ZFFxkhiLZEIlhl/Z6PjVr0s6fP7RZwMIWBkLkhcKUhzeFL4vYPhpNJnoPl7BeUiLnbWwydDSWXlLUu7JvWblkXhcZ0yIcQSJse716V4m/fvOOE4TzmIWVwpZiGMcyC26Q5pEO7lUfGtGnwn1PU7npFK7fshjH5MxiWoZSHi6q+fzKT0e275bbCUu3GTLwiQiWII5XG5btaryRl0hKQkQkOkt4fEk04QlKRCR
bwkQ6tWlQurkV8KtNE/pCRDcuSbwylIhKMpc0fMRfXRXXwZbSJm/FkiGL+XNxbqyZfkkqFzR5tMVlDI4y1I3UKjpZqOmW1XCeu7QScwfKSKJZc0cwkP2rPC4WUZFHhXHMQpOFIpaiLN7yYKmcm3b8kwxqMaJmzbjBc3oy93u/6qM6GGRDLe1asqVAlmEhKPFqTHa/8YxTfBS5HBXhl7KKwBOFlcbHezEMcve3kJs+EizEpN092UezjLlJyXe6usRTikEmc6t6WUvZIkGvhR6ODmERIh3eVze8KZSmbLlEuKMkNWJHOtdaXeuKexHNmiX5zJJDugWXmkpDQoL9IuFmlxcKlN7unNlHNFNITexFISEs3NqUi1De0kui0WbKWXNxJDlHmFNITY06FKJR5R/EjlaFESjvcObvJ51c+jEh1afxIjzQjImyJwSjy5uElWknUAbjlliREswDvDveJCPVERIm80e6iVbGJb2qOb8xXLarYtuEUsQsrcSyj3uJS1ZSfk6TcRGW9p4k082ZEZZ7NsiEhGUSPh5RXHRzEIlIRy5hzR4sqKCxhy/JLtad5Jou6n1rqzRzJUFgypqQ+8pB1zEOWPKk7GIjh6S1bynSVZwcsdOb2h/dXIjLVHhjmkm1EYkUs0tO8pFvHKJe7q5UJA2SLcJCMrkRkRNiGYiblvRHT15VHtWikQjmIZZR+7He+yiQN5iEiIS0iP6SW8XCi0cJl5wcrZCWod0uISFXzRAy5aJlyJauIdMk2lSHVllm0qRtECi2UhKQ5SHm3S5lFrxS4YjLUlJbscXaD1bkJFvCObvf5EhuDl06eVTLYWyHDIpC23KJFEcT8SjtmOaUsw4Y/RiRaVWlOhXQ21GQuEUhEdMSiMub40wGXCkIjIiyiPMWUdPn81U+1qMox3ouGO7IoiQj9Su9g2UtsMNiUhbIniLKMhZGWkfripeyv0aY/mml7PSWRwwbZ+hbbZ/u2xH8Kaa4NVx2q8hs92I00ur2Uiqh1olqKofUhUdxxFoKEVMyETQ7729+FKiFIZal1uhR95WQyZs9qTgjxEI+8vXbp7+b2w9sbUbw2yZsnsCMSIbt0SatGxLdITcEvP58q8t6LWxOXLIxlmlH95aD/wATfSghtNk7JZjmEtqXI5REsMnG7aQ+jXjHSvLRc2SLyZYwXkrLPt4W/L2Pnm0bw2xb4R4sxR+8SPWmWUhy+1qQ2wIiiIyIt0d7xFpR4ZXN4R1R3eFe3GJ4cpUMI5DHxZo+6jMCROZRKOXUPFplHzedCYEikMSjEpRzI+z6yek7EojpclEhHiju9XxJqNtBdCu8pcOaJd4d39SeFNWbdyqTd2wsjhkTeZuWXM3m0tiQ+YSH6kNsRERIRkIkIv5okJFpFvl+tN4tyVPY6zbETZEOGOGOYZZo8XN8yj1r/wAkRtjtBEYkWaOaWni5epNFrMQlIYkUo6hjqj8REocFsUmEGo5uKPvEuxy/ndTquMkJRZ+UHDMnMwtjqEh9BFX09a4IkRFGPFpzEPe+pJxorUcCpFpT3My4wP3kbDykQ7uX8l9az02yrNxsDaHlLAjqdZbi4O8UcsuYa086lNuk2TbokQ6SEmyiQ8JDHeosfsUiG5Zw3CEh0kPFhkQjzDWvmWrG5IiESGOJHulIRIv2LnyVB0j1On/5Y36LjbuxR20Nt5M22O3DcwIDFtm/GMmyIvQ3cZerr+NebVGJFIYk2RCQlqEhKJCQ7pUqPV1cq9K2PjW17sO9H1BbTtG5Duu2l2wRNlzE2Xm+oarB7eD/AK+/HV/9Tvx73/VvZpeiNfT4lEJuUqMOpx6KfsiVyolKaUo6hjzS7qJUCwxKIiJacwyLvD19YrXR7OSxVKJItaaS4kwaELe7mjmLMXtekepPrGJDmyl4Y8Q/ahRCx7dcyfUopjLeUSISiWkt0o8Kdq95Q40UmSQqidRZYoLW6ikpewBQNGAiQKbqPSsSbciOUhKLmYSiQlm4hr6K0UOOw0wjRI41TB1FIcOW7utyzRH05aS6k5quUpDmjHulxZVDiXYdslIbJRWlJZpmUaR2SW1IZqoY1UlmqiSopEyhI7VVEGqkN1WDRSZNCqlMkoI7vMpdusmirJrZKU0ShgplvVRW40S2KqazVV7VVLZJTIqywaqprJquZqpjKCiwaNTBkOoSHezDu8SrW6qYwmiSwZqpjNVAaqpbRLVMllkwSltVVeyaltEt4szJ7dVIbqoTRKQ2S6IsholhVPogBVGGq6ISMpIekl1pLoTJEkkkmAkkkkAJJJJACSSSQAkklytVLdAcTCJIiQTJc05lxiccNRXDT3CUV0lzTkbpDnrsuJBHaJiQ5pUlmoo7xqI6aweRoagg+1dok7l0juj+8qh1zdRHzUJ4lzTm27ZpFVwWjW3CaYMB9YUo8v5+tZ1+5LDFuRREpRXXjUJ8lGttV6KUUg7u03hbwRcIW80olGUt0uJVFw9LwjFPfNQ3DUUD2G0McRuWXtG83COIMv8ABTune2/LrvEHKy03ht828RfrVO6ahPGnEnzZOstpOFe2jxFImXGRbJwii2Ilq5Rp6fMvTenPTi2t7J4bZ0XLkiwWxHdLec7tKfGvGiPN3VGdeIltBzjenyqJcU3bPTfhE6eWz2z/ACO0InHHhbF04xEW94fn86PsTpt5FsC2eciVy6440yMt1si7QpbtBH/JePOPeJAuHJF3fyWVXc29V7pUTS4Ppnolt4bnZo3bpaGyxC3SiPWRfYvLegnTO9u+kYjIit7t16QDpw22yw3C+yPvLAN7cu27Ry2bccFlwhl2hDl4f1qj8oISk2RNlxCUS9ofOrvJKKUv6eBUtL+p9XPdLLRodoOvGIMWBCLhyGsyJuRCNPjLr83Un9FullhtJkXbZ4ayGUCrFzi0l5/Mvk2+2o84zgOFJsiFwpcQqKxtB5uJNuuNkMswkQ6tS2WTL7IlBeD666XdK7bZbdsbucrl9q3aEa0kWIXVLu09Ku6PDm6vEvjXa/Sq9ufIMZ4i/k7DwC3hJspCRcRZVqXfhb2i3svyRtwivXnHCfvCLMMiyiI9XVLqVLLPVv5FpSdH0tY7VZfbccEhwxccbkWkibKJf4qVV5fFhdMdojs7+SxuS8mxsctWI4UsSJOdfXHr8/UvVOhHw1MCT7m1CcEiwxZbZbIhi2Mfs89U1kmuUK1Z77irlXV897O+G2V6T74k2xIhaYlulpcKNNUR95aVj4ZNnOXNpaC5LHHtXc2GyRZokSfdvwU6R67V1DN9eQbU+F22cu7S0sHBcxr3BedcLDbbYb9Y5IvnR3vhf2PjE15Thk3c4EiEo4e89IadUEOYWk69HqR3KYV2sJsrp5s27bdfG5ZbYbfJps3HBHGwxzEI+mKqdofC3sNvKy+Vy5wtDliOWUi8ydgpo9MdusqDW6JeWWfwv7MeLeb1F2mWMd4i9EVAf+Ga07YmxlERwA+kzRkReiKEmxqcT2LypcK5JfPR/DE/c3OG28zaMyiRlGQjHdLirX41of8A9JlljMt+VuONttiT2G3InI6iJwupWs
M26USHngnTPX3buIk4RCIjmIiyiI8RFuql2x00sLQRxX25EMmwkOYf3frXgPwofCs5tG28ktG8Blwu3Jx3M4zLK2Qt7tR9NPmXmu3dvuPv4hO4paSMpCMdMRHdGlPNSnzLb4OX9Tol9RHxufVTHwlWjz5MsSIRwxlHLiOfhpRS7np1bNjmcZbInIiLjzYxHiLz5V8fs7XEXGyISylIoqa90sZH1ds5LdkQj91bQ/D8b3lOv2OeXVZL2ieaX3SV91uwGUXNnERNGhbb20/dtjiFlF4nhDdEnNUfzvKprT2uFdrQtJZV1DUUdF0vw+FEN9wm22yIibb0juty4R3UEaJ0UqD6HKJyS7SiaQxtVxFpQoyHT4UM0NCTH07sV1CFGFso/hSGCqngcfxJUolUVVAdpRdcbjw8XEu6UiTogaNJFu/dTn6Zv3l0RT3GiGPZkIxyy/CqrYPIJuikUqUYk3l4iH8SdaNkIk4O7yyEkQLkilIRiRd0RVRjREpWCOollbEoiIylxcQpjgZZRU5/BIhFlsco5jbl23MQ7qEy2ObKXskq0WLUBaGQ/dT2XI5sw7pRQWPzJHbkUfyKhIbZ24pJsXObly94RTwqURlHhjxJrsdIjp1R4v3UW0AS1EI5Y6c0eJVVsXgcLZRxBFyMcxbvd7qaz4lOYZF/sRJscMScxCIhlH9epRRoMokMYiW9IS7pK3GiNQ6jsZcyVaDLVhjHNpLVwoVaZhlGJaRH95EAijp072ohjy8KS9AwTW9HdRW6Niw52faS1yzNjwj9qHSspSLMRbunxLp+1L3uZLwUH2eLbgt475NsyISEdWnVH0fV1pW7QyKIyIZSKUcscub7FDpQdOYi9lTRESGTYi2JFllIt3SRejzIi7FJEWlJSUiEh05fZ08yEVcxRy8v4VLtQEmx1N5ZZvZiI73nQkNsrwId3iUl6pCOrej7SEIZiluoj9BIcuYRHd3VFbMpjSd7MW4jEeEcxEXEXpXBH88KbQf+W6ihTNxJIbH0t3MQYjItX8SVtXDcEizb2YcrnKm3FIxJuQ/nUutmIuDlyjGUtOrMSpKg8Ei5ebJmIkXFHdHNpb5epRXiGPyfdHUPiVoTkmyHs3G+ER5ZZS4aVVbRt4W/V9nvad5VNO7Ig/BKbYZFgiciTpZWhlGMt6PoJMBtwXMGRDLSJRHNuyUNpkiEiEZDvfvFwqVb1kMRkREPymbTmiPKhO68CfkE5TdKWXdykWItl0DbxH3HdQtsYbZb0nSzfdWSsqCTksoj7OHLLIfsotx0OEWReZxBJzEEoiUpNx9Y3xfWsstrGzp6ankSNJWuZNKq51LhVXjM9tAzqmtVTHj9lDacRQ7JJuKMTicbijEeZVEhsLVwoxkUZSjuyjHEjxU9CPa0zcxZeL8koYuK66MWpPPtjGXL7vtdaqTpWEY2z0ToPsgRbF7K244QiJlpZbHM8+XxCIBKvXyrwz4Rdv8A8sbRvdpiyWA/hsbOxBiQ2VsOCwTPxEJUEnK/NV6q9m+GHbI7H2CTDbhDc7UlZN8Tdo2Mr14e9lb6/wBJVeBG+4ROesJtyJMi4UiER05R8wj9nUn+F49erK/OyOL8TzLUoLhc/qU3FKUlY2zBEMuzEREd6JFl4R1KDcUzEpLNRjlkJRcEiItRcI8Ir1YctM86T2tB7VohIi1CMS1ZRjmzfrEVy2JtzEIhLEciLZSGIkXrHC/V5+pcCJSER+jKThRiIjm73nXbpkhkJMxcGQkMfV5pfYXzK+ER+pLumCwMYSImiccGRCIuE3LK4QjpGvxJtjhvZSEsTs+1lEW229RYfoLzedHuLKTeG2+3jN22I8JP5XWx+Tb+LLLSoWz/AHSLvFH8/EnJfN+oo8DreuYoiLhFiZhiI6crnL86K62JaRk2UYxGMvD/AKoV06WM5l1EUhjGUtIxH+1SqU7MsuURHNvZdWX0iktx8bjrB4WSEsNt6IkIi9mEZFGXpzFRcoESIi+Uj3ZFwx0prNRwSk5pGQjHUXCPL1fGuU9YQjEW5CRCWlKXFDXI+zallEtP728jssjEWxzOE4TbgR0xLsxEt7rTmKMtk2TkiblIt0Zahly9UkMhbcLKItyGWWRRjmzejN1JONBqsnlsp3DcIhFvyYhx3BcEsMSKIxbHf/1TrHbly2RCLmOy2UhF4RlGXul1fEq+3q22RaiZIY5pNyHdLL81fOj0abbdebIhdbGQkTO9l9Y3L5vrWeTGpeDbHlnB8m+a2taMWDbxOC5cs7TbubQJesIWZScb4KVbHrr9iwb9ZOSIcxOOOEMsxYhS8OpEJqJD3RcGWWQ/ox4fj86H2gxKMcMh8JDpFYxwrHfk1y9RLK7YYK5vZ0jl5kNqm7+S5ke1eL6TKWqUYlLVIS3ev4lzqH3i9lU4mSuwzBZvVi9qylpLlQ2ajmy+8WnhEd5HaAiiMdWbhzd7hTICRFlERKWnTLxaUnGkUnYa3ItMnIjLLLKMuEU2IjIRKWb8xUulCdJsm7ZtsYtiTbEolhtxIhxDrWRRlXq83X1+aiZWoyIhy5pZo/8Awk47CT3HDSTZFIR0iQlLE8Pm+L4/QiVHVpLxfd+xcDSOYs2bMOYZaSHi+dFa08pfeUadh2caEu6iFGI5UgH734vdThEswxFQ47FJhmmxwxIcQokWJliIiMYkLm9/Z5kaPh3pSl7yj21SiWrNuo7Q5ebh3Vk1RpZIpQtRDIdMssZc3L9akMF2khKMSyxjl9pBtmsuJpjESKWrwrrNeWRF7PeJZvbgESgpIdW8X/JSLVonHBbGMiyjJxtsZRlqcrSn9tUGglHvbv53U4KSy5uKJbywyLctMktDl/MlIarzILJZSKW6Xe9pSKuERN90VMoDTJNWybjiDGQi4OYSk2QyEsta/wBnpopDdYiox0wyw5N8QxISEpDKUh3virT4oo/XpkPi4Vk4l2Th/PsqViSjlEcojll3d6tfT8yit0bw9RSlpjlIeLE68pfUpQFIeWWWQiMvFxLPTuUmSWB/JKS2SitV3lJBZSWxSJrJKYxVQWqqYxVS0WTm1KbqoAEpbdUyCc0alNEoLVVMbKWopd4pe8S0QmT2CU1olXM1UpolrFkssGiUlslXtkpTRLaMiaJzZIwVUMCRwJbRkZtEmlU7rQaEniS6IzM2giSb1rvWtVNEnUkklVgJJJLrRYCS602tVwiUOY6OlVMIk0iQzNYSnZaiIyUdw11wlHdNc8pGiVHHDUVwl1w1FdcXPKRohrxqG8ae6ahumsGy0hjxqE8aK8ahvGsJFIV1RuOV0SLgw3BL3vN5lXPkjOmShPkk3sUgD5KI4SM8ahvEmhMYXeEc0ZFpGW8UfPH9SDfs4YiWPbOyKPYuERD3hIKViuOkohkrhwZsCWosu6oZ1Uh1xRXXFvERFeUUy1I71cyAVYjIs3LL8K0irJYGlyW6Ufzw8SjO6ZSGXvLrrkuEVHervR07y0V+SeANwWZR6qebEm8RvNH1kd3mVeRrZw8kpkd001wU4wXHKFhlHN+FXGBLZGc/MUwjy/mSc8JDHNHxfhQ+z
JsiIikMSjxd0lUYXsTY0q6uXeUUXSbKQyEuKSJSmotI+8oxUTcKFZyp8yGbq47RAcqjTY7DVuSjGUR/eQ8aIigVJW9psAX223B2psdsiGRNXN75M63ykLrdKS+ytaKqXkRUG7q5kw3+FcvmSZccbImyJsiGTLjbzZR3m3m61FwfrpVRa1WijXANE0LuOmPsrjt+4Q5nCJQSNMIlacvZm4IO49JDq4o9SQyJKiiQbqC64gkSYVU6FwZYNWlEFuI5spbpEOUvEpDVmUo5pcIiSkubDuyzeTP97Bcj/ku9YZ+mYPIvaK+3LDISHLHeHd5sqLeesIicxJZpZhEv4lNp0eueHNzNuNl71OpRrnZr45iZLLwiRSQ8U0t0w1xfDIgjmjGPvKXSzeH5Fz+7L8VEaxsbkibIWXCGUso/dit3cbdL/wAs8yUnGCjqEpCPCtsOBONytEZcrVad7POLi3IYyGJcJCmVoKsrhx+5iWETmEOHJtstI8RdWpR6DxS1bwrKWJ+L+xUZ7W/8grK2cecw2xkXL+JENopZhzD+FXuwbt60FxsWcRx4ZNxEZDykqt63ccIiJkh3iGJDHiVyw1FVz+hPd+b6ELrLeGSGYqWNJDFvELiGMtPMhRiOb2ljplfH8GmpDOrvJ40Uuz2c/c+oZeeLlbIv9lLc2DejGVldjxSZLD9odK0WGT8P7EPJFFWNOWSXURZRLLw5lZ2+xrlzNgEIjykKPblFvBbzFmHT+JaxwPyRLKkU1KRGOYc2khRnAkyMRLLqKWXvK4ci5lebIiERjESl/kop2LkZMNuPcUWyiPKSHhkiVlRWtNZoiQkRDulpVhRgiIo/JjJztNUd3mQAsXd5txvmj91HpaPFmbFwhHejxavrThB+mE5J+SKDJERFHLyomYWY4eotUcw90lo+j2w3ncQm2yLxCPizJrvRbaLYuCVs8TZacMhL2o1Wnw0kuCHnjdMz1uwJEI6e7urrlvhuCJCRDu8ysK7Ku2yGTDzZFlzCStekmzHG22MAScc4m8xCX4VPwz0W1uh95WtzOPWrgt4hDlIo82XlXAEh0qezst965bYiQuZZaiEeIiU3a3R5xl4WW3MThIt6OpR8PNq0inliuWZ92pZd3u7vdU+0eIcw5SIdW9zf2pr+zXxLDJlyXKJEPtehTLzYz7JNtkMniHTq1acw+ZKOOfNMcpRdborzMtQ6i3Y8ObMuXr5OEJOau7HL3VpLfoFtRwRLDbby6XHMyV90Su2BHGHlGJCS1+GyPwzPvY15MrbUzahHvKyA8EXGy7RstI975QRRLvYr7MSJsiluiOnvIjti5jdm245FsdLZFm4R83xKVhnHwVLJFvkhi2yL2Xthjll3d7mRHhyi3ItQyy6R1CpVdlXLhYhNOjzRjL2d5Ryt3GxKTZOZSItQiJFlEpfUjRJXsLUntZEfFsSIW5OCW84Obmy8KkVqWGQ+rllkOpweb7F07Ui0iRFHhL8xRGdmuEQt4bmNl07oqFjb4Ro5KiAYRKIliDl8XNFP6ouR3d787qtw6O3McQmHBbGMpauaKazs1xwiIRLKJZSHNEUfDz9E92PsjGxiMETbbxE2RSKPZiO6QlvdXxqvZOJCWnKrPZ7T7nYiLhDmHSUc2ofmU246L3LAtuE3IZDlHMQ72YVTwTl80VshLLGO0minbfISGWUSISIR5t4iTbtuJYY/nvcSsrE7IRdF8XCc1NiJRES3YjvK3sei5PNi5IhEpFIh1S/xVR6aU1SJlminbM1YGQsuMyjiFpHU4X6TlQ7UXJRbEi3oiOqP4Vf32wX24kIjEcxGPs6UfZ+wXHmyIXBZiUZ5s2bNGPnVfDTbquBd+NNmdtWxLEkOWQkIDl7wiW6SfcE3iSYkyTfqc3aD4v8AFTtu2vkRYMsSOaQ8wy/PeVA6a5cny/KzoxvV8yNXsrpc83lu2ycEcuKI5vEPoIvsWoY2oy6OI04Lg8u73h9Iryxt/iGQpNvE24LjJE2XEJZl5+Xp1LeJ6GPqpLaW56g85LuoQuLG2XSEoxuRkJaibHN4m/2K8srptxscAhIeEd3w+kVzPE0dcMykXFXk2j0ZFqiOkRVZ5Qug8kkaWTrJ4iGRDGX4l6l8C+yHH3SfzbrbeXU4WWPepJeUWDROuC2JNiRC4Q4hCPqxkQjLUdfRSnx16qL2zYW0nLB3YuyWCFt9+7YxiHUyz6wWB4TKMqy+Lqp8a4+syOtEeWb4qj8z/Y8b+GTpE5tjbFyUcO22c49su0AilEbR5xl57vOPCRfZFY62LDEouEQlKJc0Yl3RXpHwp9D3rDa21mBZIo7UfuWYjmJi5eK5HvddHur7etYALEnHBbbtHxcLUWG57o9XV5l9Lh6ZwxQUV4R8xkzKcnfNkB0faTqtly5R4veWr2Hs7sHyJhwsOQwwyIiIR/19Crr3ZLkWyw4k422UMMhKUoxKS3l0stOpcmceojdeiBs2se0czCWkSGRFHNEeEfnQ364kizRIpCMpbunNu/arm92FdsMNCTDguZtIk4Qy5hUJ/Zb4yImXG4iMpDxEIyl9qU8M0qocckXvZHo6IsOM4bZSiTZRzNuDvCW8NfmStzyjzFIRjp73D51OHo/dlIRZLsyiUtIlvDL0IjFg8JeTOdk5mKGGROOFuiMafH9antT8pla4LyivuMMXSIpYmod3dykrKyfLDci2ThMtjhuDEXGClInPnclWVPP1qS10Wv3cwskJR0kO7GOpIdl37ZC2Vo4WGOGUt0hjGMf9frVrDkjyn9iHkj7Khk8sZFlkXh/EnMDIi5R4tP8ACrR/ZF22RN+SPdpH5MiH2h09ajlsi5ZGTjLg6h0kX3VEsE14f2LWSPsDV0SblmLtBlm92Poj1I9lSRFvFmykWbTqQAs3iEhFtwoxIuzcGPh6syO0BDHGYeHE0ybIS5iHzKVB+n9inJUP2ZZ47b8Rb7FnE1RIs2Yhl5iJEob9zmIcxCMj9WTkRjm4jrRFrsx8csSIcMs0SiI6vskhuWTjZNk5iZilIRKXsjp6k5YmktmJZI+xN1iWbMQ7pFIeUXPsTzpInxIYx1ZZRQ3Q1FvZtTcSlq0/61RaCRCMd7VHe4RL/dZaW9i4tcgmaj2eYtWaXeUhumaQxHve0nA2LItkUtOaQ8WmP2rr7RDujEu0HNu8qjttF67JMBk3IWxEt6X5qhVPLIs38OVMdIW3B3RiOrm/xRra3IpCIyiMvClpcnSFdbsQOlpH3dXtIrYyIiiXi5eJTnNhPF7IlHe9lDraRe8mlmkIlll3o/H5lbwzXKDuRe6BNiRDiFLhlzbo/wBi7U4lKIjItQ5R9n/FTitXmycGJPRHsxEcxR4R4ft+tHc2HciIvCy4RFmIRHTwqZYJ8JAsiIzRCMd6RZi/iR7NuRFFcbYcKPZy3ZDxDuq1Y2Q+yIkTZZijwx7361EcTb3TKc0lyVtQjvRllT26bvMKbtFh1uOM2Q7wkWku6W8ntNuNiIuCTYuCJCTg5SFwRJsuYa0Lr66LknD5qpmsZbchzEhHMMSFPt6jHNISj4c33kS1ZIiJ
sYllIsvLqH+1Gt9nvkIkLZELmbTljL3UdqT4Q1NLkVxEtObLEiLl1JrNZS1R3UO4Am3BkMY7pZfdTmC3cpd7urlyL5tzRPYntEMSGXtFq8KM2Jai097Kg2jWIJEOURzfwp7ddPCXNq8KlriwiyZX3iKX8X+6lBux4dUSKSiW1BLNq938/YpThZs2UhyrKSotMk29JaZcsR/CpzGXUO7GJD+cygWxkMolEtWXeUhlzm9pYyXrkpPcsmK7u6WpEEt5Q2T0qSzRS47FpkxqqnsVUBgC1KVauavwrLT7LsnN1UsCUEFKFCQrJzJKU0Sgs0UoR3lqoOuBWie0SlNkq9mqltEqiiWT26qQ0ShNmjt1WhJOA0YDUISRQNaJiJomiCah0NEE1pYmiXQ06hKMJp1DVaiNJJklJAmlNaaxaQ8lySFNNqaTmGkKRIZmhkaERrJyKUQhGgm4hm4gG4sZTLSCmajOmmm4o7riylIpITriiuOJOOKK64sJMtI464ojppzpqI4azbNDjxqG8ae6ahvGosBjxqG8SK6SiPVS8ABeJRHTR3SHuqHcEO6Ulaje4rAumojhKRXeUS5W0cfkhy8AHCUU6opkgnUhll3lcYkNkYiUQqSLMjv5VGdqIiJCUiJbKJLYrh/s8Mmxy6SHKXiSJnBw8QsbEbLJzKO+Ui4kOwuCZcxBykPiXTikr+YzknWx3a9mTGHKQ4gy1R8JCqwajmkW8p11tBxyQlmHN6wcw/uqA21LLplvEtp6ZS+QmN1uMI9XKgOuCI6SEt4k1/LlTKHLUWVCdbDYSjEXBjHvEUW/DJRrgHBUsmnCiRPDHdIiy8qhu1KWZdGlVwZatyO6e7pj+dKa4MiHdlxafa4kRwE0nYuEMZCUd2RCXEI+hCj7E36IRoDik3Got4uZRHSU6aZQOtUM+8jEKjGKFELBy5fz+fnXCRQq4Ix3d7KMi7pelMIYx5lppYWCNDqjO6c3EgGjSK7BmmVTyomV0o0hYPrTakulVNrRFCPVv5McHU08P9WX7EQNnufQuey5+xXIXNyPywl3WiRmbu5Le08rza+q7n6HhaV9SjGwc+je7sXF0tkEWYmC/uyFaEL+54h70XP29RIdL98vlhLwuI7j9INK+pRt7Gjl8mIR4cMkamyv0BF3myL7yvKXtzlEXMP2peySLW9uf+7bHvCP3SolrfpD0r6lCGzyHS1H+rin0ti+j/8AbH9ivm7i5LTfty4cMf2Irly+Oq9YHvNiJe8juP6E6V9TPUtf0fuj+xP8hIvkRL+r/wBleje3f/ctkPK2P4U+m0Ht65ZHmIf/AJT7r9INC9v+DPBscRGI2wiPCLOVcZ2Nh+rawxL9CP7FpDun/wDuWC5hXRvXxH/zbJd0f9kdz6IHBe39yjLZRFmFom/6MSEfZFGtbJ8ZDhuRLUJDqV8zevRlji5zDIfdXBubmOILurc3k+9L0ie2vNmaPYDZf+mIe6JCPsrn83B+icHujH/RagNqPDmIpcpSy/4I57TIoi49HmGP7Eu5L0i9C9sxxdGS3RdH+rkh16Mv7sh8JCt6xevF6lwiHekTIueyWlD/AJXdb1PNylvEJe8Okku7L0hPHH2zEh0be+hkXFEkq9Gnd1kR8K3w7blLtBl3o+H0LhbXc3Xm/aEvu0T70vSH24e2YZro3cjpbj3Rinfzeu+EveW0HabxZsQSjqiJe8jBfvRliD7yO/P0hdqL9mF/mxdl8mRe0l/NO5+h91bmm1XPpR7uZPb25u4g+9+xJ5pekPtQ9sw4dGLkdLIj3Rinh0XufoB9lbmu2+73hcyrp7XGQt4jfeEpD4ku9L0g7cPbMSPRm704Pu5fZ6kQejV39B7q29dslpxB/D4U8dpPFmxG/aR35+kHZh7MT/Ny9LUyXskkfRa7L5AvZWzd2uQ/KNpN7ZL6RtPv5fSEsOP2YtvondjmwPdTv5p3f0Hurc/yp+lH2pLldsR+WH3v2Ke/k9IrtY/ZgqdDLmUsEuHlTq9CrktTAlykMh9lehDtKXy7P95FcrtJwflG/a/Eo7+T0iu1j9mBf6HXJamBHutxTGuhT4lIW4lvR1eJehO7TcLS437REo720Cb+jIi5oqlmn5SE8cPqY0Oht39CnfzMufoPdWxDa47xNjwxkUuXKjFtUcxYzGXmL7qHnyekCw4/ZiWuh1yOlgRHhEY/dRD6H3JaWY8q19duCO82XtLlNtDxf3cpIWfJ6B48ftmJr0AdlLyRuXFhipQdCLn6FbA9r5crnukuFtgo5XZf1ZJd7L6Q+1jfsy7fQ+7HSyPurtOhV24Qjh5iy+0tW1tQt16I/wBGUlKsNquYjfb7w6m3OL7EPPl+gdnFXn7nyB09fJy/u82m5eb/ALlwmh/wbWdJviV3t5mN3cjw3NyPiF9wVVOivFncm2z0cO0UkRKgl1IpU/4jmJSWtlXLmlhyO7KI5fEsnE6YpvghDVOaOJSbImy95WF1sS7ZHEeYdbbyxMh7HN+kHrES+qtaVVdWnEotPYbi4ltZ7bcH1zYuDxDlLxKwa25basw94f8AdZsVygKHiiy45pLybfYnSy3tHsdtjGcEezxMrYlukQ+epdXzKo250qvbu58rJ9xtwXiebJsiEm3JSkJDWlZfX6epUoBzRSFouX2lK6eKd1bCeacuWeubJ+FIbt+/2j0gJx158ezG2H1L2VtiIllbt2qYhVp1yOrlKU9C9O6P7Cd2ix5TZCLjJFEsuG4PDiMlnalTNShUpWtCpXq6l8unUReImBJtnGJxkCcEnG25SZFwvQRUpGla/atF0P6bbR2S+L9o8MiyvSEXCebkRRJwvO311Iq9Y9Va19Nart6bq8uGNR3j6fj9DkzdLDI7ezPpFvobfjpEe8JLrvQq/LU2Jd5WHQLpxs7bjGJYXLg3LY9vZvSJ9nicIR66E11+gh6/rV+9dvDlzF3RJelDrZzVxo4pdJCL3sx/8y736MYrlOg13m7Bsu9Fasrp/heHmiUS/wAVwLu5LULgjxYZftWnxOR+iexj9MzX80L0suCJDvR/EkHQi7EicFgRcLUUcxR05lqRuHR0uOiXd970olL18cxOOF3f/lTLqsn0BdPj9P7mYDolfj8muH0Svy1NS8S1ddpRzOO5vEiDfOFpcIu624l8Vl+n2K+Gx/Ux49DNoxKLZDLhERjL7yQdCbsfkPEtf/KZSiWN4WXi+6jtXJR7ORR5Xvekp+Lyr0Uunx/Uxf8AMq9+gXP5lXv0K2A37+bNmHVqy+0ujfPl8u37Jf8Ayk+py/QF08PqY/8AmTe/QEnt9Db0SkLJeytgV65H/wAyyJcxEKRXjsRInW/CRJfE5foP4eHp/cx1x0FfcKT1oLhZSkTYkWXSmt9A3myIvJBkW9hity29d5YkRd2SOXle9H+8KXiFS+rn5otYIfUwjPQpwZRtoy1dmObvCuh0KIf/AEn/ALa3heU6o/8AuEWXiT8O53sPxKPi5fQa6aP1MEfQ2Wq0EpcTMvvUTg6Jk3mG2Ee63H7tFs6HcyEcMcyfQLki9SPtRT+Il9A7K+pjB6O
OasH/ANtcPo3mEsBsSEtWGMva9K2g48ikyMR5h/JJzThEOkW+8P3fMh9TJ8jWCKMdTYX6EZd3N/knV2MX0a19Dy4mXhjGRez1IrFcsot+LV7PUk+rkHw6MH/NxsfkRbLiERHSiX2xcVsmyIolwkIktx1ubrEo6SGObuojUi+QbHvRUvqWHYXs89Lo2JC2LjYuNt5RxBEo82ZMc6Ksxw8Bkm5FuiIlzR+tej9XEyPdy+0uDGUcAREt4hFL4j6Fdj6nm9v0TbbIXG2RbIcuUuL9amBsdwRiLYiPKt9QR+gb9oUSg73kzY+yp+Irwh9n6s8n2t0N8pLELEEhy5YqD/8Ao/c3XnPZFey4rf0IiXDlXMUf+1H3VjLtzduKNIqUeJHmOzeipMNk2JOOYmqQjm9lRf5nObumUtOYeUS4V60BCReoZEvD+1FBv9Cz7v7UpSx1TglX1GovnUeaPdHycGJNkJcQiMu8pTWwREREmcSIjmjKUh+8vR6M/wD2zZd2P7U3A/8AtB9of2qNcP7S9MvZ54OxhHSx/wC3qTD6PCRSESGUZZSivSaWv6D2RH9qXkrf0DnhH/dTKWOX9IlGa8nm7mwJEMRiI6tSltbFzZiy8IyW/G3b+ge9n/dPG2bj6lwfCsXjxf2mlz9mIa2XFyW6W7/EpJbLZ5vZj7QrXjbs/Ruf3ZJ1LVvhc9kknDG+UO5ezH02ZGQi5q4lIbsBWsC0b4XPZTvI2eFz2Vn2sa8DuRnGbaKlUYHeV15I3w+0K7S2b/Ikq0RHuU4WwogW6txtW+L3SRBtW+IfZWbxQYamiqFhSKBpVh5K2u0tm90lKxRHqZEAUWKk0thXaMJ9uIWyONETqRqMJ1GEu2gsDShLvUSLgLtGUu2h6gUlySkYK5go7TDUB60ypKRhJpMpPEx6iMZoZEpJMphMko7LHrIblVHM1YFbkhFakpfTtj1laRILhcJCXLmkKsStCQHLVT8K2NZCqcJRHjV0dp3VHOy5RUvonQ+8ijdNRnCV85Y8oqM9s/lU/AT9j76KB0kB2mVXddl8qH/J0d1Efw6fkTzoz9FDva5uFakrT9GPsiod5syW6tX+GvTSZHxCuzNO0yqtdoS1T2yCjvKJXZEebvIX4ZMXxMSgqOpQbuhZcq1J2PKoxWEdQkS7F+GuqMn1G9mTOgiSY2GrvK5uNiyIikQ8sUrewwx3iSxfh81KnwKWeNGauBzEKr3q7orWu7PblJVr2xxkRCS1n+HTfBMepiZ2uUi4kBvLxS91W9psl7F7ZuLf3lMa2SOJJyRCWmKMPQZJcqtxT6iKZnLbZ5PE529s2Qxy3NyzbYkvoyerQS/too99ZOMlgui3KIuDhvMvCQlpIXGTIS/tWiu9hiRZS9pRh6P4e8Md2OVb/wDj8lk/FQoyr1OIUIPZitPc7I1EOYuEoxUH+RXC1ZS5VL6DInsNdRCijrX95NGupaUOjrZDEnHJEon833hKMhJuOUi/dWq6LKkT8RB+SgrVAKOaUuWI/eV9/Ny5l6wVxzo29xd7e9lX8Jkfgjvw9mZNAEJEI6Vo3ejbnFmQLjo84PqyIuLLpUro8nlD70PZQHWPKgOK/r0feLNlHl4VEf2O63EYkUsuWWVOXTZPQ1mh7KeulOGnEX57qtn+jtyOlsSH+kGXsqM7sq9/7YvD/wDPUj4fIt6Y+5F8Mq3jHi/PhQzpLTmVnTY1zHM2Qlw8SGOyLuJdiWbmTWCb/pf2F3I+/wCSqIvCueIvaU2uyrneYc+991MLZr/0L3ibIfvKVhn6f2G8i9leTZJhKe/s98RGLLhFvRbJR3LR8dTL3slpQ8MvTBTXs+hhum90v/8AWL//AJppOMkWYZF/+qF+4rHNw+8iDT8yXp6kcWllZRxn6MfFaF+4iNvW30LcuLyZz9isgPl95Fof5kjX+oaStIrYvkWi71sX7EQcPdFgf6hz9isR8PtItK833kaw0lXWJZewj/QOfhRG6sDqwZcrbg/eVjSnMn0oX5ilrBRor4236P3k9sLbhb9klMzcP3U8KlwijX/tj02QDtWfk8IfCX7V2gjxN+y5++rCkuFFoyO8KO4HbK8Bb4Wy9r99Oo0x9D979qsBYHhH2U/BH6P3Uu6PQQBG2+i/P9qcIsbrI+L/AOVYA039GiYI/R+6l3f1DR+n2K5wbIdTTI97/wCVwfIt3A9lWg248I+yihat8Lfso7q+o9D+hW0atP0HsrnVZbuAPhH8StvJ2eX2RSoyzwt+6ju/qGj9CrC2thzYw92I/sUgcHdJv2RU6luzwtp1GR4RUvJfseivRDA2x0kPuo9Ln+jLvYakYQ8IpYDfCPsocwojE8P6Pmyt/sQCC2LU2z3hEVZ0ZHhH3V3Cb4W/dSU69g4/oVhDbaY+yKEbFoWoR8Qq2O4tm9TjI+IRUF/pBYN5cTEL9G3JaRc3wmZy0rloA23bfR+6l5LafRe6ntbeti02z5Dytq4s3GXtIk2XC42Tf3qJzlKPNhBRlxRSUsrbdYH3hUj+T2foR+995X1bXN6sfdThth+j91YvqP1NVgX+oztdmW30PuowbPtB0i57KvaWw/Rl7y55KP0Ze8l32/ZXZXpFAezrLVhlLul+FEpbWnC53oufsV15IP0ZJeSjwl7yO/8AqLtfoVNG7b6Nz2SXIW30Zf3f+ytit210bUd0Ud79Q7f6FMbLBfJueFsv2LrbFtGOG5/dl+KivaWo8KdRoeEUd9/UfZX0M9SxtpSi/wCyS63s+04X/ZIVoMEeX2UoDxJfEP6i7S+hRU2dbcLviFxEGwthiXb5SH6RXNGx/Ml2IpPO/qHbj6R8e/CvszyLb21GY9n5S88yJbzdz27fe9YVPCstY7PcfcbZbHtHNRFpbEdRFyr3b/xRdHov7O2s2OVwfIn+EXG5EwRfqIqeGiw3Quww2X3i1OOYIkXC3q/tqSxb8s36fHbr0QbbZTNoPZj2nyjpD2hFy/Rihu1VrtCmqKpnlLyI9RRS4L/oftYrZ6JEOE5ldB0cRtzlJve6/R6FG+G/oQzaNsbY2c2Q21yWHctSkNtclpiX0R5uqlfRXzUVMD0cy9P6I7WtL20d2XdxFi5ZwSxN0hLKQ/Flr5+teP1cpY5rJH9/qjohBZYOD58Hzn1rqt+mvRp/Y9+9ZPjmbiTJjpfYc9S8JcNaebq+Ko1VQu+E1JJrhnkyi4txfg71knUqSHUk6lFoSEoSeJIVEQKoAs9i7UubR9q7tH3mLlkhJt1ksMu78xDWnmqNfNX0VX1D8FHwlWW2hbYu4220xHMIuYbNzHeb8/ZlXh+f0L5UsbZx8ostk593xF6FqthbBJpxt5x9wXGykOBlIY8Lhb3UmpuPDKWFzPsbyMd5t+Xe/wB0MrD9C4XiVN8GXSpm/YbYedLyltsRInCzOFpyx6hHzR81Pj61uq248ReGSuHVauCMnTODplEDMf8A0RIhDLKNs8PdEf2K38mH6R32iTqWv6R3+8JU8xCxooq2/wCgf91NG3zeof7wkI+6K0YtOcRe0SdVgub2iR8Qw7aM+IR+RvfC8ukI7z
N//eCSv6NFze0uxIdSnvsehFGDLP0N34iFHrZsasN73SJW3WmuDLec8JEp73+2PQisrbMOahf9lMG3tB3XS7wyVpVnme/vCTwAvpHPaJPu/UehFeFqzu4490S/YkNmz+n5pNlm/wAFaUD9M57RLuF+mL2lHcfselEFvZzepuQ82US95KtiIlmflylEvu0U4WP0xe0nUa4XS9r/AGUvI/Y9KIlXLTTEnO624Q+7RRXnbIfl3mS4cMh9mVFbiwX0zntf7JVtiL5Rz2R/Ympr2Gkpmrm0/wC9KXMI/sRRdY/7n/2SVn5EO8Rf3bf7E07Nvhl3hb/YnriLSyv62Jevb/uSRBFn6Yf7kkZyxZHNpj3R/wBFyti258sWbeFwUakFMFVpr6Zv+7IU8G2/pmy/qyT29njGOO5/eIgWA6sdz+8S1L2FAex+kb/u/wDZNnabxMeIf9lLpZ8NyQ96JS9pPpbOfTj3sNtTf1AhVK0+ktPz+pcjacVp7SmlZPfTM+JkU4bJ76Zkv6ltDa9johjRjdK2H+sTxBj6VnwvKb5I4O6x/dikNqXCwX9X/ujV9R0R6NsfSN/3g/iT6W7e6Tf942pPko/Rs+JtdrZj9Cz/AHahyHQAWW/ph7uIKfS3/SD/AHgpVs//ALZj2R/Yu0tR/wC2Z9lKxjqMcTg/3gp1GC+k/wDcH9qTbH6Bkfz9idVv9A2lYUPFouKXiFdwnObwkKFSg6fJm/eTyo2OpkR7qkY+rZR3lzC/pEwzZHU2I+0kNzbcLftIsB9ALic8Uk2rbnE57JJ4uWxbgf3n+6dRpndbL+8/3RYUNoDnF95GAXOJNwW+Ev7z/dOo0P6T2kAmP7Ti95PoZcSYID9G4iUoPCfspFDMYkqvJ9AH9IKfRseIkEsGLyILgknUHmJcqHMSKAcu0TMPm90UsIuL3U6CwlFxMo0XEPspYJcSBD13wptALiTqUqlQHaxTepOqK5UU2A3qSyrsaptaHypUM4QihFQeL3kStD4RTK1PgFA0DqPMmk2iFicIoNTc4UDGk0m4Yp0yXK1LhFFgMqwKEdqPKpVJcLa4UuFv2kWxUQCtB4vdQztB4hU+pcraGdf0YqlJi0kAtn/0ftJh7L7qnFX9GmFX9CnrYtKK0tld1Ac2RzK3MR3mPvJlQb+j94lSySJcEUhbH5h9lBPYhcQ+JXtYjpGPiJDqY83hJaLNMWiJQlsBziZQ3Ojzn6P3VopS4kNwSLSReyJJrPMTxxMw50fc4W/dQXOjjn0I+6tITVzxe6K5gP8AN7KtdTNeSezEyR9Gy+hFAPoyX0ArYmD/AAkuQc5lfxcyexExJdGC+hFBPosX0C3FKlwkm1oQql1kxfDxMA70Yc/7YvCornRz9A57JL0ar/eJM8oIt6PeWi66fon4aJ5u50e/QOeySjnsEfo3PZL9i9Qq5zD3lHePhfbH8/YqXXS9EvpI+zzGuxR4S9kkKux+GS9RxR3nmi/P2IRvf0Ps/wCyr49+ifg4+zzBzZMd0vZJRz2YP/JeqVLiFj3v2JhNs/RsF4v9la6/2iX0f1PKD2QJIJbOb4veXqj1i2XyDPhcEVGcsGiy4Lf98Ktdcn4JfR/U8tPZzfEhls5viXp52Df0LP8AeNoJ7OH6Fv8AvGVfxiZm+la8nmZ7MHiiglsweIl6XWx/QjL+kYQnNlt7zH/uM/tV/FL6/cXw79nmddmj9ImHs0eL8/2r03+T2/8AtvebTfImS1Wn/tiX+qXxS9fyHw79nl57MHi/P9qHXZY8Xur1J3Y1oXyHuiP+qgu7DtPo3PZJNdVF+BPp5LyioGt2O8yXeKP+iczW5L1hMeEpfeopYiP5JGoIrHX+hpoIuG59MPuyRm2y3il/dotKD3kQWx4UrKoFRv8ASEPdFlEoH6R7wts/sTwAeH7qIIDw/dTENbb/AEjn922nYf6R4ebDH8NE6I8IpwfnMgpIjugX/cueyP7F0LaX/q3PELf7iKXeFLr4ktwoa3Z//ekP92P+ilN2Mv8A1rxd0mf2JrVZcKkUqJbo+yp3/wBoKGhs4t27d/vG/wANEYLBz6Z3++Ev9EOoN7wj7KcNszq0+0i2NIk+TPfSOD4hl/kmHZvfTP8Atf7JtGG0fCbHe95TqY6QHyB/6d/+8/2TqbKlqfuS/rIqRSoj8oX94iS/SF7SWuQaYkX+Sx+ke8ThLtdl/pHP7wlJB0vpCSx0ap+xuMSPXZAlqJwv64lz+Qh/Lrn7VKF9zh/4ordzyj4k9c15EowIH8gt8Jf37n7U8NiM/RlL+mcL/VWPlHDH2V1t8i3f4ku7MeiBVHsBsvpPC4QptOjbP6T++JXVHS4RXaO8oo700Hbh6RVh0ZtN5n3iRadG7T6H3iVgTpcMlwbj9G4l3cj3sO3D0iE3sa0b+TIe6RKTSzY+kfH+ucRmnx4SHvIpOcpKXkm+SlCK4INNmWxFLEflxYxIh7Nb3bm5H+uJSKuDw+6lR5sd73UtUh6Ygf5NH/uH/wC9cSpslv6e5/vyR6GP0nuroufpB9lLXINK9Ef+R2f+4uf78l2myGfp7n+/cUkaD9Iu/wBYjVL2GmPoE3s4R+XeKP6QkY7OXyrn94uiH6RKjRfSJan7/gdR9AKbKb+ke/vnEQNniOlx7+8JPoLn0g+0uVo5xEi37Cl6ENgMvWPeJwkWlkPE9/eEmCLg8SeOJzJNv2UqF/Jw8b394ScOzxHec/vCXMw7y6Jkk9QEDpX0ZttqWFzs5+Qtvt5T1Ey8OZl4eYa/4da8De2S/s5vyK5bw37Zxxt0R0kWIRC43xAVI1pX5l9Hi6XEvPvht2bi2jN6PrGCFl4hHNhuF2eb6q+bxLLJdWbYZKMv1PDrwxzKoeVzcjJVLrfvfn/Jcykd5ErX+FTNmbQeYzMuOCLgxcjvDISiXL1iKh1r+fw/2JzeaSJJSVMalTs9TvNmM9Ldlix2f8p2maycIhEnt5y0IuE4+br9FepfPt8xFwhi4MSISF4YvNkJRJtwd0qV8y9G6N7Uesn2XhkIy3csuJekfCL8G3842S29snDK7cbHy2zGI+UuCPr2I+fFrT0j8dRXLgvDk0P8suP19FdTFZYa1yuf+z5oKke8n1/PeVjfbNeacJlxkhcEsOBNkLgufRk36RKit9mdGBHNc5i+jHT4i3l3yelnn48bnwUGz7J58otNy4j0tj4v2LR7O6PMt5nixCy5dLfs+kv1q3gIxHdHdEcoj3Vxz1ZEWbijw739izeQ6o4UuQ9Ii2OGIi3HSOUfZHeoui7mHmSu7knHXHijJ5wniw222m5OFKTbbdKC2HKNKINCHKlZqi52TtN5hwXmSJtwSkJCW8K+hPg96WsbYGJOON3IjmDGIZRyyEetfM7RK56N7VetH23mXIxL3f8AP+xZztfNHlfyU0prTI+sP5P/AEj/APeEn+Q8zn94X7VS9Cukg7TtBdjF0R7Qe7lkPEKvKvt/8V0YsutXE8/Lj0SpjK7PbLVjf3zn7Vz+TG+J4f61z9qLiCu0eHiWlyM6QL+S29Mny/r3P2pwbLb+kf8A75xE8
oHmXaP95TcgpDKbNb1Yjo/1xJ38nN/SPf3xCi4idVxK5DoFTZ4/TO/3zifSyH6d7++L8Sdi/mKdIuX2UrkIbSzH6Z7+8XfIf0r3tCu0r3UqVLiH2UWyqEFmX0zhez+xEC1LiIu8KVHC4hTxdc1JWyhwW35lFPwSHSRD/WSQ/KC4V2jvL7yncAkC4i9pcrRz6T3RXMblT6OD/wAkgBUac+lL2RFI2nd1+P8AViSNRweKKcJjxItiojnbvF8uP9y3+Ki5Sye1Y8i7rY+71dSl0PmXZI1MKREbsXBl2gl3hb/Yol3si4cL15N8rbYircSHlXUKbQOKKYNjP/8Adv8AstpBsZ0f/V3PuxV3ROpUlTzS/wBQlBFIWyn5DG7eH8SkUsSHMLzsv0hSH2epWkiXaV5VLySZWhFKNpdy/wDMtkPCTZfhqpDds/8ASM+04P3tKs5rkx/IpObHpK8mXuIv6shL73UjNg9EpE9937vWpFCH8iniYjy91JyCiNRwd5xzxF+9RPoXC4XtD+FSKOc3tLhRLUI+ypKGULvSXRrHeIh5lzDZ+j9kiTgab4nB72YUyRwAMtRd0ox/yRKBzfd/Ymha84klW2JIo6VvLdHxCJKM9s4i3h/u2/wo8Yrk+ZCYqIP8kvfTe6lXZb30v3lZUIuJOo7zKtbFpITFg4OohLvEQkiu2ZFpeIfESmUcXaGPEpsdFf5E/wDSl7RJ429yPyynUpzJIsVELye5+mShc8Snda7SqExNFfh3PESQm/zKx605MCsq8/zeynUef/IqyouqgK8Llzl9kkWjxcQ+ySlpKaCyLiF9I2kNS4m0cwEvSmVtWvox9lFBYI8bdw0IyfHdElJC3bHcEV02RL/ZKgINX3/oUzyp/wCiJTfJR+dz2k02PmdMUDK/y64+hL3k1zaLn0Be9+xWPk5fSkm1ad+k91Owori2oUfUOfn9SaO2h3mHFYE2/wDSN+IU0qP/AKEkWgohV2wz9C97KY7tljheH+rU0vKfomy8Sjm7c/8AaNl4kqQUyPTadsXyhDykyX7EE9qWn0xf3ZKYDr29aCk4RD/6b2Ypqg3K8b20LS+MuKJCpWFlljxHiEkyly2Wqyc/uRQHru0H1lsQ/wBSqoQelm45mG5cLulJBc2Y5KXlLyGzf2G6WD3RJtP/AJXsh/8AUy5ZSRuGwSlu4Py8u8IknYTv0w/3aiDtTZ0pYwiXiRDuLJ75Rv8AvEU/QWgpg+Q5Xmx/qxJBC1vf+5Eu8yKiP2mzv+5Fv+vIf9VGa2cx8nfkXduf3qqtP+0S2XtG7mPycuJMw7v9D7SrB2S5uv3JcwvCuVsXh/8AWvj3sMkaV/qHqLJyj4/Jtl4lGdeufoGy/rFDctH92/cHwjFdw7vduWS7zaelfQAtHrv/ALYfC4KHV58tVsXtChuN3+6+x/d/7oVB2oPy1s4P9GQpqP6Bf6hnCIdVtLuxQTeb3rZz2RSdd2pujbF4iFCrdbUHVbWxd17/AGT0fp9yb/ULR5ovkC9n/dNrVj/tiQvLNo71k2XdeH9iYW0L0dWzvZdbT0sLDVrbfQfdXavD9A5HlEVCcv3/AP8Adr3hJlD/AJRd3tnPjzRbL/VGl/60GpE8nB+heHwqO462XyL/ALKCV8RabR/2Y/6qNd7Tw/8A0F2XdEv2pqEmJyRKqLH0T/ibJCcJjTgvf3JfsVd/Lrfyltft/wBW9H3ap38v2n6fxC9+xV2pEqcQxtWG8wX925+xArabOLTbPf3biG/te21Y9y33c34EE9qskPZ37vdJoSL7iun9ROUQ7tjszetnPZcQHLTZf0L/ALLyjntkR/8AW+1bJjm2B/7+08TBCXu1VKMvbIco/QI6xskspDcj3RfH/RMo3sscvbf1jb5Lje2v/v7IuH1g/wCqTm1Xvp7T+/c+6nU/b+4fL6Ok3swdLzA95twi96qE7Ww3X2PCy4X407+Urn6S2L/+L/eouDtC7+jZL+ju2S+91IUZf6wtejN0dT6Olw/d+96E1lv8/wDJGC3H8iK7DlGTIuLuyH3Y6k8XP4uL2UTAGOrwxHL3VzB/SFLwoTQ6YqGXCXdiPuruIX8JJ1Wt7N7q6LYjxJ7BudEuWPeyow15UwGx73iRRa5fz4kxoE7TMOWXhIVw5Du5VIFviH7qe0PdSsKAgHLJEkWkRL8KkCH5HT7S6MtJEMvz+pKwoFIt0S7pIgyLdzJxiXKniX55kWOhUty5u7JIGyHUiARcSILnMpbY6B4cuEfCuUDvItXy4ZLtHSzFEoothRzmj3UxwtPZuF3d1Go+X0ZImJykiwohCfFid1G65DmFxEqyP0ZeJdwG46S9opKtSFTBtvjzZeXSnN3jZZZaeFEFjeH3iRTa4ox97wpWgpjPKR070uFdpfDKOGUvvLvWI8JSylLNFAqy2WotO6KSUWNtkzywdQ7u7LMlW+LdHLxKEdR3Ryj4kZnTljHh3kaEJSZKB+X8KcNyO6otK8I7vKukMhGIl4Y5ktCHqJY3Q80k+r4y0xUNt0tOaXLlijjXdKRS4lDiUmG8o91dJ4e8gdfKMt5EqUu6lQ7Ci6MZZV2lB3S/iQwqXhiu1p7vClQx2AJfnMngwIjql3l1sijmTjr+SUtsBmGOXiLeRajFcoHN7ScIlHdy6cyRR2gjxFLvJ9Wx4i9pDoiDSSTKOYY6pZu8ukyPEXtJ9WhXaMClYUCIR4iQ720buWHmHczLzZNuDvRLe71Mtad1TK24pAyKG09gqj5d6T7Ie2ddv2j45mS7Mt15ki7N5suGv+Y1VI81lHmj7y99+H3ZbDmyRvSJtl+0ebFgiIe2F4ouMCJaurKVKUXgxUy93D+6vPn8ktj08EtcSA5bSkooNkIjJXHUoNwES8Xu7qqDstxBMU5o/dX0D/4d9oYpFbEWZxsh8XyZCvBLekuFeofANtnyTazQuC3gELhOEQ5WRbGWJLdGkfeWHVR2tco1xvle0eo9OOgNpthwbkmxtr1uTflLIj2wkMYvj5sTveleMdMuht3sxyL7MWyysujmZe7rm6XLXzr6O2XdeUsMXOnHbF8R4Re7Qf8AAhUh9lt5txl9tt9lwYuNuDJsh5hJdunXFN+jzseR420uD4/dt9WVRBrp/OnL4V9BdMfgfYucR7Zbvkz30DxETBcrbnpb+bz9a8V6R7Fvdlv4F/bFaPFmGXq3hH5RlwfM4P2LlknE7o5Iz4KVgdI972c0Uq0ju6vw5kag5padJR8Sb1fnvJKRWkTQfhRQKMYpwAlWi0iwqj0n4KukpNv5nMw6RL5QS1FyiNJddV7wxeMuCLgiUXBEhzCS+Q7O4cacFxsok2Upfh7tfmX0z8Gu1mtp2WI32cSLEBsvUuFq1bnxrBf8OS/6Zc/9kZo92H1RpMZksxSXZNac0l2tq4Mol3hIsycFq5lLEH87q9PY8rg42bZZhczbulOobObtCzJhULLHN4USBbrfijm/alSHR2Qlpe08w+8niY5ixBEvD+1R3GZamSLmiSbgjlyx8IpUhkwZasQSHm/Cu94h/CodWxEd4c29EvyKc24M
fXR8P+yTiCZMoBcQ8KILTnF7JCoLdRIZFFzmFsil/qlRsdQjHLl7MhSoCwwHOL3hXaMl+SUFugkQlulp1e8pIy3SSoAuEW8PvJ1AUUKlKQkUuKUfZRhMtMi1cv4lLRQ7DlmEe6nUEuEk2py3fdFdxC/Sd1FAd6iLV7ydlHe91LGLmSEnPzHMkOxUHhIiTqtp0neEfZXet3hH2VO4WM6k8Jby7XE3hHukl2g/Jjm5UWMcFB3vvJ495DxC04I6uVdF4i3W/dSAkDTmTxoo+Py+yQrlHh4Sl4fdSoaZMoa7MVHbc5S+8KdS4b3peyk0Ow1KiudmmUfb5vZThfZ4hSoY/q9lNIV3Ga+kH2k4XBLSQl4hQAzDShze0izHiFcpGWqSABiMSRRfzaUWjRJgs8OVABMXikPhTqCJbortJLqBADY4UOrRcKlVXKV5kDAUJdoSJQpJlHOIYpWB0STutdGqdRMBtE6lE7rXRqglsauyXepdigRylSXetdilWie4jlCSkmVPmSnzf4JWOglKrlapoku1onYCql1JU7q5UuVDAbVc60iPlL2U3rFIaOptaJVhzJuXiJTuUh1KJdS5Ed1xKPOKAGx5iSpT9IS7VvmH2kyrPd9pSATqrxple8hEwSbVpzmQAekuJNKhcQqMVHOZcrieFVuAQmOUfZFBOzb+hZ9kU6mIuVcc3ooTYAHtnslqZb9lRXNg2xfIN+yptX3E3GKWlVql7FSK0ujtlmHAb9lQ3uiNgXyES4hV4VwXCkdyWnDKSpZJLyLSjN/zOth9W4+33XiH7qiO9CBL/wBTc+Jwi97rWwrdcpZkzHEsqpZpLyJ44vwZKw6K3doUmL0uYXBxBLwlVSrm32sQkMrKXFhxL/NaGj27FCq4P0aO9Ju2JQiuDJOW+2m80bZz2hUq2vrkR7ewclxMOC57pedaSpt/kk3rlul7QqnlvmKBQ+pnC2sP0F6Peal91RbrbxN5htrlzvNEK1FY8Je0u1OOknElOP8Ab/IOL9nnV102cH5D+8kP3kWy6dMFldbJvukJCtw9QS1NtucUmxJQndmWDmq0YL+rEVv3cbW8TJ45+JfwZd7ptaDuuF3RUZzp6x9A8XsrVO9HdllqsGfCMVHr0X2L/wBo2PeJz9qqOTAv6X9xOGT2vsZWvT9v/tnI/wBIKX8/m962cj3hWke6E7Jc0sx/o3C/aq24+DvZxaXnx8X71FosnT+UyHjy+0R2unNkWonG+8P7qN/O+w/7kf7tz9iiufBmxu3r3siSiO/Bo2P/AKt7+5FN9j2xVmXpll/PDZxZfKY95tz9iX86dnf92yPhIfe6lRO/BuXyd2Rf1f8AuoT3wevj/wCp/wDbL8KpRwP+pife9I1YbfsHCy3NsXeIfxKY3UXMwi2XM3Eh91edvdBLn6dku9If9FGc6HX4+rcZ8LxN/sT7GN8TEsmRcxPTCZ/R+6P7EM2fzFuP+S82LZG3m/VvPx5bkvxVQhPpGzmldl3iFwfeS+F9TQd+S5iz0ZyzbL5MS7zTJf6KI9s9jSTDf/8ALN/hBYmnSPbzfrLbEH+gLN7NUnent+3lctGxL+uFL4XIuGvuP4iPlP7EtphzThuR5SbL7tepFqBcLkf6P/dUTWwi3X7kfFIU1qweb03b/wB33l26E/Jy6n6NEy3xYg95ssqILXCUvzzLOFbbRLTe/vf5Jg2m0xL/AM3737yawp/1IO4/TNNUMo5dXdL/AFROoRVAxcbTbGPYuf0wiKL5XtSPqWB7pZkPE/aH3Poy9oGqTnhKP+yIw0XyYl4VRW20b/eabejq7RKm0r3Lh2jI90pf6JPDLxX3GsiXg0Z1ItTfuxTm6jxF+f8AFZ8NuXbfrGNPMUR8IozXSYvo5D3SkP8Aqp7Ex96JdVqMiSKg8vvKq/nM3vR7pCX7F2nS9gd0S/q/9kdnJ6F3oeyyrEfzmTmqiojfSvZxbsS7pfsRK9I7DiH3hR25/wBrH3Ie0TiEe8lQhHVlURnpDs7iHxEpQbR2c9vD4XB/ap0TXKY1KL8oJR9vm9nSmm9mykUUWrNgWlzNw4w/hqntMsaRIvF2nvEp2Xh/Yu/0IomX0heL/wCERsv0n4VKG1ZjlLxZpI7ezx1YhF7X3UakGlkInCEuIebSnEen3iFPHZpCUhelm3pI7lqW6Wb3veS1x4HpYAsTUIxb9pOF0t2JfhTwtXpFm94Yp/k7kYlLvcPsotBpYNsOGIlzZk6lRHNEY+93lzCcLUWXh4V0Lfm8OXKi0FMj1oRSjlElwDKQyHNy8KsG2UnAbHMXdkn3ELQCpVkt4pJwUHSJFyowAMtMpbxCn0HdjxLPVZdEeoFLNL2k+lRlux8WVFaaKOohXDpHSQ8yLChtJFpIe9veyi0Z7pJlZcunSuiZd5JgEBsuJdo05xDFOCRJ8XN1TuUNjxakqU5hinuOx1e6KHjCRR/CluA7q5pJE7Hdy91IiLT+FITkOZvdyjvIoBwPigFdCX/KKH5S5LMy4Ijwt/n/AAT2HRLNLLzNkrUSbCyHTIdMsuYk43RGMRc94hTRHi1EMhIW9I8Mk4zHm8I5kqRVhBu2+X2iXaXYj6wojGREWkRHMRf2fEmsiOrNLmigbWtxK2uRHLiWj4lIuJkt4t1TKkij5m+Enb7+3r1y7cIhtmyJvZzG6w1pFyP0p6ql6c3V6KKoaccj2g9oOot0o/hUp+giI+EhjzR0pUzSEh4vdXj5J7nsYcaitiQIDmLuxUG+aipzRyGMt3N3d1Q7nN4VWPJTNXEhNgMlsfg46OubWvW7SRCwQyuzH5O2EhxM3EWmne6/iWOYGJZt3Mvo34EOi5WVh5Xctk3c7RFtwQLKTdoOZmQ7pHUp/VQqLohU5GGWWiNm8qTeXdERFtsdIiIiIi2P2U8yONB+ky6dSHUMNvEcLDZHMROFht/1jheaP2rI7e+Eno1YSEtpi+99Fs7/AKkpcMm8g/rJdEpRR5iUmbYOHEUPa+wtnbRbcZv2GbluOYnhzMiOaQufJd6n+K8Q278Ob7kh2Xsxtn9PeuYjnewWeof7arzvpH0x2xtZsmb/AGi+4wX/AKYYs2xczjLVKUcGnzF10XPPKvB0Qwy8bFl0yttk2l6TGy9ojfslLEIRKLDkvUi/6HxpxU+JVLjYqkdt4jly8MRj7PCpFvduMj2gkQ8Y5i8X+y42ejB0qZbNNe8i4CbaXbbg5SkjtucSIzKe5Gca3VuvgT6Rjs3ajbb5EVpdkLDo/pnIi2X9pef6ljycHMW6P5/zQWijGWUpS8Wr/Bay+eDRlwz7Ecw3DdEYuEw4LT3FmESbIvtBMraiWoRXnfwNbUcvX7+9l69tkXhIt5kuzL9dF6Ni6eIt1X02pQp+Di6iKjLY6LBd1dwnOIl3yjvc3Kljcq33OccLbvESbVkd4ZFzJwu8vsklV0eEkblCw294RXSFsc0R7y7Q5bvvLuXhRbChsGy0lH3U3A/SESKTjY6sycJtpWAOjPe8UkqWn6SPeEkSQ8y7Wu9H2syLAZVlzibId2WpdpbkXyke7mTqEXD7qbMo83D
Eo+0kA07V7dJvxSl/l1ImA9xD/eLuG5+ZJ1Bc3vvEi2ALDuOJv2pJSc3sOKkUbLdj4h/dSpbS1afESLHRGbec3o8pJ1XyHvI3ko6s3iSO3FFoKBgRFu+8iABEXF91MGzH9GjYT0dQkPDJJ0MaWIJDlkPKQ/dJEZHLmHwxRG5RzCK5SnKKQDIcvtCi0AeX3UGrrkSkJf1e6kLhZcpFq8X8SQEiBcUk2gyUeh8xZtOXT3lyjkd78Je8gLCm1zJzZDwxQRcHh9kkwjZ4o94oooLJvZ8OVNMW47pd7UgA6I8X4U0jb3spd1FASRBso9mKklYN8P3v2oNuziZhKJcQqbFwd4S8KmywbNvHSRe0pNKoIkW8Ip1SSAN1pdaDNdqRcSVCoect1RaG6PD7KHivCnYryYx9XXuESXKuubzYkm0cc5fEiSc3REvEgDomX0Xsp1Hi+hc91MF1z6P3k8bkvoyQJjvKR3hIfCuUuW+Zd8p5S8SeL4l/xQIZS6bTqXbfF1J2IK7SvdQA2ly39IKfR8eIUq0HhH2VyrQ8I+yKLYqR3E5hXaEmVYb4RXK2rfCnbGFmuTFD8lHm9pc8m5iRuILUuZMqSbW35iTMA+IVO49ghEmzJKjJcQphNucQpbjsfLwptC8SEdC1ZUOlSjpQMPWgknVZHdUEXi+jJdpdEPEKKAkEzzJRLiUVy95fZ1JFecqVASKy4kidc4lGre8pJlb0ZcKqgDuvFw+ykDxcJCo7d8PEMuZd8rlISy8yKYrJGMPMkTjZKI4Q5cycVW0UFhyOWXL4hQ4yzF7qAACW8KY61EpSj4vwp0FhyAd4pDzJkWy3h9pBGsR3vEmbv4kUFkmjQkP8SETO6Il4SUYy7yFJweL8SelhZLJndi5+f1rh28t1zKo9HSHVLvEuzc3SIpb2ZG6FaHPMR3S9lCfEssvupxXjmmSTb7nMjcNgUS/OVDcqUYy95G8tIt0YpeV8oy4cqdsKACDmUR+8Ka8RNxk42JacydS7H6PlTHLxgdQj7Ke4A6lIo6uZCfuI5Y6feR/KGIj/ABJhEwRS1eJP9hEdsy3oy5hXTuo/SR5Ryo2Hwi5hkOqSA222O8XdJOkAPyxwswkMeZtdC+c1EIx5SIcqdXDEpSj7KYFuMpY2rizCXhRSFfoIF+Jbpe6QyXaXDPFEvEKr3rHD3RiW6IxFAdtBjpcKXCRftRpQtRcELZc3skoz1q3vCPsqpc2eRZWy08rgkP8AWDVcZauRLK7GO64REMeLvJqH1C/oTi2eyX8JfvKO7svhL7pIVH70crwskO6W9+1EG8LSTZeEi/EnuvIWRntnFy/dUU7Jzhl3SFWNL4eJxuPEJF91dpc4m80XtCXikmpyQmkzz5u4cLU40UcvqyHMpQVIhH1BeIh+8ofXmjEeWJSEu7wozdOX94V6rijgUiUROxysNl3XhkhA8WkrR4e6TZfeQhAeXvafaR6S05UbA7HA4zpJt0f6RofvDVSOu2jmIR3c0o+z50CjkeGP5+pPIxEhKWXmSr9RpjaHYCUfKWJf0n7qcT2zh1XbAl/SKQ07HTh80hEpDypjjLBZiZbc5iH95O/dikvVA2n7ItN2x4XRUxuxEvVuMlLhJsv9VEPZ1lvWzPhbH9iY7sSwL5AR7sh+6nqXhsSX0ROc2BLUyJF3RzJpdHm95sf7uKihsC0HdeEt2Lzg/wCqk2tsLJZbm5HvOEX3k9cvDY9MfKQOvRdjhLNwl/uhfzRt+Jz3f2K5tbxzeucT+kb/AHUdx0oyEmeXUp7+VPkfZx80jOH0Qb3Ze1/sgudDuEhj7K0zbtzwsFw7v+qbW4u/oGC/riVrqMnsh9Pj9GTLosQ6hcl4V3+bbm7iD3iJayl3d/8AYZR1EL3+yKztAvlLR4eYe0VPqMn0+6F8LD6mO/m9diWoh5hc/D1qTXZ202/V3Lhf1kVq3NpWmom38vE2h02xYcTw95lz8NEviZPmI/h4+JfyZtu420OXEc9kS95SB2ptZv1jIud5v91XzV9s5zTciPeEh+9RHbcsi03LJf1wj95KWdP80P4BYWuJMpB6QXcR/wDp0S5ZD95OHpHcj6yyc9rV3letWrJaXWy7rgl/qj02cJZZS7sSWbyY/Mf8ldufiRRfzuEh7S2eHxCXs/GpTPSeyIcxEO9mHe5pKzLZDJam/aEU09is/wDbtl4RUueF+GitOReUQWukVh9O2XLGOZSabWtBblis+Ek1/YLJf+mbHuqE70VZLdj3ULsvy0F5VxTLQdq2hZhfZ4dQpzt3bOaXWv7wVR/zQb5i91db6JsaSb/9xV28XiT+wtWXyl9zQMNNlpzR4XBKX+K7W2Lh9oh9pUYdGGd0XB7rhCo72wrtsiwXij/TF+JSscH/AFDeSa/pNPhtjqj933k5oWdIkP55li3Ng3panC8TxLlNgXo/LF4XCV/Dwf8AWiO/P+03gg2O8guWzeoXCb7pfeXn5W163qcfl3iIV0jv+J72SVfCepoT6p/2s9CCz4XPFJPO2/Se8vOa3N6PyzgpzW0L8dLrhd5P4N/3IS6teYs9B8k7354V2lrwy8SxAdIL8RzYZd4U9rpLdjqbb8JEP+qj4PJ7RfxWP0zcDajLSQj7qGTA82XmWPa6U3I/Jy5ZKY10xc+Util3o+6VFD6TKUuqxs04MFuj7yDWOXeLvS9r41SNdMh+gelHLEhKSex00Y1Ey4JcXZlJL4bL6K+Ixey4oUd0R05sSP5JJ5tt1lxshyvNkzlKUhcGJae8q6nTJj6Mv7sUYel1p4eEWy1fvKZYMn9o+9jrk+Y9t2Tls+/ZPettCJhzmwyyl3a0jVBcPLl1ZfaLMtH8KV82/tzadyxHDcebbHvNsttuf4iSy9KkO9pL7q8TLGnR7mGdxTEBRkO7+H/aSimO8JS4eJNoRSjzF4hipIgPi93wqYwKlloEI7xb33Vc7E6b7YsGfJrLaNyy2WkCi5H+jJ6latj9laKtdblGKCYRLl5tK0UtPkz0OXJ3a9zc35Yl/d3N7ml/1L7joiX6NtytaD+qii4TYjl7u7H9n9imVzasv3fzT50InpFERkUYxbEnC8Ij11kpbbLWNRHNhljuxkXEUdP1iP2oVK7xFHvD+FW1nsHabw9nszabkuGyuY+8FKJx9GNrD/8A4fan/wDIXJD7oVkSemfoetIrxeEtRe9xJwjy5eLUP7EdzZN+MhLZ20W+L/oLsf8A9mhN7PudLdlflHdGwuyIi5hFtKWN2LuIYw3HtB1Fl7ysbcuLxCpWzuiW2rkhFnY+1C3pOWjzA+0/QRVqXwc9I28v8k3Jf0bls57RC51CoWNldxL/APTP1OPDKSiuEREIiJOOOFhttiJE44RaRbbHzkVfR1Lb7N+CfpC+5Fy2bsm5Znbl9oojxCywZE4X1eZevdBOgllsftGxx7shiV1ciMhLewB+SGv1ef611YcTswy5kl7OfBL0Re2Ps4huxLyu5IXXwbIeyGOViQ16iOlC8/xda2mL+j3fF7qjsA4PFyxLKjC7xOZvurpqjhlNydiq8O9ll+c37apOOx
KJEMfz7S5UZahHm4Y970rjQjvF+Ih/hTpEhApvCUu7mRahIRkRCXd08qCyZasQfZ3e6uuesyjIe8Jf6qaGPqO7iD70iRGG8uYiKWqOaJd70oFByyISGO9EvySXVlIs2Ypac0fDpSaAmQjmxPe9nKujXiKXsoVrUd0i8USlLdzJzVdPFyx3e6paKH4n4UgcHNly+yuFTxd3NFcoBR0lxJUAZt0eGMcskTGUUa5v3v4l0KS/hH8SKAKJkXEnYpJjY+0i0blql7UlLQHKOd1OEi4kxtseYl3qb/P7qKKH0q5+9u5k6fF/EhFUuIf4kvKO7zbqQB6GOn3V0ad72lHK54sw8uZdo5q/EigJIknUcHmkgUMS1DHxRTwMdI7qTQBaS/JJUoXMhzIe7xZhRJ6UtxnaU7wrpCX0nuyXCcTKulux/PdTEFoBcXuihPN8RCQll0ppVJR6ulpihASWaDpkMfEuYLhFux93up9q1LNp5VN6hSbY6E1p0ilSveTU1SUG61zrXFyiAH9aabg8v3VzrQ3BkgADzpFujHlJFF9IWRSK3HiTA4TnKuiW9mXPJS4kwmY6iSAkUc5l1tzij4SUbBLiEl0Wi7yAJeKuVPlUTrHiTgl+SSYEuXKmiYoZCRbwprAFvae8mBLCop9e8gmJDKJat2KaAlLd/ElQEjr5hTqGh0j+RXCJtIQWjq7VxMpFCrLdRbCg+Ku1cUWpFvJVIkWwokTSmovXzLnXzICiTWopUpzKHjd5dF5SMlVoK5AUMDLhl3U6jvEMUAOq2PCKGTQ8IovWuVTsAFbdvhFNwG+EVI60Nw92KLY6RGcsmfoxUU9nM7oqcbveQ8QeZVbFSIBWDfMueQDxEPiUyseLMuR5k7YqRCHZkSljOEK4Vi5uuKZUeZNq3zJ6mFFbS2uRy5SEd4Uxyr4y7MvvKxwS3S95Do27xeJNSFRVYz0tLhJO3bhahcGOpWTgOcKG5Rzd3VWoVEShvahc7oko7l8+JbvhIVOcPvD91RToWqIj3k00Khlb1zVEZd5cC+eKWkfEux1SGUkGocqewwo7QIsv5FNHaTZZcORDlzDmXHZFuimt2+9l8SKRI5y4EvkYkmPXbEczcfdSJnVlUZ63He+6mkgsO3eWRZY6d5PpW0LTLwqEdsMYjpXHGijERl4tXeT0r2G5LErYijiOd0iIVx61Y3nvCRKuctZasQS5SyrjzbmXtCHvR/FRLSFlhSyZjlel3okmPWMhjjCPhHKolWnI+sIi7oqK6VzxN/3P4t77U9P1FZNCyjlxJD3UJu0cbL1wl7WX2VBfZKMpSc9kVGEX5ZmZD3ol7qtQfslyouH27mUhcGP3e8SjON3OrGFzi1fhoq55936NxjmEsQveqnFctt71y5Id15sfdT0P6BqJ4Hc8TY82qXiKmVOm/wAI94iVM5dlLtBdEeZwfu9XUn0q2RRxyHvR/wDhDxsSmWVHHPF4i930qO4858tbZtWnL71FCviJoZNOSLlj+E0I767FtssaUtQ+sj3h661QsbDWjOuDLVEuER9WXEUk1gBGWG3GOqLgiI+zqUIHCJsZNiRCWnEcEpd3qR3yKJFpLh/5eaK9PSzhsntV7QSkMd7TIv8ARKXDxfKD+7uqHY1F5oo9oQ6ibEXI+H0/2UT8ZsdW7pHDeH2o0pUft8yekLJojmKOX2vd4lwa8Xi1SL/BQ6vSGUm3IlGLJZmx4SbKv+Kk2DgywiIeURzS73ny/wBqWmhp2SBo25GOHLdHKJeyXUujLTEcpcskKhy9aRCI6oi5If6Qi1D9i63fi9lZJsil8oPDzf61SHZIaMcuVFFzdIY8yDFzKJNjLNLV+KtVICIiURLLqlvFyjw/WiijgOSkWUiHLvCX8SK3ib0UAHolEpZtXD4S9BIrUSlGJR8Mu8k0C3H0rm0jIeHh5U8KZUKpFGWUSlvZh/zRm6xGJdo5xRIcvd9AqWCHDq3cuVczSll70lzrH92Uk4SGQ5Yjy5i9lCHYZsiHN91GG4cH5QhEh95Rw7uoZeFPCu7wpNDTJHW5qkMt7Kn1cc4Wy8KBVzmJPo5LeLiUjD1LibbLmFCdYZLUwJeEf2IguS3v4uZdpIuKI+8ldDpEemzrT/tm/ZFdps60+hj3ZD/qpR0JKtfaS1P2FIi/yUxqxnx7rzn7Uz+SM2W9uR4e1JTRpHNmJKlVSnL2LSiMxs67b9XfuF/SFJFCu0RLK+y5/SN/7ogCOnLmRhoXLwocm+a+wtPoieV7UEvV2he0uHtDaY6rZgi4RlJSxpLwojVO8lqj/aitL9sgNbYu/lNmF4XET+cI/KWF2PFHMpZCWYRczJ9KucWX3krh6S/cKl/d/BBr0jY/7a7Ef6HSnMdIrAtRON/0jBfhUypOSy8Xurkno+rbIvySPk9fyC1+/wCBtNsbO3blvxSH7yMN5ZFpuWP7wVHdAS1Wwl7KE5ZWxf8AphH+rGXuope39x3L6FmDbLmlxlzuk2njZjy+6qO42RZOD6lxso6m8qrv5Asi+XuRLmFXGKflkuT9I1H8nNufJiWrdlp1IJ7EZ+hH2VXXtsw9bWltjEz5NIRMRISIeb/VRq7FzD/17nFqL2fSiKfmTX7Cbv8ApTLk+j1uXyI/d1IJ9G2S1C4WWOaWkdKjW+zLtuWBtFzMMsxZU3B24Om5xPEKa1+Jon5fMSWPRhjUIkKGXRhnhIfEKBV/bgl6sS7wiXe+NTbfaW0xzOWAuR+jypt5fEk/3BdrjS/sRj6Mt7vvEsD8I1wWzHGrRtwcZ5vEeNvMTDBFH7BMpeatfRRemN7fclFzZz493N+RXgvTjaw3O07t/dfccER4RbIW2xL9TaxyZ8sVTN8PTYpzWxm9qCIuOE3lGUh7pbpcRKqabcc05R3iV4+Mh8UlB2dQcUW46i08XN3aU+NeQpKUtz2NFR2GvOstjhsN7o4jrnrHC3hH6NqlfRTl86BJOucMScjpkUe7IorV9A+gF3tYhceebsLTUTrvrHOVlstRV+evmV1LI6ir/Qj5YLf+TK7OtnHXhaZFx95wsrTLZERF4dP216lv9n/BdeuCJXJE3yMtyIe8ReaS9o6IdEdl7LZwbBsZEPavuPCVy9/SObo8tOqlFdhaDp1DyuCX3V04cUY7yVnLl6iT2g6PFrP4PLRnMVs8+XE/IvZb9C0OySfshw7TsB0xZZFv3hp1l+teluWX6PMgnZD9CPiFehj6mEFWhI8/JiyTd62YxjaW1HC9YRf1ZF91ajY7r+GIvkJSl8m4JR/eT2dliLmI3ISLhIh91Tq4kY5i7yjNmjNbJIrFicfzNsyG19nX4uE4289huFlwXnMvh61BDaV+z/625HlxnPu9a3uHliQ+8op7LYLUyMuLeV4+qjVSSM59M27i2ZW06T3Y+sJx7vESlfzuc3WyHi0q0f6OslvEKF/Ndv6RwvCKt5emlyjPt5lwwFv0tbL1g7u8P7qkF0rZy9mJcwrtOjjA/JkgPdHmd3EFR/wPwy/+ZeiSz0sYKUhipgdIrTVjDujEt3/BZ1zo8W7p7pILmwXB3ZD7KrsYX
wxdzMuUagtuWWnGb5uEk4NrWWrEbIY8WZY642M42Pq/ZzIY7KcLSyXsp/C4q/MxfE5E60m6Y2hZFGLzY+KI/wCasTtRiJEOUhylL8XX1Lz5vo44WohH3iV1sm2ubT1LouDug7Im/Z9C58vTwX5ZG+PNN/miX98QtjJwnIj3iUGu22C3no8JMEX+ip3Nv3LcmxwSzczgj7VVVXG0r1zU+53W4iMeHKqx9I3yTPqkuDZMPsOZhJws2bsCIu6JdXWKsm28uUsMS0iXNxSXmQvXI6X3h/rC/ajN7Rvx03L3iKSufQPwyI9avKZ6SNmUvkyHlHTyx6kKtu5HUQ93e9rSsGO39o7zku8I6lIa6U34/Rksn0OX6Gi6zH9TbdRFlyyHdciSZWpDmwy1bv4llg6ZP7zLcfa+8pTPTYcsmPEJLN9JlXgtdVj9mhJ0RKLgl7PFxLhOiRFm0833lUs9MbYhiQuNqW30psvpvaEv2LN4ci8MtZoPyiaDkt7vbye3HezDqHL+JRmdvWhfLt5uKP4lLavbZwYi4xH+rWbjJeH9jRTj7Chh7uXe1e7mTeoS5h5oklSjO6TfFlc1F/aiiIlmFvNl3vvedRTRVjGqjuj3hyp3Vm0x/P2pxjljHD/O6SBRmJFHE4f+KSAPD/jl91NFoi4hHu5v4k1uO9LL+jzS/Ens3GXUJDmzRISHvCkB03BEdWH7pZeVDrWXE4PEOb2hTguiLhLm/dQHnO0ESiPFuppDDt+1vZpN+6nunw5h7yWGRRytx7slIaYH6Me8OVJsVEZsiLUJCnEwO6QkpptiW6SBVqO8lZVBbZ3iR6VkolBLiRG1LGiQkhULmTpcyQx8u8lQlwaLlUAInOVPGg+JBqIoo1QA/rFdpVMouxFADlygpUXVIDagKWGPCn0ouoAGDYilUUWKUB4U7AZh91cwkSrY8K5hCkAzD5UqU5UTC5iSqHMnYhlarlaCnUElyXL7yQwad1p/XypdSABy5VwjHhRYruEKAI/UKbWgqVgimVZ5kAAGgiuBQd4kWrJJpNFxIsDgC3LKRCnFXhJNo0XEuVAuJSA8nYoB3Rbq4ZkO8nVlxCr2ADW6IdQrhXBFp9neUirMtRd1Ddt3Pky9pGwbkalzLdj3kOt1ukKkOW7mreQXWHN4fEKpUAyr6bV7N+JNdaIdTZIRMjwp0iNw1bgkqvkW6JIQs94e8mCyObtIp7BuFpccqdW47yjE0W88PuplWXd3MKelMLJtLjhFcxR3SVdhucKRA59GQ91GkLJpPJxVyqCeUYkLku6gGRfSEPeRpE2WNSHh91MrHlUIalHKWb7y4YkIy3uUk9IWEfp3Y8S5QRLezbvCmSIuJON4h4h/P2IoLBkyQy1IRUHdkXFJEK6LiimG7LezIpiOGEhzS90kLAEtQkIjwotXd3Sg4vMQqwBw+jIo91dBoo6h/En4o8Qp1KluxUgQ6iQ5UKglvDIe7+JSSkOoZfdTYlH5P2YqgI7VXCiJN+0IxXSD9GWXhckiFUt0ZcURXCHLpigCCTbw7w8USc91QLpl0S9SJS3REi/2Vt5TGWUsvdJcrcSy5vEUfdVKVEtWUd1Uhykw4LftZu6PXFR7m6ZIYuNuSjmHUQ/5K+bcL6RzmHV7w0XcUizC4ThcMhKPh81Vfc+hOgz9vcWQlh4JSH6QSbL3qdSC8wy4REJOCW7LT4fiJWt24RbwjxSJzVwigNW8myJuRcpOZfDKmVaKW2xLT4MhS3djmx2+07Mm5aeIvij9q66ZbzhFzC3mL/T/AATXaYgjiRxBzFIcSXiRK0iMRIhHdwyj4cu6vQRxNDG6NybKTAujpIsMSLxddMyn0ekRSJuRapFlLukVVHbHVJwnG4xwyEXB7xZEmHG8MsNwmxEtJM9mPs06h+1NjRIFkW5OahIsvrIkX9tfR8yO0UhkRN90ezLvIVRbciUWe+JREf3ftQTeGQxIZFpPEHDcj3kqsLomVAXClljuxLMPMi8Q5cPeGIy7w831160wGu0J4ZYhN6SGPs+fMiC7IsHh1REpEXdKmZTuPYTLIiRDEm4yESbIcMpfSJg28o4bgyzaSH2S+Ik8at/JuDvRiWWX+Y/Ym27rjbkSKRNiUSLKRS4S6uuKEA9kZSxBISEc0hyuFyx6/wC1SWIi2WHhkOrMOnlQWTykRD/7gl7pU6xRwJzUQuR3h1D7QodspDRrLdjlLLHKXiHSnE84UYxGWUo5hEt1OJmQxbxN7NH7zo0XWmoiMs2HmHLlH+k8ynYVbhKt5SEcYXB1REcsdRCO8ls6RDGWIRZu0ERcER4vi8651aXBkXFHLieH0ki21ZF2eUi5Yuez1avtS8FeQgx3SzadQ+FDae9ZIcvDxd4khpEmxIcvFEpFL8P1phUIiwxxBEcxE3pL/BIGHq4UW4iJS93xdWZFbqXDml+cyjvuxISbcHLqIiwx/wAkZuTjchwSLvDEi4pelDQ0woViWnUjSjHUJSyyUMauC3IWxJyJbyJbUw2/lHI6uXlEt5S4jTJpHm1Zu8n1r+LMolak5Jzs2+ECEhlykmCw4WYh06cEtPLmUqI7JBPDplEvEjnlylqLNyqIdXOIikPLIv8ARNB9zexR/qZe8KekLJdNWWKR1LdIR4kC1fItJadUssfCVE+l1Iok2JDuliCJeyk4jskNOilV8ZCPvLjzRCIxHN8pljFNChacwjxZoqUlyUEAt7UiiX53kDqKWnLHdXCEtQlm4UmrAmdYkOrMWVPBoeaXF+JRWSIhlplyxyrlXoiJao5cu9+8lpKsmVb5hLKuVb3vaUaeWQxjp5pJUIhKJEUd0d6X7qKaFaDUHMj4Q8qivGWYSy6e7H95EHKIlHTl9reS3GtxO2zZDmjl0oB2QylIfaUqlJDEdUfySJTLLT7KSk0NxRW/ycOXNKMox5ki2aX0hD+cqtEiMuL2h3uFV3GToRVUs3iH12YZRjp7veQiC5GMntPNq/8AhXJV3csfZ70UIt3KJFFUsj8pCcPRDevbkWnnCck0LLhFyti2WJm+xfNdzTEGXEUuZe5fC9tXA2S8I9m5cuNsDHebIu27o9Xm8S8XBsY8q5eonudXSxe7K96sRjzCtJ8H1m44xtYrZnEuXGXGGSIh+UbiQiTnmbHtCr1rN3IfneXqXwG7Pb8mfdcGXaRKWmJRHe83oHqXnvHrko+2egsmmMn6RM6G/BnaWAt3N+23tF8mxJsYl5Ixu9mPyxUrGki83zUotrXZjEv/ACw8Ooojyjy9SkONtNjIW8MRzCIjmlxf2ebrR7J5sdRSJ3MMspCRbq9qCWONR2PHySeR/MVz2z9nRk4wQjKPrCEcyfa7MsJFgi8yWmQufhVpRvEKRYhCI5hLS5ypjbbJSw3HG5DEpCQ6fz6UdzblmaivRXu9G3JSbu3ylpkREXu1UZuz2iyRC3dvZc2oo+8rkMZsotuNkPEWX2S3lJdtSejiCJSL5NwlSyyXNV+gniT4KSz2teiQi46JDxFFXNNtl9G3q1CUZc3drTzp1vsxxnEjiFiahiP4t1Df2QRREmcsu0iUS5Y/
ESmcscnuVGM0h/8ALGaOFLhzfn0rgbb1dgJCJRKLmYS4SFAf2A3KQk4PLL8SYWxpSLHiRahlq4UqxBcyxptccsmMvKQ5e8jhtId5pzwqmLY+b/zJD3d6O6SltWRDquRkXNl9klEsePwWpTJv8s2w6sQSLTIUYNpWxDLEj3hJVp23E82Qy5fwordo2Wkm1DhD6lKUvoWPlNtqxx9pEAGi0uCXiElAGwHl4oj+6geQW2aQ+Jvi8Kml9SvmLjySQjERj3sy4WzuVZ262NbEWW7caKWnELV+8oBdG3yKLN3id1xz3vOtI41W8q/ZmcpyX9JsaWAju+6lW1jpH973ljx6NbSH1dz/AO85JaPZAX7LcbmL0dJOZve9PmU5MelWpJjhkb5VHLnYzDmpmJFvN5VBc6LN7rhLSjWQjlESLiyy7qgXpONx7ZhvhFxwR+9WiUMuThNhLFDykUR9FS4v3lH/AJtOcXhiS0rN9h+sftsuqLo+7561Ty2u1GTcXt7sSkQ+zpWnxOVbbk9jG1ZlS6OOcQ+ISFNp0ce4hV690qEcpW1z4hL9iLb7cEhkVo+PNh6fEVKUV/EZvJHw+MzLnR+5+jl4kyvR+5LLgF7QreMXTDmUSi4OoHBi4PhXHbm2GQ4zeJEiECykURll83Kl8bl9D+FgYb+bb+qLftDJAu9jPM5iby8Wof4VtNlbRZuRcISw8McRyTZRES1DzdSNs29ZcbIhcEhIoiOXVwx3vsVLq8nol9NjPOfIiIsol4RkulsshlIcPDGRYmUvCJeclL25tu7ecJiODEokLbZN6i+ULdGijsW+IQ+UuE8TeUZOdgIjIYuOFXrjp89F2qba3/7ORwV0iPRjm95SQx249o8PeIh+8r/ZvRK0ciTjhNyESiw6Lgx8VJD11+daGuxm49o2TmbU4Ui/hXPk6qCdV9zeHTSq7MQzte/b0vl3SiX3lMHpHfxzYZeGK1ZbJZ+gEhUF3o8wRZRIfEsviMT5iadnIuGUjfSi7H5Nku8KKHS18dTDZe0pj/RsfkyIS5hyqIXRx7dISVqXTslrMg1r0saHK5ZZeUlZW/SDZz0ZY7ReIh/YqkOjpSISy+Ei95K56PPbrenm1c0epRKGF8MuMsq5RsLTBL1VyOniEf8AXUpGE8O97svEsKz0bEfWZeUcv+ivrC3fZGLZEQ8OJp8K5J4orh2bxnJ8ou5ub0S91dFwvySgBcubw5uYfxCn0uuVYaTWydIV2tR/hUMX2+IhTcbml4hS0jsmlHiyp4UbLe9lUtxdR1Yw+GSA5fsiUiFwZaotlL3VSx2LUaKLfMlCOmXiJZ/+XGWxjIh3u2bIXCHdIR6tKa30rZHU4WnhEhT7UvQu4kaWJLtJcJKgb6UMfSe02SNTpTafS+6SXan6H3I+y8pRdpRU4dJ7QvlfaEv2I4bdtC+UFS8cvQ1NeyypRd61DptBktLo+KKfS9b+mZ+6p0v0PUiUkmAcvoy7pJ9e77ymhnaLvUuDHhXerlQB2KXUuVr3l2lRQB2i5Wq7Sq6gBvX3U2tE6ApQQIHWnKuVon1bSw0BZHIk1x0lKgkQ91AwES4kMqlzKQQDxJlRHiQACrhLlXS4kSoN/Se8lQG/pPuooBtHS5SSJ4uFdJtvikgux3RJNIAlHh4Uzs1FO6j8m4uUu2+Ye8KKCyd1jxLtB5lEx2d5creWw7yNLCyXWpLlfCoh7Ttt5wfaQy2taD8oKFF+hakTa1LhQ3BLhkoJ7Ytvph8RJw7QbyxcEvEnpl6DUvZIqBDpb7yB2f0RfhSPaMdQ5eVMrtlmJFliOohISj3o+cf1p0wtHSCPyaDhy0k4JcqEXSC2+kFALpNZcStY5+iHKPsk1bcIspF4k+rbgjqEi5VX/wA4LRwh7cRlxKUJkQ9i+JIcJLlUFoaYPbxS5YoL4jvD4U64av8AdcZ9lQ3a7U3RYcVRj9RNnHAEtMh8SdFvd1cMiUBy62o3/wCgEv6MpKM/0hvR1WDol/RrVYpPivuQ8iXP+C5rbyzCJe0S49a9nlJ6XLu+0qAukd2Wmyf94U9va1y7lJi5b8KbwyX/AOiWWJZssODmJwnO8mRckWkeEiJRTZuSkWO4z4VCINrZsF4XP6RuKFBPyhuVeC1qLoxxBbc5myQnaR+RIi8SqDrt4fkGy7uVCOm3i0se8K0WH6r7k936P7F4FXPoyH8Kd1jmyks2VOkH0cfZJMaptYpYjjjfdEiJN9P/APJE976MvpiOYtPeTiJkuIS8SpGWto5ZE+QjmkTLY/eqinaXrmYcQZcTzY+6Knsr2ilkvwyzpHdcy/pE2lRLS4Ptf7qod2ZtPN/1LY95xv3pUQBDaIl/5nZ2XiJv8P7Edm+Gg7j9GgcrIfWR8P7tFFcoXEJDxRIYqouLraYxLEtHuHBKXujVQn7u7lmcwyLhc/CVU44H7QPKl4NJWjgjmLxZv9lHdKXrJF+eL0qmBnaPyZPkP9GwQ/5rjg7S+k8LgsCXsjXrT7P1Qu7fhk64dEsuaW7htyKPi1IlHBjhk4PdcERLxKG3S/8AlBYlxOCQx7pNoTj77khF1gi3hGJF/wC5RDh4tfcSl+pm3344eYi08wxHeES1faigZD2z2VuO7FsuUY6Y/ahNWrZFvNyIswlGTfFHeFGpb4ekRHiiOG453i88l6GxxpPk7YlIpYJMDqbEoiTndj1SFTbKubMTzOaMh7QfZbrWheJRhMhLtxGJfRk5LlEhLrRN4RGIjuhlEY83oqJJMaHGOonsTVqbIhIuUvi/UjBTKUiykMYODKI8pdXV/YmNnL1bgjHUJSy97iFduGSIYyiX0khER7pDSn+qB0RrmxxI4IkTkspSIh9kq/5KZbNFJsSfcJxvVIXG4lwiXo/WmssiI4JOYjg5pOMt4kv0ZD1S6vnR6GPEWnMTZF/8kSHJ0CjuMfoRSEWxbGX9YQ/29f8Aaiu3Asi2QuFL1ZETZEnzkWIIyIco5hEhHm86kDpzRlp8XCXn1KXKuSq9EYbptyOGTUiLSUpS8VP9U6rDIlKUXC1ZnNXh8yN5NFwSLKO8McxcsuL7EW8bxHMRtsW474lu8xFqL+xLVvsFbbkW1q4UpON4cuzGX/wjQKWkhkWaLxRHvCOr9dF07OUicbF5zDykMRbHhJcsjbxCbKMhbzFIfdIfOX607XKBKg7hS+UFyJZRyy9oVGbZlmEYt5hIRLUXERdaPdUGIuC/pIok62QvezvJ2GRFiETBCIjEYxIhLi/ZVJMKIrxdiMSw8PKIykJcsm6o2z7pwvWNi2JDlNshzR3Y7qJRuQiyQsEI6sNwcpd0fN/auXtoTcezbwxISi5IvZiqtPYlp3aBYhCRSxh4couNl4lIC4Eoti8yUhiRFHKXCMfOgtg42LgsC3ESkUpFq7yLS2bFsXG22xd3sRsZC53UOhqw1njDqFso5hi5EfZLzxXG3n9LjbgsykWZsh9r0rrrZEMuzjGU3JSly8KK2MWmyEso8QxkXKXpJRaKAYouZh0y4pCJF939SLZATJdp20pZhckObSObzp9rV5xxwcpCWohGI93
iyy4j6l6l03t3xFshu33m38TK5EW8vM3TN+teYv2zL2IRPlES7xFEt0Sr1eb51z9EqVtmXUW3XgrxuXLaOJHMPFEpd4t5V9204+5jYeIQ5eXu5tRKdtO2YfKLZOEMvloyGOrNupVbbIRbFyLY5YtkIjLmcJetFPk5Jb8kzY21GLZstOOQ5jcKJNjvC2X7FNY2k2QybcEh+77SoK7OJxuQtSZbkOPlwyIf0noIvqopFo0wxhkJE64OoXGxwZd70ufrWfZcmWsq8mmPar+EOGJC1ImydEcpEOpvE/0VY265IizYhaSUF3aLhai7ojEW2x4WxHzCP2IlncSJuRFlJdWPp1jj4Mnl1zW+x6j0QavWCwLkiHEHEEiLEbIeGXoH5kDpnbXty8RYEm22xiLcXMMebm+xRS6SsuCx2ZYLe5iC2ThRiRR1CPWtFsUWX2hc7QZSKTLjg72UfTm6l4+RyhLU1R6elZI6Uyi2A7s5kWyIu2L12I2Tgtl4tPWt/ZusEwItkOGWaGkfCKztzYsN+rYzZilHKXM5xF9qcxdNjEojJsYjljl7orjzSjk4NsfyqmXbDbbJRalEpSEdKxO0dkvYj7kt4iGIykPFyl9SvrvaojpIu9JUz5E4RFiOluyIsubdEVhDG79Dm14MHtN9xwhHeItO94lCvGXhGUvCtZtLZolKQxzLO3lmI8XiJexgSo8zI2mRHW5bw5R5lFaOMsSJcKknat83tKOxYlLs5EWmOpeh8qOV2xmKPL4UwHI5Va22w5PNtvuiyJFm3iHvCKl9I9nMsF2eZsYiPNzfrSlmS+UfbdavBUsXce8Kmt7S70uVVdGiLKLZFyiMi9kfOuvC4z6xsmy3ZDFYzhfJpGek0FjeuOOCMtRbxez/itbsTEHMRFy/urzW3uI8UvxbsVorbpJclIuxyjqIeH9eZcubG3sjpw5FyzdbWwXLZwXyEhiWrSJFvS+pefbI2YNy4TcolmibhSZwR4i4q/FRR7rbrzxds5IS3SHKPhVnsm8bFsiEoy3Y6i4ksWFwiE8kZSN90cYsG22WXLZh7DERI3G8QuIoy0+dT7/ofZFet7RZbFl9uRDhxFtwiHKUd0qU9FVhNn7YiWYpCXhW02XtgcMZORkJCMuItOndXFmjKLs6oOE1ujA9Odjk3cvvk41JyJOfScPzZuv0rMtycIW82bd5V6Dtzo9juSfu5SH5OMRIeL4/MszcWjFoUm3CIiKLkuHhku3Dn+T6nHkw1L0mTOi2wI3P8A17XYYREMSGLhF6uXL1Sqt5sh22tm4sNi2Mt3UXeJYYL8spCUh3fwiP7FZMMPkLzjhEItxl4svhXPkTn+Y6sdR2RvGL8Sjm1J+0mGLu2ftnBk28yWJyjxDzdaylpVuIkJFEflNObdykrpi8acIsYRHBHd0l3S9C4cka4OpO1uY/4QuhbLGzG/5Ltu3bIRI8YheJuMicL6ReLXYE365t4S09oJCXvUX0q3tdtzs96W9w/vLt8LLgyeFtwilKQiUpalnDPLG/Zll6fVvE+XyBvhkSjXjI6s3dElruk2yrK2dcwSi3KIhqj/AArJbRuW9IjyyIl6UJKSs86UXF7lU62ObVy733lGA+IRIR1CjPFqQcPlTsRNC0bJssMu05suXUmCyWaJSjzSUSmIOkfFvCKIBRHSRCW9+9zIsocciLMRDHhjJcaAeH2syQnzEKeNe8SLJCzLSMh8X4Uyo+1qzfupwNlwxL87ydQR9qQlvJWAJumaQxGO9u+EU/xfe+6nG2mRjvJ2A4Sl9H3Y/hKid1D3S7sf+Kb+eVMqcf3eFFgHxOLN972vSo7h6tWX8lIf9UE3vdEpfvKG9cRiQ5ZFu6S7w/WhFpBn3x1EXdKOX2hVXcPiW9l0yjl9rdTX380hylyllL2kGlYy0+1qWholQUax/Dve8PmXeYRjukQ6fFwoLfKOUtQ5hHwkScDxDHTLSUhxJDzSQVQatCGIlLNp05uX0Jzcd7Ey5czkY8svMo3XljEcpZcsfup2r8/d+L+1FjClUZbvtSj/AJp3LKXdL8Pmqh0oXeHdLKPhzaSSrUd4Yy9pCHQXqjl3o6Syl7JedP4vvJjRSy4bjjceIZDzDHrqlQ5RkMY7wlJyO7ISp1ej46osYRssvd93xIk+8Phy/wCaE1m4eUSLDIu6ReaSVD5S8URLxIEg9KZZah4h1D/qKcLu9Ldzc3NzKNVwtQ5SHTvf7J85bo5sxCMdXEIoGT6b0cMhj3vZL0fqXSdjlKLkdJCRCRd5QaHukPi/PmRBpFSwJlTHKMYiPezfn0LgkO6RD735JRgluotaiXeHVlL8Wr9SkA4ujqEiLhKMf90R4xIZOR73DxafqUZs5DKWXvEPvIrRFwtjIt2MS7znnigVEkzEREibJwSzTbISHD3Sl16l0rxkibESlml2YliZS+zqQasjGQtjiEOUspeyI9UvtR7I3Mo4cXOWObmRRMrZNYq3KUnJDIu0czDLVlRrV4dTeGQyj3u8o1w/IREiEi3RKMva9KkMWbhDmJsR5RjHxK8bp2ZzTosxdIc3Fl7qn2N8Il2kY8RDmHmFZ4akOUs3N+JWFuWkhHwrujOzmao0hPMPxxCjqyi3q/1iq6/LDEd4S8KjM6uEuUsyM7bOOEI7ytuyQNLsW8wjm0o79yLzbmURkUuYkZrY46SIRLmy5kHaOyXrYRccHLplulzCimSRmbMYiUtUtOYu8uvsiMYyiSeD5Futj4d3vLoPkIkJCMS3uJKkURuqJZUfygkEKe0pTbxNjER5uKUUJ0KjlXcyktv5hiSj25RzEMh4kZlxkh4S3RzEhMKNLsR4iIRHMQ7upbfZ/R0rmJPO4EW3MPLKTm7LhVV8G1u2TJCIiL4liFvETcR92nzL12w6P4Y5oyLd3e8uHquplHZHd0+BNWzCs9HMZkmL2JZspslmbjzb3n86Jd/Bay6LRWF64Eh7QbntPFFvqqPXX9S39dlN/SDLvIfkjdtJxx9tiO8TgiPNKVV5zz5LtNr9Dr7EHsfOPTrYlzZPYDw+rL1gjl/hLq86oLcRbIs2bVHMPvcS9S+GvpJs24w7Zh0Lm4EhxHWCkwI8znocL7OteZle6RiJRjmLNL/VepjySnBOXJ52bGoSpAgc04YxczR7yG7TDKT7naCOUfvSEUM33G8xODEikQ6i8PCqul2JOFqLNqjmjzf7puSbM0iY3EcxD/CpTRsuZXIx5o+6olyYkUhGOWJcJFxfrT2LUuKJLRMRxpkf3SJEqYiPNxbsUE6RLmTAbJzSMkXRNAnpahLMOlMbvSjmVnZ2bhSEm9QkMt4S5VF2jsjDEe0kJbu8Klt8hQ5m4EtKs7R8uLSsxEmyVjY3XMqhl9ilE9F6PbQcERKW8tnsza+JlIoyyyXmGzLrKJcW6ryyvdJcKJ4lNG+LK4nrlpQhESlLTicRCO8vM/hY2y3c34+TPjcsMMRk3mbFwizDL0EVPRX5la2u2JNjIo5tXKsZc2rbLjxahJwojyrm6Xp9M3J+Dp6jOnCl5GbGZj2jmrd7qs3XBLLuqt8uERiK
hvbQLeJeojzS2duBy8Irjl5HSqX+U+Il0b1UmSyUbYkROR/iUsnB+Ub/ABCq9u/FFO5xJZlaVisqdubIxCxGIjLUOkfCs5cMk2RNllIVsXDjJVO2gF0RcEc29+8s5wRSZQCSlMuoWVNMllwMsm7lSm7kVRhUiRqVcb3sq0UhUaG3uRIVMthJwco/nvKi2Q+JODiRjzLWMPCtoTJoC3ZFvF4VIC1b4UYSFdqPhWljoc1bM6ojLmWn2VYbLIYvXZNuCObDHLp0j9iyVad3xaUFxwd0v3v4llkxSmtnRcMkY8qzS7UsmWiErZ7HHijKPhV5bW9peti4TNs7cjlj6v2h6sy8+auXGy7Mo8Sl2fSG9ZGLb5CPhJZZOnm0qe68msM2NO2tiZt7Yw2xaW5FIotlmHlKXmWc2m24yMhEuLmFWx7bfLUUi5hElWOXrmbMXMO77PCurCpJUzDNKDdxIHlZORlmFSMfLmj3U+l2OoRH3cqI083LNHvEK1bMkiBRwnN3LLhyo4tsjEo+8pVaDuiIoB1b0jhiX3lNjOhVzdEuVXGyNu+Tf+ZYbeHdk22RD3XCos49Rwt4vZimUMt7TwyUTxxkqZSnTNtZ9I7Btlxtuyt2papNyxM0pE4P7FWbatLZ/tGCwXCj2G65LeZL0fq81VWbPZZKWJJsR5RLL/nJWfkrAlh2zgvkX0JE28Ix5spFT9S5Ywjjl8tnUpuap0Zq5FsSISHMOUpCTZS5uFCGo7o+9Jay5w8MmyLtBiI47YkX98zq/Wqm8sIkMcNwi+iHN/lmXTDMpbM554q4KgvaQ613cykPW0S0lLxe9woRs8X4lqmZVQ62qzpeyjxDqb4iEd4fqQdpseTOE2QiXCY6SHiFcqDne/P9qk2DzcSbflg6hw3BEhLiES8xfXT7UvOxS32Kl8JZh/PdQAeR3RiWru837v2KK61KRDq3hV/qQ9g4vJ9LviEfCqsHk7FScQTZYk+2XEJfnhQnKlxSUTETZpUVZ9O3Vu2It5o4I8oiPEXzCsV0r6UCPZ2RDLNJ8R8JC3+1C6Z7WfciLg4LJDIW5CRd4vP/AILDXji+f6Po1+abs9Lqepv5Ygby6L3lm+lm1CtLK5uW4k400RjPTIeIVY3LyyvTRp53Zu0S0ttsOSIuLLlHiKq9aT0wdHFjVyGdAeklxtCyK5fwhIXCHIMRiPKVaq3LbDfyfaEOYi3faTf/AAz9B7TaexHLu6ccg3duCTWNhNlQR82mnWX2da9rt9m9H32PIC2dYYuHhtnaMttkyRDHFx+qUqfPXrXFHrWoLZujsfS3J0eE7f6VeTWLt2LROOAOUd2XEXLRL4MOlL+1rZw3WxA2XMMyGQtFIZS5er41690j+D7YtlhvPsvt2jDBPXZ3NyRWjjPERdXWI0+Ois+jVp0RZZ8p2W5aHsm3EicdYIXWG3BzPkUqdcaZfP1KH1vzXvXoqOB6a2s8X+Grpe/YbLBqyfJsrt2Aiy56uIycc+cjr8VfiUPot8FLDDtleuP3L9808xdFUa1riOAQvRj5yIcvV1/GoX/it6S7N2pc2rthe2F2Iu1/8kRFFsRym4MaR8y9g+C/4RNj47bTVzaPuiz6Gsz8REZZupJZVkblV+kXCLSW9fUvbvpS/d4YvkIizlgJYYiUd4fTL7VmL98nXxl2bYkRCMhJxyOkojp/WvU9rXmxyaLaO2ra0srSIkV4/wBjISyj2g9VSL5qU66r546TfCJ0bYvXW9nbSN+2JzsyK3cHDGu6TxesH6+pZ4ckHKmtP+Ccsciq978m7aBwRJ9tjEjuiQy72bUrP+dbDjAsYb2rVJscPLmHKGlYmx22xfttFaOi62W+3p8XxiVPmqpguYfZtt97eIuZelHAsm5ySzOG1Eja+2SuXIiIiI8IiLemMijTql9fpVYNw4JDmEvEptwy2TMsOJcQjElUtVH5QcxcW93V2wioqkcmpsms3OIUfeVnZkqSxIW5FmGW6Q7viVvbu5VMiomg2Jdiy4JF7wiX3l6Bsna7eHKMYx0xFeTtXUVZMbVIco8OZef1WFZDv6fPpVHq9NqCWkvCWmPeFRT2owTmGTbeJwlFZbo42/tN2IiIsiMXCjlGI5YiOo1pLLYLIsYbkm3BKLhjmcKXMW71R8y8iWOEOWd8ZuRZWjA4vqR5d5svEpu17IYiOUc0ij+HhUe0cJhttvVFuOnUPhUty4FwcxZh93vLjyT32NlFUYjpO7g9mP7xLG3b5by9N21sIbkpNkONHMI5Rc5pehZxzoy5mHCKXdXf0/Uwijhy4ZNmOpQuGW8tNsYxei2RNs80cpd6Kks9ECISJx0W+zlHUWXd+aSa7YsWzYuiWVscR83sosi3qKRdQiNPnr5ltk6mMlSZOPE4vcKOxSLU4LciiJCMpZuZZXbwutvvNuahKMt0lGuun9htS9btGX3CbkIi/HDtsQtOGXmxB5vQr652C9jEL5SId4i1DukPENVng6mCnpk9y8/Tz0qSTpsptm3BMjIXIkWqJRKPCh7X2iT4xkTxEWqJZY831q9e2c2wOYRkW8j7R2g2TDDIsNiQtiLjjZRJwuIi4sy7O83VI5Hj2tmLC3c4YqQQllGJDl9pXlrbjhyj2n3R4VD2kQyGIkOXMtEvZD2AMgOHmzESfS5jpHlFQ595NpQiIRH3k2kSiyC4LiVltDagk52LhYYiIjKWXlWberHeTtn27z5RZlljIuESWE4Lk2hkfCNJYbQubtwbZkcRxwssZdmO8RFwfWpO19h3dsQ+UxJuMpNl9745Kw2c2Ox2SebIScuRFuBFm5S7tK+mirhu7m7cFl55uWaROdmI5c0ub4vtXOn821UjokrW/JK6OW7AuNuZXCEtTnqxW0O6ZfG5cZLDIm4lKMSIR/PnWE2dsshbcxbkRFrVESIiHlH7FpDtbRtnsLnEJxkmyxPVuCOYSHzZS+LxLm6iceUdGJPSZm92oLZCIyllKJaUD+Vy+kIR0kMubeWZ6W7QbbuXMLSXCMdOrLu+f4lRPX7xCJaRLTzJ0mjHu0z0I+lbDHyhEXte8qLpB06feHDb7PV3orHV7Qu7lyoVRGJRIiLMJZcyxWFNlPO6pCefdf1OCPiVbdUzahLm3U4xj8mUeKOXTp7yGMvCuiNeDme7Gi0O6Q+1mXYp3UmlVMgGVEM6b3tc3e4kQ0OqTY0Mj+cydAeH3iEvaSou0qpsBwAP/L8JIsi7vhyl3hFCrXKmVJMAsuH3Zf8Ayh1Mh3Rj3YoNS/JIRF3fZ3e6qHRIMy9n87qjvuDmzR05uYeJANz8/nSoL9zm4uIS3vESdFqIe5fjISyyGJD90hLeVe89uyEh1DmzILju7u8OqPdQyPu+yMva6utWkaKISpeLwl+zqXTykMhKURjIt3lEd1DByJS1d6USHhy1ouUy6cvDxe1vJlUFrTe3uJwhH3fSl1c0vz/bFDCst0SLvDL3k+oFwl7v7UDofL+IR3e7xD9a51jxD4ssSTPZHdzFKPdiuiRRiUi70fxUQMN17xRKLm8QlEv90sTMWbNyimNuEPCQxiWUYl/
hl6vnXamUYyIh3RL94UAP6yHw+14o7qK3WRRkIluyKLfdFzdH7UCX5Ehiu9fh8X4d79aADCMh9W5+kiOIIl4fNGqe45LNmLLEssSy95RizZi90REfZFPnzSL86eFAEjNHhFIMyADurhLd3Zcwp9SLhQAeld3L3f8Akn0c4d3elp5cvmUaplvIoCX/ABipYBirLvd4RSnmzCReLT7SYLu6Lkf7v7xJ1QIsw5vzmUgHo7ze9GXsrg5SKOWWrel7SARx/P4kQHCzaeXlQBMt3CjEYjxRlL2t39SPR8ii2MXBEt6WIMebeFVwH3vDlIke2oRFlylHeQxMtrU3CLNhi5u5SElLtakJFIpS4hHL3VWs4pDmcLMUiKJSLu/EKsG3CLTIh/o/xJXRmywaJsiiUZJpuxyiMR3ZalGAy3lN6pal0Y5GEkdt5EX7ysgxhHLGI73e/wAVU0Ihy8OnmU4bndHTqy6u6uqLM2iUd64TeGUe/ml4op7Fw4I4TnbDmjqLV+xQxc3vaU5h2PaYccu6O8rtk0QqMR3Yx95Onw5h91P2jcCWnd3ikug24IjzaRjqUt0AqNObwj3kOrZcMfxJMPOk5hiWaW8OWSsGroswuDEs0Sj2ZKG2XsQKsuDHUIlu5o5kbZjGIURGTnNpbHiLlU20HElHV7qvNlg223lHNvFxIjbdCostk9H73AJxlwXCEe0BveHeEfP83nRrTbD8RbceImx0gRS08UqrmzLrDcEmyzCXtfn0Jr1qzjk5gjFyUQxMo92KxnHf5jpi6XylzabfFvKWUS0kOkfCiXfSISERzFm3pEKqjBkWSKXLFscpd6XzKGVjpylEt6JD95YvHFOzVZJEDpZfWl25htstsxyuEIxJzNqJZ8dkNkXriHNqXstr0YsLu2YFxlsiJuQkyUXGyl8sXp9HxLM3fwcX7eMTTjLzbcib1YhDqzcPWjH1EeHtRnl6eX5jzV61wC7QZS346vDxKHc3VsOUc0pCQxIfa8yt+kIvskIvMuC4OYo5tSgsXAvDLBckO9Ed5XJK9jHdclTWuUikMeHeiiGw/lLUMcseFWDtqwQlhiMtPNJcsrB5siEsu7mzaU0yKK2h8SttkujmESjvaVA2g4IlHDbGPCKG0ZbvuqlLcTL0nxHezcKg3vbRLSUoy4lJargiTbwtlIZSHm091U0i8Je6tG9hUWPUTLcYi4JauXNvKPf27OrKPd3uVEauxLKWr7yj3IalDj5HZJtLoR0luxjwqzs7weKP4lkHqE33Ue0eISVRyVsDR6ExecPDlVZtC6k5myyUTZt8IlEokPD+6pFxQSHTm4vurZPyK9iK44g9ZbqOzZuOaRIo5iKJR9pDJshVKVmdEVxkpSTxKKmPW+WXuqITEtKtCZ194SjFMB0pZUyrRDyrpu8uZNMmiyauOJRGmnCKOkeLlQbdp57SWlI7iOVU3Y6HbU2e2232erV3lQnRW+MUpEUu8mPNNkMh1e6s5IaKcDJtSAqLm9m4U02PD3lHJuPL4syjgokMnhlHhJaS1fKI5iWW2eDhOZRJwe6tLatlxZlWOQmi1Yq4rBvaJC3gkMmxzDL1jZb0S4a/NVU9u24O9l7yI4a7ILUZuVEu4uiIY5VBJxMoaR0W6VbGbdj6OLmKh0SqSKGh5Ootm+QlIdXNpJRusU8HVLGi9cb2c8QuRcYc3h1Nl3S9Iq3uxsibFx/DInB7OLbeI22O8RddMyxhOEmm4ud4rfLN45EuUXXkzDhFgOjl0iW8rHZfRh5wnPKf+mEREm3CiQuS5h0+b51kseJSHUrFrbTgyiRRcERcGWrvJ5Y5KqLCEot7msv+g5MtyF8nCIScHDZJ4S5ZN1rmVUz0Puy1dnpjiN/vKpttuXLOVh55nlbIldbOvvL23Cu3yFwREWyJonpDxOR09S5n3ocy/g3SxSdEy12BbW2W7ubZxzULb+K2Mt0SjTSod5sBtxwSYf2cMspCLzkpS4iDqVRtoW24kxdi9Eo5WyaIe6Jah+tQG9qEJCWIUd7NqVRxTfzJ/wABKcI/LX8lntXZF3ZSbJttwSKMhHV/RkWr9VVVneEJRIRiJZgIYkPdc3VpLHpgwy4OCw803GLgi9InC4olTqWr/nzs55sSeFvLpFxgSzcpDp/xWUsuWHMLKjjhPiVHmjLokUicKRZu0yx8Redwf1K52awOYXi2cQk38oRMv5t4hLqoX29a1rZbO2o4TLjzcZYkOwxC/o3Orrc+xRdqdHNnONiyyzixkJCw+y24RS1ERb1Pmp5/qWcur8NOJcemfKplC70bJkXPJmbK/L6MsTGEeERKvURfXRZ652TclIvJn2yHUBMPNk33ZU6iFXG0uiWH/wCWG/snBKOLcuD5MRcOOx5m+v0U6/Mqbaze3LD17162JaXBuXHG4jpkQnWg/V1rpwzt1GSf6mWWKXMX+x3Y1iy6TjdyMmxlpcFm5bLiEXMrg/PSqo9q2TlsUXBEh1NlmGTe6USpT9itadKbkv8AzI216JZXBuWmydIeZ4aUP9fXVRNrbRtnR7O2K2IR7PDfccbb/q3uvL9S6IdxS3Rg1DTsyietxcLKQjxS0qBcA4zlIe6W6SsSeLeiSZW4LdLL7X3l0Uc1orKPp4vDxKZJktTIl4iH7tUytvbbok33XJe6Sl2M9r6QdFLkikJNuDIhEhIpR3ScWb250TK2wca5YHFKMZFjDzYZdVSGnz+hAe6aPjIsQScKRCThOZf6MRrSg+dVJ3rZCVzcvuO3Y5WhxCIs3ERaQp81F5GDHlivmar6I9HJ23ut6+pqtr9DNjsstuDf4zgiQuC49hi4XKLAVIer5qLzrp8zbDsXag+VsuC3ZFhCwJRJyQxbiXn/AFrQ9C9nOXt7h3JCQuCUtJOcURcL1f2086uP/EPsBlzo5e3dsyLJMWhCTbYiIkyMcw+b5/jSyz7XySk2352KWPZTjHZvizyb/wAPO3XGdhOWjYiMrpxwnN7uj5+r/BfS/wAHFswNoJSx3nBk+RRiPKI9S+av/C5aY1qIkMm/KTl15dObUvqHo43F4mxLKQ5hzSzbwl1dUVydTKMcSiua3o7MUfmcn/tFT8O1BLopt7VEtlvkI7sYyXzz8AVwP82iYcGTbz922QlpJtyOIP2Vova/h9ucHYO2mXMOX8nPiIi4MhEhylHr6y614J8Cd7/9AZZEczdzdyLixKjH+xT+Hw+ZX6Murn8lr2ZT4cNh7NshsfIrRq2xHSxMOeYerSUq1XoWw9kbNsnG37KyaYfw4zDElEhGQ+n41hvh79Xs/wDpS+6vRrTY165gNjbPM4rYk2dy07bMRjKWM4FKR+zrXp44QWSey24OTJkk8cX7POttba/nHt3ybaN3HZezZC204/htkbZRKMq9UiKVOv5h+tb0P5BwMDD2ThxiIxtNPDLrl/ivLegvQ/Z970ivNk7YuStqyuMBy3EnReeApCIU81SoQZqV5ar2mz/8OGwHRZFvabmK5KTbrTko8uGfXLq9K5sWXTb0rl3/ALR0ZISl5f7f/p5DsW6Z2J0kBi0dbcsbzDyC7i
Nt1ckI9ca1zCY/H8RUXs1DHV7ytOjP/hlsGL9h94WnmrZxt+AXLxERNlJsTbIOogrUR6+v00Vj8LWyG7S5lbM4eIIuCDbYstlifRiO6K6Oj6mKm4e+PocvVwelS39bmfK5HDIeIVHsW2WxlHtOItXh4VWOE4JCMhLLIvzvKTsuVy+1bC5hk44224QjiYYlqcIR+an+i7pZElbOaEHJ0gl49Io72pSmCyits18GNsJPf/U8QpZSJmMh5o1zF9nmUe86JsMCUn3LkmyykJCy3H+j1fr61yLrIPh2zf4ea5RlhNWtlayEdJS3M0pKyt9j2w/KSLhbGUuXtN77Ftuj3Rdyy/6kmWyfEsoPxi2JDqcIvNLq+KixzdTRrhwN8k7odaEzbRw3G3C7RvdbIS3cukvtVpe3Y2gyec1aRIcQsv4UCzN50iFwmxlvDlbHujwo3Sqwcw24tk+I5RIS0ucwjTrXjT3n8x6m6jsVZ9ImSebJsXGxHVmiLnF4VMptd59wibHs3NRYeUeWSq67AbFh4nxcFwREmSlFsuLL9X1q16IVKJSkI8293R4UTcEvlFDU+SyvNoYYi5hjpEcv3lGHb4lGQ/dRuk1q5EXmJSHK423vS3hFYxzEHdKPsrCKTLcqZrb+9bISKTcRbk5wiIjIiJfN3wjdNLvpCMRImdktuYjFmOXynDL113HzEVY+Zv0D5l69fvEVs+I71o+3/wCy4vB2msMRH5Mhy8pR/NE3HSKLuRTUYzEMojqbjwyy/eFe5/BV0pLa1oOzHyH+UbQf+kIiGT7Yj6qXF8VPrXjF43Ehc/Md4f1elc2e89aPN3LBELjLmIMSiUhKWUv9Fnlx61qW0lwdOOS3hLh/x9T2Tal2ThFlw4kQkMtJDukO6VPQq4XVeHdsbcsv5WYEfK2xH+U2B1ORGPlIjxfP/as8Zju+Hel7K9nouojkx3w1yvR4vVYZ48lPf/D+pPZuybHLq1aZeIlFdexBk4WbhR2zJkSFzEbkMdOaPCSqdolGMZDLN4V1a9znfAqux7qFV4SJRCQ61JS8hNF7sJtsnyJyLmGOUC5t79SvbA8S5bYto4hEIx3Wx4ij8yyOyqSeZGRCOIOIQlEsOWZejdFaWjGIQiLeK5KeouGMljkytI6cMFJmuZ6MWgi4WV17dNwsolyjuiqLamy8FwYk3J8izkMWxEd4i+T0+Zai0dEhbjm8OnNvLtzs4m8QiaxBeZIokWXBLMUY6f8AReU8krPRcFyVuz+jdsQyF9wYvCQyEczZavrRf5ttv4jLglgi3gtkzlxiLVKPnFV3lBERNtyKI9nHKXLp3lrNhA+y2TbmaRC4JSlm4cyxyY3dtlxp7GE2r0A2dhts+SE49Iu1JwhekWWJFw9fzrxraFthuOMOesZccZIHMpCQlH7C+frX1JfbQZGTbwlIt7THNqlvCvM/hW6F2z7bt7ZCLlzmfcEZSe+kIvi9HzIxZNDpnNnxbXFHi7px0kgVdlp9otKPR0YiWXi07qju0Iu03ZRyr0LOMkv7SJxgbYhk2LmNuiIuaS+slAChZouaSiXeTCKWotOndQhOOUtJDmihKht2SXgIdX8KAVUOhRy/eQ3XVRNHXDQ8RRycSnFQ0OiRU1zrTG6yScjpzIChpOLhPIBmorrqqgomOvKM9cKE9cqC/cy0qki1Ak3F0oZOSQq1S61dGqQ8ap0+ZCou0TKCda7SqHSqVKoALXu+6u9Q8Pu/7IfWu0qpAIJfnKu0qh0qu0qgAnX3fZThryj72XwplEutABevu5uERl7y7Ux3ZDxZc3+SF1p1C/5ZkAPEpaZfniTuv/j/ABJhV4il3hTaRQAehf8AFdoUdMhHh1RQaVTqVQAWlfzJEAM2Uc3vIIVHeRA94fzl4kmAfqjqEo93MnUjGMRLeEhLMKE644WrV7MkzvR8SQEnr5YjupwO+9qiotC5RJGZBxzKLfsyQAcDlw+1+JT7IdXskSjW7Ij2ZREi3HMvtKczgsj2hYebS3Eh/aSTZLZNZHeiXiLKj90e9+fQgUq2WYSxC9mXdFEEojp9n91Bm2HqBDHdUtiqgWxZh9YI5ZDESj/orBmJR+8Mfu7yuBmyRVmQ5svDxF3UNxohLSnGEXJYgkOXTl7sZaVKpKUpFHm3V0IhodbMjlkUilp3VYBwkUlGAJc0ZZlMbaEsso8xLZEMTJyHSO8uvvE2Ijq5iGMeHMSR1eZkJNjEeL3dKd5WJNxcEcw5d4e8KdioY1h6ojicSjgwRCRYg9nxSzeL0LgVjlll3U5o3ByxkhjCMSbLhU7yvhyquCrhbuUdSKwwROZeKIoVICztny0y1byurdoSEZOd7LlLiiStdh9A3GXG3LlxsReESL9HLTLlotBddDCu5CwVsLg8LhCIx1R4hqsJ9RjezZvHFKrKdp5iQ9mJCMdXL95bRrbVs82IuC24OWIxGKxrvQjazY4otNvtiMpNvCReEfSX2LPFfEO8QlwlIY8XiWU4Qy7xZtDJKHKPT6Wdk89Jl8rRsh7QRzDIfukq3bZlbFhjd9i4WaREJYZaSLzrDW22HJDmLNLUJRLxfsWo2Z0ksBbw3rZssuYizS8RLleHRutzfuKW3Bh9uOFiPZhcIiIZFw5llasE2RD2mUcuHvD+JepfC1cbLbsmRtG28ZxwXGyEvkI5hLh/WvK7F1x4s0m83ZiJfveaP1LphLVG6o480NMhrVG+KLnE4Ix8SkndYZCLhEWXXlIeWKabeoXBJxzNpiUh8Kh0upZXPDljl4VZkHfumxllkW6Re7lUMyIYlERlmkK442yWaUh5cpIb7REPZtuRFF0JqyVbuxEtJEWqW8ns1HLJsREh1FlEoqvpUhykW7p4VIaPSOoeHmTuyQzdm2W9EuHhTqWhbpSXWjbEd7ExN7KMe8rLyVlxscMu0w5EMvupoDP31tEolqUClCElc31uIjiCPi5kPZ2zyuXMMSESEZDmWU5pblqLI9ocVbsOSjmiRFEu6oz1iVtqzZo5fvKRTDyk2WbdH8K6MUrVozlGuTdNbRIbYbYS7GMcojmH+zMqkrIRIiHe08qk9Ftj3u0SEWGSFuXaul6lviGXF9S31fg8YEYjfkRb2UYqZdTjxuvP0No4pzXB5ReMkUolqVW5iCWkhy+Fe6B8HdhlxCkOqUtSlF0H2WIjJmUY/KFm4ZKZdfHwmP4ObPA7NsnvWKS5sgSEouRLmW++EDoa3bEV3aN4LMYuA2UsMuKP1rE1JweKPEQ5V1YsimrRz5MTxumQnbXDGMs3KWpOs9kYwyES5SUhy1GUpK0G4FsRbbLSO7xLZbuiAFNnNtiI4Mo8SiXmyBcL1JCX6PKKnO3hcUkO72phjGS2eMmzNubKebKTguYbZZiitAI2zwiRWzco7zabTastSc5ctqI46Cw1sTIjEWRHu5UDyYRIib1Fuko1bgZIgvrZJE2NcbESzSHlUe4cjp0p1w288RcukZKvv6OMFFzLIcu8Jd1UthPcPNKrqqRuJaiRQdHiV60TpJ9HkpqMBy0pYqdhRK6kxkJZt0UMHUSpxFADvKM0U
+pKIRpzCBhCJPCiGNEWiTGGkis3Tjfq3HG5aokWZR6Lqhraik6dlgTzAkTZYj7BFLMUXPa3fOnu2WzC0+Uslu5heHxS6lWdaICycH7ZopL1ZJd6PuEUbYhfc3QGLTviZcrT/BWOxOiT7jbzt+RWDDfE3iOPEO6I+gftqqsilq94pZVZ7OJxxt6TjmC3ERGUmxlxD19QrLI51Sf/AGXCML3DMW9o29h2V6w4RRy7RYERIt3De9Akn7TvrthwXLuwbYIhiN1aNj7WINatOF9RdVepUG1beOYfEo9vdEIk2UsFwYuCLhDLm732prp7p3f+Su847JV+5J2t0hvSczXOK3xRw23BHdeaHzS+elaKG/tUhzN4jbbg9oAkWFLulqDq9FPiVO+5Eu03d7dXBOX7u77q6VjijB5pPyceiW8Jd1Mo3l3Y72YZeyjgIvRFscMi3ZRHxFupbXthYLDk24Wpt1lwnGy5VrfgyS8lU8cSIfZQ6mpdwGIMS9rhVU/QmyiXh5hVWRQepJlaqPip1HErGb1namzBtMH+TrZ16I4ly9iE4TkpS9PUPn+ZWNgw8TTm0XGLTDItPkw4ctJR4VA2q4L7eG3YMMYcSmLMXGxlukPVQf8ANR7q6IWxZkWHGRDpzd1eZFJ8eX7s7pPm6+lA7K4Jl/Eb7EiL5PLl4e6rv4Qr0r/Yt3YWzjYuXdpgRIiIRLLmLhVCAkUYtkXdzeFW130cvWRbeIRw4jLUUSLNh5euXVRLPHG/zFYsk4x2Wx49sT4OelmzmsKy6Q21kz6zDYv7tvOWrK211S/WnP7N6cty/wD6of8ADtO7/YvQtuuRcw2CxBlrzRHlH41CraOOD27ceaUZf4Lk7OL2avqJ+DKs2u0X9i3Nlev+V372PG5ceceIhc0tk45SUVM+DDo9f2lh5JhtkTb7hOO9qTDeJmESch1CeX0elaGz2Q+5ImGHnhbHMTW7vb1f91eWNvesWw4wuMMOFJlp9t5twi+lzUpSP2dfWtscYKSozcm09X6mF+FroW7tIbQbBwSJgyNw3ywx06WxGlfj+tbKtxcvYYuOPPZREcRwnB07svMIqx2ZYOvkQtMuPOCJEQtiRZR7qodrdMba0uRs2ra7u7kRE3aWjJOYQEURJ2NMnX5/T81V1/8AHjk5eWYtymkvBRdPPg1d2g4O0LR/yXabUCE8wiUPV9ZD52ypxJuydv8AT1jsWtn2jrreXyzFZEZfTE3iUqXV3f1L0oAcc05R5tXdWt6JbGEnJOCRNt5illEi3RiuPqIwScuLOvp3JujOfAT8HPSFvbBdJekO2HnL99smRtmHSwsLNBt8eqlIDIq0ClOqlSr6evrXqHS/YzO1LJ62ZJth9kiJoyKTYuCXqy8/WIl9SumQcbYJ/eERiMdP8NPSqDZG0ra9LM5hlLeiLeUtXer868SDlq1x8HdkhGfyPfY89vvg1ZbtGyubu4trtwhEjw2ibIt4Wx9IhT51rthbKZsmG7bZwtvuCIy7EZPFqJ4nBprrXerXzKx280/EuwcJsZOCURcbiOpzELzeb0qFsa0ZISKRNC5qFkiLT9JiLs78pxuTMYYoxboPe7KuW2HHsHGEik7glIm92LfxudXo6lgNv3DBF2ONIcpYnFwx3f1r2VnZzjLI4DhE2RDEXBzSHu9VP1rM9Nehb96+3djhuFhxfCQsyc3fq9Hx1S6XPGM/mJ6jFKvlMP0ai327rYk58iLmnvR/1VldbQu7l6Uie4hlFseb09XmUd/ZTrb5MvDhkyWYRiUd6Mh8ymNt8W7qLdH/AEFd8pQe5itT2NB0WebzFcvZh0xLKRfiW5a2lJmTZDvFFzLIRHMJfq+deUBcxLs9XKMlJZ2k9GRCUS4t6K87Nh1u0dcMiSpm+2VtEXhcFwWxbjpyjHh/aiWNsJPYbL+I3KRFGRDykXEo3RF+TeGQiOXhGRFqkReklc3LbbMXmx7wiMRlxfMuLIlF0dK8MiPPk24QuREo8XsxWZ6RsiTjZCQi2WUhEcwlLMUVrLvDfHLEijHtPdzKov7WTZCTDeIJEWUcwlLKIlw/GsoumKasyL7Qt8MR8WXhj6V5B0u2ANo8WGMrQiIm5ZsEvoS+yvoqvc9o7HceERbJsXhItW8RcwqE70Vky624TLgx7QcsZR08vn+NdLlFx+ph8ylaR87uNSyl3hUMGMvu5tPKS3/TjoS5s4iftiJ+y4h9ZbFvC4Ppwuv0EsVcUlmGJEOnhjwrOMvR0Jp8EjoN0kudk3LdyzmGXbN7rg6XJDvDqovamtm2j7Y7W2c8y21IXBAs2A4W7Idzr9FKrwO4b9YTZRlmEe76xaj4NelxbLeJl7tLC5yvtFpiXyg/RnT4qqJ6oS1w/de0a6Y5Y6J+Pyv0/X7noDDFhJwrsnHiiWGWYWycIvlPjQNtg24zIo6YiXDHTFTdu2DbeG4yWNaPjiMP8v0bn0Zj6Kqw6O9DifbF9+TNs4Mm8PM4RcUd0V6ePqIaNd7M8nJglbjW6PP3rIh3pR4c2bhSfs8PD1SykUtK9Gudi3JPkV2XY4kiiIi44I7wiNKUHzCNFQba2WyJE42444UsoEMRj4d5aRzxkYyxOJn3buRSiI93KpGzdoky5xcpaYq8HoNck2L7jjLJFmwy1CJb0vRL6lX7P2IIvF5S5FsZD2epzhlwilLKmti4xknwaRrpaTjYiWUoxKJRxIpB0peIcMScGIkLYiRFlc1KivLRjEHAEoxjmlq3tSuNni3bNlFuLxb5Du8IrJQT8G/cfkvej1y83mc7MSjHec8PDqVzZ3l24RRbKLZZjIsojKIkXCslZ7TkXCQx7vhWjs2nxZfbFzDxxKLkpaRkMiH/AFXL1MtK2OjE7HbQ2wLjgycFxwSiQjpjwy9EU5xwSiTFyLTxEItyiPaOZRHNTqLV9iyG0bS7ebbJtgmx0uYbfaOFukXnzFWv9nnVH0q2y5bWjrFy2WP2bDbDguNkLOHIX23hp1RA93r66rz387SNJTSTswHSS0Kyvbm2KUmnnGy4hKWZV7joiPZ4hEWlddcJzMUicIiIjIiIiLeIi9Muv50nGnMuXUIlGWperHZUeT5BUMnJDEcu8WVRHe9pVq9RwhGLbeHiahHUUcwkX2fEo10LOJLBIRy5dOXTKRJ6goq3DIUEz5VJ2oyyLnZv4gxlpj4R4lCqKpO0AwzTqLnVyp1apgcrlQ6mk4aCVUihrrigXLqPcmqq8czLRFRVjHHEzrTElRsP611DXaIAfSq7SqYu0qgB3Wu0qlRc60APpVd60JOGqACUquyTJJIAJJLrQ12lUAEpVOpVMSpVABEqVSGq6KAHDVFpUS4R5SKUv3UHqEtPiGQy9lHEhIcpR4sQR+915vsSYDQLlJKlVIJshESliDuxEibLxCotKpASaOyHNu6ZJgEuN1Hmkph0IW4kUh4ojIeUvN1oYAOr8xVvasuEPru8LeUsxbyqgMY+9IvzmVjsh6Obljq96PCgmXBLd2SMZYguCRaY5vEiWg4eVsojw4ZF726ituiPqy1ah3S8JJUcbLLHNwjpUtUZ
2FOkdQiRcqTz4jqylpjEkHqiMtREWne7oqVZ2uaRavupxVibD27JDmcH2SEhIf7VNCnhHvSL2R3k1sI7qNb0clEZeHStUqIYcGyLSUY5h3SIlLtAjKI5iRbVqOr2hEdXeUir+kRju+7zLpjEzbG2rJFy8Ud4l2gFKMS8X7yfRwSHdEtSmN7QLDIcMc2oo8IrWkZgHHxEs0hEt3dLvJBgkWYWxy8RDHxby4LBPCMixIiueRDplEo5paRQ0URnGc3rMvLxIr7ZN7xEO9m3kfyMc2aJDl73Mg0YcGJasMpCMsuVKgE06Q5hzcSmWrkiLKMh3pae6gW4OOFpiPFGOVSTt2W2xjqHKSStgek9G9vE+3F71gjH2VetPd1eU7K2o43GIjwx5Vom9svCMsq5Z9Pb2OzH1FKmb4tqNsCTzzuGI70s3hVUy/YXb775M4hOCOCZDHNHVH0F1rIu3/lJdru6eH2VobDarDccNkWyGMt7MO8Mv8qrJ4HBbcmvdU3uZ/4Q7xsmG8Moky5GI+rGWoe9X5lgXLwvyS9V6YhZX9hciy223cxxhKMcZwc0SLiXkVpbk8QxEop45aY0zHNH5h4Nk85IszhDGRDmIe8k/aORLLGPuqRd27YlHEckMYlp95Rb7ab2JJ7MQ72XN7K0T2Odr2AFjEGP591DrbE2MSiQ6ZIlXpDiCIjzCWVBeAizSkQ7svwkkJDHLZwc0RIeUpe6KVB7ORNkUeEoqczUWc0ikQ5o/hTHKY2lsSIdRaSLm+aSBkV14SEuIi3kOtIjKWaWn8QpHES0j3f3kV2reCTgtkJZRHh5kAdt3iEhKIlHVzd5S7y7xRbEWW28PhykQlukqmj6K3caR4dMcqtSJJVCERiQ72ZOujiQuN5SjuqO7WRac29mkmxcLVpSkk1uNNkp24ceFsi1CPtEn2VcMhlxZk1ki91NpTd3kQpLYmTb3Patj7etrS2wWGybEu0LNKTkdSiubelmEiHlkvPNnbQlEZbqtG3+ZZw6aNnX8S6pGyHbbhRbkUd3NpV7etONsC4y+484URIO9w/rXn1gUnGxkI7ubStbsS5bkMnMzJSEh4hRnwqPBeLK5ckXpSe0bZgsbKy8OYhiQjLdLhJV3R1gX5WjzYkLwxbLhKK17u0G3xJl2LjJahLeVbc7DtLLthecIi9XIsoox5GlpqmxzxNu+UZvbmwfIIiUXG46x+TLhJZu8Ed0Vstp7Uxm8B4srmpYnabDjDkiKTJSiQl95ejgcq+bk4s0Yr8pAIyEswoV3aPOCTgjzR5eVGfdlpIi5U1i63V1WcpUBcimOXJI93s1wS7GJCXu97lVaThNkQkMS3lDbAnDdKXbXQkKp3MscpDIRKJSHKWkkMzTUgNGNynuniNk2QyEh/Mfj/sWcYuIqztbjmitFOxcGbedJtwmyykOX90k5lwlf7cbF5sScjIcwlH3S4hWaM/yKykqdlE9u4jvJ7j2WQkqkbseZd8pEcrmXeEVccgnE0GzzlmyyU2ulZjYd63mHeIpK7o8tYSslolQkiAMUBk5I9KrQkcKfWsdSZRKqkoJUhXaEg0DiT6JDQ9PEooNSSpRSUHq4rDoztNu0uReeHEZKQvNxEhLL2ZEJeYuqqr2GxLV7S5dsi3GJSlp4lnJJqmWrW6HbXvxIpREeVvKP8Ko37gi0q72U2y5ct4zWOIyk0RYYuEIlESju9fxfGhbYdsHCJxm0JhwtQtuFgjHhbLSrg1F6Ugnc7laRGsOkVzbNk224TbZSxA+TKWqQ+giUzYl+xEhcsLB5z6d9mRRLdJsjoP6+pZ+5bEtP7vtKNV0hyxL95aSxRl/6Mlka/8AZ6Be7E2a8wVyxbXrQjlcNqLjLbn0mD56kFPjjXzLGbRaFl6Mm3xjIXWyyuCW984/ZVJvazwiI4jgxkTYiRDEi1EMdJV+eii1Llkow43Hl2VkyQktlQwnJaSLulpUO/AsPu5tKm1Ecsc3s/dTTeHMLgrYxRnauJ1HEO9ZcbLhEtJbpDwy4kCleZS2XR7UzfMC6RE8LjI5nDkRSEdUv8qKgv75i5cJwRkTjnij3fsWaub8n3iciIyKRRyj7I+ZS2dpORwG2sdwhjEWMZweYY06x+1eXGChudzlqRrHNq4bbbNsLjYt5iw90uL/AHqgPSeGTz7hDq7RwiHNvF5+pVdnsC/Jjyl5srRmUZPiTbn92XVXz/WrO7ura2ZZG2HGckRE68MibLhjvf5Lm7Tm9jbWkty12c1HsZFmHKItliE3+jjSpEP2eZSrJrs32yssSUcQ3mybIRHMMXH40bH5+qvXVU5dMnmRi2RYhahZFtlsi5iHzuF8/XXqVR0i2rd3YjiFi5pZhxCHlEi0j81KeZaY+hm3uTk6mC4NA3ttlh5sSzNtuSJu2cbiW9mc9H6h61Q9O/hkK5cLZbFk/c4RSJq07RxuWnHcLqGX1U6lVtCQ5iEuLNxLL/BNtKysrm9tL822bkr591wnSEMVt0utt4ZeZylOtb5enhjafn2ZwyvJFr+D1v4EenFtduXZNsP+V2DJOFs8osXMhlEW2y6qOGUY0r85LzbZ23ru56ZbfJvYG1G3L25ssa2wQxrASbEScuxGsGwrmKXo6lYdFKjd9MBf2W8LjezrJ5q/um8zThG6OExib1cpV66dfV8/nWy+DrpA3ZdI+ndy+Q42Js5tvLIniwYxEt3zefrqvOnOXduO7O75XiWrajW7N6OX5EyzgEyJaTKMYjqIo1/w9Kura0Gwdbbv3hZHU2JFmczZSEfT5/nWK2t8JW0XJCzhsZspNiUm5bwkVcxfXVZ21u3XHsZ5xx54ik4bjhOOEXeJdy6LNkTc6RyR6rHCS07n0gd7Fkii4XY9mGXEcHxdVF4+TeGWIxlIZbwlHw7pIW0+lb74iJFhiIiMW5DpUTYlScfbiO9/Esun6J4U2/sVl6hZJ7fct2Oku0xckVy45liMikIjwx9C2nQe/wDKcQrkWXHMscNttuLY8w9WZYV4GSeLt2WyGXrC3e6O8rCm3tnWnaMPPvPRzSIRZLxemKefDHJGoR3/AELw5Wnc3seqvATbeMy4483lyEMibUV/aDmUXG9Wkd4uYhH/AFXno/CM5hkJRcEssG5M+InN5F2b0ou34jIW23HBEjHMQt72Yv8ANee+gyxW6OpdRBukzf3eyLZ/t3Gxxx7UiHLKP0n0iYA2jn/TOMsuSclBuIiJcRCPn831p+zNtMvyG0Hsxi2T+WROfKYYlq+1ShsGG5ELTYlLWIxckXEXpkuR2nTNdK8HbxnsybcIu004cWy4Yqja2E2I4ciw5SLKI92XMtK5atk0QuFHLlcLi/eWV2rfFbDiE243Eo5hKMvz50Qm0iXFWaDZVkLcYjp1RjLxI/ViSEi1aWyyjyrNObeiMRcHmIVEutuODvEJbsuHdJS46t2y9SRdu292IxjhjLUOb2YqFtTscTEcccGIlESi5ItWnzqv2beuOYmG44UcxatKn7Qp5ewLYxbdaLLLebLiId77VPbinuxOW1omWu1LYW2xJgSIh5feLeJGvrW0IW3
h7FsijBscxEQrAC+8w9hvZRlmGWYR5o91Wl5t8Rbi25EeFtsY6Yjm9MlcsafAlJVuWpstiRCURjl0yIh4SEvN6PMvPOnXwWtvtuXexyw3xzFZ6Rd/oOEuX0VVsF+RFKXeVrsu/IiiJZuUsyUsFK/IlK3sfOJA424Tbok2UiEhIYk26OoSlpUG7YIeUeHdjwlw9S+i/hI6IW22GXHm8Nu9b39OMQ6Rc5vmJeC7RsXWycYfEm3myISEu773WojJ1TNotM1/wY9MG2JbM2jmsH4jKWZhzSLzPCVK/F83mXtQdLR2YLGzr22ytNjh3jeZt9n5J1sfT5/jp8XnXyu9KOYcw5SHT+eteufBV0iZ2tbN7Fv3MN9uX8nPufIuF8iRfRF83xLNx0PV/S+V/wDZrK8kbX5l/K/7Rv3un1k5IcNwYkWdxkSl/jKP1Kk2h0kbecLDb7MiHLGIkI6R+cRqn7RsLQf+mcYJh1iIvcpD6zvdfppX5iWa20Ns2UbYiIeIvdXo4dDVo8zJOS5NLtTbovxZFt7EjKLYyj7O7mVOxcRfESbzDLKUSkUdRfEqXZm0HGScwyzODFzmHhlw/YitPyIi/PhW0I0ZynqZp9mWjl29lIRjqIvVt8OnSrt7os/JsXHhw25SERzS1ZSVD0avBaEnIyxMpZora7MfJwRl3ZEUpLnyzkntwb48cWiPb9GRbFwYtt6Yk4WI5KOaTnD1+jqV9a25Cy2244JRGMo6o8qdVwYycIcu8WX7yr29rMuOFhxwxIRnxFvYY/6rmacjoSS4Ly2abbEibHtCERlGURLV9XnVN0g6J2m0ibK7En22Sk2GkRLeEh4fqTq7ZEezFwoolLwt0h8P4lLxA6ezMdt74I9mPviTDb9oO8DLmUiHhEtI1WE6W/BRc2jb79pci6LMnG2HBLHwx1DL0Ef1UXu7dyRNxkJKr27fNxGW7pjurNRkuG/0Mp4Y1tsfL7Lr4iLb2IMcwhEhiRZRk2W8q6+q4JELglKO9LSXLwr3jbtuy84RZh5t7lzLJ7Z6PMXOXSTmUnCylEeFehhhJ8o4JppnkztcuWPe3kyj5DlkJZR3eHurXbd6H4AjgOkQ8Lm6spd2rzZRcbj+LxLecaIUkyOdcxEuEWVdOhfkU16j7Y5hIR/o4y8SzKB15hTcMiIY+HikisOlmyinFX/kKpDBVISEm3xLKJZo5hIf8S+xZrabGGXrBISGQkOn/ktThC6MSKLg6eEt32lU7QsS3pak1sVFlFH3l0hii3AE3yj7qCZyV2bHOtdpVc60utADutPGJZUPKiMUb+UlHlH8XUgCQLDm6OJHNyiKbU2y9YMSjLs+0H+sTDGJFEiEd0hcj4SRG3WeFwssd2QqQBVqI70lwYxTwaTwcJshHKQl+LmVAgNaRSpVSX8wj2g6YiOaKaVuP0mYhlGJZe8gAXWu9aM7aOCMo5YykPN939aCYEO6gDtE+iFRPogB6eFcuUo94RJCoSfWm8pGEbPNp91HCssuXDL6TdLi7Pz/AKlDoplpaPOEIiOYiER7xaYoEw4sFHK6MY5mxkIl4fR/aozYEWkSy6ijpU+9t7mwcwXh7SOaXaDm+6olLhzdcIY7ox9lBKG1cZbjlJ7ijlH9pKbaycEojGWaBOSy8XzqM2yRRLKJas37vCjtAIkRFiSlmwxiPtegUhsMIkWrDEeGI+6XmqpTNkzGROYY8IlmUhuyEWxLTLdwycIR5S3UazDsyHLh7stRd74xTqzNyI4XDbchwSERyjLVHml/opeLwlGWmJFJCrbFlERL3iUzZ+zicLu6uX+JLRZMmOtGCLMQ4hcoyirrZ+y3nM0Yt8RKfs4cEYiOUtUt7+JTqX0RwxER06uXdXXHD7MHI5b7OEdI5d4k6rebLlHNmjliS7S6ESlm8JZVw3fzurZY0ibY5hqQ73MI7pJjFp2mGREMfu8WVSjuycEcQRkP0cs3eQ+aURHL3ZK0kJhGdnRLtMwyER5lesbIYbbxClux5iVCA9oTIuCWbX8mXD+xHHbTrI4JDjDKQiUspfoy4VTQkTjD5McPDLxEh+TiPyfvS95QwusRwScw248JEOXu+klZdTmUhi4JbwqhEa6NsRwy9YgMXAxbFxuJad3N3kMmW3icJsovSjHUUUhtibLtPCW7/wAlFblWWr7WIOXL+JQKMtliNyiQjLlIh+6mlfE36stW6ozz2aRDm+8hoSDbPo5LKJF3RVsD8d7SoeyNpYeYiISlpbH1g8MeFHubttx7K1gCXiSjtwVyaHZVxaSF65ZxmREsolEicjlGO6q9wyJzsxiJFllqjuiRb3Um2FkRELhNkTOJEiHLL91aFtqyJ4RbEm296RSzLOckpOtzWKbW+xVbZ2TtFhtsnGHMNwZCY5myHvf6LI3FWR7OWCWHLvOcJfGK9tf2vbui2072gsCLbZR+TEcuXip8dVHvth7OubR8cBgiebESdwBxWN6TZcXX8y8nJ1Ek90dvw6rZnjl7ZPt4bj4kQuNiQlISxG92JehQL19kXBiJYce0EubUrjpDsi9sHMF/EwRzNlmEXGZbvCNfQqq6BlyUYiMsokWYeUuJdMJqS2OSUaKm8FrdbIo+rzafDvKZ0eaxHRZyjiZZlmEVEoOCUiHEFWGO2I4oxEh3dKb3M17Nhtfoq2RNuC4LjYiWOUREsQeEd7rWPraE2Ucw8wq6sdsYgiWrlLSo989iOYgjq3RGQiXDHhVpUtyp0+Cjphy7RyI6ZfiSu7dgZEL+Nw4YkPtSorCjz+JhuMkThfo8w93zZlB2q05iZhIYjpcGJeyKi1YqVEGmnTJcoyRDLTmU0GMQcoyKJOFHhFCbYb+UIhIpRj/EnYgbDbhOZSkUcwiiGObV4SQDEY9pLLlkJRL+JTbVkRGUpDHKSAHhXDbLURaRipViYxzE2Uh3spCgtC44Q6RykQiRZVFebwyiXEriBPbcIZCOVWFg+Ps/eVXV8dK607FaRZJprV4SLEHLvR1KexdkJSElmrR8h/Ep4PLoirJ1UaFvaIylpl3lP2xthsrBxstUhw+XiJZKryj3b8s0svCnLp4tlx6hxDOPqFf1J0YjqEpDzcqG6eVGtRFkhJyRDLNHh5eZb+Dnso3XsMuHiFAJ7MrzpBZNvSeYLTulqj+IlmCcSsTRZs3EuVc2jTGGJRlLKX53VXC5mUlg5JpiA3mILYtkWI2Pqy1E3LdEvTH6lCpQd5WrvCSi3ANxL8KEqDcgvUiWpEtXyFC6k2tUx0XzLzBNk243IS3h9Y3zDxD841VBtq2FmLjY9mWrNKJcKJR5Ht3hIYlEh5k3TQWZaoplalqWuubBt/dwSHeHeQLjYDcezcKW7iZhWLgytRlAKJK42ZtaMm91VN6wTbhCQ5h1fneUV4s2mKmMnBlaUzatXkYxIYyVoLwjqJec0uyHeyqXa7Sc4sveykto9QiXA34vj3k/rWPa2kQjLL7SvLC+kIlGJFm1LZZUyGi1Gi7UkxmRaf4UibIdQx7ulVYD5LkkyidWn5ihghVIuJMrVJLr7qQxuJzJP2okLeAL7hR7bs
ycGX6PDp8SgvPZlb9Fry5FzDYcwReIW3Dlh4Ylqcl15VM5OKtDjTdMo6viJZSkO6jAQlmzeGK9M6VObFYEhfsmLt7VlcckRD8s+Q1pmL5qUXll+80TzhMjhtkREIyIojuiJekupPBmeRXTS+o82Ht+SQ4bmGQtuEQ7wafEI+glVOVLTpId3eU+1vybISbKJDvb3iUvam1HblkW3ixBEiJvKOI3wiLnpIPq+Jb27oxVPyUrQEXq83eTanukMS5vwotGt38+yuQllcGQ/d/dTbJSADpjmIS1DGQkgO7EJz/ygji5sjhRFzlbItJfF1VqrRhqIxkJN6tQ4g+L0oLwZoiUhll4u8s5blrbk1ex7fY7Al5bJ97dFvKJCO7L0+f51Be2oxjPFaNt2DZfRiQyEd2XX1kVVn7jaBFIiLVmy5VAduCItS48XRVLVJnRPqvSNE9dY2p14vEQy8KhOvDIuxwxKI7xFId6RLzv4Rekd3YlbeTO4eJiSyiXDxLYMOkQiRFqES91dGJQ1OC8GWRy0qT8k2pD+ZKR5R3lXiakMSLKO8unTRhqtlzY1Eoi5pV070X2Pettle2lo+3uk62Tlx3RLdFZtxyOXhRP5RdLe7vL3Vy5sLyI6ceWMNz0PZF/svZbQtWTDdpaNyytjh4hcWXT3q+dYfpNtO0ubt5+0YZYF0hxyYLNcuNjEXHyGtaEVKeinxLPdLLgvIL2RFIrZ/8A/GSyfwIiP8nGRf8AcF91YYenhizJVbq7NM2WWXG3dUzfMOEp9sShsuDEsskmbgi5V6b3OHjctcfDzF7PEgvXzxaSJseFso/d86ignhEVGmKK1PwHaqWaKM03vErPZlrYC2Ll6VyThZsJghby8JOFvVp8y2/R7YGzNoiP/QFaNkJRdG7cxy/SFLzOf2UouXN1Sx7tOjph0858Vfo89bcES/eVnaPE5vZR3dPsq8e+Dq9ZeIZNlaSEfKpDmbl9D19c+VTdsdE2NmsC45fiRSjhkIiTgjqJsRr1l1LCfV45UluzSGGaVvZFp8HYETgxc9XmEJREuLxL0MNq2zDeJckLAjEpue6vF9n9IhYclbDlH6TUXhHzD9i3PQnpI845F5v1xREYyEWy3YkvK63ppP562PUwZ1JaVyTdu7dYuX2SYfJxhmWJEYjKWUhlqL61obe+J4YkMmyGOYZDHxKj2xZWjL+ILLciIijHL/dj5lxvabxOORjh8Jco6RXFOMZJUawk/wCoudr7Lsn5YjLGIIxZKMSxN3TqGip+kHRu7uW2yHDJ0eEoj3RVuxfybEiEhiO6OnmUnZrxOdm4LmYRLMOUuFc9UXpTKkNjeTMCLbL8mZERNxIpFqlxCibNuxzOOR08MZDwl9qvWzzS0+HL4Vienuy7lhxt62F55hwtDYkRMucMR85DX4vmUt+BXpLBzZuzrlwidGXZ+rlHtPpJb3Uszt3odctkRWjmMzutuFFwe8XoLrT7Sy2sX/pHsw7wiIx8Vev9SvNn2F7HtHHGG+bUX9WhTa4BwjIz+3OjPk1k2+24Tj4xxxkMREt4R9Jef41X2FX2M3kzrhE3IdQ90l6PZsi3lGRcxZpKVd27b4k28Mh94e6SvvS8ieKt0eW3rdy2QuPjhy3tWX95UfTbZjO0xEmRjdtCWCZDmf8A0bkdXX8XzeZegbe6GuEIlbOk8Tcog8Wbwl6C83zrHu3FzaN4LhE22LmIJRGQuS4ur5/mVOSmq4Znbi7PHH7cpE242Tbg9m4LmoSHdL7K+ZVwATZSbKLjZZS/eXp3TZkbsvK2243OH2gj8u2O9H6WlP7aLB3YSkWru73CSnG9tMjoUn+aJ7b0U6Ql0j2W4yzh/wAuWTAC4JZnL22AYyEi1O0p5uuqw9zeS0ju734h+pYXo3t272TetXbJE25bOC5IR+TLKQkO8FaF1VovX+nNgxf27fSPZwiLFyQ/yi0Om2uS+W5Wjr8fzlT508U+1LT4Zn1WJSXcj+6+pkQdj+LdRrSso5svCoBDItWVS7QorvR5xfWd1GQ4csuXiEuJWbe27lzs7YSb7uZwvFurMtOZspF3hUwLghHKX8Shws2jM0j1w84I9sRDvS1cubi61YWl2RDzRzbqzbF05llljw7ynMXBD/EnGCResvrExcKLhEMuGJK1urZpn1bjjhDvbst5Y61vMMhIdSMVyRCWYpFvCSfZ1MXdSRd3e0Mw4fZlyquubwi3sxalTncZtSC9dbslpHCl4MZZmw906RZuFVrrubElp0ijVdccGO7u8yhkceYlvHGYuQ52rJZSGUuJCGxZdbIXBGO7LV4Uw3JIRmt1BeTJkVjYLZEQiI8XN7XChbS2G2TeFhkRS3cwjHh+1WI7SFscOIylq3hXH78oyHe95X2osDz/AGr0TuRKVs2LmJqCQjh8xS3VmPKs0dMcseZev7Y2y5bWTr3kzT5C1lFwdPDEh3uvzrxc7kniLEGTpZo4cS8K4s0FB7GuN3yT5727LVHh/Ym7VfIXCIXJDlzNllLeyy0kojW0SbbJuJEIl7Mh3pfOq6TjmYhi2QlGJRGQ6ViaKJKvrXSTYk42WYv3e986oXBGRbubSW6rC3fKOGTeJmLLpLNzCgPs5nCIcEcxRLMXdIkWaLYhda7RS62+IPZtueLh3vrQasEJRzDy6U0yrE1myjmJPJovoy9ko+0ui24WlvvOSLLzZVZ2TzjYlqIcwuFqGJaYpMTdFXWIjmZISLSUveKWoUwnMylXzAjEmyIpaR3R4mxlpQLS1J5wWxjIiiMuJA7GMuFLiy5ZZo937FPasnnI4mnhLL4oo57LJhyLxer9YBahLeElabKq3KIiQy3ZcI82nr+ZVVkSl6Kq/wBlk2IkIkTctQiWUt7L/qmNtDhyjmEvWSzCtmxdDvSiO63q4dX1fMqTajbZFJsW245SiMRc4S5SRKBCmRmWmMFwnGm3HnIlJzUOHqzb3Wq5q0ci5gyJkhkWWUebl6vnVoyfZxJkXG94vlPzT5121tSxnGWXJFEsOQlEh3hLh6kDUqM+S6to/wBCSwRcbcFxzNkjlcj9GXoWWvQwXCizFwdTZcSJJrk0jkT2GW1uREIxKK7gyci3IvCRFzZRTG3HyKRdnwlmKP8ARy0krvo83iEROFJzVKUS91StxylRDtLV5twSbIcSWUS1fw/rWn2KWC+MsNx5zMRFlFsu8hbbq3ERbb7TURCObvEo1rTEEpat38SutLMHNyDdNaOOkThZSlm3hj3uFZY6jmlm5my95a4BHBISKPCMtQqpraCJZRbccLLukI95Et3Y4TpUVlnUhkQuahiXd8Ss7W7ejhssjw5RKK7ImyFtthuUhlLMKtbIntOMLY8ItiOUtXxZvtUpFOQSw2g+IiJNuFlESKUh7yaROOl8m45zZfaJWAAQkIkUWy3R5t6PEo7giyWJmkOUZZmylxcy0alRmmrI1kw43ISEvCReH6ldWpkOUhJsS4t4h4lR2lSeIhxcwlvZRLly+ZTmgclqy95EORSNHiaYlm3hFdFwuLLzKrC4KOZH8o9nvLoUjJosccSKP8K4b
u7u/nSq/F/JJ3lMS1LTUSywpcFq4fzFOK41c3Mq4bpdqYkPMjUKicTmbT7PCiE5u8Sgg9FNxsyeoC5rdN9n2eYdUv3kg2g43KJREt3UPs7qq6PyT8RCkBZsXuHKI9oRSIu8pAXpEOnLvSVNQpI5GWmWVO6AuWask2REQiQxi3LtHCLeFBumm3MwkIl7SraH96SI2+QlIRGSTkBYWTeHm1d7dUsjFzvKnO4KUt3hRqvxLl3ooTSKNtsTbLtsy4yJSbeGJSES1fdURpwfEs61ecMhFSLe5kQjLMobirLTbSNf5VEW+IfazcSk2u0CLsycw2yIRcFveEfz6VmnAfi2842WDIhE45SIdX9isLG4FkhkIlLi4V5vURTO7FOjbm6JNi3hiQy+ULEHD3Yy09VViOm9g2+IuMtyv5ETkcokyOURER83m+daixvxJxkRFtvEykROZXhLUMd0VbHaWAiRCLbhCRDiCRS1ZR7tPmXl3LHLZHU4Rmj5+vGHWSEiiJYmHAii5IeIS3frTL0SjJwRGX0elfQW3ujNlf2RMOCOI45iNuNt9oMv0vp83zLwrpL0ff2Y+TDjmKMuxdHLijuy5l6HT9S5vTJUzgz4NG6K1upRlmy5SHSpY3GXKRDp73eUcbsRyuCQx5U0rgSJsWxKUpEUdPh3hXVdI5zUbA22429K57fThlERIY6cyFt+5J5wnCLeLNq/5KBdCPadoJZRJvdHmVe8w8UcurMODJyXhFZR0ptmmp1pCgyV3cssMZnniFtuXZjIt0lqdn9BNsE45jNsDhkIzcKUiLhjq6lZdA9i2lp/1N/JlxvDcbJwcpCRao+n7Vsr3bdsRENs+yQj6wpRbcEtMZby48nUTlKocezow9OnHVI8z6V9AnrIWS8tbexCLEARiLZd5UTrZNuRwxwxEcw6V6JtW5ecZccwCcbzDLdy8pLEXt5GPY5Y9pHSu7p02tzDPCMXsRBZcjIS05lWvkUsT2v3lLuL0cwtlq91R2QFzixPvLoexiME0d58SjEcOQ5h3ZcQ8PX6epRiylxd1LriSEyGX+yXMMe8ppFJUmzbiIxLUpxYnLFdmJ7GchrjvMniXsqLixIpDJdoclvZmye6wJN5dQ5lWP3JaSJSHHt1Vl8WZN7ASW7mKg7TYEpON+IfxIeJzLoXCzbssr6EituEpNyzIZCoJDFLgmiWJSKRKSQsuZY+JVouIoOK0wSBPMCLhCJFlQza8Sfe03h8Sg0cc7yLooktFHu7y7Wgyy6VGqfEuTSsTLJo3BGW6n0uVVnc7u6ug6qsKDdILYXmcbeZHxEKibL6KldstvY4ti4MhykRD/j1Kys7oRIZR7paS7yv9mVEhi2I4Y+rEdIjwilpjJ2wbZm2+gjeXEvXpb0Wxiplr0OsGyi44874oj7QrSVo4PqxGW6rGxrbF2b4lJzU6yUSbLu+ghVPFCO9CSb2MiPRWw3hfGP6YiUu12FZN5hxHI6Rccyy/wBlodp7MbZEiYu2rkW/WCIky/HiEfQQ/YqGoSKWJJXBRkrQ5RcdmT3rgRZwxbEe7KXdirO36OkLeLdvt20swgQ4j0S0yZGuUftVZsp7ALGbjiDplulxd5F2ntR65cJxwpEXdSlruo7L2aQUOZePAO/s3mRkLMmSKLb4tlgud0t0vqqq+tS4vCptLtyMZFEt2WUvD6JfWn31vvRIhEYjly+L41UW1syXFPgq3RJQbwHC05u7q9lTCypsloQB2Fs169uWbZoSk4XCWkdS1e0gstk5Rcbvb0Si3ESFm24icbL1h/as61eOMlJtwhIdJCUY92Krrkh1FmcLMRFmkXNLeWbhKct3t6LhJRV1b8BX35SJwpEWYpbxFqQHKiQ6R8OUkEnh3s3dQnXx3V0JUYN3yMIOHVxRTRFzd/d9pFtyUilJbxezIfEhsVAqBHVEveHwotqbchlGPeirHZ+xrkiZ7LDF0uzNyIsOkO6Lheb0K32rsF5kSuW2xcbEpOARNuYcdWnUH1rnnninRtDHJq0iVsHYBOCRExaXLJZm8ZwR3dQvDWlRH9Sfc9GLL1glq+Sb/wCtal3ip1j+vqVhsrZ7DzWJbMWgtvDmDEESkOoRHrrQS+vqU8GbtsRwBFgcokYxf7wk43WTa8nJnkpbM78eGOnc8AJ9NF2SGbI8MUMaxJe5Z5dWYv4Wa9ZWX2OfeFeg2T0mx5RH7q85+FMs1p/WfeFbvZwjFsZahH7q5end55/sdOVf8USfR1AvukNpaE22+5EndGUizad1CfHDzZi5VhOnr+I/s3+l/wD2ja16jM4RbXJj0+JTlueuOEq7a+37SyJsbl3DJ31eUilpHd7ys6d1ea/DL6/ZvfL/API2qz5XCGtE4YKc1F/U33Skf+gvS/8Atn//AMZLM/AjX/6a5/TktJ0oL/oL0f0D/wD+Ml5z0H6VsbK2Q5KLlw48eC1+Nz5gWOWahljKX9rN8UHLHJLfdHpO2ukFls8W/K3xAnNI6iIeKI+eP1qw2TtBm5ZF9mWG5mEnGybkPFm3frWE6BdDnNpXQ7T267UaOFIGI4jkdQ9hw0+Ia+b46r2bbwbL2bsm72kTLjjjFs4bBXbzY4jkexbG2b668Pmr1IXWS3lJbeFW430i4jz53MH0j6YbP2blfdk7qwm8x9XFHd6/rVPs74U9luOCLmM0PEY5Zbso7qrvgTvNgCT+0tuOM3O0HTcJtu5NsW5S1ENerq6/q6ur4lr+lfSHo5tMXmn39ni25IRAXJCyUYiTLhefzenr61zrqZz3TS+ho+nhjpNN/U2GzbJ18RcEmybcbFxkxebeF6WkWyb66F5/1UjVenvdFxtm7Ihx24tyfzE4Ilyy9XvVrTzL53/8Hm03C2jc7LEfKRacxbfLIRCp1bcIZeZulctevmX1n0r2x5Mw3g4Ii4L2Yo4cm8sfm+pcefqp5JRUfJ1wwRxQc7KjpH0yY2daPCPa3L4k2w2USju4znJT5vjqvI3X37lzGfInHCiMy1R4R5UStq+84Tjgk44Wo45e6Pm6hH4uqikNs4ZCLhC34ZEPh4vqXpdNgx4Y2t5Pk8/Nmnml6JuybKRDl/eJes9CLZi2bxIiTkSKRahHlksvZWGzmWG3CfJwnBKMnMMS3pOCPq+pVzfSBoXMERkwJFKJSlHTIi85D9VFwdS5Z7irVHfgSw88m12vtm0xhbZbEnSKIkXq5FxF6I0+aimbE2mMiZIixBcKURHNHlHSKp9jiJWT2lzdbGI9mO8XKVPn61K2bsqQuE3giWUW3W3NI70Y9UvMvOlGKWk64t2a9mouDxDulH81Tbd6JYbglwjLMPLElD2Q0TY4UnJac37ymPiURzDLekuN7M3JHViDljlRKD3hllVfbPCRC2RDIfCrMoxiJKXsFWRnGCzRL/kkdSylESHekigeaOniindY8W7m7yQUR37dvlHLvZVHiJaSEi4VKuzGMZbubij+8ooCyOYdUcu8k0NEdw/3VH2kwy9lcFspcQyyj+FE2i6IkOWREIly5kduyYcaFxsiF8h7QXJFp/Vlp8yzasTpnm3Svou9
guOWQsxbLEJscr2XNIZbvX8VF5P0m2WQljC2Tcszglli98pl4K/EvpS/tIyxBLSWYRlu5RIh0rPdJ9lMFbXbRMCRPNyk2LeJp7PN6R3qo1tLbkiMdLvx6PmXaDPynCOb+j/eotb8CfTEdm3JWF7FzZd+JMOi56sRcyxzf4fNVUu29nlbOky5myjFwt5stJEP+FVQ3tqI5SjHm3uX9S0lWSFHTjai9+Hs/wBD1bpV0bLZ92402RPWzgi/ZOiUhftnPV5t46eivdQGrFlv1jmYtWX1ZcvErX4ONrfy5swtkvEJX9pJ7ZpFlzCOZgS+ug+alfjVA7m5S0kO8JDqEuEqVXR0eVzjpn+aJ5/V4O1PbdPj9BMtR3uZSJju5u8ogHmyotTiWYe8vQ0o4yWy9FSm7olEhhiJEWbhFGtxEt4lpHGDmScUtSML5bviVe66I5RUdx/UMvZXQoUYuZJddIt5co/Ecw6vzIVB8oihA/mkKNKItlpS7IRiWndHhUB8yEu9myqIbpEmFQo+FUhp+whXPMopXmZMowRFFHesxbylIi93wqqByIpOYmkssi70eYkNy4y6t3iTHQHuyU3Zdj2gkUZDmEU+SQ9ns0nMMnSIojIRLMMd2Iqk6Y9HmHhJ6JYgxFshyk2Mt39a2RXIkWUhIo5o6RVbtW2JwSbbiWI3EpfeEt1Tkx2twvc8R2gyWNgiLjkSw5ll9pTWrNuI5ibLmKUuJWm1djeTOYAtvFiRwdRERFu5dRdarNoWNzbOEzcsONFHKJZYy4i/0XntUdOq0RGbQpF2kYiRZeFW2yHWxbId5wpSIRzR5i3ur4lXCXaRckLkd6UiHdUmjREQiWkvCkttxtgtrMZm8xDziO6XFFQ3qyIZSKIxHurQW1oUZCJONiXrIyGXD3fnVTdMRcIdP6OWYf8AVDQKQEmBIhIW48Qy95GoUhcEsox4k15hwc0SiX53U/DZjLDkUe0EpDJKitRHAREREZOS+TLMIjxCSnbMctrR3GwykIyESjlc4pJtzYFh4jJDh6oSzD3lXuMkJSc1K06YXaH3rpEROCRYjksSWbUPMm7Mdc3c0d4hlJIXHCERw2yiUSiQ5h/tR7egjLs97Tie6pvcfCJrT5avzJBuTccGI+rlmy6ooluEsssMS5dKkMUiUSkUcokP5yp8kEa0ZcKRaYjmjvRRGbkm3Bcb7OO8JZolq73mUhy2bjiDKQlm3hL2UFpoW8urhyx/IpcCsl/zhcbysFHmKJZR5fR51Xv3IueszahnveFMpZiRd78XMnjZxISjJsiEdWnNvcybk3yNURLmreXDl3YxJR2zdZc9W42W6Uc0fxK0uLcZELZEWoYjlXWqlhiJRiIxjlkJcpKS1JE3YV9Eouat4t4u8ri5txLMMZKlaoIxEhlGWr89al2zxN6hy7orVPbcykldg7l4mxwyFvVvCgE7pLBEh3o5fZUq/upEOmJcWYpcJKI6JYYx0y4VLBUdbjmclEpZhj7qm7PeEizaY8OnvLlhszGGUsvCJJruz3GC3SEtOYijy8yFZWxZ7OIXCKW6Q/8AFccOJOMyIpacokPvKBY3EXMuUXNQ7vhQ9pGLhburV+6q8E0OrUsTLhkQlmy4Yx4R+IiVgTnL7yhAJCUhi5xRIfaIR0qU2IkPCOmPEpjfgGEo6RbqMzXNmHLvILg5sv3pLtJLS6M2i8aBlwcqVNmyHK5+6qIXCHSReJTmtq6Yyl7q2U4y+hGliJohIhIYkKIzQU198nizZZJht5spFl9pS3TKRLqA8UfxKPMuFMbKW97Sabw7qWoVEgT5UUSUWj3iFEx+VUpBRJqaO06oRV4k9ug8WbmRrFROxBT5qvqZJUcLiQ5josaOrtK5lAB0kYHy4VOsdE5pzwqWwX3VVi7JSm3SWbdlI12w79ttvtxKIyER1CRFy7v2oxUJ8pMsFhjlLvai8KzLR5R5sq0GxjlFsRlLd+UIlnPeJvCRbWuz7smyLAJxtmUhHMQjHMXdSZvCHSXhVj0dfebcxrRwWizCTThZco8JbtVRbTvCecxnMOXEI4Ylql9S5YQV7nU5UjV9H9p5cMpRLh1EudPtjMbWtmxbbbZuRzNmURlEfV/P51kbO+iWovD+dKv7PaLbgj6sibLKJFEi8W6ufK6dxNYNSVM8cv2cIibcFxtwcrgluxWg2DYCIi4TJELjcmYyIv6yOnrVx8J+xHG3G9ottlEu0uRLM22RaSlw1TOh9vfvuCNgRNkQ4jhiQkIt7w934urqWuTLqhaZxrDpnTJ2zOjjfbE8IuacpZhEd4RW12TsW0ZG2ubbsH2ZNiWCOkpSHu/Wqjo3akV/5M+23FscSEspEOrTvLS3FqTBOXbIkWaLjWoY8Q/Z8y4XcnTdnbGCrgB0p6NNuRJgnH3B1SkLbhcLZf6Ly2+tiaeJuQtkLmgtQkJbq9ktHm3BEhJwZOSy8Q6SEV3pds+w2iw4LzEX22pNut6icjlzD5/T6eta4cjx7eAyYtW65PI/LXsMsQizcSzV2JSId2UlpLvYJCOM24TYjKTTnrMUe9qGv1KgupaiGOVevjkmrR5mSLTplW8xJOsdnXL5CLLLrv8ARj+JW1jayEXHBHN6seXiW96C3REWBmjuiIiMY8Xx+ZZ58ulWgx4tTPLX7bAIm3WybIcuYS1cvxJltZOPDlLLu8y9+esbYe2JsSwSxM0XGyIeJstQrC9IdjsjJ62bbEsQnXCaytkLhSiLfob6iU4epcnTVGmTpWldmIfsxbUht7DEU+/pxEPKKiOaRXqwpHC0SzYyk4URlm1bqhjThSpd9nFAbuI5VrqJaLSxqTfFJVm2rWOYdOrMpA3nFpTnKC83Ev8AiquyTNmabR+KY+2QkQkJauH7vEhEJc357yxtlUTAuS4uUl1whLeFQgFPDLqkqsKHEa5iob1FrNgWNk5bMYjLbxZiIs0pcJIv0BlnnMslGqQ6luD6JWz5SZccYGUnBjiNiP6MVQ9K9hlaOSbxHLYvVukPuuR0l9qbYJMpJJVNIoolWxQmBFIlyjsUd23JDpaPFuobGMoQylpV50ausPEEpbpDHdVGVsQlFwSHwy+6peyzIXCw82JHKIkRZd6KFKmDRsxvOFDJ3NLSq9ttwRkQkKNj8RSXbFWjNsmg+2XeXDo3w5uXSoMR7qM3VVpQrYY6ZUB0XBzbquB2pbC3geQDLefkROfs86q7pouEu7HMoTb5VGjjXkijekJadKvbYSux7Nt1yOYiZbJwh8I6f1qiwi4SVr0e2u/aODgFgkOYSkQ+GXXpr8ynKnVx5DG99x7+xH2xLGtH2REZSJlxuXelpJVBk3wrQbZ6YPXMXMR1tz5QWy7Es30foVRemLxE42McTdiOX2VGKU6+c1ywivysgOqC/TmVjRsnMslGcDiJbpmDRX0aFOwh/JIpCmVpyq7IobAeb2lIaoI6nMPw/i3hQaVRrQiccFsSEZFEcQoj7RaVMmNGm6LbSZHsXmcbN2cSiJCWoYlSJfPSlVsre7tnIyJxgilHEEiLxS66j+vrXlp3RMFliLzZRIcrzZf
41p/ZVanZ3SVx/Ek5aMkIt4Ns4yMXi3s3pH+1eX1eBv5kd/TZK2ZqmtnMiPZiLksxdnHNxYg0pT+xDuLVkiEnhLs5Zii42OXSRegvsJGHbjNt6+2fthjqcbLDItWUm61iKkW+3LC7y4jDhboD6wsumJUpUf7V5cu4t6Z3qj5gO5yx3UMHBIpSLLy5SUA3SL8Iiit1IR0xFfQRnZ4mmjNfCo7IrP8ArPwrYDXEbEhllEdPdWd6V7JK+wu0FuEtQ9cpfrVSz0UfIo/ygQ07rn765dc4ZZSSuzq0wlBJuqN8TjpCOXxaljfhDaJktnmeXtSIuLU2WZJjoZcyER2lXVwuj9fEtn0n2G3tJjCcKJDmZOm6UfuraUcmaDuNP9UZxcMck7tGgq9w/wAPhLhXmfwuXwuXli1KkmvO5y4jgx/wGqsLPZnSBpsbYLu1JoezEzmTojyyH4vtTLjoGTotF5TO7q7ivOmJFiFwj5/MKeeU8sNMYte7JxRhjnqckzb9Jv8AyV7/APq9x/8AjJea9EeiDe1NkOEEQvBePBP6SI+rc+alfiqvT7uxJ5h5kii48y42O8I4gxlHe1KB0G2GWyrYrYnRdInJSFuPhzdavL07yTWpbUyMebtwdPe0Rfg1+E6/Zb/kO9ectnWiwxLK244WnDcc9Mvr6/PRXHTpgrnZd60OZyrREI6iKOaP61T9OOiDO1IuCQsXIx7WMpDwuR6ql1fF5/MrfozZ3bDAtXL43JDlF3DISIf0kq1oSeLFNXjktvD/AOx5MsXWROn5RjPgf2RsnaNm4FxaMO3bDhSIyKZNl6usRLN83oWzDoNsb/8Ad7H/ALn76z+1+gbrdz5dse58ifIiIgKWEUilHLpGvzVpWi5e7I6U3beE7e2bDRSEytpzIS1blP8ACtKrGMNEdMoW15SW5pKet6ozpen4PSPg6/k/Y7xXOy2GGieHCecYcIsRsSzN66x84/EtT0o6SP3rg4j3Z6haGItt/wCC816D9HmtkMYTRm6RFJw67xco7orb9GNmtXb3/UuOMtjm7OMnC7xeYftW3ahFa3Gml4J7s8nyRdlts/aI4cRJVLtCxCLizLfbQ6DWjjDblg4TBR0PuE429ujm9Il9n9izG0dg7RtJFd2jzbY6nRHGaH+sb66CP29Sww9Tjk241fp7BkhOOzVfXx9yso44UVJtKR1ItlaYxCLcRIi3iyjLSRFwre9GbXZjD4txxno63BFwcTeERKnV5vsRn6jQuDXDilkf/ZX9EHHnuxl2IliRL1ZEPL6P11616YwbmGyyUSlqISy5t2P2KIzcWguCItsCW7lbbL3aU/tqod9tiwbeL1hOCOlvtBb8XXlJeFmk80rSo9aC0KmzQbQeG2YceLKLQ5t7Tp/xUfYO1hu2Scw3BERl2g5uHMqG42+5fiTDNo4LbhCImTgxKJS3f8loej1gTTbguDwx+8UVhKGiNS5KUm3twMfAiKQj2ndkPtelXNnp5o6R0ioQWsSykUt0UfZ5REpLFu0Wgg+sKO7qQ37wWxcIhjly8XCpDRDIi3vvKDtq2F8WSEkhMpbm4Isst3d5k/ZlCHLIo728rINnt4kib3dJer9neVVti6FgotjHiFPU3shXRZbWJuLeWRRy/wDFOs6kMilIY8v5H7FlBv8AMRSVxs3aTeaQ5S/OVNwpDTLwXOLvd795ZHb17hskQiWYouZo5pSw+Ylore5bLiGOrNqFQ9rWI3LBNYsREidbpH1ZFqj9vo61lQT4PFekdgV2LhYYy1ERD6zMUm5fVRedvtSkJZolHhLh3l6ptdgmyJtzEkJSiWWObKsX0l2f8oIxylLlLe/tWuRaXqjwycGS/lfJmtjbTcsn27lpzDJlwSkJZhiX+i9U6SO2m1LRnblkXaOFg7Sa+jdyiL48p1Xkt+2W76wRzFxDzf5LU/BHthpt52yu2sSyvWyYcIpStiIezdEd4qF5+qvXpouTNlcGskfHP1Xk7ewssHB88r9fX7knMJCXiFEbMpKftLZbzDjls5FyMYuDmxG4yEh/V6fr60BzTER0lmJfQ4mpxUlw90fPZFTp8oIByyjm/eUuzYJz1hR4R3kK3ZiOUolLLxIgui33l1xRk2QbihCUS3dSds0WyciQyEhKO7Et1OfqMpSkJSlwqNhq0jMLdW0SKTg+HeUMx4feR6guIoCO2BFlESUltktJDm3d3/dHs3nJELZYfsyLe/NVLeeEf6TiJUkBEZbKURkPFLl3UrywdLNuxlHh8KK9cEUpRKUfCo9HVSQMjnsd0hFxyMeEfWeyuWzObNKI6d0vZ4VIO4LizF7qF5QQ5ZJoksis8SOGMSkI8I+zwolrs9x5zDxmWxGRETjgtjEeGXnIvqUG2ui0jpLL/Epjst2Mube/i+1Oh7Awo2OXUO6W94SUm6bZfEhJttwf0gy7ureVfSrjhDLLmzSyinMUcIuzzI0omyBtvZQyxsNsnHMsibHEjuquHosNziNuYbcolMcpcsS9C0dWH3CjGOYcxFm/aSkVtRbLtHJFwxiIqHiTHZWWFlgNt22ptsYty3eIvRm61TdJ+jg3uXBbF/5N+OaI7rnEtcQCWUpDJDfIWYjqEeHUI8ybxJoVmVr8HwuYfkT2GTbY4uJpc5hHr1fVRZja7HkjzzTw4jkZN5cpDul3l6cLw7xERDpj+Il29bJwRi2y4QjvCMhHV6xZS6deClN+TxXadXCzCJZm8wlqJMsaYkRw28MR7Qcun/JemXTBScIoy/DwrI7Z6PE45i2wt4mlwRKIlzfMuaeJxNY5EZy2BmREOWWkSH8+hEo02OWMi/Fq/PWi3Nq8yQi8yQylmLTLiEkC3rmy5S0lItSy8ml2XFs0JN6tW7+d5R3rcRlEolliJbxImzmyIhGXMUtXCrV62YISxMvNpjzK622IctzO3TRREntPKOYeWKC1XEyjKW7LTFS27MSIhxScbkWrN/xU4bCMhEpCWUuHwluqNLKckVBtuDqFMB4myzZeIRzDzS+xXTtmMYt4nZiOJwiooWMiizhkW9pzZeZKhpkZpyWUZEO6eVsY+HeT6Scy+EiH8/4o7QEMhiXs5V0GsPNvRTSBsNbNyk2QuFLTFdtgJwnBiWUSyxIiXWto4IyHMRZR3vCpfRC4EWyxJYmYSIspRLd+daJK6JYC1fw4iTYlhlKJDqjuufHGqZf3gkWXSUiws0W1cbWs2Xs3q3N0uLvcSoXNnaiJwYjly/e7qUlpFF2MAoxiRcsdSO5dvyERF4olEvySkYLBMRFxzHEsOMRiQ8Ql/ohYbgyk8WpTuUmcabEZbsval3Ujtyw8QRlEolGPhy7v6l2Hh7oodbctPEpdBYJnElpwyEt4fdUls5J8MubLHLId7lGSk2TQiOIOaWWOqSIoGOaJOcEe7zIVTbLK232nDFcFp3h9paNk0dMh04niXRrqLV7vurjVq45IRGXFxKbbbMdIe0EY7olmJStwdAWn96PL/EhHc5pfdUl6wcbEhLTyqvq17Ke6BUGo6JDpLxJwGJcpaUEaJ/UkAekeJSAbEvlNKhUoW6
rFmxdcGTYyjlKOpO2Bxv8AMk+cijH2tKCTLgkUhLL4fdRLJ0cw5pJ2AcyFNrQZcyK1URKRCJS91FuI8Q8PeTFR1qo5eZFrQSUKtSHdykni4lqFQdseZGGSi0NFF2W74hUMpExq6jlJWNle/pP+XEqUgy6k0XN32SUtmiZvNnbdIRFtwpCOWMdUuIvT+pTek4P+vJltlgoiIjpEoy8KxdrdSHdy6Vutl7aYcFtu7bFxsmxGEizF9IXOsZycd0jog9WzKezabcZIS9ZqE/8A9mI+hSbixJhkXmyxB3ilHD5SFbFuz2WLYiLDZCRSzFm4Y8pUWc6V1G0EmR9W4JREtQ8ObeWMmpeKNlBozu0+k1z5M9bE5iNvMkyQuZoy05vStJ8He1bBu0F5lgmLtuLLmohciI5pfX8y852m5u5Y5ZRU6zdwRbwSIZZiLNEvD6FOTCmiI5XqtnqtNr2zjZFGL4lq0jHe/NFW3m2icyiUhLVEo/nqWKYvXN4vazSUxq5iUijm4d3+FEcSiavKbLY203JYYkQ6illktHabQk5lISbjvDEl51ZXubL/AMld7Ovo70ZR4d1OeLbYIZC36Z2JXotiw2MhlmGMh7xLz6+6JbU/7Yn45hwill5l6ZsvaMhHLERLMUUK/uSJwhbIo8uUvdUQnOOyDLijPcxVvsC5cFuLDzZZR7QYx4pcv1rb2OzLa2GLLcSjEi4vF9q7W5dHU4UR1SJdq/JOc3LkUMKiDv25Dh93Lmzd3mVc/ak2ziD6omyxMPM4PeWgtWCdlGRRbJwuLKsf0q2hHZb5R9WQyyyzOFpxOHrRittI0klRhtpvMETg+EcsZF+FVTgR9nKhPXTjj2I4Ui1SU8KSiS9vHxR403uVDtCHV3h4YrrTZOS5UQ2CccJsSlmIo7o8RI57Ndblh5ijIoluq0zNgCAW4ykUvZTyKKA64RREsqT/AOeJapk0WvZuZo5vzpUHbTYvt6e2b9WXFyqMDrgyjKKn7LbxYycEd2RbqHJDUTKq72J0Y2nftuPWlk++y2MicjFsh4WyLqxC+xer7C6K7MZEifjekUSEXBi3xFp9YNfR51bXXSHDw22RFtlv1bbeVkR5RHzD9i4ZdRNuoo6odM3vLY8d6N9GsciJ9zAFtyJAQ9pId0h+pbcisGWhbFsSciWduI5u7upnSx9kixWRIScIif4Sc4lmX311Yo6nqkZyisfG5PrtAt3KKQXkpCQi4JZXBLSQ93iVKTwjmkmjeCumSRlqbL9q3shHBFhsW9UXG8SUuYvOg7X6IWjjeJbOCLuHLCESw/ZL/RVnlkojvKSF3JJY0x2vJm3Nlv5pMuSHVEdP7yq3LgR3or0Jq9Eoi4WYd6SBcbNtCInHGGXCLURCMvz9abxNEpmFZvSlhtjiEX0eZyP5+daHYOzO0xnBcZKOYSjIpd3zirW3tGW5Ew22MvoxzftXetwijp7yFB+REkrNh7LiE2WkYjiNiXMPX1/2KHd9FL8czLbdyPEy4Psxc6qy+pKBCW6UeaSkWvSZ5koykI5Ylp/YramvylLQ+Sic2betyJy2uWRH6RlyP+Sh1dcFepbH6RE+JYdyTBDqbJwSEh5ZeYVU7ZqTgk9lxBLtNPhUwzyupIqWBVcWYvaTdzaFG5YcZyiWYSiQlpKQ+ZcDaEhjIo/neXoGzdtk4zglcxLThOCJMvj9HEqVoJfFT51i9tuWjrkhYwd1wWxwYl/R+hXiyObaa4JljqOpPYiUKW8u1p4h4VBJqOZsi7ukkPygh1LoaMbLs2RIRcbbFvdciW8O9mT9lHIiZFsSxCiMtUuUh+f61RDcuFpFEpjSHMKycS0/JfO22E44RNuMxlhi42MpDpEpahUG6riFLDy/KCO8X4VCq++I+uKPCRFH2VJttpuDEXG23N3SIyHwqdLW5blF/QivN6o+9+dSATatGrhh5ztBIRIhGQ5YyKMpef0JbfrZYkbTEFscvaELko70urrFXHJvRLhtaKaqE6cRRXHGx3vz4VXXL2ItTKx1HM0t5Wlrtl5luOITg7wORJuPEMvOJfYqcKIoEO8OXvRJTJJ7Di2ty3PbbxFIXMEY5RbLDGPhXauYzYuDGUo5SbFzxN9fWX20pRVFAEhIsaJbouCWbukPmFRzLmWLxrxsa91+TCk54d5SAqTm9lQHmCb1R/P4UV2IjlIiXKmVydFoiKOpWAALeb8j4lX2bojqL88Kn0aJzKQy5ZZVvjSq0ZyH2xETwkJK8IlBs28EcwjLl0ipDQk4WXxEuuP1MZU9g9uJOafa3VKpSOVveykk3lyiKUo95WhBSrlihASZiLk1Zm9yTRwU9slHGoooKhEsDR6EoQ1RmaEShsdEsK/xLT9EtmXLxZRwRKMXX5Mtl/Ryp1u+GlVR7PebYiQti49uuODLD/o2y80vrr1q3r0ifFwnGXCbk5iSlJyX9IWb9XX1Lmza5Jxj/J1YNEHcj1fY2zLkmCZJ+2cESkLsSKMeGVOsSRLe+u9nMXLDbBPvPSISb7VuJCUpD/j1Lzhrpe+yxgiRCThScc+Ul3v2KLs3pNctv4wvkJSlIpOe6Xz+heSvw/I93R6T6rHwvIV16OURw2xyiIjER8O6SsdlO4JC5IZJm3drlcsNOOiyT5SkbcRc1aSEfMqalyQ6swrrWNuNHIp6ZUXe0NqOPONlIsur8OZXWzmSImxJxntO0KLgjH+mJzqoJKgNkm2xecGMhkOIQyjxYfX1iP29SRbSKOG2JFKIyLe5R5VjLHqjUTaE6fzHsOzbNlkW2y3tJS08w/EX6lfWjrcdX57v+i8o2Ht25ZiJRIRGMSEdPhV7TaYi92Y4ZNjLKXyn9vxrx8vTSTrk9KGWNbcHobrgiPEPLmyqO0xJzE3S3VjtmbXdcfEZEXaS5dXL5lum6iP55VyTg4bGydj60Hh5UxwBjEcoppU5k6SgCC8JNiWbLl+8sP0nusR9zNKJYe7u91bbbpk2ziCOksy85249JwijHi7yrFJJmeQa06prdwqAX1JauE57iiaRl9XFhcSbcEijpishbXKurG7FZ0zVMl7f2S3eiLbrmC2RCWKIjljpkXV1kP1LzHbHR5xkrmURZbl2rnyg7sR3i+PqXp1b0R0llUbaN2242TZaS7peyqinx7Mpw3tcnzXtpgm3CEhzCUZD6sv/AJ9NP1Kqtn8N6RSj6wY5Zfmvmr9a9N+ETYgizjt/J5XB5S0uDzLzty1Fwc2WXq+V7eHuV/zXLKDT0s78OW4qS5R6zszaLm0dnNvCOI/bCQ4QiUith3RH0yGpS8/FVZu7dl4s3D7qz3wa7be2ZetykIywyDEyxLKUvtotT0o2X5I/EczDgi7bFxMuZhES3ur0f2Lu/DMrxyeB/rH9PRw/ieJSrLH+rn9SKF2UsvF7StGGm3R4ikUh3h/hVA3QlaWNwTIlEcxCvoFueKDOpaXB0lp4e79vzpwE3vfkU4j1EWYiQ+pUiWIiHShlQR0rtab3tcq64PDHiIuL2k0SNEREpLhkmHWKFiKwsPVwYodXEPrTx95MRKbFsRKWYZDmEUR5p
ohxhcEZeEsvKodzdFhi3LKPLGXMSi1bItPe5eYkBYelxFPG+Ifz+JQPeSqnYFiV+Rao6d3i4u8meU7vvcXDmUDhRcqLJJNbsh9WRDHxKVa3Y7xcyrWqZo8WVPfaiWod1NDss3NoiPq5cpfiEd5PYfJyOYZapZfEgWlu3ESlmjEhIYrltdC05IWRcwyGO7iN7zZc31osKsk0azC5idpwlp8MU54OaJcK0j+1NmXNg2LIjbXLMnIOaiEizRcHV1f2rOUuBcLsyHvF90ZITsJQaK24ecKLcR4Ze7qT8FsRGLZEQl2khylxCKsnHMEcrcXNQ/iJQyvCEhHUO9xc2ZRQiFtbBck3giMoxl6vux+r56KE5sO2iUmGSHLmEcveGP8AotQbzbjMSkRaSyyEYqE/bCOVvhLKMiEuHKWlS8ftFcFSPRywIZYcS1DEnB97hUxvY+znBFt4RLBHURe13iR2xL5QsMva8OVIK6pDqTWONcBYK06HWTLgk2ROxlIHI4cS0iQ/6qL0i6OkXaWgiJaSa0tl3fs+ZWjN5FTmb6QjLKl2l4C2eZbV6N3rEnCZJxsiKWDmIR5h3v1Lg9HLtxll9tsnCcKURGJNjqzD5s31L0152WXLuxLiXGr7DLKIkMYlL8PCsn069la5Hk10L7bhZXJDqkJCQlw5kXZ+yr+9ISZZKLhZTcyjl1Fm3V6s/c4gxcbbcHmGWXxIzd1lGIxiIiI8PdS+GY+4ygp0DsBzELhZhKWMUtO7wj1qW30E2cOIQ40SHL2mkpSkJf6VVt5Uh1uSWvZh6JbZk9o9GMMXcN9xzD9WJDq5VkLgXBKLgkJcLgxj7S9bq8PCs30s2WVzmFvtBLh3R3ZLLLj22KjKjCBUW3JFIhH7oq1epiNZWyFssrZRy+ElpOhXRxohuW7+2clIW25ZcpDqbId7rWtG0ZZbFkRFxtsRjIZZR728s4Ym0Ny9HlNlsi9cHGbtHibzZo5cuqPEtfXoO3ESxSbkIkUREs2XK2tbUhEYjl5RyroOafxLWPTC1WeQbZ2W7aPPMkOUSylukJaS7yBbNlvaV67tLZzNy2TbjYlISEZahLiEt1YG66J3bJFhs4gtjKY6eYY+kiWE8Ti9iozrkpWpCMZRkWWXD3k5thyMpDHhlmQKgPEUpaVNtLF8nBbaEnJcP4lkW2WGyaZcvFm4lZ1oojey71ssrUijqEhiQqNcP3I9m43hlwkK3TSW5AXab/yY5nCFVAtZs3sqWFvhyIvWe9Iv8EJoc2YZbyylK2NIZQB4ZD70V0hb4Yp1QHd3t1EaabcEtQkKkoCx2cSykX3VLYunB0kQy1RyqKbJDzDxfvJlTRYFiN1JyREOYpEUZbu8hOEOJIc3KO7JAYEiIW8ok4URIubm3R+tTbrZV2yQtkwThOFFsmxxm3O6435lLyJchQgItJFlXCIY6s0vaFWtdibUZbKVgTrIt4hON9pESyyGPzLPuPyLej73s8VElki1sxtNckzF9kt391NERlIVu+gnQywfshfu3nCfvZN2QNyEmo6Xi83afYsjtLoztGwZF+7YeaZJ4mZkIxJweEevr89PP6Fis8XKi3iaVkcWy3c0lYUsSFsXCEY97tO8Q8KiWtY6dKsGtok3mGJFGPaDKIrdKyEQqZRJPw9RDqH2ZbyCFCclESKOqI6U8RcKQjIo5ijw8yzbXsoKyXhirLZ20CEiEsw6u6W6nbJ6KbSvRFxi2cISzCRDhyHlkg7d2Nd2RRuWHGyiLkokTficHzelY96DdWaJNbljaXrksQSzb2bV3lp63g7SsHG7t+2tMCXakJOXLhEPZxb9Mfi66ehYjZjN29mZtn3t7sWicH3UB3aROZRbeIpEJSEoiW8JEVMpfaonKMl+Y1hka+p3pJbMMDbE28JOPNtkQiUt3M2XCdK/EnWZSy5o5Y/xIey+j7dy4JOXLhPav0Y5hyiRai1KXcsC04QkXqyjl4eJKE09hSXnwTG3sPNw7vNxKM25uyUZ18tMpc3Kp9vYYjeIJfwrar2RNk/Z9B4okSutk2mI52hCMcwiSodnNiIycyxlEOLuq+2QGJiFHtm28RmUcMuUi4upTkyfLRpBGhYdJxuXqx05dRf1fpFOxYllLxDpVdS8LAYblm3iwYlLhEt4OpPF0hHs4i2O6QyzcPdqsFZ0WWl04TZS1eLiXLSsnBHeLSoHlctIjzRHLJWezGI5iEs29w8yctkFlk1iC0Q+r4o/eWI6ftybYtBIouFjOEJZcumQrc3LZFhllJ7NERLVHiHiUK22B/1OO+23e42kCyiIjliUtP29SePIsasJptUeTW2yxbcwxiThCURlmjxf2IF3ZOM7uVfQxbJ2aNoTBWls0yUpYZSIZbzbmoSpVZq46EbMeEhG7f0x7QhLMX6l04usXlHJLpJI8l2QLbMijmc0xGRcoiI+eS3Oxfg7cuWRfu3CssT5L5TB3XC8/Zn9VVqOi/RFjZIuOE83cvEUW3Sb9S2P0Y+eJKdebRwZYgyKUpEUpCiXUzntDb/IY+lS3mYDpB8GFsTZFs65ceuR+SfIRxB4RLiWSZ6A7UFzDcbbbykWZzLl3S5v8Fvdq7eIiGOWJSEt72lfbFdG9bxHCw3ByiQ7w7wkK2lLLihqbDs4pypbHgLtSZcJshi42UXBLdId1DbMRfF4SIR3gl2fejukvV+n3wav3b/ldg8yRZRJhwcEuYsReR7UtnLZ962eyvMOEy8MpDIeElvizxyL6nNlwuD+hr2dtlhiOJl4fzpTS2nwksixckIxll7qkhcreMVdkvI2XL95LLuqqunEE7hRrp4Y8y1TIbsI+8o+Mh4vLq3kN1DfknYk0uo8UVIauB3S9pUsyKUZFHhEiitX0fsGsEXS9YTcsw6eWP8AqrhN3RMgA4jnESmNRjEnY8sSUujY7pF4VIqzuuSzcQrr1EabK8KCIkMpCXCUfySmNEzERHLl1Siqm9sXmycw5ONjmlvR7qhC+QjvKnFMWrSX1WBkRDm70SzeFRrxgiEiFsdPLLw8Sqxuv4k3yxNQBzTDWjoiQlmy/nT/AKLY7F2W47Em71kW3NQSFsu6IueaSzDd25uk0X9IIy/yU/ZlwLZC55MJOEJDyy4hH0LDPFtbbGuJpMl2dbnZ17lbxByyaKJSEc2Uh6/R89FU9I9pW13JwmSbuxeLEcH1bze7iCPmnT0dauGdjsXIxldsPlmGQiVsJcsfPKvxKnu7O5shFwWybGRC4TZYgiWkhdj14Z9XxVWUHFyv+o2ndbflKAi4fxJC6jORHSWbVyrvWIjmES4l2WcohNPbbIuEYqNIR06eHh7qdR/LLdUMEGFsZatSnlsN6Mhw48xF+KmpQ7d5tweFyWrd7pftW3tG23mxcixLKIiO9xEUq0iubNlcODfFjUnRQs2d3bNuYJCQ6n2nBJuQju5tXm83mWPurYsxCMRIvCPKvWbPZ4k24JE0Mcwg+Qi2WbNHz6ur4lTbV2Wzci3iENtGURZEXI7oxIfMQ/roscPV77m+TpnpPMztyTPJ1oNsbKK0ewXXMQoi52emJaRL
np8dP8aqvLD5l6EZ6lZwyhpdEMWx5UoqQdPEg0qnYqEDUt2RLo2RYmGTBPOEJRFsSIi5hw6VrKn2KdsnDJ5sXBIhEvk4i57ReYles3cSLBImXMNxspCOIIlqHs/9FzZMrXBrDHqP/9k=) Importing necessary libraries
###Code
import numpy as np
import random
import seaborn as sn
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Defining global variables for the environment
###Code
BOARD_ROWS = 5 # Number of rows of the frozen lake
BOARD_COLS = 5 # Number of columns of the frozen lake
WIN_STATE = (4, 4) # The agent wants to reach the bottom-right corner
START = (0, 0) # The agent starts from the top-left corner
HOLES = [(1,0),(1,3),(3,1),(4,2)] # Manually adding a few holes to the environment
###Output
_____no_output_____
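###Markdown
As a quick sanity check (a minimal sketch, not part of the original notebook), the cell below prints the 5x5 lake described by these globals, marking the start, the goal and the holes; the `render_lake` helper is purely illustrative and is not used elsewhere.
###Code
# Text rendering of the frozen lake defined above:
# 'S' = start, 'G' = goal (WIN_STATE), 'H' = hole, '.' = safe frozen cell
def render_lake():
    for r in range(BOARD_ROWS):
        row = []
        for c in range(BOARD_COLS):
            if (r, c) == START:
                row.append('S')
            elif (r, c) == WIN_STATE:
                row.append('G')
            elif (r, c) in HOLES:
                row.append('H')
            else:
                row.append('.')
        print(' '.join(row))
render_lake()
###Output
_____no_output_____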
###Markdown
Defining hyperparameters
###Code
total_episodes = 10000 # Total episodes
learning_rate = 0.5 # Learning rate
max_steps = 99 # Max steps per episode
gamma = 0.9 # Discount rate
epsilon = 0.1 # Exploration rate
# Uncomment the hyperparameters below to decay the exploration rate over time
# max_epsilon = 1.0 # Exploration probability at start
# min_epsilon = 0.01 # Minimum exploration probability
# decay_rate = 0.005 # Exponential decay rate for exploration prob
###Output
_____no_output_____
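###Markdown
For reference, a minimal sketch (not part of the original notebook) of the exponential decay schedule that the commented-out `max_epsilon`, `min_epsilon` and `decay_rate` values above would drive; the `decayed_epsilon` helper and its `episode` argument are illustrative, and the schedule actually used later may differ.
###Code
# Illustrative exponential decay of the exploration rate over episodes,
# assuming the commented-out max_epsilon, min_epsilon and decay_rate above.
def decayed_epsilon(episode, max_epsilon=1.0, min_epsilon=0.01, decay_rate=0.005):
    return min_epsilon + (max_epsilon - min_epsilon) * np.exp(-decay_rate * episode)
# decayed_epsilon(0) == 1.0 (explore heavily early on); decayed_epsilon(1000) is roughly 0.017
###Output
_____no_output_____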
###Markdown
Defining State
###Code
class State:
    def __init__(self, x, y):  # Initialize the state with the provided coordinates
        self.cordinates = (x, y)
        self.isEnd = False

    def getCoordinates(self):
        return self.cordinates

    def getReward(self):
        if self.cordinates == WIN_STATE:  # Reward at the win state is 10
            return 10
        elif self.cordinates in HOLES:  # Reward for falling into any hole is -5, i.e. a punishment
            return -5
        else:  # Reward for each transition to a non-terminal state is -1
            return -1

    def isEndFunc(self):
        if self.cordinates == WIN_STATE:
            self.isEnd = True

    def conversion(self):  # Convert a cell location (2D) to a 1D index for the Q-learning table
        return BOARD_COLS * self.cordinates[0] + self.cordinates[1]

    def nxtCordinates(self, action):  # Return the next coordinates for the provided action
        if action == "up":
            nxtState = (self.cordinates[0] - 1, self.cordinates[1])
        elif action == "down":
            nxtState = (self.cordinates[0] + 1, self.cordinates[1])
        elif action == "left":
            nxtState = (self.cordinates[0], self.cordinates[1] - 1)
        else:
            nxtState = (self.cordinates[0], self.cordinates[1] + 1)
        if (nxtState[0] >= 0) and (nxtState[0] <= BOARD_ROWS - 1):
            if (nxtState[1] >= 0) and (nxtState[1] <= BOARD_COLS - 1):
                return nxtState  # The next state is legal
        return self.cordinates  # Any move off the grid leaves the state unchanged
###Output
_____no_output_____
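###Markdown
A brief usage sketch of the `State` class above (not from the original notebook): ordinary cells yield a reward of -1, moves that would leave the grid keep the coordinates unchanged, and `conversion` flattens a cell into a row-major index, e.g. for a Q-table with one row per cell and one column per action (the `q_table` shape here is a hypothetical illustration).
###Code
# Quick checks against the State class defined above
s = State(0, 0)                  # the START cell
print(s.getReward())             # -1: transition reward for a non-terminal, non-hole cell
print(s.nxtCordinates("up"))     # (0, 0): moving off the grid leaves the state unchanged
print(s.nxtCordinates("down"))   # (1, 0): a legal move returns the new cell
print(State(4, 4).conversion())  # 24 = 5*4 + 4: row-major index of WIN_STATE
q_table = np.zeros((BOARD_ROWS * BOARD_COLS, 4))  # hypothetical table: 25 states x 4 actions
###Output
_____no_output_____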
###Markdown
Defining Environment

The agent interacts with the environment to learn an optimal policy using learning algorithms like Q-learning, SARSA, etc.

(Figure: reinforcement-learning-fig1-700.jpg)
As2A8q2Lgg3ZZNIuilms4NVhMzo+uWR7m6glpz8uSiHqgAAAAAAAAAwXEHEqHZFWty3Y0A6pcdyy1Q6VTidJknDQnXccWs89VCE7TPIz2kREZmAzoBHL+JFapGJlsWLX7Rcbn3At7cJsKQT0VtDLSnHNdRklSVFkREWqZHrFt4hIwAAAA170r26eofHK/gkjYQNe9u9/6tPWtfvtV5zLPi+4kDYQACvGlV98YQ/pBpX74hYcV40qvvjCH9INK/fEAsOBERcBZAInxjxzhYMOUddatGr1OLV5SYER+nuMqNchRGZN6ilErM8uHLLygJYyLPPIBF8rG2k2/iZQrGvq3qpbFQr6jbpUiSpp+NKcLItz3RpatReZkWSiLPMsj2kJQAfmRZZZFl4B+gAD8yIuIgyLwEP0AAyIyyMMizzyIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGvfCD24u9vjqn9UhsIGvfCD24u9vjqn9UgFj9K67rrtHCiiPW7VZtEp86vRoVbrcJvXdp0Fetrup2Hq7SSWtxZ+UQHimrBej4j4TVOzr5nXS9BvCmyqrXZVcdqcaFH3ZJmbr6lqaaMzIzyzI8knwEL4SY0aZEciy47UhhwtVbTqCWlReAyPYZDzkWtbDdGVR27cpKacpRLVDTDbJk1FwGaMss/LkArhady0DCrTSxOev2swaJTLsah1SjVie8lmJKQlokrQl5R6usR8We0tpD7YkYmKxY0X8anLftxwrfpkU4dOraHDWVYNJZuuNo1CybQZERKJSiVmZ7MhY+o0KiViG3Eq1HgT47ZkaGZUdDqE5cGRKIyIdluFDap5QGojCIpI3MmEtkTZJ4NXV4MvIAqLedcotfw/0YJVDq0GpMM3NR4zrkN9LqW3UNNkttRpM8lJPYaT2kPYwuue3sHNJTGG3cSqxCts6/WzuClT6q8mNGmR3M+9Q6sySakmeRpzz2HsFlGLXtqLGjxo1vUplmM92Sw23EbSll3/AJiCIskq/wCoto+tVoVErjSGq3RqfUm2z1kImR0PEk/CRKI8gFQYOveFL0osW6a04Vs1S3JNMpctaDSU4mITxOOozIs0ZmREfHmfgEhxvavy/sOr+HMWCOnwFUpVMOFHOCps2VRTbTuRoMsjQacstUyPLLgH4VMppUj1pKnxSgbnuPYm5J3LUyy1dTLLLLiyyAYNgJ/JVw1/svTf4VsSGPlGjRoUNqHDjtR47KCbaZaQSENpIsiSki2ERFsIiH1AAAAAAAAFe9KrCi7b1tygYgYcz3Y152S+7UKc22f3wlWobjZeFR7knIj2HtTxiwgiOzbgv+gVW603nZdWep8isyZNIk07VkqVH1tVKXGyPWQZ6usXCWSizMgHk6PmLtBx9tCFc0+mqg3dba1w58NZGnsd5xJJUtJe5UST4dpZGX5ZyFfrAsurWHPxYxnn0mNQJlwKVNi0mWslJjNMNGZLfNszLWWrNRpSZ5Z5ZiR8Hp141XBehVi+nIq6xUY5T3OxyUkkJePdUoNJ+xNJLJGW3IkltMBnQAADXvbXt09c+Oc/gkDYQNe9te3T1z45z+CQNhAAK8aVX3xhD+kGlfviFhxXjSq++MIf0g0r98QCw4p7j5XkXLpxWDbC6dOqtJsiE9dFSiwEE4vWLI0nqmZZ5aiMuPvjy4Rbip1KJSKTIqc9xTcaOg3HFIbU4ZEXgSkjM/yEQqpozSHLo0nsXMRLjpVWp0+tSUxKSxUYDzGdPbM8jJS0kXfElrNOfCQDtIpk7SwxIw9xSpbSaNh5bE1U6OqU6lU2pSEOFs3NBmTSCU2RHrK1j27OAWrFWNHhqoYPY43/AIK1OlVNmgSaqdVt2b2G6qMaXUkamScJOonIiRkWZbSVxmJew5va67wxAvePU6NDh0Ki1H1rgvsSN1N1xtCVO62wtuayLZsI0mW3hASSAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA174Qe3F3t8dU/qkNhA174Qe3F3t8dU/qkA2EDyLluihWfQFVq4qi1BhJdbZ3VzjW4skISRcZmpRD1xTXS5OqYi4QXVctJkPFbtiy2NwJoz1Z80nkpfWeXCholEgv+o3PcgLlAPEs2spuLDuhV5KiUU6AxJ1i4zW2Rn/AImPbAAAAAAAAAAAAAAAAAAAAAYviNasm+MLqzaUSopp66kx2OqQpBrJKDUWuWRGXCnWTw8Y9ukwDplEjQFPqfNlGruiiIs/IRFwEXARcREQ7oAAAADXvZff+rP3Eatpk9Iyz/qiBsIGveyfbnri+OkfwqBsIABXjSq++MIf0g0r98QsOK8aVX3xhD+kGlfviAWHAAAfi9bc1amWtlsz4MxgmEtm1yyLCVS7gmxJVQemSZshyKRmlbrz7jqlmoyIzM9ci4NhJIvKM8AAAAAAAcddG6G3rp1yLW1c9uXhyAcgAAAAHk3JcdLtO2pNfrTj7cCKnXecZYW8baS4VGlBGeRFtM8siIB6wCNqXj3hVV4lMmR7n3GFVXdwgTpcV6PGkuZ5aiHloJBqz2ZZiSQAAAAABxWtDadZaiSXhMwHIBi1fxKw+tVwm7kvSg0pZlmSJs5plRl5CUojMY/vgsE+VK1ekmusAkkBG2+CwT5UrV6Sa6wb4LBPlStXpJrrAJJARtvgsE+VK1ekmusG+CwT5UrV6Sa6wCSQEbb4LBPlStXpJrrBvgsE+VK1ekmusAkkBG2+CwT5UrV6Sa6wb4LBPlStXpJrrAJJARtvgsE+VK1ekmusG+CwT5UrV6Sa6wCSQEbb4LBPlStXpJrrBvgsE+VK1ekmusAkkBG2+CwT5UrV6Sa6wb4LBPlStXpJrrAJJARtvgsE+VK1ekmusG+CwT5UrV6Sa6wCSQEbb4LBPlStXpJrrBvgsE+VK1ekmusAkkBG2+CwT5UrV6Sa6wb4LBPlStXpJrrAJJARtvgsE+VK1ekmusG+CwT5UrV6Sa6wCSQEbb4LBPlStXpJrrBvgsE+VK1ekmusAkkBHTGPeC0h5LTWKNp66uAlVRlP+JqGcU6r0qsQm5lKqUWbHcLWQ7GdS4hReEjI8jAd0AAAAAAAAAAAAAAAAAAAAAAAAAABr3wg9uLvb46p/VIbCBr3wg9uLvb46p/VIBa7HPFSJYtuwbapddgQbtuSQim0s5LqSKMbh5KkrI+BKCzMjPYasi4xHGI+jRhJbWjtciE1etwzRSZKmX51xSSYck7kpSVqbU5uajUvJRpyyMz4BZKbRKLUpCX6jSIEt1JZJckR0OKIvARmQ+0unwJ8Uo0+DGlMEZGTT7SVp2eQyyAVhwOxsp9D9Twod1sxna7UaFC9bnqZEcQbyXEOqaa1yMy1UqIkHnwmSiyIzFmqPMfqNvwp0mI7EeeZStxh5OqptRltIyzPLb5RHOKdkzKpRrfotpW3BKI7Xokmqkyhtg
ux2DU6nWyyzI3UNZ5ZnlnsEoo19yTumrr5FravBn5AHIAAAAAAAAAAAAAAAAAAAAAAAAAAa97J9ueuL46R/CoGwga98OO+9WWug1bcnZnD/V0jYQACvGlV98YQ/pBpX74hYcV40qvvjCH9INK/fEAsOAAAAAAAAAAI9YYom+tnyU1GUdaO04yFwTZ+0lH7MfNLhLz2qNZqI05cBEee0SEMKZ9cd8ZMzt2Omn/Y2xq1omftq3eyns45r40pTkvV8KjPjAZqAAACFdK67ZNp6LVwJp2aqpWjbocFtPsnHpCtTVT4T1dc/kMTUK/42WbiLfeNGHy6faKJ9nWzVCq0wzqDLa5bxJyRqtqPgRmrhyzzPIBGa7fk4u4VwNGG1qOu30WY5TkXHNqykE80SSNetHbQaiWbiiWetmRFn5RciOymNEajoUpSWkEgjWeZmRFltPjMQncNg3nQdLyl4r2RR2ahS6rSlUu5InZSGHO8URsvJJWRLMiPLLPgR5Rllu37cFQqN3zqtQGioVJqZ0yCqkqemyZCkEknVm0TRbNZZlsM8tRRcBaxhIgDA8Nr5Xflqrcm02qxZG6SULcdgvRGzQmQ42kkLVl3xJSnPI8yPPgHl4CJU1hXMjm/JfSzXqoyhcl9b7mqmY6REa1malbC4TMzASc4tLbZrUeRFwipuPmL18XJiZS8B8GXSRctVLXn1LgKnR8tqs+I8szz4iIstpkLPXJJOLbcl5JnmSeL8oqHoiIbuLS1xsuyandZsJcSAw4vaaG1qdJRF4M+x0AM3s/QcwfpUIpd7oqd6157v5dRqcpZEtfHqoSZZF/SNR+UZZvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACEN6Do7cmtP5xzrBvQdHbk1p/OOdYTeACDHtDzR0fZNs8OISCPjbfdSZfKShDV+4FXZoxpexW0fa3U5FEhHu1XtKe8b7S2Px1t8BnkW3bmouEjPLIXYHxlxI8+nvwZbSXY77amnW1cC0qLIyP8pGAxHCzEai4pYYUq8aG4ZsTWEuKbV7JpfAttXlSojL5Bmgp7oHSHolo3rbe6qVFplwPsMJM9iU58Xyi4QAAAAAAAAAAAAAAAAAAAAAAAADXvhB7cXe3x1T+qQ2EDXvhB7cXe3x1T+qQDYQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA174b+3K3T8ZM/h0jYQNe+G/tyt0/GTP4dI2EAAgXSjt27qxbdj1ez7YmXHIoF1Qqw/AhqSlxbTKtYyLWMi25ZfKJ6ABX7t/4q/wA12+Ppcbzh2/8AFX+a7fH0uN5xYEAFfu3/AIq/zXb4+lxvOHb/AMVf5rt8fS43nFgQAV+7f+Kv812+Ppcbzh2/8Vf5rt8fS43nFgQAV+7f+Kv812+Ppcbzj2KFdUyVibMvSrUuu06rfYgyt2x9z3aU0lMx/J4tVWopSvYkRd93vyCaRgjLjW+bnNfY5qula8dR1zNffl2W99zZex732fh7/wAGQCmeMmnzf1Lqsq3LPw5kWo82ZoOVcbKuyi8pMGRJR8pq+Qfui/pRYhrpN0zbvtTETEiVImtrRIokE5TUMtQ/teRZE3nwkRELyXbYdmX5Sjpt42zTKzGMsiTMYSs0/wBFR7S+QxjuFmCdh4NFWmrChyoUWrvokPRXXzeQ2pKTItzNXfEW3gMzAYCnScrS0EotHLGDI/DRcv2qHzPSfuHWMi0acYDLw+tKeuLDD8UpKEGtaiSkizMzPIiIBX49Ja6NU9XRpxaM8thHT2i/8x4dAxrqVuPTToWilihEXOfXJkK7GSZrWpRqUealnkWspStUsizUo8szPOykapU6Y3ukOoRZCNbU1mnUrLW8Gw+HyDtAK3xMeboplGOmUzRbxNZjGpxW5JbbTkbilLWZGSzMjNSlH8o820sWrhs+nyIFs6KmJsSO+8qQ429J3UjcUZmpRG64rIzMzM8uEzMz2i0QAKyXTj/iNJt19k9Gi/WEKTtdecZIk7S8BmI70Cpsio4q43T5UB6A8/Mp7i4rxka2TNUzNKsuMhb+8/wQl/0P+5Cpug//AMccePzjB+vMAXSAAAAFXcdsZMYbd0rrOwfwuVbTblwUvsonazHW4SXSW/nmpKiyTqslxHtHo9iacPwxhL9Gk+cBZEBF2FTOPzVXqB4xTbNkQjZT2EVAadQsnNbvtfXPgy4MhKIAAAAAAjW+ZuM7GLlox7EpNIlWe6s/sgkSlJJ5lOts3MjWRns8CTASUAAAAAAAAAAAAAAAAAAAAAAAAIqxbxqi4U3ZZNDkUJ6oquio+t6HG3SQUc80lrGRlt9lweQSqAAAAKa6DZ/dmJX9ppH7RcoU10G/vzEr+00j9ouUAAA6kOp02ouSG4E+NKVGc3J8mHSXuS8s9VWR7DyMjyMB2wAAAAAAAAAAAAAAAAAAABr3wg9uLvb46p/VIbCBr3wg9uLvb46p/VIBsIAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAa98Jtvqxt357ftlQ/dJGwga98Jfbjbv8AjKh+6SNhAAAAAAAAAAAAAAADCGSn74+aZ3MyqB9jcck0Hdl66HOyns5O55apEoska2eZmjLLIiGbiPmHaRvqZ7CaTJKrFakZaqibp7kpjsx8ktEjL2RK1la2fAoi4gEggAAAifSEYudeFkOdb1IercSn1aPOrVGjqydqNPbJRuso90ZnuatX8YkmXGJYGAVjECo0DFpyi1G2K25bZU5t0qxCp7sptuSa1ZtrJtKlZapEeZEZFxmWZAMftKlYYYozrQxXsB2EiPTFvZlDaJlSjU0aNyeQWWSkGrPJRZlty4RL4gmz4VLs3EnE7GBqj1Gh2jNhx3nGTgPIcmPMk4p+WUbV1yIyUkszSWeSlHsLMSI/iVQo2HtPuyQiRFanHG3OHNQcWQknnkNEam3NUyyNeZ7OAjyzAZkAjvF26a5b+Ey7ks+qwGnWpkZK3XGCkpcaW8ltaUZKIkq772RkrLI9me0pEAY/ef4IS/6H/chU3Qf/AOOOPH5xg/XmC2V5/ghL/of9yFTdB/8A4448fnGD9eYAukAAAo1pLXOuzPVKcLLmboNUrqoVDNZU2lNk5JfzXMTkhJmWZlrZ/kIxKm+znfzd8W+i0dcYVjApKfVZsG1KMiIqKvaf/wCaLhbux/zm/wBYgGBYV4nv4nUioTnrDui0jhupaJmvxiYW9mnPWQRGeZFwC
oWArWNmMV64jWq1itXaDa1OrzypdQYkKcnHrLWluMw4sz3JBEgzPVy4hfsnG15khxKjy4jzFQNBf/67jR/ak/rPAOg2/iFo56Y9j2ZJxNuG8bOvDWjnGr0g5DkZzPV1kqUZ5ZKNJ7MsyMyMj2GPbqFyXRhh6ptApFWuKqyLSveCZQ4cmU45HYkZERk2hR6qT10Fwf8AM8o6elF/LWwE/r5/vUDJNN60Jk/Bek4k0RKk1qx6m3VGHmy75DZrTr/ISktq/ugOzpk3vcVHw4t2wLJqcyn3Nd9YYgxn4Tqmnm20rSajSpJkZZq1C2cJGZcYxPGCo3RZWlJo/wBmU+660UJSERpyOzXMpxoUSTU9t+2GeRnmrPhHUw9rsXSO09oeIkNO7WzZFBZ7GL2SOzpCDNXypNay/K0kx2tJX+XlgN/WFfvCAeTpM3jirSNPXDy28M6zKbl1GiE2zTnJK0Q1POOSm92ebI9VRILJZ5kf+7LwDM6hMvPRP0cLvve/8SJ2INfqL7KYKZuuTbUlRKSTaCUs8kbTWZFq7EcAx/Fv227Bz8xr/wDeHt+qAUioT9FiLU4cdchik1yNMltkWZE0aXG9Y/JrLSX94B1rTwDxgxAsaDfN96Qd8Ue5qowic1Bo0k48WnktOulrckmRGZEZErLLjLbwnkejXi1etZvW78GMU5TM67bTeyTVGkEgp8czyJakls1izSeZcJK8m2cbJr9KufDWhXFRpTT9PmwGZDLiFEZaqkEeR+Ay4DLiMjFWsA3U3p6ofi9flHMnqJEaTTSlIPWQ66RpTkSi2f8A21GAwfAxvG3F/ETEa02MVa5QLWgVt1cqosSFOzjPWUlEaOtZnuSMiNRmnLgIvyZUbuIejfpg2DZ8jE+4bxs+9VqinGr0g5DsZ0lJRmlSjPLvnGzzLLMjMjI8iHoaD/4YYy/2kP6zgaVf8s7R0/PDn8RGAdHSJujFOJp+4eWnhtc0inyKpR1spjuvLOGlbnZKFPuMkeqs20ZrLMj2tp8A8fHu1cVdHKzqXi5QMdbxuGc3Umo9RgVmSbkWSSyUZ5NZ6pJzTlq5bCPYZZDJsVfbasH/AMzvfu5g9n1QT+SGf55i/sWA82bgrjLc2Eb+Jtw6QF30y7XKeqrMU6lSDYp0M9z3RLBNpMsyyySZ8fl48mwo0ipL+gGeMt75TKjSWX2JZt5J7Kebc1G9hbCNes3n5TMTBJ/k7O/2dP8AhhRGz6HUa96jXdEemNLedj1pcxbaCzNTbT7K1/MkjV8gCabEwoxfxwsWHidfuOF3W07W2in0ujW1IOLHgsLLWaMyIy1jNJpPM9uR8I9zAXEvEGiaQFy6O+K1aK4KrSo5TqXXFI1HJcYySeTnhPJaTz4fZEZnwjEsHNGmxcQsB7Uu2DiriSjs2msKfYh180tR3iQRONJTq96SVkpJF4CIepgzYGC9taZNXgWlcl8XJd9Ep6m50+pykS4raVkktzN3VI9cs8ss9hkZcQDArCk414maXeMOHtAxLq1At6JW33JlQS8p6REZS+6huPEJRmTWttzMssiRs8B+xeLGIui9j9h1Ji4sXRd1pXTU00yo0+4JRyFNGa0JNaDUZ5Hk5rEZZHmjI8yPZ6+iv/Lb0jfzz/7UkfTTj/CnBX+1KPrtAMN0wsM3Sx/w9qn2b3LlcleSwmN2We50zLc068YvxFceZcYudYVpLsawYVsruGrV9UXXzqNWeN6Q7rKNXfqPhyzyLyEQrbpjrQ1izga66tKEJuUjNSjyIu+b4TFtkqStJKQolEfGR5gP0AABRjRGvS07GgYmVe77hp9GhFc0nJ2Y8Tetke0kke1R+Qsx7mJPqh2HlA3aBh5Qp1zzCzJMyR9yxEn4SzzWv5iLyjytD22Ldu2BidSLnokCrwXLmka0eayl1PDwkRlsPyltGS4k+p94WXRu02x6jOtGerallH3TEM/KhR6yfkVs8BgKX4k6XOOGJO7RZd1vUSluZl630UzjIMvApZHrqLyGeXkE16GOkbhfhJhVcVIxDuCVEqM6sHMbJMV181o3FtOsakke3NJ8IiTEnQyxxw83aUzbp3NTGzM+y6Jm+oi8KmstcvkIyEz6Fmj3hribhVclRxGtNc2pwaycRvdnHGVNo3FtWqaSMuNR8ICxG/l0cfG+b0Y/1R9t+7o5GX4av9Hv9UdreYaOniCj6W71h8T0KNHMzz+wdf017rAPnv3dHLx1f6Pf6o+jemxo5OEZ/ZwtOXuoLxf+IbyfRz8SF/TXusPmvQj0c1mR/YW8nL3M94v/ACAfbfqaOfj3/wDpPdUfqdNPRzUok/Z4RZ8Zw3iL6o628f0c/E2T0g91hxVoPaOSkmn7D5RZ8ZVF4j+sA9DfmaOnKCz9Fd6ob8zR05QWforvVHlbxfRy8VJ/Sb/WDeL6OXipP6Tf6wD1d+Zo6coLP0V3qhvzNHTlBZ+iu9UeVvF9HLxUn9Jv9YN4vo5eKk/pN/rAPV35mjpygs/RXeqG/M0dOUFn6K71R5W8X0cvFSf0m/1g3i+jl4qT+k3+sA9XfmaOnKCz9Fd6oq5o93HR7v8AVXblua35ZS6XUPXGRGfJJp3RBpLI8j2kLHbxfRy8VJ/Sb/WGZ4aaNOEGEl2ruayrcdi1RTKo5SH5TjxpQrLWJJKM8s8iAS4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA1oUPEK0cMfVWr0uu9qr62UlqTNZXI3FbuS1tkSS1UJM9p+QWw36+jVyif6ZL9EPVvXRRwPv+95t23JaanarOUS5LzMpxonFERFrGlJ5Z5EQ8DeP6OnifJ+nvdYB2t+vo1con+mS/RBv19GrlE/0yX6IdXeP6OnifJ+nvdYN4/o6eJ8n6e91gHa36+jVyif6ZL9EG/X0auUT/AEyX6IdXeP6OnifJ+nvdYN4/o6eJ8n6e91gHa36+jVyif6ZL9EG/X0auUT/TJfoh1d4/o6eJ8n6e91g3j+jp4nyfp73WAdrfr6NXKJ/pkv0Qb9fRq5RP9Ml+iHV3j+jp4nyfp73WDeP6OnifJ+nvdYB2t+vo1con+mS/RDIcPcS6PibijOumy7wp1UstFCaZ3FKibeZmJkOm4tba0k4lO5m2WZ7Bim8f0dPE+T9Pe6w7dlYN4L2RjHWcP7ftVxKZdrMuzYknJ+LIYVLeIjVrGajd1knw7CSSctpAO5iVpcYIYZ7tGn3Sis1NvZ63UYikuZ+BSiMkJ+VXzinmJXqhmI1f3aDh1R4drRFZpKW+kpUrLwlrFqJP+6YnzEj1P3Cu6jem2TUJ1oTl5qJtH3VFz/oKMlEX5FCoWI+hXjhh/u0mNQkXRTm8zKVRDN5Wr4TaMiWR+QiMBbjRv0qMO4ejvR2sVMUmDuo3pC5h1Ba1vbXlmjM8j/FMsvIJVXpa6OzadY8UKSZcHepcP9iRW3Rs0P8ACbErR1pF2XvTK+1XH35LMltEtUckm2+tBFqGWw8kkJbToB6PSVEfYFwnlxHU1eYB6d8aSujreeHlWtXtv0+AmpRzjLkJjOuZIUZaxauqWZKTmk9pbDMeXc+P
2itdNtxqVXsRaNOlMKjK9cHaUpbhk08h3VLJrJKVamqZFxKMfbeE6PPwTW+kl+YckaBmjyhWZ0asq8iqkvL9gDz780g9Gy6sNVWfRsWKVb0Q323ldj0h5xJEhZOaqUElJFmpKTM/y+HMZVF0y9HVuEy3LxOjPvpQROOopslCVqy2mSdQ8iM+LMx5O8R0d/gCq9IuDm3oK6O6M87cqS8/dVBwBxurTF0dZltvxYuISHXVpySlNPleEv8A4xF+gbXqTWMZcapVPmodRPkQpcYjI0qdaJcojWST25Froz8GsQz65NCfR+p1uyJcW2pyXUJzIznuHxl5RH+gVb1Jo2MmNEaDESkqc/Chxlq75bbRrkmpOtw5GbaDP+iQC8wAACIMVNG3DrGC+Kfdt1u1xmqQIhQo7tNnHG1WyWtfERnnm4rbnwDEd5XhT4w390+55hY0AEWYXYB2bhJX5tXtqp3JLfmR+xnE1WpKlIJOsSs0pMth5kW0ephng7Z2E8m4n7TTPJdfm9nzeyn91zdzUfe7CyLvj2DPwAYBeuDtnX7iNa97V9M86pbTu7U82H9RslaxK79OR620iGXV+h025rVqVu1mOUin1GM5EktHs1m1pNKi8mwz2j0QAR3hDgpY2CNsTqHY0aW2xOk9lPuzHt2cWrVJJFrZF3pEWwvKfhH1vHByzr5xPti/a4medXtpZrgGw/qNkZnn36cj1toz8AEf17Buzbjx3t/F2opnncdBjHEhm2/qs6h7r7JGW0/ty+PweAZlWaNS7it+ZQ63BZnU6Y0piRGeTmhxCiyMjId4AFZU6GVCpxSaZauKt/29bchalLocOofaUkozzSkz2kW3jzPwmYmrDXDCzcJbGatOyKX2FAQs3XFLWbjr7hkRG44s9qlHkXk8BEQzAAGAYbYO2dhVULhmWqmeTlfmdnTeyn91LdMzPvdhZF3x7AvjByzsQcQ7SvS4UzzqdqyDk0047+oglmtCz105HrFm2nwcYz8AGAVzB2zrhx2oGLdQTP8AsioUdUaGbb+qzqKJwj1kZbTydXx+AdjFXCq1MY7C+w+8UzVU3shErKG9uS9dGeXfZHs2mM3AB5y6JCXaSrcVunYRxOwj77vtz1NTh8OXGMWwzwks7CjDVdiWvHkuUZbzry2p7hPms3CIlEZ5FmRkXBkM6ABW+Xoe23CrM1+wcRL1simz3VOyaRSJxlHM1HmeoR7Ul5NuXFs2CUMKMGrHwatp+k2dBeJyW5u02fMdN6TLc904s/8AAiyIsz2bTEgAAj+yMG7Nw/xHu697fTPKq3XI7KqRyH9ds166194nItUs3FeHiH7iVg7Z2K023ZV1pnG5b80qhC7Ff3IidI0n32w9Yu9LYM/ABgeKuD1jYy2tHoV8U999mK92RGfjPKZeYcyyzSovCXCR5l8xD2LDsmk4d2FCtGhyJ78GHr7m5PkG+8eso1HrLPae0z+QZIAAAAAproN/fmJX9ppH7RcoU10G/vzEr+00j9ouUAD4sQ4kZ15yNFZZW+rXdU2gkm4rLLNWXCeRFtMfYAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGHNFcG+BlmqDDK3/ALHmNSWSG93OV2S7rINXs9QkahkR97mZ5bcxmIwhmkxk6SM2uk7UeyV21HiG2bB9i6hSnlkZOcBuZqPNPEWR8YDNwAAAiIuAsgAAAAAAAAAeHdzZuWnKSRfif9yFO9EepxbY0w8WbNqDhMyq2iPUIaVnlupNG4aiLy5PmeXgSfgF2JsdMqC4wsiMlFltFIdIPCO5aVe1OxLw7lnS7opLhOMPoMyJ0izPUVmWR8GWR7DIzI9hgLzAKW2vp9QaPTEU7F7Di46dWGi1XZFHZQ8w6ZfjEla0mn8hGoe/3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2YCpndDcEPgC++jmPTh3Q3BD4Avvo5j04C2Y6FarEC37cnVyqPoYhQWFyX3VnkSUJSZmf8AgKtK9UMwV1D3G278dcy71BU5gtY+fEYXtitippXS2rGte35dl2G8slTpEpX3TNQkyPUPIthZ/ils8JnwAM80DI8qRZt1XK80aGqxWnpjXlSZ5bPlzFxxgGE1i02wrBg0OlxUR48aOhpCU+QuE/CZntM/CYz8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB0KnSIdUjKakstrI/dJIx3wARnU8HrfnvG4qFHMzPPayQ83tG277wjcykS8ACIe0bbvvCNzKQ7Rtu+8I3MpEvAAiHtG277wjcykO0bbvvCNzKRLwAIh7Rtu+8I3MpDtG277wjcykS8ACIe0bbvvCNzKQ7Rtu+8I3MpEvAAiHtG277wjcykO0bbvvCNzKRLwAIh7Rtu+8I3MpDtG277wjcykS8ACIe0bbvvCNzKQ7Rtu+8I3MpEvAAiHtG277wjcykO0bbvvCNzKRLwAIh7Rtu+8I3MpDtG277wjcykS8ACIe0bbvvCNzKQ7Rtu+8I3MpEvAAiHtG277wjcykO0bbvvCNzKRLwAIh7Rtu+8I3MpDtG277wjcykS8ACJmMEbcbdJXYEbYf8AyUjM6HZVIomqceKykyLItVsiGTAA/EpJKSSksiIfoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP/2Q==)
###Code
class Environment:
def __init__(self,length,width):
self.BOARD_ROWS = length
self.BOARD_COLS = width
# Setters and Getters to define winning state/location , start state/location and holes in the environment/lake
def setWinState(self,x,y):
self.WIN_STATE = (x,y)
def setStart(self,x,y):
self.START = (x,y)
def setHoles(self,holesarray):
self.HOLES = holesarray
def getWinState(self):
return self.WIN_STATE
def getStart(self):
return self.START
def getHoles(self):
return self.HOLES
def getSize(self):
return self.BOARD_ROWS,self.BOARD_COLS
###Output
_____no_output_____
###Markdown
Defining AgentThe agent works on the Q learning equation which is defined below:![qlearneq.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAp0AAAA8CAIAAACraJYLAAAV/0lEQVR42uydeVRTZ/rHbwKRsmiRiiwKo8CI0tYKAqJWBq3T4iC2KrSAVXuqh9Z2oLgxIHaE1hIO2lLOCIxQFh0rDlAoCsVBVtkGEAIChlHWIlvYFyGELL9zfH/nnpxsJOFCAjyfv3Lfu7w33/c+z/Nu972qPB4PAwAAAABgUUAGCQAAAAAA4joAAAAAABDXAQAAAACAuA4AAAAAAMR1AAAAAIC4DgAAAAAAxHUAAAAAACCuA8ACxsrKyogPGo0GmgAAIBP+/v78boRKpcp9KRKsSwMAs8TAwCA6OtrExARtmpiYqKurgywAAEhPV1fX0NAQ+h0WFqarqyt3aFcFNQFg9mzYsMHc3Bx0AABAPgxfgn7r6urO5lLQDw8AAAAAiweI6wAAAAAAcR0AAAAAAIjrAAAAAABAXAcAAAAAAOI6AAAAAEBcnyNSU1Orq6uV5J8PDAxIc1hSUlJDQ8MiKOmFKP5i0h9YZCiVQRHFgjM3iCkikf/99Z6ensTERDqdrqmpOTk5SSaTnZ2d9+3bJ+74hISE+vr6Q4cOidzL5XJjY2PLy8u5XK6mpub69eu9vLzq6upqamo+/fRTwv92dnb2sWPHenp6Zjzy0KFDBw8eDA4OfvPNN5XnaV4i4iut/tLz+++/Z2RkCKz+RCKRDA0NzczM3njjDQiQYFDKg2LNDWIKYfBkZ3Jy8u9///ubb77522+/cblclNjU1GRlZXX8+PEXL14In1JVVeXg4MBisUResK2tbe/evdeuXeNwOCglIyPjo48+MjQ0fPDgAY9opqamNmzYQCKRxN2PAN3d3ba2tmNjYzwlYKmJr2z6i0RfX7+xsVHkrt7e3oyMjM8//xzDsG3btqWnp//2229paWlXr161tra2tLTMz8/nAWBQSoNCzA1iigB+L5H7fmSO6z09PVu3bnVychodHRXY1dHRoa6ufv78eYF0LpdrZ2dXUVEhThFLS8tbt24JpPv6+lIolPHxccLLgEqlampqYhjW0dEh/SnC/2v+WZriz0b/iYkJxcZ1xLfffoth2M2bN/kT2Wz2jh07NDQ0qqurlcGhz49WSsUiMKi5YJ7dHcQUBcd1BoNhYmJia2srzgV4enqqqqoK+LikpCQ7OzsJLm/Tpk3C6ffu3du+fTvhBdDR0XH8+PE///nPGIaJeyyEGRoaWrly5e+//65AY1uy4s9Gfw8Pj8rKSoXH9XfeeQf1yYuM9z4+PsrgzedHK+VhERjUHDGf7g5iylzEdRnmzbHZbFdX1+fPnycmJor7rMXmzZvZbHZeXh5/4vXr10+dOiXusvfv39+8ebNw+urVq//0pz8RPu7w9ddff/PNNwYGBhiGdXd3S3mWtra2k5NTQkKCrNlFREQQcttLWfzZ6M9isaamphQ7ZslisUpLS9evX29kZCQ8+o5h2CuvvKIMY6vKoNW8sTgMao6Q29zArc1DTJEGGeL6P//5z8LCQi8vL/y7VcKsWbMGw7C6ujr+qRB5eXm7d+8Wd0pDQ8OjR49YLJZA+vLly8VNiJCbvLw8MzMzY2NjOULLrl27EhMTZcquqqoqKSmJkDtf4uLLp7+SUFFRMTk5KexQxsfH7927p6qq6urqCjPXpGFiYoLJZIJBzQPzY24QU+ZIZGnnw7948SIoKIhEIvn4+Eg4jMFgYBjW39+Ppzx8+FBPT0+4pYJjbm5eXl5uZ2fn6+v7zjvv4N+x2bRpE+F1w/DwcBRo5SuDzz77rK+vT/ov7cTFxf31r3+d/Z2D+PLpLzclJSUxMTE6Ojqampqjo6OBgYErV66U+2oFBQUYhgnH9YsXL/b399+4ccPKymqBBtrW1tZz584VFRXp6upeunTpww8/5Pd3GzduxL9Phaq5Xl5e/f39ZDKZTqdHRkYODAwMDg7W1dWFhITY2Nj85z//KS8v53A4NBpt//79np6e+Lm5ubnp6elr1qxhMpmFhYUXL17cs2cPvvfHH398+PAhhUJhs9lkMjk5ORnDsLNnz7a1tamqqk5NTQUHB1tYWCwmg5oNERERNBpt3bp1GhoaR44cyczMFJ4fPg/mBm5tDkWWsr8eVSscHBwkH+bv749hmJeXF55y4cIFR0dHCac8ePCAQqH8f+8Bmbxt27YrV65MTU0RPgpy9epV9MYRGp7BMMzT01P60zkcDolEysnJkfL46upqCwsLfDbmbADx5dAf4eLiUlxcLFMuly9ffvvttxkMBkoJDg62trZms9loZllsbKys4+tocL21tRVtjo6O5uTkvPvuu05OTk+ePFGeUVVZtXr8+LG+vv66deucnJwsLCzIZDL/TGORkwYYDIazs7O+vv6VK1eePn2KEs+cObNq1aqCgoJffvkFpWRnZ2MYhk8nbG5uNjMzq6urQ5s0Go1CoeCPEyq1jo4OFOmTkpJQYm5uLoVCoVKp7e3ti8+g5IPJZHp4eHz00UdoznlhYaGJiYmOjg5R5gZujaiYMk/j6+np6RiG2dvbSz4sJycHw7Dt27fz17ZWrVol4ZS9e/eWlpa6u7vr6upyudzy8vLz589/8MEHXC6XwOpLd3c3jUZzcnJCm6huJeUr1Pjzoa2tjSqPM0Kn093d3W/cuEEmE7DyD4gvq/5yExQUFBYWlpKSgtegvby8qqqqfv75Z7QIxh//+EeZLogG1/X19RMSEoKCgvz9/d3c3N5//31vb++MjIy5aMbl5+d3dnbOdZuPxWKdOnUqJiamtbU1IyOjoaHh5s2bQUFB+ACnyG5SXV1dCwuLoaGhVatW4Ura2Nj09/fHxMTgfaQ2NjYYhqWlpaHNmpqapqamqKgotLllyxZra2sqlcr/bKxdu/bu3btbtmwJCwubnp7Gm/h+fn7GxsaLzKDkLuUvv/yysrIyNjaWRCIhBUZHR3ft2qUQcwO3NnciS9sP/+zZM9S/IeGYnp6eR48eaWlpvffee/xlIKHDBGFtbX379m0ej1dXVxcfH3/9+vWsrKzc3Fw0wxDx008/MRiMCxcuyPc/0dQGfFNCn4mEjF577bUZy4DFYkVGRoaGhkZGRhoZGfX29kp5hyoqKuIeVhBfGv2npqaGh4cFEplM5sDAgEApiJOaRqNRqVRvb289PT08UUtLy8jIqLKy8tixYyUlJe7u7jL9dzS47uHhERgYiCfevHnzwIED6enp+/fvl0mBu3fvGhsbb9myRXhXc3Mzinw//PBDVlYWGpWcO61u3bp18uRJ/vs/cuRIRUUF6lRMSUmJiYkRmTWJRJqamuKPJfr6+gKOW1tbW01NbXx8HG3u27fv+vXrjo6O+AHr168vLi4WuLKmpmZKSsrWrVvPnDnj6Oioq6srbkkTJTcookpZgMrKyri4uJCQEPRGFoZhfX19AwMD/CMa0pvb0NCQ8
AC2MCtXrly2bNlSdmuz8WlzHtcHBwdnLAOk4/nz53V0dPDEsbExcRMdWSwWf5GTSKTNmzeHhYWtW7fOx8envr6evwyioqI+/vhj+f5kUVFRRUXFV199hadwOBxxoUVCRmi0VXJesbGx586do1AoeNtF+rh+584dMzMzEF/cpSTrHxoampqaKjwA3NjYqKWlJSB1amqqcDMuNDR0enr65MmTwr7pf//7X25urq2trawdMGhw3cHBgT/x8OHDx48fj4iIEI7rkhWg0+k8Hk+kxzcyMvrmm280NDS+//77Ge9q9lpt3bpVeMqxm5tbTk6Oubn5tm3bUItQHPwBCR0pEKJIJNLk5CT6ra6u7unp2djY+MMPP9Dp9FdffbW+vl5kUDE1NU1ISDh06FBvb6+EKatKblBElbIAISEhPB6Pf5JmYWEhj8cTNwFNgrkxmUwnJye8gCRw4sQJcXOMlohbm41Pm/Px9XfffZd/jFCYgYEBHR2d1atXC6yh4+Li8tlnn4k85auvvhI3Mo0qrXjK8PAwmUym0WhyjDSw2eyDBw8ymUyBdC0tLVVVVXxtI2kyMjExiYyMnDHH+vp6Ozu77777jqhRHBBfJv3lHjNet26dsbGxcLr1S06ePClww9KMr4t8c72mpgbDMFtbW4GDZ1QgJCTk119/nWFJipfd4PMwvi4Mi8U68xKRC4ThA4coMOApqOoj8L9eeeWVzz//HP1+8eKFu7u7qalpZmYmSvn444/19fVFXn9kZMTc3PzVV1999uzZAjWoOSplIyMjU1NT/pQvvvhi1apV4p5qOcwN3JqAW5Pbp83T+Dpayfb58+d4taigoKCtrY1/YHJsbCw+Pl6gvr969Wpx9RFx6/W3tLSQyWQ7OzsMw5qamry8vDw8PNTU1GJiYgICAmStuERERJw4cUJNTU0g3cDAgM1m49MspcloeHh49erVM+b4+uuv5+bmFhQUREdHE1L3AvFl0l8+mExmW1ubyAFvCoVSW1vr5+cnuQ0qbnDd1NRUoNsQic/fvz17qZUBCoUyODjI5XI1NDQIvKynp+evv/6amZn5l7/8BW+NoB8dHR38b76htl1WVtaWLVsOHTo0MTGxyAxKbkZGRjo6OgS+R5Cfn29vb08ikdrb2/knnM+DuS16t6ZgnyZl/H/y5MmyZct8fX1R9dnHx6e4uDgiIgJV8FNTU9XU1PDprAJL/+zYsUM4vbW1FcOw3t5e4V3Ozs4oI5zTp0+///77wkc2NzdLXl67o6PDzc1N5C40wldbWytNRuhfk8lk6Rs0vb29RkZGg4ODs6/Ygvhy6C9HG1RfX//YsWPC6TY2NlZWVpJPFNleLyoqQl2RAune3t4Yhn366ados6WlBdXxJSug/O11Ho+3cePG0tJSCQfI2l6fnp5WUVHZvXs3/97t27ej9vqPP/7Y1taGp1++fLmoqIjH43V1denp6R09enQBGRQhpSzOKjkcjpqa2tmzZ/lFwDvnL126JNAmls/cFrpbm9GnyerW5PZpEtrrzc3Nkk1MtnVkL1++rKmpSafTo6Kinj9/jkzu9OnTkZGR69evF/cg3r9/X0NDA70mxE9cXJyKioq3t7dA+vfff29tbS3QxWFpaRkWFib8pKIWT1lZmcisu7u7bWxsoqOjRe5F03DS0tJmzAhRXFxMoVAkdDAK891LCLGBJS6+fPrLGquCgoLeeOMN/pS+vr6AgAD0HhePx2tsbETiSxnXT58+jZYxEEhHQ/jnzp3DDxOnQF5e3h0+3N3dz549y59SVVWlVHFdIAALg15WHh4e5n9KMQy7c+cOfz+niooKXu8xNTXdtGkT/sooGiXV0tLicDinT59G649OT09TqdS9e/fiF7l9+zaGYVeuXFF+gyKqlCVbpYuLy8GDB/FWh5ubG4lEQl3TX375JSHmtqDd2ow+TQ63JrdPExfXORwOat8LNIoEkOE7rQEBAWvXrj1w4ICxsfHrr78+MjJSVlYWFxf3xRdfPH78GHWVjI2NLV++nP+sHTt2cLncJ0+eCHyTrqCgICoqislkOjs7u7q6mpiYtLW1JScnGxsbFxcX83dxjIyM1NbWCi/rQSaTHRwcaDRaaWkp6mDB6e/vd3V1pdFoHA4nICBgcnIStZDw7p379+/X1dUtX77cx8cnOjo6KirqD3/4g7iM8InNO3fulKmD0c3NzdnZWe75liD+LPWXFT8/PwaDceLECRcXl66uru7ubj09vcDAwIGBgbfeeuv27dtlZWXh4eEzXofD4Xh4eLS3tz99+nT58uVUKjU9Pf3ixYvW1tbogFOnTqWlpdFotM7Ozp9++unw4cPipGYwGPxvzoyNjQ0PD/OnvPbaa8rTD89gMDZs2CBub3V1ta+vLyp6BwcHe3v78PDwffv2PXv2bMWKFX5+fvHx8cnJySEhIchxp6end3Z2+vv737lz55NPPvHw8Pjkk0/a29tHRkZ++eUXKyurDz74YOfOnerq6n/729+Sk5P7+vrQrG/0mlxwcLC6uvq3336bmpp6+PDhs2fPKq1BEVXKEqwSrd7j6uoaGxvL5XJramoiIiJWrFiRkZHx3//+9+jRo/NvbspWCpLVk8OtaWtrE+7T0E3W1dWtXbuWgH54/mkp+fn54eHhUVFRubm5K1euHBgYwPsHAgMDhU85cuTIpUuXhLso0Y+enp67d+9evXo1MjISX6qCn8zMTG1tbVRbF66kcLlcKpVKSOVRckZ79uyJj4+X9Zr19fUEVm+XrPhy6y9HG3R0dPTRo0f9/f0CM9qqq6unp6dlnTcnYZgmMTHxH//4R3Nzs5QKKH8/fHJyssh1ewihpaWlrKysp6cHb9PzN/oXh0ERUsoSrBKFQ/6Ri8ePHw8NDRFlbougFJTEp833d1oFoFAoCQkJPB7v6dOn27Ztw50UP+Xl5WvXrhXuNpGS4ODg9957j8fjNTQ0CPeq1dfXp6SkEFIMEjJqaGgwNDSc614pEF9cRnLrHxgYKGFeNFHIEddlVUBKj89mszEM41+LbT618vb2JrYiu9QMipBSnr1VKtbdKbYUlMSnKTiu6+npUSgUU1NT1Lso7rCjR4/++9//li+LkpKSnTt3/utf/woKChJeC/DChQtEfTdaQkaenp7oUVMqloj4Sqs/sXFdsgIzevzu7m5vb+8DBw6sWLHC1tb21KlTc7oIqEjs7e0JWTh5yRoUIaU8e6tUrLkpthSUxKcpOK4fOXIE9efv2bNHwgK8vb29lpaWnZ2d8uXCYrHwnhl+srKybty4QeAjJTKj7Ozs/fv3i3vLU4EsBfGVWX9i47oEBaRvySmQ4eHhffv28RYyijUoQkp59lapcHNTYCkoj09TcFyfnJzMzMzMycmZsZ5eVVXl4uJCbHVeZBcNsXR1dTk6Os5+JG8uWPTiK7n+hMf1Gfn5558lTNZVLHQ6XWnrHAvCoAgp5VlapTKYmwJLQXl82izjOglf4WEeaGlpGRsbe+uttxbQUhvZ2dlvv/32XM8LBfEXtP4GBgYFBQWSV8QEFhkL1KAWmbtbrDEFfcWO/+NG
MjGvcR0AFiUQ1+cONpsdHBxMIpHIZLKZmRmdTuf/gg4ALEpmGddVQUEAAJQTDofj6Ohob2//9ddfYxjm5OQ0MjICcR0AIK4DALAgCQ8Pb25ufvDgAdrkcrkiPxYOAAA/ZJAAAADlJD4+/sMPP0Sf2+FwOCUlJeI+KgoAAMR1AACUGi6XS6fTLS0t0WZVVdXExMSOHTtqa2tBHACAuA4AwELzTWSyiYmJiooK2rx27Zq5ubmWllZaWhqIAwASgPF1AACUlKioqNDQUCaT2dTUtH///qqqqmvXrtna2oIyACABeM8NAGaLgYGBv7//mjVr0Obu3bt1dHRAFkJgsVj9/f2GhoYYhk1NTY2PjyvVJ+wAgChoNFpLSwv6fevWrY0bN8J7bgCgMHbt2nXv3j1808LCAuI6USxbtgwFdQzD1F4CmgCLkocPH2ZkZOCbZmZm0F4HAAAAAADmzQEAAAAAxHUAAAAAACCuAwAAAAAAcR0AAAAAAIjrAAAAAABxHQAAAAAAiOsAAAAAAEBcBwAAAAAA4joAAAAAABDXAQAAAGCR8n8BAAD//6EMeMSBGdBhAAAAAElFTkSuQmCC)Also, the agent uses epsilon greedy method to deal with exploration-exploitation tradeoff:![q-learning-epsilon-greedy-1.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAR8AAAFDCAAAAAAmcFUdAAAWgElEQVR42u2dC1xU1b7HNxeQeMhoQJKoqAgqIymSyPFFRyEKjTRNT6KkzQ2v4g0fpPjoFGn54ohC6blpPiJNk8zATCWz6KU8j+JJMiMFjpoMiCA+YJjf/ew9w2OGmT2vvffMOOv/+SDOZj1++ztrr7X+67UpWI4VbZ8fEdDdgXLoHhAxf3uRRWiiLAVO1mwvsST9RFltM5pry06kS8Res7MIH4WVL/OK3HZV7eLVbZFey8oJH1TMd04q0/iXsiTn+RW2zmedQ3It8x/p4RUxQzwdKUfPITErDkuZa7XJDutsmk/x6ClM2ZGmh7tEpxw+V92Epupzh1OiXcLTGURlU0YX2y6fndQ2+teFeCouR+1POXFU/AX6P9uonbbKZ6U4ny47CaKUag1/rU4RJdBlKF+80jb5LIioAbDHc0mdlgB1Szz3AKiJWGCLfBY8R/8bH5zHEiYvOJ7+9dwC2+OzMgLA9XCJjmCS8OsAIlbaGp+d4hqgfEiKzoApQ8qBGvFO2+JTTOUD14ds0iPopiHXgXyq2Kb4jKYb9vAUvcKmhNPN/Ghb4rNuCl01S/QMLaEr6SnrbIdPhUMZsCdY7/DBe4Ayhwqb4TM/GZB65ukdPs9TCiTPtxU+5c61QMISA2IsSQBqnctthM+yJOCCqM6AGHWiC0DSMhvh41UGxKcYFCUlHijzsg0+WZGAlKo2KE41JQUis2yCz+xtQHqcgZHi0oFts22Cj9dVIDzHwEg54cBVL1vgUyQGpC4GR3ORAuIiG+CzXQIcjjY4WvRhQLLdBvjMTwdWpBgcLWUFkD7fBvhEnABiDqtc+jzYyyu6HM0fd7yo+gmHY4ATETbAJ6AMGHKu45Ur3fNxf8lIFEd1uChT+QScGwKUBdgAn+61gKdK7+dr+rbv/VHf220C8kP6B36Df8wZmkR/6tgD8gRqu9sAH4dmwLGp45X63jOO1QM4FAUM34v9A5Hu9TvzqYM1OQLNDjbAh+qc8c0VQY5RJQyRuzJcc0RGFNT5aIpnI+WHQbTao5Em8smYkSH2yJgJmy0/neufArrbJ3cuOxSFqkd+QaU9MmI78bGZ+qdz+/Wh37/Q9N7jTUfC5KWe92Wv292l+RwJk9tk+6Wh/7N1cLfHJp7H9T4+iPMNPjV2FM2H/mSL/R/Sfyb+F/HfyfgPGT+0UD5k/FnXA0bmL1iNzH+xG5k/1dVFJPPvrEbWb+gwsv5Hh5H1Y+xG1h/qMLJ+VYeR9c86bAFdrei3fn5sDGyPT47zCH33X0Q9VmZzfHLtv9R//07a8Ds2xucHt88M2f+19Hnb4lPoyaw+0H//4IwEW+JzoVdre63v/tOW0TbUf7484L12X0zP/ctXfDNthU9VUKrKcId++99/sD9tG3xqRqxRv6TX+QkHel62BT6NY1dpuqzH+Rupofcffj4tkUuNjrvohYefz3OmNNTTXnvY+UyXmBL7wciNDzefuJmmxf/d55OHmU+8yRXId3bfP7x8Xos2PY19vf94WPksG98EqzJh+bw5qh6Ej3YLvslBIrtd1eqfFumZ06euNFsrnw7yr3KQ3N7XxqnxqVu0uiJxYr218uFYfj0mqPGpGlZ0v/qbe9bKh3P56nzqN6yukN6XWysfzuWr87mZlFB2yXr5cC5fyWeG3wjF5xODfpE1nK21Vj6cy1cvP3mBP9T/+OQZa+XDsfwksWs/8S2VB/j4W2+8us9qyw+f8hUma6iu46lfLkT/kEf5D+H4BuEjpO0acM26+fwZzO+sXspTVs2n/i9v8pzDq7OtmE/T+OW8P2LPrrBePtECzDbcCnrfWvm8EC9EHX2++xfWyWdmHASxY25F1shHMl2oVv6DQTetj0+CgEsq/x5hdXyWRsqF44NX5lgZn1VjGwXtSD+92qr4rGFW7gpo1YP/aUV8UoP+I7QnVtL1S6vh896A34V3VXNE56yEz85e/zaHL79dXGMVfD72LIRZbGWUNfDJcvsBZrK4/7Z8Pl86fA2z2fi3LJ3PKcej5sODGwE7LZvPj13N+9K3QpfjlsynyCsT5rXPPS5YLp9feu+AuS1j6G1L5VMekAHz2/KJFsrn2hObYAkWO88i+dSGvg3LsHFrLJDP3XErLQQPqvrvtjw+Ty+BxdiZLl9bGp/nF8CCLIur7eBc8ZnxCizKtnC0HZwjPi+/BAuzpTEWxGfeFFiccbMdnBM+ic9aHh7ldvBLFWblU+9yCkj+6wML5IOr9HbwbrFm5bPfaR7eCrsNi7Qf7U8fc37CrHwmU6L1w/6EhdqBnuMo6r45+ThTzt1OWyidlosfBDhRolwz8jkuoqguLr5PWySfmC4uFEVRfzcjn3hagLtDoEXy+TrRjQYUbEY+XpSrffjHFru2Wb4j0J36r2az8cl3Dkq/AYu205OdvxGSz63CrM1LYyND/B5zc7Szs3N0e8wvJDJ26easwlsWRIU7lQbwKd2zOKKne/ALizZlnii8/GdDkxzypoY/LxeeyNy06IVg954Ri/eUmh0Nxyr15PP92mdEg2I3HWfrrVcc3xQ7SPTM2u/NxoYHlXrwufHhNPfQ17OleqUnzX491H3ah8LXSjyp1MVHuj3SfWamYasfb2bOdI/cLhUQDn8q2fkcndFl1hGjFB+Z1WWGUBPNfKpk4dOQOmjMB8YvImz8YMyg1Abe4fCsUiufa8td5/5kovaf5rou53f7Ee8qtfCRLnNYcoUD/VeWOCzjryISQKVmPuvdFnP1vV9b7LaeJzxCqNTE53PxTC4Pyy2
bKf6cBzrCqOzMp/rloVw3PEeHvlzNcZJCqezE51PvVTx82au8P+U0PcFUqvNZLOZnOPC0eDGHqQmnUpXP1fFz+JqKeDBn/FWOkhJSpQqfvN58ni24sXceJ+kIqrIjn88cD/HamTvk+BkHqQirsgOfTI9vefYFvvUwffWmwCrb+ezrkc+7s5TfY5+JKQitso1PtuisAK72WVG2SfEFV9nKp9A5F0JYrrMpe1eEV6nkcztwj0CDNXsCjZ+tN4NKJZ8ZyRDKkmcYHdUMKhV80idAOJuQbmREc6hk+FxyuCBgzhccLhkVzywqGT7TNkBI2zDNqGhmUUnzyRkKYW1ojhGRzKOS5jP6oMA5HzTmTYLmUUkB2WEQ2sIM7yWaSSUFRH8keM4fGX7KuplUUijzhvDmbejQsblUUlizyAw5LzJ0B5K5VFJ48jsz5PzdkwZGMJdKqsoD5jCPKoOCm00ldXCyWXKebFhrbTaV1LJ3zJLzO4a96d5sKqlJrGfjNVP2tKkOaaZpeMFQ88co8DMg5y8mGSSUVSW7yAI/WpxxMr+YRIlL2flUdr7YkY9M+bs4Cs2GzJGWig3iw6qSXWRzNS3OOJmlYspDqg+f1Bgg8v3CwMTR4tN01udH+Qdlo2ho7FPYG9AvvKK+t9uEAj+0rAoIiL2Nbv+M9NPxQEgNq3BZVbKLLPCjxRknU+pBOTTrw0c2/MSRsJYSu5M44o80Scvg/bjgXn3e9QBqHvkd8fNwKIouuJ8E35HPXAaPlajqwr5mqdnBID6sKtlFFvjR4oyT2exA2bHfBuXk5OTkBxQNGXQeJSKg2a46TXLZTQ6EHSl1bqHf14H9kcqMZ20Cjg+DRzHQvZz9ju0MwSO3M16kgo+RMu30LD/AqLFAiS8Al0tpkp/60B7RjtKeQMsboaH+E5QZP70LyPeBx29gfgQuP5pFKvgYJ7PZQc/6B0fHhR5FSVc57lM19FfTAoRml/oA+4fcQmZrxrM2AMeG68OHj/pHs0gFH+NkSj30bL/u+J3P73unxP4QMsVIk8gD96Hk0VqaT8Yk1E4YiSNh8gI/HBzW2DLtDX348NF+aRZZ4EeLM05mqVi//s87SxMByaIS/yUBA39QNA0BQ0+C5nNzpP9fz3ivvN7HR9Ew+Evu6MOHh/6PFpEFfrjex8c4mV9MMqj/XDLACvrPHIqk+8+GeDYlfpzlzJ//xaFI2v8yxDPmMGv+/Hcu+XhUkfEfHeM/ZPxQx/ghGX9mU0nmL9hVkvkvdpVk/pRdJZl/Z1dJ1m+wqyTrf9hVkvVj7CrJ+kN2lWT9KrtKsv6ZXSVZP8+ukuy/YFdJ9u+wqyT7v9hVkv2D7CrJ/lN2lWT/MrtKsv+dXSU5P4FdJTl/g10lOb+FXSU5/8eY83/I+VE6+ZDzx3TxIefXkfMPTTz/kDFyfqZuI+ev6jZyfq8epnKyMmUV5z9TAp3/zHFkwYyyzsiED+FD+BA+hA/hQ/gQPoQP4UP4ED6ED+FD+BA+hA/hQ/gQPoQP4aPBUh/ZAgpbHkm1aDYmqzSaT72jswf1qLNjvUXzMVml8YUvuQtFUV2SLfzhMlWl8XzqnSiKcqq3cD6mqqRM+mosvviYrJIy6aux+OJjskpj+FzKWjs3QuztakfZuXqLI+auzbpkgWC4UWkon8KNkzyozuYxaWOhBbHhTqVBfPIS+1HarV9inkXA4VSl/nykG8WULhNvlJoZDtcq9eVzaaE9pY/ZLzRnXcS9Sv34VMzvmLrbuIT0nOLKBhlkDVXFOekJ49w6/nl+hZno8KFSLz4pHb6VqNQCDSEKUqM6fDspZsHDi0o9+Bxtf6KjdtVpDVa3qz1z8VHB6fCkUjefxLYCm3RRR9CLSW1FOFFgPHyp1MWnMESZlGiNPnsFGtaIlOFDhOwP8adSB5+PWp/ppDo9pdYltT7fwh2IxaNKdj4prU90iQFqS1qfcKGqaT5VsvJZqEwizUDBacp4CwXBw6tKNj5zja5JWuuDuQLg4VclpTPjWTIjRMtmCQWIZ5WUrmK70kjdK4V5xPhWSemo9IyfnkgVopLmXaU2Ph8p4qWboD1dkQSfzTz/KrXwKbQ38Xtp/27s+esoCqBSC58Qk55q1ac7hDc+AqikWLyZWSbfwCxefTEhVGrkc1RBVGZyzjLFN8yPNy+ISo18FEMFXNQbhYqBBF74CKKS0tpopnFyE2m8NfLCqNTAp4JpFaI4ug3GDbTnfshVIJUa+ChGcUs4yrlEMdrLOR+BVHbmc0kxksLZjShGWrie1RBKJaXZoxHVcZZznYgPP0wolZ34SJnneg2Ht7KGeba5nTgUTGUnPhuZQW4uz6VpYIbDuT2zUDCVlMZeRRKnN5PEfR9IMJXqfPKYeuoipzlfZNLkcvGCcCopTT5NFLi1KK69MOFUqvNhlobs4jjnXczCEg4TFE4lpckTqeM45zrOPCXBVVIa2gWuC66y6HLXggmoUo3PJLbxuFtzfDwGbGnz6dTewpxbqT1nZoxuEmc3oq5ybX/Px//nXusnkaqQ3xQnRY12dHIddkw1nfb3MmtVqcaHWbVXoEXWq3MbUdrjKy18Yn7WfkMFzOo/zvioqdw37Bqqxy3XwmfzOgWfTMh2uzWp/Kk4SqdKqrNX46ZN1pj99OHLjfg1fGDIjzSf+tn+gduBC2E+Y8vXO/VnOR3cjUsfTF3l6nj6ODppqxxRJdYOHJTwALde7D345KlHeyQr+eA+dU0p/o+xA/q9y7yXWYdKVT5ZdIBxWkdJvLcyXY7gD5Df80GaBItmyaV9/yUPOowNURCzlB+MoxPO4oiPusqzrsk/0+/XVciBqPJIYJ1s8lYsTMTZbvcS2spPc8aIVvEJa1A3te5QlE6VqnzW0gEStArLnizyeVt21bUFePLbNAn6nQGS3roiApoa2Pkk0Amv5YhPJ5XnJb27zpUq5UBUOXc9cPQp9C0GatHKx0Xk1C0bSvFrw8+2AOp8NKhU5TNX12xSS17A+gJ7X19fz0NpErj6+Pp6J+b3UXT52fikcznZrEnlxZhnlHIgqpzo4evbKwQuzLun28uPvCwgWym+eV1Qj82d+GhQqcongg6g7eVcjR/LAaz7W6WotX72Y4anrri14MFFdj45dMIRHPFRV5lNny77s7dSDkSVEsWoa9+zQNn9dj5AYoJSPF2L+RSp89Ggkurs9hVrkSXr92YjyoPfw/D9uPlSQ5oEi+fJmxYXysW7sSUKw75iuaViLl1UdZWzn7uGWy9PU8qBqPKLkHr8324seKWlqPu9Rcvb+PwWsEMp/m9f4d7ggiNhcl0qVfl40wG0vrn18hSvbv7vyvHrU37+25n2K87PN6EJ50N7jC1HijvLg1lJJ8zVKyjVVTb+b69uvV6taZUjqsQ7Af2i/oPaqd4DT+Bk15nK/o9Tn7+3is8P6dv/Tfq9zLpUqvJxpQPwcSZxA52wK0eJCalSlQ8zKifjIWcZMzrHUWJCqlTlw3ho4MO4TFlIlaT8GFJ+SP3DXv8wLUMlVtk7OTkHaXqT2fPaXlC1tHtFR09+olq4Ks7bL4
UTmku/zFwSo0dhqvF6VcVvd9Uw3FClq/1S9ixWSQBZlssN/fnIAta929GTv3VPmP4PzeftUXf1iJTxdv97Hf32P1v06qVp7D+vYoYufL5Hfkj/wG+w5ZWZI0Ou4I+w/i8+m4nzo/yDsrFlzktB0TnP9FeuSzsaV0Wn2+bJT2wLp4jLV/851w97AmvQmk3NdP+BKYjdgRvUSRQFKa8yNuL3+AOAfHXf3qmM3+5a2bIqICD2NiaviQ6aLNOv/6z0bJjyc0BUi+F7sX8gMjyuY95KTE/GeafMlsH7ccG9OuPxW7JeEnw3UBFxWi7G56Pdk5/YFk4Rly//K9cv15d+sJXZzIvH7YHHPnwFh0LfxNZFyqu0/Xs08qKBrJF3a3x+pv0K18pPgu/IZy7D1PAmWeBJ/fwvpWe8qotI5DAiD7grwzVHZMQAW+PQoxgIy7zsJgfCjmRMA8YcRJU7E6+2Xwt2LUS7Jz+xLZwiLl/+e67I35Mec1Fm06sASF5UPhiv7Y3AiznKq7S9vh3yATcwZzNwW6bgM2sTcHwYpm4GpuzSz39XjqzQ5ScpFsAnY0aG2CMjFvSP4x9AdOZPtLcevSPjZSA8B9cVtf37TiKRu2dTuyc/sS2cIi5f4z+5j1ZmDG5Aq8Qu5cDGWPSrfbJeLOtzW3mVrh97uotETv9ANANCwefpXUC+D6buAPOjz/iPcmSO5lPTPR9Vj/yCyjY+j5UAIZmX3VqA0GxVPqG07x7zebsnP7EtnCIuX+OHdP08/cU2Pr3OAsuWYM6BEMzI+gva+RyjPfXioZizAaisUZafDcCx4R356Bw/VI7sMvXzu2NQ6nlf9rrdXWU2zyfjTJdMeeA+lDxaq8Lnl8dpP3jXlHZPfmJbOCYub+PPNJ96/9RWEvPjUet3CnvDErB11OoOfKZvp//1PffpE7dr+/9E++2ulQeHNbZMe6MDH93jz8qZAYZP4+OHEOcbfGrsKGU2v4b0fGHqbpwfFTD0JFT4LJ9H/1vtLG3z5Jn2iwnHxOVt/oLp/5zr+q1SYu10/4GpQAW1H/nUN+18bjkzLyBKXNqyopfPRsZvZ9ovf8mdDnz0mL8g81/s819k/pR9/pTMv4OdD1m/wc6HrP9h50PWj+ngQ9YfsvMh61fZ+ZD1zzr4kPXz7HzI/gsdfMj+HXY+ZP+XDj5k/6AOPmT/KTsfsn9ZBx+y/10HH3J+Ajsfcv6GDj7k/BYdfMj5Pzr4kPOjQM4fM+X8MXJ+HTn/0LTzD8n5meT8VZjGh5zfq9tPJuc/66oAyfnh7KbpZPcq5mT3Sgs/f95UleT9BdzwIe+/0MPI+1N09zTI+3d0P+XKNyPZU/ZW8P4m01T+P0ngXxkteNZVAAAAAElFTkSuQmCC)
###Code
class Agent:
def __init__(self):
self.actions = ["up", "down", "left", "right"] # Four possible movement for agent
self.env = Environment(BOARD_ROWS,BOARD_COLS) # Defining environment for agent
self.env.setWinState(WIN_STATE[0],WIN_STATE[1])
self.env.setStart(START[0],START[1])
self.env.setHoles(HOLES)
self.state_size,self.action_size = BOARD_ROWS*BOARD_COLS,len(self.actions) # Defining state and action space
self.qtable = np.zeros((self.state_size,self.action_size)) # Defining Q table for policy learning
self.rewards = [] # To store rewards per episode
def printTable(self):
# Utility fucntion to print Q learning table
print("------------------- Q LEARNING TABLE ------------------")
print(self.qtable)
print("-------------------------------------------------------")
def printPath(self):
rows,cols = self.env.getSize()
data = np.ones((rows,cols))*150 # Create a matrix to display in heatmap
for hole in self.env.getHoles():
data[hole[0],hole[1]] = 300 # Mark all the holes to represent in heatmap
START = self.env.getStart()
state = State(START[0],START[1])
while True:
print("::: ",state.getCoordinates())
coerd = state.getCoordinates()
data[coerd[0],coerd[1]] = 50 # Mark the movement path to represent in heatmap
if state.getCoordinates()[0]==self.env.getWinState()[0] and state.getCoordinates()[1]==self.env.getWinState()[1]:
break
old_state = state.conversion()
action = np.argmax(self.qtable[old_state, :]) # Perform action which gives maximum Q value
nextstate = state.nxtCordinates(self.actions[action]) # Get coordinates of next state
state = State(nextstate[0],nextstate[1]) # Update the state for next cycle
hm = sn.heatmap(data = data,linewidths=1,linecolor="black",cmap='Blues',cbar=False)
plt.show() # displaying the plotted heatmap
def q_learning(self):
# Q-learning, which is said to be an off-policy temporal difference (TD) control algorithm
START = self.env.getStart() # reset the environment
for episode in range(total_episodes):
state = State(START[0],START[1])
total_rewards = 0 # total reward collected per episode
for step in range(max_steps):
exp_exp_tradeoff = random.uniform(0, 1) # First we randomize a number
old_state = state.conversion()
if exp_exp_tradeoff > epsilon: # If this number > greater than epsilon --> exploitation (taking the biggest Q value for this state)
action = np.argmax(self.qtable[old_state,:])
else: # Else doing a random choice --> exploration
action = random.randint(0,len(self.actions)-1)
nextState = state.nxtCordinates(self.actions[action])
new_state = State(nextState[0],nextState[1]).conversion()
reward = state.getReward()
total_rewards += reward # Capture reard collected in this step in overall reward of episode
# Update Q(s,a):= Q(s,a) + lr [R(s,a) + gamma * max Q(s',a') - Q(s,a)] : Q learning equation
self.qtable[old_state, action] = self.qtable[old_state, action] + learning_rate * (reward + gamma * np.max(self.qtable[new_state, :]) - self.qtable[old_state, action])
state = State(nextState[0],nextState[1]) # Update the state
#epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-decay_rate*episode) # Epsilon can be resuce with time to reduce exploration and focus on exploitation
self.rewards.append(total_rewards)
def plotReward(self):
# Utility function to plot Reward collected wrt to episodes
plt.figure(figsize=(12,5))
plt.plot(range(total_episodes),self.rewards,color='red')
plt.xlabel('Episodes')
plt.ylabel('Total Reward per Epidode')
plt.show()
###Output
_____no_output_____
###Markdown
Learn Q Values and display them
###Code
ag = Agent()
ag.q_learning()
ag.printTable()
###Output
------------------- Q LEARNING TABLE ------------------
[[ 32.61625379 33.7513931 32.61625379 37.3513931 ]
[ 37.3513931 42.612659 32.61625379 42.612659 ]
[ 36.50233035 48.45851 37.3513931 42.22485186]
[ 15.47584594 50.93707266 -2.21097188 -2.29187813]
[ -2.26219063 46.10034375 -2.18453438 -2.26219063]
[ 28.55093884 38.6116049 29.74702129 38.612659 ]
[ 37.3513931 48.45851 33.7513931 48.45851 ]
[ 42.55658871 54.9539 42.60177656 51.35389251]
[ -5.3875 58.171 38.36354 42.6345 ]
[ 11.79312187 70.19 24.86445 55.9892888 ]
[ 33.75137587 36.57684747 41.91148947 48.45851 ]
[ 42.612659 51.3539 42.612659 54.9539 ]
[ 48.45851 62.171 48.45851 62.171 ]
[ 51.3539 70.19 54.9539 70.19 ]
[ 62.171 79.1 62.171 70.19 ]
[ 42.61256306 1.40807818 17.73717535 -3.125 ]
[ 44.45851 54.931 33.29255777 58.171 ]
[ 54.9539 66.59 51.3539 70.19 ]
[ 62.171 79.1 62.171 79.1 ]
[ 70.19 89. 70.19 79.1 ]
[ 28.10236456 -2.26219063 1.33059381 -2.54190922]
[ 22.60582499 55.02157623 6.76307184 66.59 ]
[ 58.17099953 62.59 54.93099988 75.1 ]
[ 70.19 79.1 66.59 89. ]
[ 90.1 100. 90.1 100. ]]
-------------------------------------------------------
###Markdown
Display the learned path by the agent
###Code
ag.printPath()
# The below heatmap represent the path of agent to reach destination from source.
# The dark blue cells represents hole with light blue as ice, the white cell represnts path of agent
###Output
::: (0, 0)
::: (0, 1)
::: (1, 1)
::: (2, 1)
::: (2, 2)
::: (3, 2)
::: (3, 3)
::: (4, 3)
::: (4, 4)
###Markdown
Plot of Rewards vs Episodes
###Code
ag.plotReward()
###Output
_____no_output_____ |
R/35_Cell_Segmentation.ipynb | ###Markdown
Cell Segmentation Segmentation of cells in fluorescent microscopy is a relatively common imagecharacterisation task with variations that are dependent on the specifics of fluorescentmarkers for a given experiment. A typical procedure might include1. Histogram-based threshold estimation to produce a binary image.1. Cell splitting (separating touching cells) using distance transforms and a watershed transform.1. Refinement of initial segmentation using information from other channels.1. Cell counting/characterisation.This example demonstrates the procedure on a 3 channel fluorescent microscopy image. The blue channelis a DNA marker (DAPI) that stains all cells, the red channel is a marker of cell death (Ph3)while the green channel is a marker of cell proliferation (Ki67). A typical experiment might count thenumber of cells and measure size in the different states, where states are determined by presenceof Ph3 and Ki67, various times after treatment with a drug candidate. AcknowledgementsThe image used in this example is part of the data distributed with the [Fiji training notes](http://imagej.net/User_Guides) by C. Nowell and was contributed by Steve Williams, Peter MacCallum Cancer Centre. Cell segmentation and splitting Histogram-based threshold estimation is performed by the segChannel function, listed below.It applies light smoothing followed by the Lithreshold estimator, one of a range of threshold estimation options availablein SimpleITK. A cell splitting procedure usingdistance transforms and a marker-based watershed (implemented by segBlobs, also listed below) was then applied tothe resulting mask. Distance transforms replace each foreground pixel with the distance to theclosest background pixel, producing a cone-shaped brightness profile for each circular object. Touchingcells can then be separated using the peaks of the cones as markers in a watershed transform.A marker image is created by identifying peaks in the distance transform and applying a connected-component labelling.The inverted distance transform is used as the control image for the watershed transform. Load and displayMicroscopes use many variants of the tiff format. This one is recognised as 3D by the SimpleITK readers so we extractslices and recompose as a color image.
###Code
library(SimpleITK)
## set up viewing tools
source("viewing.R")
# Utility method that either downloads data from the Girder repository or
# if already downloaded returns the file name for reading from disk (cached data).
source("downloaddata.R")
# this is to do with size of display in Jupyter notebooks
if (!exists("default.options"))
{
default.options <- options()
}
cntrl <- ReadImage(fetch_data("Control.tif"))
## Extract the channels
red <- cntrl[ , , 1]
green <- cntrl[ , , 2]
blue <- cntrl[ , , 3]
cntrl.colour <- Compose(red, green, blue)
###Output
_____no_output_____
###Markdown
Display the image
###Code
show_inline(cntrl.colour, Dwidth=grid::unit(15, "cm"))
###Output
_____no_output_____
###Markdown
Set up the functions that do segmentation and blob splitting for a channel (i.e. separately for red,green blue)
###Code
segChannel <- function(dapi, dtsmooth=3, osmooth=0.5)
{
# Smoothing
dapi.smooth <- SmoothingRecursiveGaussian(dapi, osmooth)
# A thresholding filter - note the class/method interface
th <- LiThresholdImageFilter()
th$SetOutsideValue(1)
th$SetInsideValue(0)
B <- th$Execute(dapi.smooth)
# Call blob splitting with the thresholded image
g <- splitBlobs(B, dtsmooth)
return(list(thresh=B, labels=g$labels, peaks=g$peaks, dist=g$dist))
}
splitBlobs <- function(mask, smooth=1)
{
# Distance transform - replaces each voxel
# in a binary image with the distance to the nearest
# voxel of the other class. Circular objects
# end up with a conical brightness profile, with
# the brightest point in the center.
DT <- DanielssonDistanceMapImageFilter()
DT$UseImageSpacingOn()
distim <- DT$Execute(!mask)
# Smooth the distance transform to avoid peaks being
# broken into pieces.
distimS <- SmoothingRecursiveGaussian(distim, smooth, TRUE)
distim <- distimS * Cast(distim > 0, 'sitkFloat32')
# Find the peaks of the distance transform.
peakF <- RegionalMaximaImageFilter()
peakF$SetForegroundValue(1)
peakF$FullyConnectedOn()
peaks <- peakF$Execute(distim)
# Label the peaks to use as markers in the watershed transform.
markers <- ConnectedComponent(peaks, TRUE)
# Apply the watershed transform from markers to the inverted distance
# transform.
WS <- MorphologicalWatershedFromMarkers(-1 * distim, markers, TRUE, TRUE)
# Mask the result of watershed (which labels every pixel) with the nonzero
# parts of the distance transform.
WS <- WS * Cast(distim > 0, WS$GetPixelID())
return(list(labels=WS, dist=distimS, peaks=peaks))
}
###Output
_____no_output_____
###Markdown
Segment each channel
###Code
dapi.cells <- segChannel(blue, 3)
ph3.cells <- segChannel(red, 3)
Ki67.cells <- segChannel(green, 3)
###Output
_____no_output_____
###Markdown
The DAPI channel provides consistent staining, while the other stains may only occupy parts of a nucleus. We therefore combine DAPI information with Ph3 and Ki67 to produce good segmentations of cells with those markers.
###Code
# Create a mask of DAPI stain - cells are likely to be reliably segmented.
dapi.mask <- dapi.cells$labels !=0
# Mask of cells from other channels, which are likely to be less reliable.
ph3.markers <- ph3.cells$thresh * dapi.mask
Ki67.markers <- Ki67.cells$thresh * dapi.mask
# Perform a geodesic reconstruction using the unreliable channels as seeds.
ph3.recon <- BinaryReconstructionByDilation(ph3.markers, dapi.mask)
Ki67.recon <- BinaryReconstructionByDilation(Ki67.markers, dapi.mask)
###Output
_____no_output_____
###Markdown
Now we view the results
###Code
sx <- 1:550
sy <- 1450:2000
r1 <- red[sx, sy]
g1 <- green[sx, sy]
b1 <- blue[sx, sy]
colsub <- Compose(r1, g1, b1)
dapisub <- dapi.cells$thresh[sx, sy] == 0
dapisplit <- dapi.cells$labels[sx, sy] == 0
###Output
_____no_output_____
###Markdown
A subset of the original - note speckled pattern of red stain in some cells
###Code
show_inline(colsub, pad=TRUE)
###Output
_____no_output_____
###Markdown
Segmentation of DAPI channel without splitting - note touching cells on mid right that get separated by splitting process.
###Code
show_inline(dapisub, pad=TRUE)
show_inline(dapisplit, pad=TRUE)
###Output
_____no_output_____
###Markdown
Lets check the segmentation of the Ph3 (red) channel. Note that the simple segmentation does not always include complete cells (see lower right)
###Code
show_inline(ph3.cells$thresh[sx, sy]==0, pad=TRUE)
###Output
_____no_output_____
###Markdown
After geodesic reconstruction the incomplete cells match the DAPI channel segmentation.
###Code
ph3sub <- ph3.recon[sx, sy]==0
show_inline(ph3sub, pad=TRUE)
###Output
_____no_output_____
###Markdown
Characterization and countingImage segmentations can lead to quantitative measures such as counts and shape statistics(e.g., area, perimeter etc). Such measures can be biased by edge effects, so it is useful toknow whether the objects are touching the image edge. The classes used for these steps inSimpleITK are ConnectedComponentImageFilter and LabelShapeStatisticsImageFilter.The former produces a _labelled_ image, in which each binary connected component is givena different integer voxel value. Label images are used in many segmentation contexts, includingthe cell splitting function illustrated earlier. The latter produces shape measures perconnected component. The function below illustrates extraction of centroids, areas andedge touching measures.Cell counts are also available from the table dimensions.
###Code
# Function to extract the relevant statistics from the labelled images
getCellStats <- function(labelledim)
{
# create a statistics filter to measure characteristics of
# each labelled object
StatsFilt <- LabelShapeStatisticsImageFilter()
StatsFilt$Execute(labelledim)
objs <- StatsFilt$GetNumberOfLabels()
## create vectors of each measure
areas <- sapply(1:objs, StatsFilt$GetPhysicalSize)
boundarycontact <- sapply(1:objs, StatsFilt$GetNumberOfPixelsOnBorder)
centroid <- t(sapply(1:objs, StatsFilt$GetCentroid))
# place results in a data frame
result <- data.frame(Area=areas, TouchingImageBoundary=boundarycontact,
Cx=centroid[, 1], Cy=centroid[, 2])
return(result)
}
## Label the cell masks
ph3.recon.labelled <- ConnectedComponent(ph3.recon)
Ki67.recon.labelled <- ConnectedComponent(Ki67.recon)
## Collect the measures
dapistats <- getCellStats(dapi.cells$labels)
ph3stats <- getCellStats(ph3.recon.labelled)
ki67stats <- getCellStats(Ki67.recon.labelled)
## begin creating a data frame for plotting
dapistats$Stain <- "dapi"
ph3stats$Stain <- "ph3"
ki67stats$Stain <- "ki67"
# Put the data frames together
cellstats <- rbind(dapistats, ph3stats, ki67stats)
cellstats$Stain <- factor(cellstats$Stain)
# Remove cells touching the image boundary
cellstats.no.boundary <- subset(cellstats, TouchingImageBoundary == 0)
###Output
_____no_output_____
###Markdown
Once the data has been collected it can be used for plotting, statistical tests, etc:
###Code
# Reset the plot options after dealing with images.
options(default.options)
library(ggplot2)
ggplot(cellstats.no.boundary, aes(x=Area, group=Stain, colour=Stain, fill=Stain)) +
geom_histogram(position="identity", alpha=0.4, bins=30) + ylab("Cell count") + xlab("Nucleus area")
###Output
_____no_output_____
###Markdown
Cell Segmentation Segmentation of cells in fluorescent microscopy is a relatively common imagecharacterisation task with variations that are dependent on the specifics of fluorescentmarkers for a given experiment. A typical procedure might include1. Histogram-based threshold estimation to produce a binary image.1. Cell splitting (separating touching cells) using distance transforms and a watershed transform.1. Refinement of initial segmentation using information from other channels.1. Cell counting/characterisation.This example demonstrates the procedure on a 3 channel fluorescent microscopy image. The blue channelis a DNA marker (DAPI) that stains all cells, the red channel is a marker of cell death (Ph3)while the green channel is a marker of cell proliferation (Ki67). A typical experiment might count thenumber of cells and measure size in the different states, where states are determined by presenceof Ph3 and Ki67, various times after treatment with a drug candidate. AcknowledgementsThe image used in this example is part of the data distributed with the [Fiji training notes](http://imagej.net/User_Guides) by C. Nowell and was contributed by Steve Williams, Peter MacCallum Cancer Centre. Cell segmentation and splitting Histogram-based threshold estimation is performed by the segChannel function, listed below.It applies light smoothing followed by the Lithreshold estimator, one of a range of threshold estimation options availablein SimpleITK. A cell splitting procedure usingdistance transforms and a marker-based watershed (implemented by segBlobs, also listed below) was then applied tothe resulting mask. Distance transforms replace each foreground pixel with the distance to theclosest background pixel, producing a cone-shaped brightness profile for each circular object. Touchingcells can then be separated using the peaks of the cones as markers in a watershed transform.A marker image is created by identifying peaks in the distance transform and applying a connected-component labelling.The inverted distance transform is used as the control image for the watershed transform. Load and displayMicroscopes use many variants of the tiff format. This one is recognised as 3D by the SimpleITK readers so we extractslices and recompose as a color image.
###Code
library(SimpleITK)
## set up viewing tools
source("viewing.R")
# Utility method that either downloads data from the MIDAS repository or
# if already downloaded returns the file name for reading from disk (cached data).
source("downloaddata.R")
# this is to do with size of display in Jupyter notebooks
if (!exists("default.options"))
{
default.options <- options()
}
cntrl <- ReadImage(fetch_data("Control.tif"))
## Extract the channels
red <- cntrl[ , , 1]
green <- cntrl[ , , 2]
blue <- cntrl[ , , 3]
cntrl.colour <- Compose(red, green, blue)
###Output
_____no_output_____
###Markdown
Display the image
###Code
show_inline(cntrl.colour, Dwidth=grid::unit(15, "cm"))
###Output
_____no_output_____
###Markdown
Set up the functions that do segmentation and blob splitting for a channel (i.e. separately for red,green blue)
###Code
segChannel <- function(dapi, dtsmooth=3, osmooth=0.5)
{
# Smoothing
dapi.smooth <- SmoothingRecursiveGaussian(dapi, osmooth)
# A thresholding filter - note the class/method interface
th <- LiThresholdImageFilter()
th$SetOutsideValue(1)
th$SetInsideValue(0)
B <- th$Execute(dapi.smooth)
# Call blob splitting with the thresholded image
g <- splitBlobs(B, dtsmooth)
return(list(thresh=B, labels=g$labels, peaks=g$peaks, dist=g$dist))
}
splitBlobs <- function(mask, smooth=1)
{
# Distance transform - replaces each voxel
# in a binary image with the distance to the nearest
# voxel of the other class. Circular objects
# end up with a conical brightness profile, with
# the brightest point in the center.
DT <- DanielssonDistanceMapImageFilter()
DT$UseImageSpacingOn()
distim <- DT$Execute(!mask)
# Smooth the distance transform to avoid peaks being
# broken into pieces.
distimS <- SmoothingRecursiveGaussian(distim, smooth, TRUE)
distim <- distimS * Cast(distim > 0, 'sitkFloat32')
# Find the peaks of the distance transform.
peakF <- RegionalMaximaImageFilter()
peakF$SetForegroundValue(1)
peakF$FullyConnectedOn()
peaks <- peakF$Execute(distim)
# Label the peaks to use as markers in the watershed transform.
markers <- ConnectedComponent(peaks, TRUE)
# Apply the watershed transform from markers to the inverted distance
# transform.
WS <- MorphologicalWatershedFromMarkers(-1 * distim, markers, TRUE, TRUE)
# Mask the result of watershed (which labels every pixel) with the nonzero
# parts of the distance transform.
WS <- WS * Cast(distim > 0, WS$GetPixelID())
return(list(labels=WS, dist=distimS, peaks=peaks))
}
###Output
_____no_output_____
###Markdown
Segment each channel
###Code
dapi.cells <- segChannel(blue, 3)
ph3.cells <- segChannel(red, 3)
Ki67.cells <- segChannel(green, 3)
###Output
_____no_output_____
###Markdown
The DAPI channel provides consistent staining, while the other stains may only occupy parts of a nucleus. We therefore combine DAPI information with Ph3 and Ki67 to produce good segmentations of cells with those markers.
###Code
# Create a mask of DAPI stain - cells are likely to be reliably segmented.
dapi.mask <- dapi.cells$labels !=0
# Mask of cells from other channels, which are likely to be less reliable.
ph3.markers <- ph3.cells$thresh * dapi.mask
Ki67.markers <- Ki67.cells$thresh * dapi.mask
# Perform a geodesic reconstruction using the unreliable channels as seeds.
ph3.recon <- BinaryReconstructionByDilation(ph3.markers, dapi.mask)
Ki67.recon <- BinaryReconstructionByDilation(Ki67.markers, dapi.mask)
###Output
_____no_output_____
###Markdown
Now we view the results
###Code
sx <- 1:550
sy <- 1450:2000
r1 <- red[sx, sy]
g1 <- green[sx, sy]
b1 <- blue[sx, sy]
colsub <- Compose(r1, g1, b1)
dapisub <- dapi.cells$thresh[sx, sy] == 0
dapisplit <- dapi.cells$labels[sx, sy] == 0
###Output
_____no_output_____
###Markdown
A subset of the original - note speckled pattern of red stain in some cells
###Code
show_inline(colsub, pad=TRUE)
###Output
_____no_output_____
###Markdown
Segmentation of DAPI channel without splitting - note touching cells on mid right that get separated by splitting process.
###Code
show_inline(dapisub, pad=TRUE)
show_inline(dapisplit, pad=TRUE)
###Output
_____no_output_____
###Markdown
Lets check the segmentation of the Ph3 (red) channel. Note that the simple segmentation does not always include complete cells (see lower right)
###Code
show_inline(ph3.cells$thresh[sx, sy]==0, pad=TRUE)
###Output
_____no_output_____
###Markdown
After geodesic reconstruction the incomplete cells match the DAPI channel segmentation.
###Code
ph3sub <- ph3.recon[sx, sy]==0
show_inline(ph3sub, pad=TRUE)
###Output
_____no_output_____
###Markdown
Characterization and countingImage segmentations can lead to quantitative measures such as counts and shape statistics(e.g., area, perimeter etc). Such measures can be biased by edge effects, so it is useful toknow whether the objects are touching the image edge. The classes used for these steps inSimpleITK are ConnectedComponentImageFilter and LabelShapeStatisticsImageFilter.The former produces a _labelled_ image, in which each binary connected component is givena different integer voxel value. Label images are used in many segmentation contexts, includingthe cell splitting function illustrated earlier. The latter produces shape measures perconnected component. The function below illustrates extraction of centroids, areas andedge touching measures.Cell counts are also available from the table dimensions.
###Code
# Function to extract the relevant statistics from the labelled images
getCellStats <- function(labelledim)
{
# create a statistics filter to measure characteristics of
# each labelled object
StatsFilt <- LabelShapeStatisticsImageFilter()
StatsFilt$Execute(labelledim)
objs <- StatsFilt$GetNumberOfLabels()
## create vectors of each measure
areas <- sapply(1:objs, StatsFilt$GetPhysicalSize)
boundarycontact <- sapply(1:objs, StatsFilt$GetNumberOfPixelsOnBorder)
centroid <- t(sapply(1:objs, StatsFilt$GetCentroid))
# place results in a data frame
result <- data.frame(Area=areas, TouchingImageBoundary=boundarycontact,
Cx=centroid[, 1], Cy=centroid[, 2])
return(result)
}
## Label the cell masks
ph3.recon.labelled <- ConnectedComponent(ph3.recon)
Ki67.recon.labelled <- ConnectedComponent(Ki67.recon)
## Collect the measures
dapistats <- getCellStats(dapi.cells$labels)
ph3stats <- getCellStats(ph3.recon.labelled)
ki67stats <- getCellStats(Ki67.recon.labelled)
## begin creating a data frame for plotting
dapistats$Stain <- "dapi"
ph3stats$Stain <- "ph3"
ki67stats$Stain <- "ki67"
# Put the data frames together
cellstats <- rbind(dapistats, ph3stats, ki67stats)
cellstats$Stain <- factor(cellstats$Stain)
# Remove cells touching the image boundary
cellstats.no.boundary <- subset(cellstats, TouchingImageBoundary == 0)
###Output
_____no_output_____
###Markdown
Once the data has been collected it can be used for plotting, statistical tests, etc:
###Code
# Reset the plot options after dealing with images.
options(default.options)
library(ggplot2)
ggplot(cellstats.no.boundary, aes(x=Area, group=Stain, colour=Stain, fill=Stain)) +
geom_histogram(position="identity", alpha=0.4, bins=30) + ylab("Cell count") + xlab("Nucleus area")
###Output
_____no_output_____ |
keras/keras_basic_cnn_visualization.ipynb | ###Markdown
CNN for Digit Recognition **Run with the Theano backend for Keras** - modify `keras.json` (set `"backend": "theano"`) as described at https://keras.io/backend/ 1. load data
###Code
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digit = load_digits()
data_x = digit.data
data_y = digit.target
x_train, x_test, y_train, y_test = train_test_split(data_x, data_y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
2. data preprocess
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers import Convolution2D as Conv2D
from keras.layers import MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 1
# input image dimensions
img_rows, img_cols = 8, 8
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
# Note: sklearn's load_digits() pixel values range from 0 to 16 (not 0-255),
# so this division scales them to [0, 0.063] rather than [0, 1].
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
###Output
x_train shape: (1437, 8, 8, 1)
1437 train samples
360 test samples
###Markdown
3. CNN model
###Code
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
input_shape=input_shape))
convout1 = Activation('relu')
model.add(convout1)
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
4. train & test
###Code
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Train on 1437 samples, validate on 360 samples
Epoch 1/1
1437/1437 [==============================] - 462s - loss: 2.3010 - acc: 0.1287 - val_loss: 2.3000 - val_acc: 0.3056
Test loss: 2.30000511805
Test accuracy: 0.305555555556
###Markdown
6. Parameter visualization 6.1 Visualize the Conv layer kernel weights
###Code
# Visualize weights
import numpy as np
W = model.layers[0].kernel.get_value(borrow=True)
W = np.squeeze(W)
print("W shape : ", W.shape)
###Output
W shape : (3, 3, 32)
###Markdown
**32 filters, each a 3x3 matrix**
###Code
W[:,:,0]# image 0
%matplotlib inline
import matplotlib.pyplot as plt
for i in range(32):
sub = plt.subplot(4,8,i+1)
plt.axis('off')
sub.imshow(W[:,:,i], cmap=plt.cm.gray)
###Output
_____no_output_____
###Markdown
6.2 Visualize Conv layer output
###Code
# theano is needed here because this cell calls a backend-specific function (Theano backend assumed above).
import theano
convout1_f = theano.function(model.inputs, [convout1.output])
C1 = convout1_f([x_train[0]])
C1 = np.squeeze(C1)
print("C1 shape : ", C1.shape)
###Output
C1 shape : (6, 6, 32)
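###Markdown
As a side note (not part of the original tutorial), the Theano-specific `theano.function` call above can be swapped for a backend-agnostic sketch using `keras.backend.function`. The snippet below assumes only the `model`, `convout1` and `x_train` objects defined earlier; it is an illustration of the same idea, not the author's code.
###Code
# Backend-agnostic alternative for extracting the first conv activation
# (assumes model, convout1 and x_train from the cells above).
from keras import backend as K
get_conv1_output = K.function([model.input], [convout1.output])
C1_alt = np.squeeze(get_conv1_output([x_train[:1]])[0])
print("C1_alt shape : ", C1_alt.shape)
###Output
_____no_output_____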
###Markdown
**32 feature maps, each a 6x6 matrix**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
for i in range(32):
sub = plt.subplot(4,8,i+1)
plt.axis('off')
sub.imshow(C1[:,:,i], cmap=plt.cm.gray)
###Output
_____no_output_____ |
Modelo Inicial No Usar/3. Credit Risk Modeling - Monitoring - With Comments.ipynb | ###Markdown
Import Libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Import Data
###Code
# Import Train and Test Data.
loan_data_inputs_train = pd.read_csv('loan_data_inputs_train.csv', index_col = 0)
loan_data_targets_train = pd.read_csv('loan_data_targets_train.csv', index_col = 0, header = None)
loan_data_inputs_test = pd.read_csv('loan_data_inputs_test.csv', index_col = 0)
loan_data_targets_test = pd.read_csv('loan_data_targets_test.csv', index_col = 0, header = None)
# Here we import the new data.
loan_data_backup = pd.read_csv('loan_data_2015.csv')
###Output
_____no_output_____
###Markdown
Explore Data
###Code
loan_data = loan_data_backup.copy()
pd.options.display.max_columns = None
#pd.options.display.max_rows = None
# Sets the pandas dataframe options to display all columns/ rows.
loan_data.head()
loan_data.info()
###Output
_____no_output_____
###Markdown
*** Population Stability Index: Preprocessing >>> The code from here to the other line starting with '>>>' is copied from the Data Preparation notebook, with minor adjustments. We have to perform the exact same data preprocessing, fine-classing, and coarse classing on the new data, in order to be able to calculate statistics for the exact same variables as the ones we used for training and testing the PD model. Preprocessing a few continuous variables General Preprocessing
###Code
loan_data['emp_length'].unique()
loan_data['emp_length_int'] = loan_data['emp_length'].str.replace('\+ years', '')
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace('< 1 year', str(0))
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace('n/a', str(0))
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace(' years', '')
loan_data['emp_length_int'] = loan_data['emp_length_int'].str.replace(' year', '')
type(loan_data['emp_length_int'][0])
loan_data['emp_length_int'] = pd.to_numeric(loan_data['emp_length_int'])
type(loan_data['emp_length_int'][0])
###Output
_____no_output_____
###Markdown
Earliest credit line
###Code
loan_data['earliest_cr_line']
loan_data['earliest_cr_line_date'] = pd.to_datetime(loan_data['earliest_cr_line'], format = '%b-%y')
type(loan_data['earliest_cr_line_date'][0])
pd.to_datetime('2018-12-01') - loan_data['earliest_cr_line_date']
# Assume we are now in December 2018 (the reference date used below is 2018-12-01).
loan_data['mths_since_earliest_cr_line'] = round(pd.to_numeric((pd.to_datetime('2018-12-01') - loan_data['earliest_cr_line_date']) / np.timedelta64(1, 'M')))
loan_data['mths_since_earliest_cr_line'].describe()
# Dates from 1969 and before are not being converted well, i.e., they have become 2069 and similar, and negative differences are being calculated.
# There are 2303 such values.
loan_data.loc[: , ['earliest_cr_line', 'earliest_cr_line_date', 'mths_since_earliest_cr_line']][loan_data['mths_since_earliest_cr_line'] < 0]
# We set all these negative differences to the maximum.
loan_data['mths_since_earliest_cr_line'][loan_data['mths_since_earliest_cr_line'] < 0] = loan_data['mths_since_earliest_cr_line'].max()
min(loan_data['mths_since_earliest_cr_line'])
###Output
_____no_output_____
###Markdown
Term
###Code
loan_data['term']
loan_data['term'].describe()
loan_data['term_int'] = loan_data['term'].str.replace(' months', '')
loan_data['term_int']
type(loan_data['term_int'])
type(loan_data['term_int'][25])
loan_data['term_int'] = pd.to_numeric(loan_data['term'].str.replace(' months', ''))
loan_data['term_int']
type(loan_data['term_int'][0])
###Output
_____no_output_____
###Markdown
Time since the loan was funded
###Code
loan_data['issue_d']
# Assume we are now in December 2018 (the reference date used below is 2018-12-01).
loan_data['issue_d_date'] = pd.to_datetime(loan_data['issue_d'], format = '%b-%y')
loan_data['mths_since_issue_d'] = round(pd.to_numeric((pd.to_datetime('2018-12-01') - loan_data['issue_d_date']) / np.timedelta64(1, 'M')))
loan_data['mths_since_issue_d'].describe()
###Output
_____no_output_____
###Markdown
Data preparation: preprocessing discrete variables
###Code
loan_data.info()
# loan_data['grade_factor'] = loan_data['grade'].astype('category')
#grade
#sub_grade
#home_ownership
#verification_status
#loan_status
#purpose
#addr_state
#initial_list_status
pd.get_dummies(loan_data['grade'], prefix = 'grade', prefix_sep = ':')
loan_data_dummies = [pd.get_dummies(loan_data['grade'], prefix = 'grade', prefix_sep = ':'),
pd.get_dummies(loan_data['sub_grade'], prefix = 'sub_grade', prefix_sep = ':'),
pd.get_dummies(loan_data['home_ownership'], prefix = 'home_ownership', prefix_sep = ':'),
pd.get_dummies(loan_data['verification_status'], prefix = 'verification_status', prefix_sep = ':'),
pd.get_dummies(loan_data['loan_status'], prefix = 'loan_status', prefix_sep = ':'),
pd.get_dummies(loan_data['purpose'], prefix = 'purpose', prefix_sep = ':'),
pd.get_dummies(loan_data['addr_state'], prefix = 'addr_state', prefix_sep = ':'),
pd.get_dummies(loan_data['initial_list_status'], prefix = 'initial_list_status', prefix_sep = ':')]
loan_data_dummies = pd.concat(loan_data_dummies, axis = 1)
type(loan_data_dummies)
loan_data_dummies.shape
loan_data_dummies.info()
loan_data = pd.concat([loan_data, loan_data_dummies], axis = 1)
loan_data.columns.values
###Output
_____no_output_____
###Markdown
Data preparation: check for missing values and clean
###Code
loan_data.isnull()
pd.options.display.max_rows = None
loan_data.isnull().sum()
pd.options.display.max_rows = 100
# loan_data$total_rev_hi_lim - There are 70276 missing values here.
# 'Total revolving high credit/credit limit', so it makes sense that the missing values are equal to funded_amnt.
# loan_data$acc_now_delinq
# loan_data$total_acc
# loan_data$pub_rec
# loan_data$open_acc
# loan_data$inq_last_6mths
# loan_data$delinq_2yrs
# loan_data$mths_since_earliest_cr_line
# - There are 29 missing values in all of these columns. They are likely the same observations.
# An eyeballing examination of the dataset confirms that.
# All of these are with loan_status 'Does not meet the credit policy. Status:Fully Paid'.
# We impute these values.
# loan_data$annual_inc
# - There are 4 missing values in all of these columns.
# loan_data$mths_since_last_record
# loan_data$mths_since_last_delinq
# 'Total revolving high credit/credit limit', so it makes sense that the missing values are equal to funded_amnt.
loan_data['total_rev_hi_lim'].fillna(loan_data['funded_amnt'], inplace=True)
loan_data['mths_since_earliest_cr_line'].fillna(0, inplace=True)
loan_data['acc_now_delinq'].fillna(0, inplace=True)
loan_data['total_acc'].fillna(0, inplace=True)
loan_data['pub_rec'].fillna(0, inplace=True)
loan_data['open_acc'].fillna(0, inplace=True)
loan_data['inq_last_6mths'].fillna(0, inplace=True)
loan_data['delinq_2yrs'].fillna(0, inplace=True)
loan_data['emp_length_int'].fillna(0, inplace=True)
loan_data['annual_inc'].fillna(loan_data['annual_inc'].mean(), inplace=True)
###Output
_____no_output_____
###Markdown
PD model: Data preparation: Good/ Bad (DV for the PD model)
###Code
loan_data['loan_status'].unique()
loan_data['loan_status'].value_counts()
loan_data['loan_status'].value_counts() / loan_data['loan_status'].count()
# Good/ Bad Definition
loan_data['good_bad'] = np.where(loan_data['loan_status'].isin(['Charged Off', 'Default',
'Does not meet the credit policy. Status:Charged Off',
'Late (31-120 days)']), 0, 1)
#loan_data['good_bad'].sum()/loan_data['loan_status'].count()
loan_data['good_bad']
###Output
_____no_output_____
###Markdown
PD model: Data Preparation: Splitting Data
###Code
# loan_data_inputs_train, loan_data_inputs_test, loan_data_targets_train, loan_data_targets_test
from sklearn.model_selection import train_test_split
# Here we don't split data into training and test
#train_test_split(loan_data.drop('good_bad', axis = 1), loan_data['good_bad'])
#loan_data_inputs_train, loan_data_inputs_test, loan_data_targets_train, loan_data_targets_test = train_test_split(loan_data.drop('good_bad', axis = 1), loan_data['good_bad'])
#loan_data_inputs_train.shape
#loan_data_targets_train.shape
#loan_data_inputs_test.shape
#loan_data_targets_test.shape
#loan_data_inputs_train, loan_data_inputs_test, loan_data_targets_train, loan_data_targets_test = train_test_split(loan_data.drop('good_bad', axis = 1), loan_data['good_bad'], test_size = 0.2, random_state = 42)
#loan_data_inputs_train.shape
#loan_data_targets_train.shape
#loan_data_inputs_test.shape
#loan_data_targets_test.shape
###Output
_____no_output_____
###Markdown
PD model: Data Preparation: Discrete Variables
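For reference, the fine and coarse classing below is based on the Weight of Evidence and Information Value formulas: for a category $i$, $WoE_i = \ln\left(\frac{p^{good}_i}{p^{bad}_i}\right)$ and $IV = \sum_i \left(p^{good}_i - p^{bad}_i\right) WoE_i$, where $p^{good}_i$ and $p^{bad}_i$ are the proportions of all good and all bad borrowers that fall into category $i$. These are exactly the quantities computed by the `woe_discrete` function defined in the next cell.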
###Code
loan_data.drop('good_bad', axis = 1)
loan_data['good_bad']
#####
df_inputs_prepr = loan_data.drop('good_bad', axis = 1)
df_targets_prepr = loan_data['good_bad']
#####
#df_inputs_prepr = loan_data_inputs_test
##df_targets_prepr = loan_data_targets_test
df_inputs_prepr['grade'].unique()
df1 = pd.concat([df_inputs_prepr['grade'], df_targets_prepr], axis = 1)
df1.head()
df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].count()
df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].mean()
df1 = pd.concat([df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].count(),
df1.groupby(df1.columns.values[0], as_index = False)[df1.columns.values[1]].mean()], axis = 1)
df1
df1 = df1.iloc[:, [0, 1, 3]]
df1
df1.columns = [df1.columns.values[0], 'n_obs', 'prop_good']
df1
df1['prop_n_obs'] = df1['n_obs'] / df1['n_obs'].sum()
df1
df1['n_good'] = df1['prop_good'] * df1['n_obs']
df1['n_bad'] = (1 - df1['prop_good']) * df1['n_obs']
df1
df1['prop_n_good'] = df1['n_good'] / df1['n_good'].sum()
df1['prop_n_bad'] = df1['n_bad'] / df1['n_bad'].sum()
df1
df1['WoE'] = np.log(df1['prop_n_good'] / df1['prop_n_bad'])
df1
df1 = df1.sort_values(['WoE'])
df1 = df1.reset_index(drop = True)
df1
df1['diff_prop_good'] = df1['prop_good'].diff().abs()
df1['diff_WoE'] = df1['WoE'].diff().abs()
df1
df1['IV'] = (df1['prop_n_good'] - df1['prop_n_bad']) * df1['WoE']
df1['IV'] = df1['IV'].sum()
df1
# WoE function for discrete unordered variables
def woe_discrete(df, discrete_variabe_name, good_bad_variable_df):
df = pd.concat([df[discrete_variabe_name], good_bad_variable_df], axis = 1)
df = pd.concat([df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].count(),
df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].mean()], axis = 1)
df = df.iloc[:, [0, 1, 3]]
df.columns = [df.columns.values[0], 'n_obs', 'prop_good']
df['prop_n_obs'] = df['n_obs'] / df['n_obs'].sum()
df['n_good'] = df['prop_good'] * df['n_obs']
df['n_bad'] = (1 - df['prop_good']) * df['n_obs']
df['prop_n_good'] = df['n_good'] / df['n_good'].sum()
df['prop_n_bad'] = df['n_bad'] / df['n_bad'].sum()
df['WoE'] = np.log(df['prop_n_good'] / df['prop_n_bad'])
df = df.sort_values(['WoE'])
df = df.reset_index(drop = True)
df['diff_prop_good'] = df['prop_good'].diff().abs()
df['diff_WoE'] = df['WoE'].diff().abs()
df['IV'] = (df['prop_n_good'] - df['prop_n_bad']) * df['WoE']
df['IV'] = df['IV'].sum()
return df
# 'grade', 'home_ownership', 'verification_status',
# 'purpose', 'addr_state', 'initial_list_status'
# 'grade'
df_temp = woe_discrete(df_inputs_prepr, 'grade', df_targets_prepr)
df_temp
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
def plot_by_woe(df_WoE, rotation_of_x_axis_labels = 0):
#x = df_WoE.iloc[:, 0]
x = np.array(df_WoE.iloc[:, 0].apply(str))
y = df_WoE['WoE']
plt.figure(figsize=(18, 6))
plt.plot(x, y, marker = 'o', linestyle = '--', color = 'k')
plt.xlabel(df_WoE.columns[0])
plt.ylabel('Weight of Evidence')
plt.title(str('Weight of Evidence by ' + df_WoE.columns[0]))
plt.xticks(rotation = rotation_of_x_axis_labels)
plot_by_woe(df_temp)
# Leave as is.
# 'G' will be the reference category.
# 'home_ownership'
df_temp = woe_discrete(df_inputs_prepr, 'home_ownership', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# There are many categories with very few observations and many categories with very different "good" %.
# Therefore, we create a new discrete variable where we combine some of the categories.
# 'OTHERS' and 'NONE' are riskiest but are very few. 'RENT' is the next riskiest.
# 'ANY' are least risky but are too few. Conceptually, they belong to the same category. Also, their inclusion would not change anything.
# We combine them in one category, 'RENT_OTHER_NONE_ANY'.
# We end up with 3 categories: 'RENT_OTHER_NONE_ANY', 'OWN', 'MORTGAGE'.
df_inputs_prepr['home_ownership:RENT_OTHER_NONE_ANY'] = sum([df_inputs_prepr['home_ownership:RENT'], df_inputs_prepr['home_ownership:OTHER'],
df_inputs_prepr['home_ownership:NONE'],df_inputs_prepr['home_ownership:ANY']])
# 'RENT_OTHER_NONE_ANY' will be the reference category.
# Alternatively:
#loan_data.loc['home_ownership' in ['RENT', 'OTHER', 'NONE', 'ANY'], 'home_ownership:RENT_OTHER_NONE_ANY'] = 1
#loan_data.loc['home_ownership' not in ['RENT', 'OTHER', 'NONE', 'ANY'], 'home_ownership:RENT_OTHER_NONE_ANY'] = 0
#loan_data.loc['loan_status' not in ['OWN'], 'home_ownership:OWN'] = 1
#loan_data.loc['loan_status' not in ['OWN'], 'home_ownership:OWN'] = 0
#loan_data.loc['loan_status' not in ['MORTGAGE'], 'home_ownership:MORTGAGE'] = 1
#loan_data.loc['loan_status' not in ['MORTGAGE'], 'home_ownership:MORTGAGE'] = 0
loan_data['home_ownership'].unique()
df_inputs_prepr['home_ownership:RENT_OTHER_NONE_ANY'] = sum([df_inputs_prepr['home_ownership:RENT'], df_inputs_prepr['home_ownership:ANY']])
# 'addr_state'
df_inputs_prepr['addr_state'].unique()
#df_inputs_prepr['addr_state:ND'] = 0
if ['addr_state:ND'] in df_inputs_prepr.columns.values:
pass
else:
df_inputs_prepr['addr_state:ND'] = 0
if ['addr_state:ID'] in df_inputs_prepr.columns.values:
pass
else:
df_inputs_prepr['addr_state:ID'] = 0
if ['addr_state:IA'] in df_inputs_prepr.columns.values:
pass
else:
df_inputs_prepr['addr_state:IA'] = 0
df_temp = woe_discrete(df_inputs_prepr, 'addr_state', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
plot_by_woe(df_temp.iloc[2: -2, : ])
plot_by_woe(df_temp.iloc[6: -6, : ])
df_inputs_prepr.columns.values
# We create the following categories:
# 'ND' 'NE' 'IA' NV' 'FL' 'HI' 'AL'
# 'NM' 'VA'
# 'NY'
# 'OK' 'TN' 'MO' 'LA' 'MD' 'NC'
# 'CA'
# 'UT' 'KY' 'AZ' 'NJ'
# 'AR' 'MI' 'PA' 'OH' 'MN'
# 'RI' 'MA' 'DE' 'SD' 'IN'
# 'GA' 'WA' 'OR'
# 'WI' 'MT'
# 'TX'
# 'IL' 'CT'
# 'KS' 'SC' 'CO' 'VT' 'AK' 'MS'
# 'WV' 'NH' 'WY' 'DC' 'ME' 'ID'
# 'ND_NE_IA_NV_FL_HI_AL' will be the reference category.
df_inputs_prepr['addr_state:ND_NE_IA_NV_FL_HI_AL'] = sum([df_inputs_prepr['addr_state:ND'], df_inputs_prepr['addr_state:NE'],
df_inputs_prepr['addr_state:IA'], df_inputs_prepr['addr_state:NV'],
df_inputs_prepr['addr_state:FL'], df_inputs_prepr['addr_state:HI'],
df_inputs_prepr['addr_state:AL']])
df_inputs_prepr['addr_state:NM_VA'] = sum([df_inputs_prepr['addr_state:NM'], df_inputs_prepr['addr_state:VA']])
df_inputs_prepr['addr_state:OK_TN_MO_LA_MD_NC'] = sum([df_inputs_prepr['addr_state:OK'], df_inputs_prepr['addr_state:TN'],
df_inputs_prepr['addr_state:MO'], df_inputs_prepr['addr_state:LA'],
df_inputs_prepr['addr_state:MD'], df_inputs_prepr['addr_state:NC']])
df_inputs_prepr['addr_state:UT_KY_AZ_NJ'] = sum([df_inputs_prepr['addr_state:UT'], df_inputs_prepr['addr_state:KY'],
df_inputs_prepr['addr_state:AZ'], df_inputs_prepr['addr_state:NJ']])
df_inputs_prepr['addr_state:AR_MI_PA_OH_MN'] = sum([df_inputs_prepr['addr_state:AR'], df_inputs_prepr['addr_state:MI'],
df_inputs_prepr['addr_state:PA'], df_inputs_prepr['addr_state:OH'],
df_inputs_prepr['addr_state:MN']])
df_inputs_prepr['addr_state:RI_MA_DE_SD_IN'] = sum([df_inputs_prepr['addr_state:RI'], df_inputs_prepr['addr_state:MA'],
df_inputs_prepr['addr_state:DE'], df_inputs_prepr['addr_state:SD'],
df_inputs_prepr['addr_state:IN']])
df_inputs_prepr['addr_state:GA_WA_OR'] = sum([df_inputs_prepr['addr_state:GA'], df_inputs_prepr['addr_state:WA'],
df_inputs_prepr['addr_state:OR']])
df_inputs_prepr['addr_state:WI_MT'] = sum([df_inputs_prepr['addr_state:WI'], df_inputs_prepr['addr_state:MT']])
df_inputs_prepr['addr_state:IL_CT'] = sum([df_inputs_prepr['addr_state:IL'], df_inputs_prepr['addr_state:CT']])
df_inputs_prepr['addr_state:KS_SC_CO_VT_AK_MS'] = sum([df_inputs_prepr['addr_state:KS'], df_inputs_prepr['addr_state:SC'],
df_inputs_prepr['addr_state:CO'], df_inputs_prepr['addr_state:VT'],
df_inputs_prepr['addr_state:AK'], df_inputs_prepr['addr_state:MS']])
df_inputs_prepr['addr_state:WV_NH_WY_DC_ME_ID'] = sum([df_inputs_prepr['addr_state:WV'], df_inputs_prepr['addr_state:NH'],
df_inputs_prepr['addr_state:WY'], df_inputs_prepr['addr_state:DC'],
df_inputs_prepr['addr_state:ME'], df_inputs_prepr['addr_state:ID']])
# 'verification_status'
df_temp = woe_discrete(df_inputs_prepr, 'verification_status', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# Leave as is.
# 'Verified' will be the reference category.
# 'purpose'
df_temp = woe_discrete(df_inputs_prepr, 'purpose', df_targets_prepr)
df_temp
#plt.figure(figsize=(15, 5))
#sns.pointplot(x = 'purpose', y = 'WoE', data = df_temp, figsize = (5, 15))
plot_by_woe(df_temp, 90)
# We combine 'educational', 'small_business', 'wedding', 'renewable_energy', 'moving', 'house' in one category: 'educ__sm_b__wedd__ren_en__mov__house'.
# We combine 'other', 'medical', 'vacation' in one category: 'oth__med__vacation'.
# We combine 'major_purchase', 'car', 'home_improvement' in one category: 'major_purch__car__home_impr'.
# We leave 'debt_consolidtion' in a separate category.
# We leave 'credit_card' in a separate category.
# 'educ__sm_b__wedd__ren_en__mov__house' will be the reference category.
df_inputs_prepr['purpose:educ__sm_b__wedd__ren_en__mov__house'] = sum([df_inputs_prepr['purpose:educational'], df_inputs_prepr['purpose:small_business'],
df_inputs_prepr['purpose:wedding'], df_inputs_prepr['purpose:renewable_energy'],
df_inputs_prepr['purpose:moving'], df_inputs_prepr['purpose:house']])
df_inputs_prepr['purpose:oth__med__vacation'] = sum([df_inputs_prepr['purpose:other'], df_inputs_prepr['purpose:medical'],
df_inputs_prepr['purpose:vacation']])
df_inputs_prepr['purpose:major_purch__car__home_impr'] = sum([df_inputs_prepr['purpose:major_purchase'], df_inputs_prepr['purpose:car'],
df_inputs_prepr['purpose:home_improvement']])
# 'initial_list_status'
df_temp = woe_discrete(df_inputs_prepr, 'initial_list_status', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# Leave as is.
# 'f' will be the reference category.
###Output
_____no_output_____
###Markdown
PD model: Data Preparation: Continuous Variables, Part 1
###Code
# WoE function for ordered discrete and continuous variables
def woe_ordered_continuous(df, discrete_variabe_name, good_bad_variable_df):
df = pd.concat([df[discrete_variabe_name], good_bad_variable_df], axis = 1)
df = pd.concat([df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].count(),
df.groupby(df.columns.values[0], as_index = False)[df.columns.values[1]].mean()], axis = 1)
df = df.iloc[:, [0, 1, 3]]
df.columns = [df.columns.values[0], 'n_obs', 'prop_good']
df['prop_n_obs'] = df['n_obs'] / df['n_obs'].sum()
df['n_good'] = df['prop_good'] * df['n_obs']
df['n_bad'] = (1 - df['prop_good']) * df['n_obs']
df['prop_n_good'] = df['n_good'] / df['n_good'].sum()
df['prop_n_bad'] = df['n_bad'] / df['n_bad'].sum()
df['WoE'] = np.log(df['prop_n_good'] / df['prop_n_bad'])
#df = df.sort_values(['WoE'])
#df = df.reset_index(drop = True)
df['diff_prop_good'] = df['prop_good'].diff().abs()
df['diff_WoE'] = df['WoE'].diff().abs()
df['IV'] = (df['prop_n_good'] - df['prop_n_bad']) * df['WoE']
df['IV'] = df['IV'].sum()
return df
# term
df_inputs_prepr['term_int'].unique()
# There are only two unique values, 36 and 60.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'term_int', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# Leave as is.
# '60' will be the reference category.
df_inputs_prepr['term:36'] = np.where((df_inputs_prepr['term_int'] == 36), 1, 0)
df_inputs_prepr['term:60'] = np.where((df_inputs_prepr['term_int'] == 60), 1, 0)
# emp_length_int
df_inputs_prepr['emp_length_int'].unique()
# Has only 11 levels: from 0 to 10. Hence, we turn it into a factor with 11 levels.
df_temp = woe_ordered_continuous(df_inputs_prepr, 'emp_length_int', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# We create the following categories: '0', '1', '2 - 4', '5 - 6', '7 - 9', '10'
# '0' will be the reference category
df_inputs_prepr['emp_length:0'] = np.where(df_inputs_prepr['emp_length_int'].isin([0]), 1, 0)
df_inputs_prepr['emp_length:1'] = np.where(df_inputs_prepr['emp_length_int'].isin([1]), 1, 0)
df_inputs_prepr['emp_length:2-4'] = np.where(df_inputs_prepr['emp_length_int'].isin(range(2, 5)), 1, 0)
df_inputs_prepr['emp_length:5-6'] = np.where(df_inputs_prepr['emp_length_int'].isin(range(5, 7)), 1, 0)
df_inputs_prepr['emp_length:7-9'] = np.where(df_inputs_prepr['emp_length_int'].isin(range(7, 10)), 1, 0)
df_inputs_prepr['emp_length:10'] = np.where(df_inputs_prepr['emp_length_int'].isin([10]), 1, 0)
df_inputs_prepr['mths_since_issue_d'].unique()
df_inputs_prepr['mths_since_issue_d_factor'] = pd.cut(df_inputs_prepr['mths_since_issue_d'], 50)
df_inputs_prepr['mths_since_issue_d_factor']
# mths_since_issue_d
df_temp = woe_ordered_continuous(df_inputs_prepr, 'mths_since_issue_d_factor', df_targets_prepr)
df_temp
# !!!!!!!!!
#df_temp['mths_since_issue_d_factor'] = np.array(df_temp.mths_since_issue_d_factor.apply(str))
#df_temp['mths_since_issue_d_factor'] = list(df_temp.mths_since_issue_d_factor.apply(str))
#df_temp['mths_since_issue_d_factor'] = tuple(df_temp.mths_since_issue_d_factor.apply(str))
plot_by_woe(df_temp)
plot_by_woe(df_temp, 90)
plot_by_woe(df_temp.iloc[3: , : ], 90)
# We create the following categories:
# < 38, 38 - 39, 40 - 41, 42 - 48, 49 - 52, 53 - 64, 65 - 84, > 84.
df_inputs_prepr['mths_since_issue_d:<38'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(38)), 1, 0)
df_inputs_prepr['mths_since_issue_d:38-39'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(38, 40)), 1, 0)
df_inputs_prepr['mths_since_issue_d:40-41'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(40, 42)), 1, 0)
df_inputs_prepr['mths_since_issue_d:42-48'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(42, 49)), 1, 0)
df_inputs_prepr['mths_since_issue_d:49-52'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(49, 53)), 1, 0)
df_inputs_prepr['mths_since_issue_d:53-64'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(53, 65)), 1, 0)
df_inputs_prepr['mths_since_issue_d:65-84'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(65, 85)), 1, 0)
df_inputs_prepr['mths_since_issue_d:>84'] = np.where(df_inputs_prepr['mths_since_issue_d'].isin(range(85, int(df_inputs_prepr['mths_since_issue_d'].max()))), 1, 0)
# int_rate
df_inputs_prepr['int_rate_factor'] = pd.cut(df_inputs_prepr['int_rate'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'int_rate_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
# '< 9.548', '9.548 - 12.025', '12.025 - 15.74', '15.74 - 20.281', '> 20.281'
#loan_data.loc[loan_data['int_rate'] < 5.8, 'int_rate:<5.8'] = 1
#(loan_data['int_rate'] > 5.8) & (loan_data['int_rate'] <= 8.64)
#loan_data['int_rate:<5.8'] = np.where(loan_data['int_rate'] < 5.8, 1, 0)
#loan_data[(loan_data['int_rate'] > 5.8) & (loan_data['int_rate'] <= 8.64)]
#loan_data['int_rate'][(np.where((loan_data['int_rate'] > 5.8) & (loan_data['int_rate'] <= 8.64)))]
#loan_data.loc[(loan_data['int_rate'] > 5.8) & (loan_data['int_rate'] <= 8.64), 'int_rate:<5.8'] = 1
df_inputs_prepr['int_rate:<9.548'] = np.where((df_inputs_prepr['int_rate'] <= 9.548), 1, 0)
df_inputs_prepr['int_rate:9.548-12.025'] = np.where((df_inputs_prepr['int_rate'] > 9.548) & (df_inputs_prepr['int_rate'] <= 12.025), 1, 0)
df_inputs_prepr['int_rate:12.025-15.74'] = np.where((df_inputs_prepr['int_rate'] > 12.025) & (df_inputs_prepr['int_rate'] <= 15.74), 1, 0)
df_inputs_prepr['int_rate:15.74-20.281'] = np.where((df_inputs_prepr['int_rate'] > 15.74) & (df_inputs_prepr['int_rate'] <= 20.281), 1, 0)
df_inputs_prepr['int_rate:>20.281'] = np.where((df_inputs_prepr['int_rate'] > 20.281), 1, 0)
###Output
_____no_output_____
###Markdown
PD model: Data Preparation: Continuous Variables, Part 1: Homework
###Code
# mths_since_earliest_cr_line
df_inputs_prepr['mths_since_earliest_cr_line_factor'] = pd.cut(df_inputs_prepr['mths_since_earliest_cr_line'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'mths_since_earliest_cr_line_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
plot_by_woe(df_temp.iloc[6: , : ], 90)
# We create the following categories:
# < 140, # 141 - 164, # 165 - 247, # 248 - 270, # 271 - 352, # > 352
df_inputs_prepr['mths_since_earliest_cr_line:<140'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(140)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:141-164'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(140, 165)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:165-247'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(165, 248)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:248-270'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(248, 271)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:271-352'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(271, 353)), 1, 0)
df_inputs_prepr['mths_since_earliest_cr_line:>352'] = np.where(df_inputs_prepr['mths_since_earliest_cr_line'].isin(range(353, int(df_inputs_prepr['mths_since_earliest_cr_line'].max()))), 1, 0)
# REFERENCE CATEGORY!!!
# delinq_2yrs
df_temp = woe_ordered_continuous(df_inputs_prepr, 'delinq_2yrs', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# Categories: 0, 1-3, >=4
df_inputs_prepr['delinq_2yrs:0'] = np.where((df_inputs_prepr['delinq_2yrs'] == 0), 1, 0)
df_inputs_prepr['delinq_2yrs:1-3'] = np.where((df_inputs_prepr['delinq_2yrs'] >= 1) & (df_inputs_prepr['delinq_2yrs'] <= 3), 1, 0)
df_inputs_prepr['delinq_2yrs:>=4'] = np.where((df_inputs_prepr['delinq_2yrs'] >= 9), 1, 0)
# inq_last_6mths
df_temp = woe_ordered_continuous(df_inputs_prepr, 'inq_last_6mths', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# Categories: 0, 1 - 2, 3 - 6, > 6
df_inputs_prepr['inq_last_6mths:0'] = np.where((df_inputs_prepr['inq_last_6mths'] == 0), 1, 0)
df_inputs_prepr['inq_last_6mths:1-2'] = np.where((df_inputs_prepr['inq_last_6mths'] >= 1) & (df_inputs_prepr['inq_last_6mths'] <= 2), 1, 0)
df_inputs_prepr['inq_last_6mths:3-6'] = np.where((df_inputs_prepr['inq_last_6mths'] >= 3) & (df_inputs_prepr['inq_last_6mths'] <= 6), 1, 0)
df_inputs_prepr['inq_last_6mths:>6'] = np.where((df_inputs_prepr['inq_last_6mths'] > 6), 1, 0)
# open_acc
df_temp = woe_ordered_continuous(df_inputs_prepr, 'open_acc', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
plot_by_woe(df_temp.iloc[ : 40, :], 90)
# Categories: '0', '1-3', '4-12', '13-17', '18-22', '23-25', '26-30', '>30'
df_inputs_prepr['open_acc:0'] = np.where((df_inputs_prepr['open_acc'] == 0), 1, 0)
df_inputs_prepr['open_acc:1-3'] = np.where((df_inputs_prepr['open_acc'] >= 1) & (df_inputs_prepr['open_acc'] <= 3), 1, 0)
df_inputs_prepr['open_acc:4-12'] = np.where((df_inputs_prepr['open_acc'] >= 4) & (df_inputs_prepr['open_acc'] <= 12), 1, 0)
df_inputs_prepr['open_acc:13-17'] = np.where((df_inputs_prepr['open_acc'] >= 13) & (df_inputs_prepr['open_acc'] <= 17), 1, 0)
df_inputs_prepr['open_acc:18-22'] = np.where((df_inputs_prepr['open_acc'] >= 18) & (df_inputs_prepr['open_acc'] <= 22), 1, 0)
df_inputs_prepr['open_acc:23-25'] = np.where((df_inputs_prepr['open_acc'] >= 23) & (df_inputs_prepr['open_acc'] <= 25), 1, 0)
df_inputs_prepr['open_acc:26-30'] = np.where((df_inputs_prepr['open_acc'] >= 26) & (df_inputs_prepr['open_acc'] <= 30), 1, 0)
df_inputs_prepr['open_acc:>=31'] = np.where((df_inputs_prepr['open_acc'] >= 31), 1, 0)
# pub_rec
df_temp = woe_ordered_continuous(df_inputs_prepr, 'pub_rec', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
# Categories '0-2', '3-4', '>=5'
df_inputs_prepr['pub_rec:0-2'] = np.where((df_inputs_prepr['pub_rec'] >= 0) & (df_inputs_prepr['pub_rec'] <= 2), 1, 0)
df_inputs_prepr['pub_rec:3-4'] = np.where((df_inputs_prepr['pub_rec'] >= 3) & (df_inputs_prepr['pub_rec'] <= 4), 1, 0)
df_inputs_prepr['pub_rec:>=5'] = np.where((df_inputs_prepr['pub_rec'] >= 5), 1, 0)
# total_acc
df_inputs_prepr['total_acc_factor'] = pd.cut(df_inputs_prepr['total_acc'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'total_acc_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
# Categories: '<=27', '28-51', '>51'
df_inputs_prepr['total_acc:<=27'] = np.where((df_inputs_prepr['total_acc'] <= 27), 1, 0)
df_inputs_prepr['total_acc:28-51'] = np.where((df_inputs_prepr['total_acc'] >= 28) & (df_inputs_prepr['total_acc'] <= 51), 1, 0)
df_inputs_prepr['total_acc:>=52'] = np.where((df_inputs_prepr['total_acc'] >= 52), 1, 0)
# acc_now_delinq
df_temp = woe_ordered_continuous(df_inputs_prepr, 'acc_now_delinq', df_targets_prepr)
df_temp
plot_by_woe(df_temp)
# Categories: '0', '>=1'
df_inputs_prepr['acc_now_delinq:0'] = np.where((df_inputs_prepr['acc_now_delinq'] == 0), 1, 0)
df_inputs_prepr['acc_now_delinq:>=1'] = np.where((df_inputs_prepr['acc_now_delinq'] >= 1), 1, 0)
# total_rev_hi_lim
df_inputs_prepr['total_rev_hi_lim_factor'] = pd.cut(df_inputs_prepr['total_rev_hi_lim'], 2000)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'total_rev_hi_lim_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp.iloc[: 50, : ], 90)
# Categories
# '<=5K', '5K-10K', '10K-20K', '20K-30K', '30K-40K', '40K-55K', '55K-95K', '>95K'
df_inputs_prepr['total_rev_hi_lim:<=5K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] <= 5000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:5K-10K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 5000) & (df_inputs_prepr['total_rev_hi_lim'] <= 10000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:10K-20K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 10000) & (df_inputs_prepr['total_rev_hi_lim'] <= 20000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:20K-30K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 20000) & (df_inputs_prepr['total_rev_hi_lim'] <= 30000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:30K-40K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 30000) & (df_inputs_prepr['total_rev_hi_lim'] <= 40000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:40K-55K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 40000) & (df_inputs_prepr['total_rev_hi_lim'] <= 55000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:55K-95K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 55000) & (df_inputs_prepr['total_rev_hi_lim'] <= 95000), 1, 0)
df_inputs_prepr['total_rev_hi_lim:>95K'] = np.where((df_inputs_prepr['total_rev_hi_lim'] > 95000), 1, 0)
###Output
_____no_output_____
###Markdown
PD model: Data Preparation: Continuous Variables, Part 2
###Code
# annual_inc
df_inputs_prepr['annual_inc_factor'] = pd.cut(df_inputs_prepr['annual_inc'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'annual_inc_factor', df_targets_prepr)
df_temp
df_inputs_prepr['annual_inc_factor'] = pd.cut(df_inputs_prepr['annual_inc'], 100)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'annual_inc_factor', df_targets_prepr)
df_temp
# Initial examination shows that there are too few individuals with large income and too many with small income.
# Hence, we are going to have one category for more than 140K, and we are going to apply our approach to determine
# the categories of everyone with 140K or less.
df_inputs_prepr_temp = df_inputs_prepr.loc[df_inputs_prepr['annual_inc'] <= 140000, : ]
#loan_data_temp = loan_data_temp.reset_index(drop = True)
#df_inputs_prepr_temp
#pd.options.mode.chained_assignment = None
df_inputs_prepr_temp["annual_inc_factor"] = pd.cut(df_inputs_prepr_temp['annual_inc'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr_temp, 'annual_inc_factor', df_targets_prepr[df_inputs_prepr_temp.index])
df_temp
plot_by_woe(df_temp, 90)
# WoE is monotonically decreasing with income, so we split income in 10 equal categories, each with width of 15k.
df_inputs_prepr['annual_inc:<20K'] = np.where((df_inputs_prepr['annual_inc'] <= 20000), 1, 0)
df_inputs_prepr['annual_inc:20K-30K'] = np.where((df_inputs_prepr['annual_inc'] > 20000) & (df_inputs_prepr['annual_inc'] <= 30000), 1, 0)
df_inputs_prepr['annual_inc:30K-40K'] = np.where((df_inputs_prepr['annual_inc'] > 30000) & (df_inputs_prepr['annual_inc'] <= 40000), 1, 0)
df_inputs_prepr['annual_inc:40K-50K'] = np.where((df_inputs_prepr['annual_inc'] > 40000) & (df_inputs_prepr['annual_inc'] <= 50000), 1, 0)
df_inputs_prepr['annual_inc:50K-60K'] = np.where((df_inputs_prepr['annual_inc'] > 50000) & (df_inputs_prepr['annual_inc'] <= 60000), 1, 0)
df_inputs_prepr['annual_inc:60K-70K'] = np.where((df_inputs_prepr['annual_inc'] > 60000) & (df_inputs_prepr['annual_inc'] <= 70000), 1, 0)
df_inputs_prepr['annual_inc:70K-80K'] = np.where((df_inputs_prepr['annual_inc'] > 70000) & (df_inputs_prepr['annual_inc'] <= 80000), 1, 0)
df_inputs_prepr['annual_inc:80K-90K'] = np.where((df_inputs_prepr['annual_inc'] > 80000) & (df_inputs_prepr['annual_inc'] <= 90000), 1, 0)
df_inputs_prepr['annual_inc:90K-100K'] = np.where((df_inputs_prepr['annual_inc'] > 90000) & (df_inputs_prepr['annual_inc'] <= 100000), 1, 0)
df_inputs_prepr['annual_inc:100K-120K'] = np.where((df_inputs_prepr['annual_inc'] > 100000) & (df_inputs_prepr['annual_inc'] <= 120000), 1, 0)
df_inputs_prepr['annual_inc:120K-140K'] = np.where((df_inputs_prepr['annual_inc'] > 120000) & (df_inputs_prepr['annual_inc'] <= 140000), 1, 0)
df_inputs_prepr['annual_inc:>140K'] = np.where((df_inputs_prepr['annual_inc'] > 140000), 1, 0)
# dti
df_inputs_prepr['dti_factor'] = pd.cut(df_inputs_prepr['dti'], 100)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'dti_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
# Similarly to income, initial examination shows that most dti values are lower than 35.
# Hence, we are going to have one category for more than 35, and we are going to apply our approach to determine
# the categories of everyone with a dti of 35 or less.
df_inputs_prepr_temp = df_inputs_prepr.loc[df_inputs_prepr['dti'] <= 35, : ]
df_inputs_prepr_temp['dti_factor'] = pd.cut(df_inputs_prepr_temp['dti'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr_temp, 'dti_factor', df_targets_prepr[df_inputs_prepr_temp.index])
df_temp
plot_by_woe(df_temp, 90)
# Categories:
df_inputs_prepr['dti:<=1.4'] = np.where((df_inputs_prepr['dti'] <= 1.4), 1, 0)
df_inputs_prepr['dti:1.4-3.5'] = np.where((df_inputs_prepr['dti'] > 1.4) & (df_inputs_prepr['dti'] <= 3.5), 1, 0)
df_inputs_prepr['dti:3.5-7.7'] = np.where((df_inputs_prepr['dti'] > 3.5) & (df_inputs_prepr['dti'] <= 7.7), 1, 0)
df_inputs_prepr['dti:7.7-10.5'] = np.where((df_inputs_prepr['dti'] > 7.7) & (df_inputs_prepr['dti'] <= 10.5), 1, 0)
df_inputs_prepr['dti:10.5-16.1'] = np.where((df_inputs_prepr['dti'] > 10.5) & (df_inputs_prepr['dti'] <= 16.1), 1, 0)
df_inputs_prepr['dti:16.1-20.3'] = np.where((df_inputs_prepr['dti'] > 16.1) & (df_inputs_prepr['dti'] <= 20.3), 1, 0)
df_inputs_prepr['dti:20.3-21.7'] = np.where((df_inputs_prepr['dti'] > 20.3) & (df_inputs_prepr['dti'] <= 21.7), 1, 0)
df_inputs_prepr['dti:21.7-22.4'] = np.where((df_inputs_prepr['dti'] > 21.7) & (df_inputs_prepr['dti'] <= 22.4), 1, 0)
df_inputs_prepr['dti:22.4-35'] = np.where((df_inputs_prepr['dti'] > 22.4) & (df_inputs_prepr['dti'] <= 35), 1, 0)
df_inputs_prepr['dti:>35'] = np.where((df_inputs_prepr['dti'] > 35), 1, 0)
# mths_since_last_delinq
# We have to create one category for missing values and do fine and coarse classing for the rest.
#loan_data_temp = loan_data[np.isfinite(loan_data['mths_since_last_delinq'])]
df_inputs_prepr_temp = df_inputs_prepr[pd.notnull(df_inputs_prepr['mths_since_last_delinq'])]
#sum(loan_data_temp['mths_since_last_delinq'].isnull())
df_inputs_prepr_temp['mths_since_last_delinq_factor'] = pd.cut(df_inputs_prepr_temp['mths_since_last_delinq'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr_temp, 'mths_since_last_delinq_factor', df_targets_prepr[df_inputs_prepr_temp.index])
df_temp
plot_by_woe(df_temp, 90)
# Categories: Missing, 0-3, 4-30, 31-56, >=57
df_inputs_prepr['mths_since_last_delinq:Missing'] = np.where((df_inputs_prepr['mths_since_last_delinq'].isnull()), 1, 0)
df_inputs_prepr['mths_since_last_delinq:0-3'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 0) & (df_inputs_prepr['mths_since_last_delinq'] <= 3), 1, 0)
df_inputs_prepr['mths_since_last_delinq:4-30'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 4) & (df_inputs_prepr['mths_since_last_delinq'] <= 30), 1, 0)
df_inputs_prepr['mths_since_last_delinq:31-56'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 31) & (df_inputs_prepr['mths_since_last_delinq'] <= 56), 1, 0)
df_inputs_prepr['mths_since_last_delinq:>=57'] = np.where((df_inputs_prepr['mths_since_last_delinq'] >= 57), 1, 0)
# mths_since_last_record
# We have to create one category for missing values and do fine and coarse classing for the rest.
df_inputs_prepr_temp = df_inputs_prepr[pd.notnull(df_inputs_prepr['mths_since_last_record'])]
#sum(loan_data_temp['mths_since_last_record'].isnull())
df_inputs_prepr_temp['mths_since_last_record_factor'] = pd.cut(df_inputs_prepr_temp['mths_since_last_record'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr_temp, 'mths_since_last_record_factor', df_targets_prepr[df_inputs_prepr_temp.index])
df_temp
plot_by_woe(df_temp, 90)
# Categories: 'Missing', '0-2', '3-20', '21-31', '32-80', '81-86', '>86'
df_inputs_prepr['mths_since_last_record:Missing'] = np.where((df_inputs_prepr['mths_since_last_record'].isnull()), 1, 0)
df_inputs_prepr['mths_since_last_record:0-2'] = np.where((df_inputs_prepr['mths_since_last_record'] >= 0) & (df_inputs_prepr['mths_since_last_record'] <= 2), 1, 0)
df_inputs_prepr['mths_since_last_record:3-20'] = np.where((df_inputs_prepr['mths_since_last_record'] >= 3) & (df_inputs_prepr['mths_since_last_record'] <= 20), 1, 0)
df_inputs_prepr['mths_since_last_record:21-31'] = np.where((df_inputs_prepr['mths_since_last_record'] >= 21) & (df_inputs_prepr['mths_since_last_record'] <= 31), 1, 0)
df_inputs_prepr['mths_since_last_record:32-80'] = np.where((df_inputs_prepr['mths_since_last_record'] >= 32) & (df_inputs_prepr['mths_since_last_record'] <= 80), 1, 0)
df_inputs_prepr['mths_since_last_record:81-86'] = np.where((df_inputs_prepr['mths_since_last_record'] >= 81) & (df_inputs_prepr['mths_since_last_record'] <= 86), 1, 0)
df_inputs_prepr['mths_since_last_record:>=86'] = np.where((df_inputs_prepr['mths_since_last_record'] >= 86), 1, 0)
df_inputs_prepr['mths_since_last_delinq:Missing'].sum()
# display inputs_train, inputs_test
# funded_amnt
df_inputs_prepr['funded_amnt_factor'] = pd.cut(df_inputs_prepr['funded_amnt'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'funded_amnt_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
# WON'T USE because there is no clear trend, even if segments of the whole range are considered.
# installment
df_inputs_prepr['installment_factor'] = pd.cut(df_inputs_prepr['installment'], 50)
df_temp = woe_ordered_continuous(df_inputs_prepr, 'installment_factor', df_targets_prepr)
df_temp
plot_by_woe(df_temp, 90)
# WON'T USE because there is no clear trend, even if segments of the whole range are considered.
###Output
_____no_output_____
###Markdown
Preprocessing the test dataset
###Code
#####
#loan_data_inputs_train = df_inputs_prepr
#####
#loan_data_inputs_test = df_inputs_prepr
######
loan_data_inputs_2015 = df_inputs_prepr
loan_data_targets_2015 = df_targets_prepr
#loan_data_inputs_train.columns.values
#loan_data_inputs_test.columns.values
#loan_data_inputs_train.shape
#loan_data_targets_train.shape
#loan_data_inputs_test.shape
#loan_data_targets_test.shape
loan_data_inputs_2015.columns.values
loan_data_inputs_2015.shape
loan_data_targets_2015.shape
#loan_data_inputs_train.to_csv('loan_data_inputs_train.csv')
#loan_data_targets_train.to_csv('loan_data_targets_train.csv')
#loan_data_inputs_test.to_csv('loan_data_inputs_test.csv')
#loan_data_targets_test.to_csv('loan_data_targets_test.csv')
loan_data_inputs_2015.to_csv('loan_data_inputs_2015.csv')
loan_data_targets_2015.to_csv('loan_data_targets_2015.csv')
###Output
_____no_output_____
###Markdown
>>> The code from the earlier line starting with '>>>' up to here is copied from the Data Preparation notebook, with minor adjustments. ***
###Code
inputs_train_with_ref_cat = pd.read_csv('inputs_train_with_ref_cat.csv', index_col = 0)
# We import the dataset with old data, i.e. "expected" data.
# From the dataframe with new, "actual" data, we keep only the relevant columns.
inputs_2015_with_ref_cat = loan_data_inputs_2015.loc[: , ['grade:A',
'grade:B',
'grade:C',
'grade:D',
'grade:E',
'grade:F',
'grade:G',
'home_ownership:RENT_OTHER_NONE_ANY',
'home_ownership:OWN',
'home_ownership:MORTGAGE',
'addr_state:ND_NE_IA_NV_FL_HI_AL',
'addr_state:NM_VA',
'addr_state:NY',
'addr_state:OK_TN_MO_LA_MD_NC',
'addr_state:CA',
'addr_state:UT_KY_AZ_NJ',
'addr_state:AR_MI_PA_OH_MN',
'addr_state:RI_MA_DE_SD_IN',
'addr_state:GA_WA_OR',
'addr_state:WI_MT',
'addr_state:TX',
'addr_state:IL_CT',
'addr_state:KS_SC_CO_VT_AK_MS',
'addr_state:WV_NH_WY_DC_ME_ID',
'verification_status:Not Verified',
'verification_status:Source Verified',
'verification_status:Verified',
'purpose:educ__sm_b__wedd__ren_en__mov__house',
'purpose:credit_card',
'purpose:debt_consolidation',
'purpose:oth__med__vacation',
'purpose:major_purch__car__home_impr',
'initial_list_status:f',
'initial_list_status:w',
'term:36',
'term:60',
'emp_length:0',
'emp_length:1',
'emp_length:2-4',
'emp_length:5-6',
'emp_length:7-9',
'emp_length:10',
'mths_since_issue_d:<38',
'mths_since_issue_d:38-39',
'mths_since_issue_d:40-41',
'mths_since_issue_d:42-48',
'mths_since_issue_d:49-52',
'mths_since_issue_d:53-64',
'mths_since_issue_d:65-84',
'mths_since_issue_d:>84',
'int_rate:<9.548',
'int_rate:9.548-12.025',
'int_rate:12.025-15.74',
'int_rate:15.74-20.281',
'int_rate:>20.281',
'mths_since_earliest_cr_line:<140',
'mths_since_earliest_cr_line:141-164',
'mths_since_earliest_cr_line:165-247',
'mths_since_earliest_cr_line:248-270',
'mths_since_earliest_cr_line:271-352',
'mths_since_earliest_cr_line:>352',
'inq_last_6mths:0',
'inq_last_6mths:1-2',
'inq_last_6mths:3-6',
'inq_last_6mths:>6',
'acc_now_delinq:0',
'acc_now_delinq:>=1',
'annual_inc:<20K',
'annual_inc:20K-30K',
'annual_inc:30K-40K',
'annual_inc:40K-50K',
'annual_inc:50K-60K',
'annual_inc:60K-70K',
'annual_inc:70K-80K',
'annual_inc:80K-90K',
'annual_inc:90K-100K',
'annual_inc:100K-120K',
'annual_inc:120K-140K',
'annual_inc:>140K',
'dti:<=1.4',
'dti:1.4-3.5',
'dti:3.5-7.7',
'dti:7.7-10.5',
'dti:10.5-16.1',
'dti:16.1-20.3',
'dti:20.3-21.7',
'dti:21.7-22.4',
'dti:22.4-35',
'dti:>35',
'mths_since_last_delinq:Missing',
'mths_since_last_delinq:0-3',
'mths_since_last_delinq:4-30',
'mths_since_last_delinq:31-56',
'mths_since_last_delinq:>=57',
'mths_since_last_record:Missing',
'mths_since_last_record:0-2',
'mths_since_last_record:3-20',
'mths_since_last_record:21-31',
'mths_since_last_record:32-80',
'mths_since_last_record:81-86',
'mths_since_last_record:>=86',
]]
inputs_train_with_ref_cat.shape
inputs_2015_with_ref_cat.shape
df_scorecard = pd.read_csv('df_scorecard.csv', index_col = 0)
# We import the scorecard.
df_scorecard
inputs_train_with_ref_cat_w_intercept = inputs_train_with_ref_cat
inputs_train_with_ref_cat_w_intercept.insert(0, 'Intercept', 1)
# We insert a column in the dataframe, with an index of 0, that is, in the beginning of the dataframe.
# The name of that column is 'Intercept', and its values are 1s.
inputs_train_with_ref_cat_w_intercept = inputs_train_with_ref_cat_w_intercept[df_scorecard['Feature name'].values]
# Here, from the 'inputs_train_with_ref_cat_w_intercept' dataframe, we keep only the columns with column names,
# exactly equal to the row values of the 'Feature name' column from the 'df_scorecard' dataframe.
inputs_train_with_ref_cat_w_intercept.head()
inputs_2015_with_ref_cat_w_intercept = inputs_2015_with_ref_cat
inputs_2015_with_ref_cat_w_intercept.insert(0, 'Intercept', 1)
# We insert a column in the dataframe, with an index of 0, that is, in the beginning of the dataframe.
# The name of that column is 'Intercept', and its values are 1s.
inputs_2015_with_ref_cat_w_intercept = inputs_2015_with_ref_cat_w_intercept[df_scorecard['Feature name'].values]
# Here, from the 'inputs_2015_with_ref_cat_w_intercept' dataframe, we keep only the columns with column names
# exactly equal to the row values of the 'Feature name' column from the 'df_scorecard' dataframe.
inputs_2015_with_ref_cat_w_intercept.head()
scorecard_scores = df_scorecard['Score - Final']
scorecard_scores = scorecard_scores.values.reshape(102, 1)
y_scores_train = inputs_train_with_ref_cat_w_intercept.dot(scorecard_scores)
# Here we multiply the values of each row of the dataframe by the values of each column of the variable,
# which is an argument of the 'dot' method, and sum them. It's essentially the sum of the products.
y_scores_train.head()
y_scores_2015 = inputs_2015_with_ref_cat_w_intercept.dot(scorecard_scores)
# Here we multiply the values of each row of the dataframe by the values of each column of the variable,
# which is an argument of the 'dot' method, and sum them. It's essentially the sum of the products.
y_scores_2015.head()
inputs_train_with_ref_cat_w_intercept = pd.concat([inputs_train_with_ref_cat_w_intercept, y_scores_train], axis = 1)
inputs_2015_with_ref_cat_w_intercept = pd.concat([inputs_2015_with_ref_cat_w_intercept, y_scores_2015], axis = 1)
# Here we concatenate the scores we calculated with the rest of the variables in the two dataframes:
# the one with old ("expected") data and the one with new ("actual") data.
inputs_train_with_ref_cat_w_intercept.columns.values[inputs_train_with_ref_cat_w_intercept.shape[1] - 1] = 'Score'
inputs_2015_with_ref_cat_w_intercept.columns.values[inputs_2015_with_ref_cat_w_intercept.shape[1] - 1] = 'Score'
# Here we rename the columns containing scores to "Score" in both dataframes.
inputs_2015_with_ref_cat_w_intercept.head()
inputs_train_with_ref_cat_w_intercept['Score:300-350'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 300) & (inputs_train_with_ref_cat_w_intercept['Score'] < 350), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:350-400'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 350) & (inputs_train_with_ref_cat_w_intercept['Score'] < 400), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:400-450'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 400) & (inputs_train_with_ref_cat_w_intercept['Score'] < 450), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:450-500'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 450) & (inputs_train_with_ref_cat_w_intercept['Score'] < 500), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:500-550'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 500) & (inputs_train_with_ref_cat_w_intercept['Score'] < 550), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:550-600'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 550) & (inputs_train_with_ref_cat_w_intercept['Score'] < 600), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:600-650'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 600) & (inputs_train_with_ref_cat_w_intercept['Score'] < 650), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:650-700'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 650) & (inputs_train_with_ref_cat_w_intercept['Score'] < 700), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:700-750'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 700) & (inputs_train_with_ref_cat_w_intercept['Score'] < 750), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:750-800'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 750) & (inputs_train_with_ref_cat_w_intercept['Score'] < 800), 1, 0)
inputs_train_with_ref_cat_w_intercept['Score:800-850'] = np.where((inputs_train_with_ref_cat_w_intercept['Score'] >= 800) & (inputs_train_with_ref_cat_w_intercept['Score'] <= 850), 1, 0)
# We create dummy variables for score intervals in the dataframe with the old ("expected") data.
inputs_2015_with_ref_cat_w_intercept['Score:300-350'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 300) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 350), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:350-400'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 350) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 400), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:400-450'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 400) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 450), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:450-500'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 450) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 500), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:500-550'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 500) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 550), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:550-600'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 550) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 600), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:600-650'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 600) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 650), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:650-700'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 650) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 700), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:700-750'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 700) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 750), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:750-800'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 750) & (inputs_2015_with_ref_cat_w_intercept['Score'] < 800), 1, 0)
inputs_2015_with_ref_cat_w_intercept['Score:800-850'] = np.where((inputs_2015_with_ref_cat_w_intercept['Score'] >= 800) & (inputs_2015_with_ref_cat_w_intercept['Score'] <= 850), 1, 0)
# We create dummy variables for score intervals in the dataframe with the new ("actual") data.
###Output
_____no_output_____
###Markdown
Population Stability Index: Calculation and Interpretation
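For each dummy variable, the contribution to the PSI is $(p^{actual} - p^{expected}) \times \ln\left(\frac{p^{actual}}{p^{expected}}\right)$, where $p^{expected}$ and $p^{actual}$ are the proportions of observations in that category in the old (train) and new (2015) data; the PSI of an original variable (or of the score) is the sum of the contributions of its dummies, exactly as implemented below. As a common rule of thumb, a PSI below 0.1 indicates an insignificant population shift, 0.1 to 0.25 a moderate shift, and above 0.25 a significant shift that calls for re-examining the model.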
###Code
PSI_calc_train = inputs_train_with_ref_cat_w_intercept.sum() / inputs_train_with_ref_cat_w_intercept.shape[0]
# We create a dataframe with proportions of observations for each dummy variable for the old ("expected") data.
PSI_calc_2015 = inputs_2015_with_ref_cat_w_intercept.sum() / inputs_2015_with_ref_cat_w_intercept.shape[0]
# We create a dataframe with proportions of observations for each dummy variable for the new ("actual") data.
PSI_calc = pd.concat([PSI_calc_train, PSI_calc_2015], axis = 1)
# We concatenate the two dataframes along the columns.
PSI_calc = PSI_calc.reset_index()
# We reset the index of the dataframe. The index becomes from 0 to the total number of rows less one.
# The old index, which is the dummy variable name, becomes a column, named 'index'.
PSI_calc['Original feature name'] = PSI_calc['index'].str.split(':').str[0]
# We create a new column, called 'Original feature name', which contains the value of the 'index' column
# up to the colon symbol, i.e. the name of the original variable each dummy comes from.
PSI_calc.columns = ['index', 'Proportions_Train', 'Proportions_New', 'Original feature name']
# We change the names of the columns of the dataframe.
PSI_calc = PSI_calc[np.array(['index', 'Original feature name', 'Proportions_Train', 'Proportions_New'])]
PSI_calc
PSI_calc = PSI_calc[(PSI_calc['index'] != 'Intercept') & (PSI_calc['index'] != 'Score')]
# We remove the rows with values in the 'index' column 'Intercept' and 'Score'.
PSI_calc['Contribution'] = np.where((PSI_calc['Proportions_Train'] == 0) | (PSI_calc['Proportions_New'] == 0), 0, (PSI_calc['Proportions_New'] - PSI_calc['Proportions_Train']) * np.log(PSI_calc['Proportions_New'] / PSI_calc['Proportions_Train']))
# We calculate the contribution of each dummy variable to the PSI of each original variable it comes from.
# If either the proportion of old data or the proportion of new data are 0, the contribution is 0.
# Otherwise, we apply the PSI formula for each contribution.
PSI_calc
PSI_calc.groupby('Original feature name')['Contribution'].sum()
# Finally, we sum all contributions for each original independent variable and the 'Score' variable.
###Output
_____no_output_____ |
src/Keras_Fashion_MNIST_TPU_Example.ipynb | ###Markdown
###Code
%%capture
!pip install watermark
%load_ext watermark
%watermark -p tensorflow,numpy -m
###Output
tensorflow 1.12.0
numpy 1.14.6
compiler : GCC 8.2.0
system : Linux
release : 4.14.79+
machine : x86_64
processor : x86_64
CPU cores : 2
interpreter: 64bit
###Markdown
(Adapted from https://github.com/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb) Fashion MNIST with Keras and TPUs. Let's try out using `tf.keras` and Cloud TPUs to train a model on the fashion MNIST dataset. First, let's grab our dataset using `tf.keras.datasets`.
###Code
import os
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
###Markdown
Value distribution of X:
###Code
pd.set_option('display.float_format', lambda x: '%.3f' % x)
pd.Series(x_train.reshape(-1)).describe()
###Output
_____no_output_____
###Markdown
Value distribution of Y:
###Code
pd.Series(y_train.reshape(-1)).describe()
###Output
_____no_output_____
###Markdown
Create a validation set
###Code
sss = StratifiedShuffleSplit(n_splits=5, random_state=0, test_size=1/6)
train_index, valid_index = next(sss.split(x_train, y_train))
x_valid, y_valid = x_train[valid_index], y_train[valid_index]
x_train, y_train = x_train[train_index], y_train[train_index]
print(x_train.shape, x_valid.shape, x_test.shape)
###Output
(50000, 28, 28, 1) (10000, 28, 28, 1) (10000, 28, 28, 1)
###Markdown
Defining our model. We will use a standard conv-net for this example, built from three convolutional blocks; each block applies batch normalization, a 5x5 convolution, max pooling, and dropout, and the blocks are followed by a dense classification head.
###Code
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(64, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(128, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.BatchNormalization(input_shape=x_train.shape[1:]))
model.add(tf.keras.layers.Conv2D(256, (5, 5), padding='same', activation='elu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(256))
model.add(tf.keras.layers.Activation('elu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10))
model.add(tf.keras.layers.Activation('softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
batch_normalization (BatchNo (None, 28, 28, 1) 4
_________________________________________________________________
conv2d (Conv2D) (None, 28, 28, 64) 1664
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 64) 0
_________________________________________________________________
dropout (Dropout) (None, 14, 14, 64) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 14, 14, 64) 256
_________________________________________________________________
conv2d_1 (Conv2D) (None, 14, 14, 128) 204928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 128) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 7, 7, 128) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 7, 7, 128) 512
_________________________________________________________________
conv2d_2 (Conv2D) (None, 7, 7, 256) 819456
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 256) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 3, 3, 256) 0
_________________________________________________________________
flatten (Flatten) (None, 2304) 0
_________________________________________________________________
dense (Dense) (None, 256) 590080
_________________________________________________________________
activation (Activation) (None, 256) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 2570
_________________________________________________________________
activation_1 (Activation) (None, 10) 0
=================================================================
Total params: 1,619,470
Trainable params: 1,619,084
Non-trainable params: 386
_________________________________________________________________
###Markdown
Training on the TPU. We're ready to train! We first construct our model on the TPU and compile it. Here we demonstrate that we can use a generator function and `fit_generator` to train the model. You can also pass in `x_train` and `y_train` to `tpu_model.fit()` instead.
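As a minimal sketch of that in-memory variant (assuming the same `tpu_model`, compiled as in the cells below, and a batch size chosen here only to mirror the generator), it would look roughly like this:

```python
# Rough sketch: train directly from the in-memory arrays instead of a generator.
# Assumes tpu_model has been built and compiled exactly as in the next cells.
tpu_model.fit(
    x_train, y_train,
    batch_size=512,                      # mirrors the generator's batch size
    epochs=15,
    validation_data=(x_valid, y_valid),
)
```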
###Code
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
tpu_model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-3, ),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['sparse_categorical_accuracy']
)
%%time
def train_gen(batch_size):
while True:
offset = np.random.randint(0, x_train.shape[0] - batch_size)
yield x_train[offset:offset+batch_size], y_train[offset:offset + batch_size]
tpu_model.fit_generator(
train_gen(512),
epochs=15,
steps_per_epoch=100,
validation_data=(x_valid, y_valid)
)
###Output
Epoch 1/15
INFO:tensorflow:New input shapes; (re-)compiling: mode=train (# of cores 8), [TensorSpec(shape=(64,), dtype=tf.int32, name='core_id0'), TensorSpec(shape=(64, 28, 28, 1), dtype=tf.float32, name='batch_normalization_input_10'), TensorSpec(shape=(64, 1), dtype=tf.float32, name='activation_1_target_10')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for batch_normalization_input
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 3.1950180530548096 secs
INFO:tensorflow:Setting weights on TPU model.
99/100 [============================>.] - ETA: 0s - loss: 0.9384 - sparse_categorical_accuracy: 0.7172INFO:tensorflow:New input shapes; (re-)compiling: mode=eval (# of cores 8), [TensorSpec(shape=(64,), dtype=tf.int32, name='core_id_10'), TensorSpec(shape=(64, 28, 28, 1), dtype=tf.float32, name='batch_normalization_input_10'), TensorSpec(shape=(64, 1), dtype=tf.float32, name='activation_1_target_10')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for batch_normalization_input
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 1.434025526046753 secs
INFO:tensorflow:New input shapes; (re-)compiling: mode=eval (# of cores 8), [TensorSpec(shape=(34,), dtype=tf.int32, name='core_id_10'), TensorSpec(shape=(34, 28, 28, 1), dtype=tf.float32, name='batch_normalization_input_10'), TensorSpec(shape=(34, 1), dtype=tf.float32, name='activation_1_target_10')]
INFO:tensorflow:Overriding default placeholder.
INFO:tensorflow:Remapping placeholder for batch_normalization_input
INFO:tensorflow:Started compiling
INFO:tensorflow:Finished compiling. Time elapsed: 2.0235393047332764 secs
100/100 [==============================] - 18s 180ms/step - loss: 0.9345 - sparse_categorical_accuracy: 0.7179 - val_loss: 1.0746 - val_sparse_categorical_accuracy: 0.6738
Epoch 2/15
100/100 [==============================] - 7s 69ms/step - loss: 0.4890 - sparse_categorical_accuracy: 0.8298 - val_loss: 0.8284 - val_sparse_categorical_accuracy: 0.7191
Epoch 3/15
100/100 [==============================] - 7s 68ms/step - loss: 0.3926 - sparse_categorical_accuracy: 0.8628 - val_loss: 0.4900 - val_sparse_categorical_accuracy: 0.8290
Epoch 4/15
100/100 [==============================] - 7s 68ms/step - loss: 0.3340 - sparse_categorical_accuracy: 0.8806 - val_loss: 0.3220 - val_sparse_categorical_accuracy: 0.8829
Epoch 5/15
100/100 [==============================] - 7s 71ms/step - loss: 0.2931 - sparse_categorical_accuracy: 0.8940 - val_loss: 0.3302 - val_sparse_categorical_accuracy: 0.8822
Epoch 6/15
100/100 [==============================] - 7s 66ms/step - loss: 0.2733 - sparse_categorical_accuracy: 0.9012 - val_loss: 0.2330 - val_sparse_categorical_accuracy: 0.9173
Epoch 7/15
100/100 [==============================] - 7s 69ms/step - loss: 0.2486 - sparse_categorical_accuracy: 0.9093 - val_loss: 0.2219 - val_sparse_categorical_accuracy: 0.9204
Epoch 8/15
100/100 [==============================] - 7s 68ms/step - loss: 0.2291 - sparse_categorical_accuracy: 0.9148 - val_loss: 0.2171 - val_sparse_categorical_accuracy: 0.9224
Epoch 9/15
100/100 [==============================] - 7s 67ms/step - loss: 0.2036 - sparse_categorical_accuracy: 0.9242 - val_loss: 0.2227 - val_sparse_categorical_accuracy: 0.9209
Epoch 10/15
100/100 [==============================] - 7s 67ms/step - loss: 0.2049 - sparse_categorical_accuracy: 0.9248 - val_loss: 0.2335 - val_sparse_categorical_accuracy: 0.9183
Epoch 11/15
100/100 [==============================] - 7s 69ms/step - loss: 0.1808 - sparse_categorical_accuracy: 0.9319 - val_loss: 0.2162 - val_sparse_categorical_accuracy: 0.9263
Epoch 12/15
100/100 [==============================] - 7s 67ms/step - loss: 0.1715 - sparse_categorical_accuracy: 0.9362 - val_loss: 0.2197 - val_sparse_categorical_accuracy: 0.9226
Epoch 13/15
100/100 [==============================] - 7s 69ms/step - loss: 0.1580 - sparse_categorical_accuracy: 0.9418 - val_loss: 0.2136 - val_sparse_categorical_accuracy: 0.9255
Epoch 14/15
100/100 [==============================] - 7s 68ms/step - loss: 0.1388 - sparse_categorical_accuracy: 0.9485 - val_loss: 0.2360 - val_sparse_categorical_accuracy: 0.9221
Epoch 15/15
100/100 [==============================] - 7s 68ms/step - loss: 0.1440 - sparse_categorical_accuracy: 0.9463 - val_loss: 0.2203 - val_sparse_categorical_accuracy: 0.9283
CPU times: user 19.2 s, sys: 3.28 s, total: 22.5 s
Wall time: 1min 53s
###Markdown
Checking our results (inference). Now that we're done training, let's see how well we can predict fashion categories! Keras/TPU prediction isn't working due to a small bug (fixed in TF 1.12!), but we can predict on the CPU to see how our results look.
###Code
LABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']
cpu_model = tpu_model.sync_to_cpu()
from matplotlib import pyplot
%matplotlib inline
def plot_predictions(images, predictions, true_labels):
n = images.shape[0]
nc = int(np.ceil(n / 4))
fig = pyplot.figure(figsize=(4,3))
# axes = fig.add_subplot(nc, 4)
f, axes = pyplot.subplots(nc, 4)
f.tight_layout()
for i in range(nc * 4):
y = i // 4
x = i % 4
axes[x, y].axis('off')
# skip axes beyond the number of images before indexing into predictions
if i >= n:
continue
label = LABEL_NAMES[np.argmax(predictions[i])]
confidence = np.max(predictions[i])
axes[x, y].imshow(images[i])
pred_label = np.argmax(predictions[i])
axes[x, y].set_title("{} ({})\n {:.3f}".format(
LABEL_NAMES[pred_label],
LABEL_NAMES[true_labels[i]],
confidence
), color=("green" if true_labels[i] == pred_label else "red"))
pyplot.gcf().set_size_inches(8, 8)
plot_predictions(
np.squeeze(x_test[:16]),
cpu_model.predict(x_test[:16]),
y_test[:16]
)
%%time
# Evaluate the model on valid set
score = cpu_model.evaluate(x_valid, y_valid, verbose=0)
# Print test accuracy
print('\n', 'Valid accuracy:', score[1])
%%time
# Evaluate the model on test set
score = cpu_model.evaluate(x_test, y_test, verbose=0)
# Print test accuracy
print('\n', 'Test accuracy:', score[1])
###Output
Test accuracy: 0.9194
CPU times: user 445 ms, sys: 31.3 ms, total: 476 ms
Wall time: 3.42 s
|
machine learning.ipynb | ###Markdown
Load file sampled from data in auth.txt.gz so that number of fails is similar to the number of successes.
###Code
import pandas as pd  # pandas is not imported anywhere above in this notebook
df=pd.read_csv('md/msample1.csv', header=None)
len(df)
df[8].value_counts()
###Output
_____no_output_____
###Markdown
Creating classification label
###Code
Y=(df[8]=='Success')
###Output
_____no_output_____
###Markdown
Creating features for machine learning. First I define a function that works with source_user and destination_user from columns 1 and 2. For users that start with 'C' or 'U', this function strips all the numbers that follow the first symbol, keeping only the leading letter. I am hoping that this will be a useful classification for the users.
###Code
def map_user(x):
if x.startswith('C'):
return 'C'
elif x.startswith('U'):
return 'U'
else:
return x
###Output
_____no_output_____
###Markdown
Creating features for machine learning: columns 5-7 from df (authentication type, logon type and authentication orientation) are expanded so that every label in those columns becomes a new column holding 1 (True) if the label applies and 0 (False) otherwise. Columns 1-4 contain a lot of unique labels, on the scale of 30,000, so I do not want to apply the same procedure I used for columns 5-7, as the number of features would explode, and I have no evidence that such features would be useful. Many of the labels here come in the form of C+{number} or U+{number}, which probably reflects some ordering and labeling of the different computers and users in the lab. My goal is to create fewer, more informative features. So I take columns 1 and 2 and split them into new columns that separately track source user, source domain, destination user and destination domain. I classify users with more general labels, replacing C-labels and U-labels with just the first letter. I later convert these new labels into features just as I did for columns 5-7. I also do comparisons between data derived from columns 1-4 to see if the source and destination computer are the same, whether the source user is the same as the source computer, etc. A small illustration of the user mapping follows.
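As a tiny, made-up illustration of that mapping (the value `'U123@DOM1'` is hypothetical, not taken from the data, and `map_user` is the function defined in the previous cell):

```python
# Hypothetical example value, only to show how the split and the mapping behave.
user, domain = 'U123@DOM1'.split('@')   # -> 'U123', 'DOM1'
map_user(user)                          # -> 'U' (the general user class)
```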
###Code
df["source_user"], df["source_domain"] = zip(*df[1].str.split('@').tolist())
df["source_user"]=df["source_user"].str.rstrip('$')
df["destination_user"], df["destination_domain"] = zip(*df[2].str.split('@').tolist())
df["destination_user"]=df["destination_user"].str.rstrip('$')
df['source_class']=df['source_user'].map(map_user)
df['destination_class']=df['destination_user'].map(map_user)
X=pd.DataFrame.from_items([('time', (df[0]%(24*60*60)).astype(int))])
X['same_user']= (df['destination_user']==df['source_user'])
X['same_domain']=(df['destination_domain']==df['source_domain'])
X['source_user_comp_same']=(df[3]==df['source_user'])
X['destination_user_comp_same']=(df['destination_user']==df[4])
X['same_comp']=(df[3]==df[4])
X['source_domain_comp_same']=(df[3]==df['source_domain'])
X['destination_domain_comp_same']=(df['destination_domain']==df[4])
for j in [5,6, 7]:
for label in sorted(df[j].unique()):
if label=='?':
if j==5:
X['?_authentication type']=(df[j]==label)
elif j==6:
X['?_logon type']=(df[j]==label)
else:
X[label]=(df[j]==label)
for cl in ['source_class', 'destination_class']:
for label in df[cl].unique():
if cl=='source_class':
X['source_'+label]=(df[cl]==label)
else:
X['destination_'+label]=(df[cl]==label)
X
###Output
_____no_output_____
###Markdown
Separate current dataset into train and test data
###Code
n=int(len(X)*.7)
Xtrain=X[:n]
Ytrain=Y[:n]
Xtest=X[n:]
Ytest=Y[n:]
###Output
_____no_output_____
###Markdown
Logistic regression
###Code
from sklearn import linear_model, datasets
logreg = linear_model.LogisticRegression(C=1e5).fit(Xtrain, Ytrain)
print logreg.score(Xtrain, Ytrain), logreg.score(Xtest, Ytest)
from sklearn.metrics import confusion_matrix
trainPred=logreg.predict(Xtrain)
testPred=logreg.predict(Xtest)
print confusion_matrix(Ytrain, trainPred)
confusion_matrix(Ytest, testPred)
###Output
[[ 98017 27581]
[ 15614 139002]]
###Markdown
Coefficients for logistic regression should tell which parameters are important
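A compact way to line the coefficients up with the feature names (just a sketch, reusing the `logreg` and `X` already defined above; sorting by absolute size is only for readability):

```python
# Sketch: pair each coefficient with its feature name, largest magnitude first.
coef_by_feature = pd.Series(logreg.coef_[0], index=X.columns)
print(coef_by_feature[coef_by_feature.abs().sort_values(ascending=False).index])
```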
###Code
logreg.coef_
X.columns
###Output
_____no_output_____
###Markdown
Try L1 penalty
###Code
clf_l1_LR = linear_model.LogisticRegression(C=1000, penalty='l1', tol=0.001).fit(Xtrain, Ytrain)
print clf_l1_LR.score(Xtrain, Ytrain), clf_l1_LR.score(Xtest, Ytest)
###Output
0.93957118488 0.94798154748
###Markdown
Try L2 penalty
###Code
clf_l2_LR = linear_model.LogisticRegression(C=1000, penalty='l2', tol=0.001).fit(Xtrain, Ytrain)
print clf_l2_LR.score(Xtrain, Ytrain), clf_l2_LR.score(Xtest, Ytest)
###Output
0.551781852441 0.380091929521
###Markdown
Gradient Boosting
###Code
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.05, max_depth=1, random_state=0).fit(Xtrain, Ytrain)
print clf.score(Xtrain, Ytrain), clf.score(Xtest, Ytest)
###Output
0.889084770925 0.880649835126
###Markdown
Analysis. From the results I just got, I can see that logistic regression with the L1 penalty works better than Gradient Boosting, which in turn works better than the logistic regression without any explicit regularization, which works better than logistic regression with the L2 penalty. Lasso logistic regression (L1 penalty) works really well for correlated features, whereas the L2 penalty fails badly when features are correlated. Given how I constructed my features, they can easily turn out to be correlated, but I probably want to spend more time on understanding the correlations between features, as Lasso gives a really good accuracy score. At this point, I do not know if I should trust my result, as I tested it on a very small subset of data from auth.txt.gz. I'd like to get more independent randomly sampled subsets to see how well my results hold up.
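As a rough sketch of how those feature correlations could be inspected (reusing the feature matrix `X` built above; each pair appears twice, once in each order):

```python
# Sketch: absolute pairwise correlations between the engineered features.
corr = X.astype(float).corr().abs()
pairs = corr.stack()
# drop self-correlations (a feature paired with itself)
pairs = pairs[[a != b for a, b in pairs.index]]
print(pairs.sort_values(ascending=False).head(10))
```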
###Code
clf_l1_LR.coef_
pd.DataFrame.from_items([("feature",X.columns), ("LR contribution",clf_l1_LR.coef_[0]*100)])
###Output
_____no_output_____
###Markdown
**Customer Personality Prediction or Analysis**

[Image: 824-8247150_click-happy-customers-cartoon.png]
6ZNpviaPxroGjapBfWz2upW1tp2k3U9vZ3VvOiXFv5V1e3xMciIfMLAruTj7R/YC/4QHW/2XvhL4q8J+FfDWh6hdeGYtL8RXOj6RYWV1ea/oM8ujarc31zbxfaLie6ubA3Je4mkk2SorbduxeK8MBPhL/wUI8deH1jjs9A/aR+FukeMrBUVY4JvGngCSTS9SijUA/6RPpLXN/PtMfmvKZJFZghfOUrq1rf1s9v6R/Zvgx4BYXwszjwr8V6nFNLiKjxJicLgJUaOVrB4TLMPxpw1j6WUYuliquLxFbEVpZpiMBlcm6OFUlmd1GLp8lX9DMDIOBkdD3GeuPrWL4j0mLX/AA/rehz317pkGsaVqGmT6hpssUOoWUN9ay20tzZTTw3EMVzDHIzwyyQSrHIquUOK2qRmVFLOyqqjLMxCqAOpJOAAPUmoP9BqtOFWnUpVIqVOrCdOpF3SlCcXGSbTTScW02mnbZo/hu+KnhvQ/B/xJ8d+FvDOsyeIfD/h/wAV67pGja3NEYJ9T06w1G4trS7niMUIWeWGNDNtijjaTc0aiMrWZ4F1258L+M/CviOyd47zQ9f0nVbWSPAdLiwvoLmFlyQMrJEp5OPXivp3/goBZNZftd/GjdqOian9r8Q2uoJPoCLHZRxXuj6dPDbTRozquo20TpFqbB3M18s85bMpA+XPBui3fiPxZ4b0CwG691nXNL0y0Xaz77m9vYbaBNq/M26SRVwoLHOACa3i9E+tk+//AA/3H/P/AMT5XUyDxD4gybLoqFXJuMs0y7ARw0ZQUJ4DOq+Hwiw8J18TOCi6VNUozxNecbRUq9WSc5f2xWVwl5Z2l3Gd0d1bQXEbcjck8SSqcMAeVYHkA+oBqzVSwtls7GytEGFtbS2tlGScLBCkSjJ5PCjk8nvzVusXa7ttd2vvbof9ANPm9nT57KfJHnS25uVc1vK97eQV/MP/AMHMvxNn0r4N/s6/CW1uVVPGHj3xP411O3V/3jweDNFtNKsN6YIMUlx4vuXVsgh7bjgmv6eK/jD/AODmPxRJeftG/s/+D92YdA+D2o68E3Kds3ibxdqdnI2wEsu6Pw1AMsAGx8ucHHx/HWIlh+GMw5W1Ks8NQTWjtUxNLnXzpxnF+TP0bwswscVxvlHPFSjhli8U01e0qOEreyl5ONaVOSfdI/mjooor+dj+xT7E/YK+EWh/G/8Aam+GfgLxVo0Wv+ErufWdS8U6ZcC5Ftc6JpWh6hd3Ec01pLDcW/mzJbxQzxyoyXEkOGBIr9ivjj/wRZ+HHiOS61b4D+PL/wAA3zgtH4W8XJNr/hwy/MQtrq8ATWrCNmwCs8WqiME7NqhYxgf8EY/2br/w/ofi39pPxRp8lrN4rtJPBvw7W4j2Sy6FDdpN4j1yNXTcLe/v7S102zlV1E0dlfHa0bxu37s1+bcQZ/i8Pm8o5fiZQhhacKFSKtOjUqqUp1OaEk4tx5lSlJLmThKKaszto0Yyp3nHWTundppWstvO7s7rbQ/ka8Sf8En/ANtLQdSmsbH4faT4pto3ZY9V8P8Ai7w41lOoPyOsep6hpt7HvU5Ky2q7Tlcngnmbz/gmB+25ZQPcP8FrycIgfyrLxP4PvZ3ycBY4bbX5ZHkJ/gC7wPmZQvNf2GZI6Ej8aMn1P5msI8a5okk6GClbd+zrJy210r2V+tlbV2S0s3hY62lJdtnb17+W3nc/h78a/smftL/DxZZfGHwO+JWj20Cl5r1vC2p3mnxopOZHv9PgurRI/lP7xpgmMfNyM+BXVpd2U0lteW1xaXETFZYLmGSCaNl4ZZIpVV0ZTwwZQQeCK/0At7YK7jtPUHkH6g9fxryjx/8AAz4MfFO3kt/iJ8LfA3i5ZUmQzav4c02W+T7QczPBqaQR6jbyyc7pre6imOc7678Pxw7pYrAK3WeHra/KnUjr/wCDUTLCfyz+Ul+q/wAv8j+Eyiv60PH/APwSP/Y+8ZG5n0PRPFvw5vZxIVl8KeJJ7ixilcBVddL8QR6vCFjHKRQS28Zb5n3cg/Ifin/ghxp7yTyeCvj7PDES32W08U+D0ldBj5fPv9J1SMOwYcmPTowwbIC7cN7dDi3JqyXPWq4dv7NajPy+1SVWPzbWxi8PVX2b+aa/Vp/gfz2KrMdqqWJ6BQSTjrwMmggg4IIPoRg1/Q38Gv8AgkJ8XfhD8T/C3xE0r47fDtpvDOpx3IhufBGp65b6jp9xDJaatp19o+pzQWNxBe2M9xaFHuAwWXz4pYJ443X9IPiX/wAE+v2RPiotxL4g+Dnh/R9VuU/ea14H+0eDb0TlZM3CwaLNBp0j+bI0xFzZXAkkCm484AgxieLsroVacISniqU43lVw6fNTknblnTqxpJpxaalCcndSUoRsnKo4ao1raL7S6ryav81ZWP4w6K/pE+In/BEb4a6iLq4+Fvxe8UeGp3Uvbab4x0ux8Q2COEwsTX+mDR7xEZ/maU21yygkLGRjH4//ALVH7EHxr/ZKudOuPH9npeseEtbupbLRfGvhm6e80S7vI1aX7Bdx3EVvf6XqDQL5y297bIkybjbTzmOQJ6WBz3K8wnGlh8VH20r2o1IzpVG0rtRU0ozaV21CUtm1omzOVKpBNyi7LqrNfer2+flfdHx3X1n+w58A/D/7Uf7Tnw7/AGf/ABPLJb6R8Ubfxr4ba7iZ1fT9Sk8B+JrzQ9TCoQZf7K1y107UfJOUmNqInBVyK+TK/VL/AIIqaRJq/wDwUo/ZwEcQl/s/VPGOqPkyARpZfD/xTIZcxgj5DggSERscKx5AP1eRxc86yiK3lmeBXXriqV9tdux87xTJQ4Z4hlJpKOSZo23t/uVfvo+1up+Ifxe+FvjH4JfE7x18JfH+kXOheMfh94n1fwr4g0y6jeOS31DSLyW0lZNwHmW84jW4tZ0zFcW0sU8TNHIrHzmv7tf+C+3/AASD1z9o61l/bB/Zq8MpqXxg8N6RFa/FjwLpNuF1P4j+HNKhZLPxPo8KELfeLdAskSyurPZ9p1nRbe3jtme902C2vf4VtQ0++0m+u9N1K0uLDULG4mtLyzu4ZLe6tbmBzHNb3EEqrLDNFIrJJHIqujAqwBFf0tODg7Pbo+6/z7n8Kn7af8Ehv+Ch+n/s6eJJvgH8YdWNp8HfHWrpd+HvEF3J/ovw+8X3pit5Li7difs/hvXNsKanIMRabdxR6i6rDLfSD+vK2uba9tre8s7iC7tLuGK4tbq2lSe3ubeZFkhngmjZo5YZY2WSORGZHRlZSQQa/wA03p0r9c/2Gv8Agrb8YP2V7bTPh74/tbr4t/Bm3ljittIvr5k8XeD7PaIzH4U1i6d4pdPhCq6aDqYNqo3pZXdgrYP4H4neFFXPsRV4g4cjTjmlRKWPy6Uo0oY+UUorEYepLlp08W4pKrCpKNPEWU+eFbm9t+7+GfirSyPD0uH+I5VHllN8uAzGMZVZ4CMnf6viKcVKpPCRbbpzpqdShdw5J0uX2X9IH7eH7BXhD9sbwra39ldWnhT4u+GLWWPwt4ueDdbalaHfJ/wjfiURIZ5tLkmZpLS6j3XG
mXLvLEskMtxBL/K98Yv2O/2kPgVrFxpXxA+FXiq1hilkjt9d0rTLrXPDeoRx5P2ix1vS4rmykiZVL7ZJIpkAIlijYED+tT9nP/goL+yp+1AlhafDj4maZaeLb6MH/hAPFxHhrxjHNgl7eDTdQdI9VdCCu/RrjUImxuVyCDX2mTlSjYZDwVYBlIIwQQQQQRwQeCOK/C8HnvEHCE5ZVmmXV4RptuODzCnWwteinJ3dGpKDboyldr3KlNu7ptJu/wDZHCniB7LBQll2KwedZXJt0nRxMaipSdpONOvSdTk3vKjUi3FvSMG3f+BnwF8BPjR8T9ZtdA8A/DDxt4n1S7fZFDpvh7UXhT5lRnub14EsrSFGdBJPdXEMMe4b3XIr+jD/AIJ7f8Esv+FI6zpPxp+P66Zq/wARrBYb3wh4Jtnjv9M8GXxBZNV1e6Xda6jr9sCps4bYyWmmzZmE09ykbQ/tnEkdunl28UVvHz+7giSFMk5J2xqq5J5JxzXi/wAZ/wBov4I/s9aJ/wAJB8ZPiV4X8C2Lo72sGr6jEuq6kyKWMWl6PEZdT1KVsEKlpaynIJbCqxFY/i3Oc/SyzLcDOlLFL2boYNVcXjMRzaOnBwpxkoyWko06XO1dOfK2j0OIfEDEV8FXjOWGyfAcj+tV6mIin7J6SjUxNVUqdKnJaStGMpJuLlyycX7WSSSTyTya/DX/AIK2f8FGtG+CXgvWf2d/hBr8N98ZPGemT6d4r1bSLlJf+FbeHL5Hgu0muYJc23irVrdpILG1GZ9OtpH1CdY2NoJfjn9tD/guNrHiux1T4f8A7I+mal4U0q7jubDUPiz4ktEt/EdzbyB4XfwhobPINFMiHdDqurb9QiBVodOtZtsqfzx6lqWo6zqN9q+r313qeq6ndT32o6lf3Et3fX97cyGW4u7y6neSe5uZ5GaSWaV3kkclmYk5r9T8OfB/F08Xhs94soxoU8PONfB5LO061WrBqVKtmCV4UqVOSU44W8qlSaSxCpwi6VX+RvETxewksLicj4UrOvVxEJ0MXnELxo0aUly1KeXt2lVq1IuUHiko06cW3QdSco1KdMkkksSzEklmJZmJ5JZmJLEnkkkknknNe9/svfALxf8AtQfHz4X/AAJ8DWr3PiD4i+LNL0GJwhaLT7GedW1TV7s8LHZaTpyXV/dyuVWOG3diRiuZ+DHwS+Jv7QHjzR/hv8KPCuo+K/FWszKkVpZRH7PY225RPqerXrYttM0uzVhJd313JFBEnG5nZEb+xT/gnD/wTQ8MfsQTwfFLXfEM3ir9oC/0S/0i61zTJ57Xw54S0/WYIotR0rw5ARFcXd08SyWt1rl0UlnieWO1trWCVxJ+ycX8e5BwZRjLMqssRjasefDZXhXCeMrR1SqTUpRjh6DkmnXqtJ2kqUKs4uB+PcIcB57xlXay+jGhgKUuXE5nilKOEpPRunCyc8RXs01RpJ8t06sqUZKR/Sh4Bi+EX7L3wm+H/wAHNAvbSz0L4aeD9C8I6PpWnxxy3bwaNYw2QuLmK2/dx3d/PHJeXkszJ5l1PLLIxZyx5DXP2p7ZHePQfDxkUMyi41C6w425AYW0CMjIWAI/0oMUJyFIzXxjNPLO5eWRnZiSSxJyWOSfxPJ9yTUVfzpnvjpxjmU5xyt4TIcK3aEMNRp4vFcvapisXTnFy/vUcPQt0XV/0Vkngpwll0ISzJYrO8SknOWIqzwuF5tHenhsLOElHT4a2Ir3Tdz6Fvf2k/iDdSM0MunWUZzsW0skBUZO3cbproMwBGTtALDOMcVi/wDC/viX53mf2/L5eMeT9k07bnGM7vse7OecdM9q8Uor4Ovx5xpiJ+0q8VZ9zXUv3eZ4ujFNbWp0atOC9FFLyPuaPBHB9CCp0+Gck5UrfvMuwtaTXnOtTnOT03cm13Pe7f8AaM+I0DKWv7a5wR8t1Z2rIQCOGFvFbOeAed+Tk5J4rv8ARf2p9SQxprugWNyOfNnsZprMqM8GOGRbsOcZJBkQE4A2jLV8i0V3YHxM48y+anQ4nzOrZ35MbVjmEHto4Y6GIVna2lmlezVzixvh1wRj4uNbhvLad1bmwlKWBmrdVLByoO/e979bn6X+Fvjl4C8TmOEaidIvHVWNvquy3UFuAgug7WzOW4CeYHOR8vXHr0ckcqLJE6SRuAyPGwdGU9CrKSCD2IOK/HRWZCGVipHIIOCD68V6l4G+Lvi7wPJHFZ3z3elqwMml3ZM1qynbu8sNlrdyFOGhZMu7O4c8V+u8MfSArKdPDcWZbCdNtReZ5XFwnT2XPXwFSUo1F9qcsPVptJNQw820l+U8S+BNFwqYjhbMJwqJOSy7M5KcKj35aONhGMqb6RjXpVE21z14JNn6eUV5f8Pfiv4b+IFuqWcostXSMPcaVcOPNGAA720mFW4iDEDKgSKGQyRpvXPqFf0jlea5dnWCo5jlWMoY7BV43p16E1OLa+KElpKnUg9KlKpGNSnK8Zxi1Y/nnM8rzDJsZVy/M8JWwWMoO1ShXg4ySesZxesalOa1hUhKUJx1jJrUKKKK9A4Ar8av+C9dw0X/AATZ+LUKqpFz4m+GMbk5yqx/EDw9KNuCOSyKMnI25GMkEfsrX48/8F3tPe+/4JrfGdooRLJZa58MrwHeqGOOP4h+HFmf5mUNiJ2BQbi2QQpYAjxuIk3kOcpb/wBmY37lh6jf4XPo+D2lxZw25bf25le/d42ilv52P8+2v1l/4I5/Da28Y/tS3fjDULVZ7T4Y+CNY160d0DLDrmrSQaBp0gJVgJY7a/1GaLODmJnVg0Yz+TVf0Bf8ENNNjNx+0PrG2PzltPAmnBiGMojafX7llU42rGWVC2CGdguQQimv5G4irOjkuYTi7N0VS07VqkKMv/JZs/umkk6kE2l7yevW2tvnt2/I/oG60UVn6vqdrouk6prN8xSy0jTr3U7xwCSlrYW8l1cMAMklYomIABJPABr8aSbaSV22kkt23ol8z1NtzQor8Z/2aP2sf2//ANtbxf8AGTWP2afhh8Hr/wCHPwcsh4h1bQvF9/e6Vrl/o08l+NM0m01ttTSGfxBqttpt7dKDYWthbmFo5LlC0Ky/qX8GPiZYfGX4U+A/ijpthcaVaeNfD1prH9mXbBrjTrlmkt7+xkdQFl+yX0FzAky/LMkayqAHAHt5pw5m+T4XB43H4Z0sLj3UjhqylzQqTo2VWCdknKnK8ZcnNFSi1zX0OLD5jgcVicXg8PiaVXFYH2P1yhCSdXD/AFiCq0Paw3h7Wm1OHMlzRd0em0UUV4Z2hXB/EP4XfDn4s6FN4b+JXgrw9410WZHT7JrunQ3bQb1KmSzuiq3ljMATtns54JlPIcEV3lcJ8UfHlh8Lvhv46+I+qQS3WneBvCuueKLu1hKrLcxaNp896bdGbhDOYRF5hBCbt5BAxWlH2vtafsXONZzjGm4ScZqcmox5ZJppttJNNA7Wd9up+Tfxn/4IwfBzxZNd6p8HfG2t/DK+lM0qaDrMJ8UeGhIygpHbzNLa6zYwiQNlXuNRCq/7tVVFjP5gfE//AIJQftf
fDxrmfSPCWk/EvS4Qzpe+A9YgvbqSIOyhm0bUl0zVhIyjzPLhtbjahG5g2RX6m/st/tXft/ftEeAPjR+0n4f+FXwk8Q/AL4IyzXfjfRE1W58O+MI9KsrFta1a38KXN1eXces6npGgA6jM1/b2dtdOI7e3Z55DCP138MeIdP8AFnhrw94r0h3fSvE2h6Vr+nM42ubLVrKC/thIASA6xTqHAJAYHBI5P3eLxfFnDVPBTzOnGthsbTnPCVMRFzjWhSlyVOSvH2VSbhJOLlN1FdOza1PLw+Jy7HVsXQwuIp1K+BqQpYylSnGU8PUqU1UpwrQTfJKdNqaTs2nqrqSX8KXjD4TfE/4f3M1p43+H3jHwrPAxWVdd8O6rpyqQWBKy3NqkLplWHmRu0Z2nDHFeflWX7ykfUEfzr+/+9s7LU4GttTsrLUrZwA9tqFpBeW74ORvhuI5ImwRxuU4r5s+JX7Gn7MHxZsr208X/AAX8Efar9JA+t6DpFt4a16KaRGjW5i1XQ0sblpog2Y/tDXESsAxiYgVrh+OKTssVgKkO8qFWNTtryVI0n3+2zeWFl9mafk01+Kvr8j+KHShnVNOHrfWo/OdK/wBU/wCF9l/Z3w1+H+n+Z532HwV4Xs/N2eX5v2bRLKHzNm59m/Zu27225xuOMn/Mu+O3wetfgr+0x40+D2nX11quneFPHdvpGl3koja+uNNu5LO804T+UvlveJaXsMMzJGqvOjsIkzsH+nR4OhFt4S8L24DgQeHtGiAf7+I9Ot1G/hfm4+b5RzngdK/pPwolGtDNsRB3p1KWXShLVNxqLFTi7Pa8bPXVbH85eO8rLhmnfXmzeTXosuinf5v1+Ru3NvBeW89pcxrNb3MMtvPC43JLDMjRyxuD1V0ZlI9DX8mn7av7Ompfs8fGXXNJhtbj/hCvEd3da54K1J0/cTaZdy+fJpnmqixtdaLJMLKdcmVolt7mRVFymf60a8G/aH/Z68C/tH+AbzwT4ytvKnTzLvw94gto0OpeHtWETpDe2rNgyQNu2XlmzLFdQ5UlJFjlj/Y4ys9dn+D7/wBfpY/z1+kb4Lx8Y+C44TLnQocWZBVq5hw5ia7VOlXnUhGGNynE1Wn7LD5jTp0uWq/do4vD4WrN+yjVUv5RPgL8Rrj4SfGX4bfEaCea3Xwn4u0fU714CRI+lLdxxaxb4BBdLrS5bu2kjJ2yJK0bAqxB/pV+NH/BR79kLQ/CusaJJ4uk+Iz69ol7YXGheD9Pn1JJ7bUrOe2lguNRnNnpUBYO0bo1+JkyHC7SrH+ej9of9lH4r/s46/NYeL9FnvPD01xLHovi/TYpJ9D1aEOfJInUMLO8aIq0tjdFJ4237RJGBI3zLWjjzNO7Xp9+n9elj/Nfgnxh8Tvo9YTi7gSnkOCwWPzLHUK+LwvE2AxdWrleJp4ephcRWwuFjicPh8RHH4f6tatU+sYWrTw9GpTVanNM9a1L43/FS58OW/gS1+I/jqLwBpT3MOh+FpPEeopp1lp8rziO1a0t7iO2dFhneN4thgG91RAhAHk7MzHLMzE9SxJP5nJptKAWICgkk4AAySTwP1qkktv69e5+A47M8xzOVKWYY7F414elDD4ZYrE1sRHDYamlGlhsMq05qhh6UUoUqNPlpwilGMUkhK/Sz/gmt+zbffFj4u2PxH1zTXbwF8Nb2HVJbmaM/ZtT8TQYn0fS4WaNo5mtpxDqN6gb5LeKNJAv2mMny79lb9iH4m/tG61p+oy6fd+GPhrDdodY8X6hC0MdxbRkNPaaFFIofUb6QfuVaJTa27sXuJU2hG/pz+F3wu8GfB3wVpHgLwHpMWkaBo8RWONcvcXd1Jg3OoX07Zkub26kG+aVycDbFGEijjRZlK2nV/h5n9j/AEWfo75xxbxBlXiDxZl1XAcHZJiqGY5Xh8dRnTq8TZhh5RrYJ4ehVinPJ8PWjCvicVNexxcoRwmHVaMsTPD+hVi+IPEnh7wnpN7r3ijXdI8OaJp1vJdahq+uajaaVptlbQrulnur29mgt4Io15eSSRVUdTX4r/8ABTf/AILefAP9gtNV+Gvg5LX4w/tGizfy/BGl3u3w/wCC554S1ndePNagEi2so3JcJoFl5mq3EOwzCwhnjua/hf8A2v8A/goz+1p+274mvNa+N/xT1q+0KSZn0v4eeH7i40H4e6HAHLwwWPhmymW0uZYQdn9pasdR1aZAqz30iqiryTqqLaSu/wAF6v8Ay+9H+snr+H9afn5H90n7Sn/BwP8A8E7v2fbnUNF0bx9rHx18VWBmik0j4P6ZHreli4iZ02P4v1G50zwtInmIVd7HVL50BDeUwwD+MnxW/wCDrT4hXk11b/Bf9l3wpoVsHmSy1P4h+MtS8QXTxhwIJ7jSdB0/Q4IZGjBaS3TVZ1R22rcSBdx/kRJJOScmkrB1Zvql6L/O7Ef0GeJf+Dlv/gpFrlzJNpd58GvCUTgBbbQ/h3Jcxx4J5VvEOva1ISRgNucgnJCqTxy0P/Bx3/wU4jlSR/H/AMOp0RgzQy/DDw6scgByUcwiKUK3QmORGwflYHmvwcoqeef80vvYfJd9l5f5I/pX+Hn/AAdDftzeHJok8dfD34GfEGxDAz/8SHxJ4Z1SVQm3EV7p/iS6soXZsSMTpMqbtwWNVIA/T34D/wDB0/8AAXxNcWOmftBfATxz8MppiI7jxD4I1iw8f6FAcMTcXNlc23hvW4IuFBjs7TVJcsSBhef4a6KpVZr7X3pP8Wrgf6yP7Nn7dv7Jn7XOmR3/AMAvjf4K8cXZt1ubjw3FqS6V4wsImQOW1Dwlq4sdftFTJVpJLARbkcLIwUmvrev8dnwr4v8AFXgbXdN8UeDPEWteFfEejXUV9pWu+H9TvNI1bTryBg8NzZ39jNBdW80TqGR4pVIIr+qj/gmf/wAHHnj7wVq3h/4Rft13Nx468DXLWul6b8brS1jHjHwx9yGKbxnZ20ccfifS0XBuNTt44tbgUSTzrqzvhNY1k9JaPutv80H/AAP66evp3P7fqK5XwR448I/EnwloHjvwH4h0rxX4Q8Uaba6voHiDRLyG/wBM1TTryNZbe5tbmBnjdWRhuGdyMCjqrqQOqrcAooooAKKKKACiiigCVOh+v9BRQnQ/X+gooAjbqfqf50lK3U/U/wA6SgAooooAKKKKAPyX/wCCsXwUuvGPwu8N/FnRbOS51H4cXs9hriwrK7/8IxrrwAXbpGj7k03VYLcZbasUWo3EjNtU4/nXr+3zW9F0vxHo+qaBrdlBqOj6zYXWmanYXK74LyxvYXt7m3lXuksMjIcEEZypBAI/lh/bP/ZA8T/s1+OLq6sLW91X4YeILu4ufCviJYnljtEkkaQ6Dq0q7xBqdgjKivN5a6jAou7YEi4ht9YPS3Vben/A/Kx/mT9NTwczHD52vFrIsHUxOVZlQwuD4sjh6blLLcwwlKng8FmtaME2sHjsLTw+ErVuVQoYvDU3WnzY2Fv1A/4Ix/E1tT8B/FH4T3l0Xl8Ma7p/i3RreSQll0/xBbtYakkEbcrDBf6ZbzP5ZKia/L
OqNIDJ9Jf8FDtVX4V2fwI/aWs4vtGo/BX4p2n2+xhuI7a+1Xwl4ts5dN8Q6XaM5XfJcpb20e0nywsjtKvlhnT+aT4R/Gz4m/ArxDeeKvhZ4ouvCuu3+kXeh3d7bW9ndCbTrxopJIZLe/t7q2cpPBBcQu0RaKaFHQjBBz/H3xd+J3xSvHvviH478UeL52kEyrrmsXt7bQyKpRWtrKWU2doFQlVW2giVVJAUAnKcG5XurN3636XXz11vppoz4rIvpSYDJvAnLfDapkmbY3i3KYOjledOvhKOW5dPLM5hnXDGZQlOeJxOIxGT4ijg0sG8JQozWBpx+sctSSj+u3xM/wCCzvjPUFurP4UfC3R/DyOkkdtrPi7UZtbvU3oypMNKsI7C0imiYq6iS+u4jgq8bAhh+dfxP/bV/aa+LjXcXiz4r+JItMvPlk0Pw/cDw5o4hEplSBrTR1tDcRIcKDeSXMrKoEkj8k/LFFWoxWy/X8/T+rs/DuLvHjxc439pDP8AjnO54So5c2XZbXjkuXOMk17OeDymGDo14KL5V9ZjWla7lKUnKTklllnkeaaR5ZZGLySyOzySOxyzu7EszMeSzEknqa+/P+CcHwVvfit+0R4e1uezE3hb4ayReMddnlTdbm7sZVPh+xBZWRrm41cW9ysDgeZaWV64P7og/HXw4+G3jL4seLtJ8EeBdEvdd1/WLhIbe1tImcRRlgJbu7lx5VrZWyEy3N1O8cEEStJK6qpNf1d/spfs3aB+zN8LrDwdYNBf+JNQMep+NPEEce1tX1to9pjhZlWRdN05GNrp8LBcIJbl0We6nLEpcqv16evf5XP0v6K/g5mXiNx5l3EmPwlWPB3CGYYbNMxxtaDVDMc0wlSOKy/JsPKStiKk8RGliMwUeaNHBQlCtKFTF4ZVPpqiiisD/ZEK/h+/4OTc/wDDb3w29P8AhnDwl9M/8J/8TM/0/Sv7ga/jK/4OZfDFxaftCfs9+LzCws9d+EmraEtxtba914c8V3t5NFvzsLRw+IrZsABwJPmJXYF+I8QouXDOIa2hicJKWmydZQ17azWvey6n6f4QVI0+NcJF71cFmFOOtry9g6r9fdpydvI/mcr7C/Yi/Za1b9q3426P4L23Fp4I0PyvEPxF1uJWA0/w1bTqHsYJuEXVNcmA0zTlyWRpZrwo8VpLj49r+uz/AIJc/Aaz+DP7LnhnxBdWQi8YfF8R+PNfupYgl0ml3SGLwvpmW/epBa6QI7zym2j7VqNy5QFq/mLiHM3leXTq02liK0lQw99eWck3Kpb/AKdwUpLpz8qejP6+ow9pNReyTcvRf5tpH6CeHvD+ieEtB0bwv4b0620jw/4e0yz0fRtMs4xHbWOnWECW9rbxIoAASKNQWI3O2Xcl2Ynzb4zfFHXfhd4eXVPDXwn+IXxd1m43JaaB4D063nZHCuQ+p6heXEEFhb5UDckd3Of+WdtIcA+w0oJ7E/hX4/CcVVVSrD265nKcJznH2jer5pxanq9W1JSfc9JptWT5X0aSdvRPTbQ/Ic/8FFP2ktB8SW9t47/YL+Kui+F2uMXd5ph1vVNdt7IMAZ4rI+GLeyuJkXMnk/bIVnUERyqGDj9Qfh14/wBA+J/g/SPGvhpdUh0zVo3/ANC1zSr3RNb0y7gcxXmmavpOoRQ3djqFlOrQ3EMiFQ6lonkiZJG6c6/pMV4NObW9Li1BnEYsG1K0S9ZzkhBbGYTF25KrsyeoFaBJJJPJJJJ9SepPue56nvXXi8Rha8KfscvjgqkfilTr1qkKkWusK7m072acalrXTTumlGMk9ZuStazjFNPTW6tfS901e+t+glFFFcBQVDcXFvZ2893dzxW1rawyXFzczyLFBbwQoZJZppXISOKKNWeSRyFRQWYgA1NUU8EF1BNa3UEN1a3Ebw3FtcxRz288MgKvFNDKrRyxupIZHVlYEggg4pq11e9rq9t7dbX0v2D0PzR+Jn/BQbxKniCbw/8As4fs3fE7492tndS2l340sNJ1fSfB95PbySRXEPh+9XSL2bWUjljCrfRrBaTgsbZrhAJD7P8AAn9rqb4lX9t4T+Kvwd+I/wAA/H1y6Q2mmeNdB1MeGdZnbdtt9I8VGxt7Jb1wrbdP1OOwuZGVkthcsK+uW1nQ9Plj0p9W0exnUIkOmNfWVtOocYiSKzMiOAwAEarHyMBRWo5LqEf5lDK4VgGAZSCrAHI3KQCrDkdQa9GpicA6KpRyx0ny+7ifrdaVeUtPelzR9hKL3cI0Yb+7KO7zUaid3Uur35fZxSs2tL3crW2d30eutwgg4PBFfGn/AAUE+Hdp8Sv2P/jZpE1qlze6F4Xk8aaM5h86a31PwlNHrIe2wVMck1lbXlm0gOEhupGYELX2VXJ+P9Lh1zwD460W4Ctb6v4M8UabOrglDFfaJfW0m4DqNspNcuDqyoYvC1otqVLEUais7fBUjJr5pWfrs9ip2cJJ7csr+lmfwR1+7f8Awbw+BZfFP7f0Xibyybf4c/CTx54kaUpuVbnVP7K8HQx7gf3byQ+Jbp1YjBWJ06sK/CiZQs0qjosjqPoGIH8q/ri/4Nlvg+9voH7SPx2vInUalqPhb4YaFIY2UMmmwT+JvEREjNtkVnv/AA6qhEGxopdzsSFT+n+DcK8VxJlceVtUa7xUn0isNCVZSflzwil5tJ6H5j4kY6OA4Kz2o5cs8Rho4GmtLylja1PDyivNUqlST/uxk+h/Vl161+P/AO3P/wAETf2Mv24L3UvGWr+G7n4R/FzUA8lx8SfhpBYabcaveELtufFXh6SD+x/EEpKL5948dnq9wg2PqgAUr+wFFf0g0pKzV0fxefw5fE3/AINU/wBoPTr66Pwj/aM+E/irSlYtaf8ACb6X4o8HapJH1KSw6Rp3i2zEvZMXYjJHzMgIx5P4d/4Nbf25r++SPxB8Uv2d9B08SxLNdR+JfGuqXHkvv82S3tbfwKiyyQ7VPlS3FushcASjDFf74J7iC1hlubqaK3t4EaWaeeRIoYY0GWkllkKpGijlmZgoHJNfjd+1/wD8F1P2Dv2TJ9U8N/8ACeyfGj4jab5sU3gj4TLBr4tLtMAW+reKHlh8L6dIrkrPb/2pPqEAUs1kcruydOktXp8/yH8vz7f0/wDgaH4s+Hf+DV74jeGf7H8UwftiaBpfi3Qb201q3bR/h3rAgsr3Tit1E+n6r/wk1nfJNHcxK0N0LSF0GGEe8c/tRDZTadbWlhc3b6hc2Vna2txfyII5L6e3gjimu5EBYJJcyI0zqGbDORuOMn8bfAX/AAcvfGX48ftP/BT4TeBfgZ8LPhx8M/iL8WPBngXxDqXjzxJrGua3D4e8Va9p2hXeof2zaTeGdG0GezhvZrwzT2OsW8TRorrMiusn7feMdD1Tw/r99p+rWUljcpJu8tsFHRhkSQyITHLExzteNivBXhlKj+avpDYapOhwxi6NGpLD0KmZ0K+IVKThTq144GeHp1K3LaDqqjXdOm5Lm9nUajo2f0X4AYijGrxJhZV6ca9aGXVqWHdVKpUp0XjI1qkKLleUabq0V
OpGLUfaQjKSukcwOSPqK/LT9oj/AIN5viF+2j8WfFH7RF/+1fY+HrTxzPFJoXhTU/A+p6//AMIro2mwpplrpFpejxLYwC232kt4Y7ezgRZrqTd5r7pn/VSxs7nULu3srOGS4urmVIoYYkZ5JJHYBVVVBJyfQcDk8V8l/wDBUf8A4K0/FX/glr4s/Zi+Gng74beDfiLo3jLwr4l8UePB4nudUstRms7DVrGyg0vw3qWl3aQaTcRXN3qDzXd/pWrxyoloqRLmVj4f0fcFWnnueZhLDylg6GUxwssRy+7DE4jGYerTpRlvzSo4etKXK7Rio89ueF/c8esXQjkWT4B14xxVbNHio4fmfNPDYfDV6VSrKC3jCriKMYuS1blyX5J2/PH/AIhPfF3/AEeF4d/8NXqX/wA19c14l/4NX/F3hjTb3WL39r7weumWUXmSTTfDfWIp2Y8LFFBH4gnDyyOVjhRZS0kjKoUEjP7Afsr/APBxL+wX+0ANP0b4ia3rX7OXjS7KxSad8SYUm8KSXDMqgWXjfSVn0tbf5v8AXa5DoTHB/c+n3N8bPjJoXxFj0eHwJ4h0vxH4LmsbbVrHxB4f1KDU9G8QLfwJPbXthf2UktnfWItpF+z3FvPLDK0kh5KI1fv3HnFeE4N4cxWbckK2LnKOFy3DzlK1fHVlJ01NKSl7KjCM69ZJxk6dOUIyU5wPwfgbhWrxhn+GytSnSwkIvFZjiIJc1DBUpR9pyN3iqtaUoUKLako1KinKLjCR+Vn7F37Evwt/Yr+HMXhLwdFBr/jXVI1fx38SbrT4rTWfFV6sjukESeddPpuhWW/Zp+kxXMiJt+0XMlxdO8tfZVFFfwbmWZY7N8dicxzHEVMXjcXUdWvXqu8pSeySVowhCKUKdOCjCnCMYQjGMUl/ceXZdgspwWHy7L8PTwuDwtONKhQpK0Yxj1b3nOTvKpUm3OpNynOUpSbZRRRXEdoUUUUAFFFFABRRRQBf0zVL7R7231HTbmW1vLSVJ4JomKOkiHKsCCO2QexUkHg1+hfwd+MFp48s00rVHjt/E1pCDIhIRNSiQBTcQDgecODNEvUZkQBd6p+ctami6vfaDqdnq2nTvb3llOk8MsbFWVkOeCM8EZVlIKspKsrKSD95wFx3mPBGawxFGU6+VYmcI5plzk3Tr0rpOtRi2o08ZRjd0qqtzJeyqN05O3xHHHBOX8Z5XPD1oU6OZ0ITlluYcv7yhVtdUqsknKeEqtKNWm78t1VppVIq/wCvlFcD8OfHNl498N2erwtFFfBTBqVkrgvb3cQXzCqnDmGUMksTlQNr7Dh1ZR31f3hl2YYTNcDhMywFaOIweOoU8Thq0dp0qsVKLaesZK/LOEkpQmpQklKLS/iHH4HFZZjcVl+NpSoYvBV6mHxFKW8KtKTjJJrSUXa8Jq8ZxalFuLTCvzI/4LG6APEP/BOL9pe22B3sPC+j67GGMYAOh+KdE1N2JkRxlYbaVlChXLBQjqTmv03r5X/bj8HL4/8A2O/2mvCBjMsutfBL4iwWiAAlr+LwzqN1p+FZHDYvYIDt25bG0FSdwjNaTrZZmNFb1cDi6a9Z0KkV+f8AwVudWRV1hc7ybEydo4fNcvrSfaNLF0Zt/dFn+YfX72f8EN9aij8R/tAeHmlKzXWg+DtXhhMpxJHZ6hqtncSLDjBMTXdsGkzkeaqkYIr8FXXa7r/dZl/IkV+t/wDwRj8QNpf7VmsaKZAkXif4X+JrRlZgBLLpt7pGrQoFI+Zx9lkZQCCFD8N/D/HnEVL2uS5hBbqiqn/gmpCq+/SDX+R/e9F2qU2usktv5tOu2+/Te6P6l6p6hYWmq2F9peoQi4sNSs7mwvbds7Z7S7he3uIWxztkikdD7GrlFfjSbTTTs0001umtU/keofihoX/BKj4q/DP4geL7n4HftXeJ/hd8MfiBDd6X4q0vQRremeJ7zwvdXLTP4au30rUrTTNWjijluILS9umgaJH5gxPOK/X34deA/D/wu8B+Efhz4Vilh8O+C9CsPD+krcOJbl7WxiEfn3MgVRJc3MnmXFw4VQ00rsFAIA7KivTx+c5lmVLD0MbiZVqWGT9jBxjGMW9JTfKleUtbt9W3a8pN40sNQo1KtWlSp06ldxdapGKU6vJFRhzySvPkiko813Fe6rRSSKKKK8s2CuW8ceDtE+IXg3xT4E8SQPc6B4w0HVPDmsQRvskk07V7OayuhG+GCSiKZmjcqQrhWwcYrqaKcZShKM4txlGSlGS0alF3TT7ppNA1dNPZ6M/EnwN/wSk+K3gHV/FXgjw7+1f4p0H9nfxzc27eOPBPh+TXNI1TxppFtIhGja/YWeoQ+H55ZbffZy6i6ziS1aSJrNo3MJ/abSNKsNB0jStC0q3W00vRNNsdJ021T7tvYadbRWlpCMAA+XBDGpOBkjJ5NaFFejmGb5jmiw8cdiZV44WEoUYtRjGCm05tRgormnJXk7b3aSu740sPQoSqypUoU5VpKdWUYpSqTUVBSnJK85KMYxUpNvlileySRUc00VtDNcTusUFvDLPNI3CxxQo0kjseyqqksTwACTUlfFn/AAUC+OFv8CP2XfiJ4giu1tvEnijTpfAvg9Fl8u5fW/EkE1m91bgAsTpenG91IsMBWtkUspda5MLh54rE0MNTV516sKUfLnkk2/KKbk+yTZrKXLGUnqopu3V2V7K/V7I/l38W+J2+NP7ZmreJoi08Xjf46CXT1d5ZS+nXHiuK10uDzE/esv2CO2hVowXC48sEgZ/04NPgW2sLK2QYS3tLaBBknCxQpGoyxLHAUcsSfUk81/mLfsR+GpPGf7YP7M/hkRfaF1n44fDS1uUILBrN/FulNfMyhWLKlos8jjacqpr/AE81G1VX+6oH5DFf2z4VUFSwWaOKtTVTB0I+lGlVsvlGovvP5Z8daqljuHqCetPCY+q12VathoJ/P2EvuFooor9YPwYydb0HRPEumXWi+ItI0zXdIvYzFeaZq9jbajYXUZ52T2l3HLBKuQCA6HBAIwQDXwl8Rf8Agmh+y/49vLnUbLQNZ8B3tyrmT/hDdTjtbAyuSTKNM1K11K1h2g4WKzFpCoChUAGK/QOimpNbP9V9z0Pk+KOBODONaEMPxZwxknEFOmmqMszy/D4mvQTd2sPiZQ+s4e739jVp363PyST/AIJBfBIXTvJ8QviG1mc7IF/sFLhThcb7k6a6MAQxwLZCd2M8c/Q/w0/4J2fsv/De5TUB4Nn8aanE0bxXfja8GrxQtGwYbNMggstKdWYKWW5srjpgEDivuaim5yfX7kl+SR8dk/gJ4N5Di4Y/LPDrhmni6clOlWxOB/tF0pradKGYzxVOlKL1jKnCLi7OLTStWsrKz061t7HT7S2sbK0iSC1s7SCO2treGNQscMEEKpFFGigKiIqqoAAAFfzi/wDBcz/gsQ37Hnh+4/Zo/Z61m3k/aN8ZaKZvEnie2aO5X4SeGdTjaOC5jAYx/wDCZ6vCZJNJt5Q/9lWoXVLqIGWwS4/Vb/go5+2p4Y/YN/ZX8ffHPWVgvvEcFsPD
vw58OyuwbxF491mOWDQ7MhCH+xWbrJquqyKytFpljdMhaXyo3/y4fij8TPGfxk+IfjD4o/ELWrvxD408c6/qPiPxFq97K8s15qOpXD3EzDezeXBHvENtAmIre3jigiVY41Uc9Wpy+6t2tX2X+b/Dc/XIxjCMYwjGMIxUIQilGMYxSSSilZRS0ikklbS1jl9c13WfE2san4g8Q6nf61rms311qWratqd1Ne6hqOoXsz3F3eXt3cPJPc3NxNI8s00rs8jsWYkmsqiiuUYUUUUAFFFFABRRRQAUUUUAfvj/AMEWv+Cu3i/9iL4k6R8Gfixrl7rn7Lvj3Wrez1Ozv57i7k+F2r6hOIl8W+HFZ3NvprSyhvEWkwx+Vdwj7bAiXkTGf/RF0bWNK8Q6TpmvaHqFpqujazYWmqaVqdhPHc2WoaffQJc2d5aXETNHNb3MEkcsMqMVdHVlJBr/AByQcHI6jkV/eR/wbXft633xp+CniP8AZI+ImtTah44+B1umsfD+7vpGlu9T+GN/ciCTSzcSzyyzN4T1eeOC3DrGI9L1OztLdTDp5Cb0Z/Ye3Rt/h8+nn3vof193+R/UBRRRXSAUUUUAFFFFAEqdD9f6CihOh+v9BRQBG3U/U/zpKVup+p/nSUAFFFFABRRRQAVzPjDwb4X8f+HtS8KeMtD0/wAReHtXga3v9L1KATW80bDhl6SQzxnDwXEDxzwSBZIZEdQw6aihNrVOz7oxxGHoYuhWwuKoUcThsRSqUMRh8RThWoV6NWLhUpVqVSMqdSlUhJwnTnGUZxbjJNNo/DD4/f8ABJa8N5e6/wDADxHby2k8skw8EeKbj7PNZKzBlg0nXdrRXUSZKRxaotvKkaoZL64k3M35t+L/ANjT9pvwVcSW+r/BvxvcKkrR/atE0W58Q2TBSAJBe6Gt/bBG3KFZpVyzBfvZA/rzoq1N9Vf8PX+vzP5G40+hZ4U8TYytmGS1s44MxFecqlTC5PVoYnKOeTvKVPLsfSqzw8W72o4XF0MPTXu06MIpRP459K/Ze/aJ1m5W0sPgt8S5JWxlpPB2uwRJuOF8yaeyjiTc2ANzjuegJH2J8Hv+CWXx68b3tpc/EJdO+GPh4ujXT6ldW2qa7LCJMOtnpGm3EyrKUU4GoXdiBuVl3iv6VKKbqdlb1d/ySPG4c+gz4Z5XjKWLz7PeJeJqdKan/Z86mEynA17NPkxP1KjLHTpu1mqGOw0nf4raHzt+z7+y/wDCj9m7Qm0vwDo7Pqt5DDHrXirVTFdeINXaIZ2y3SxRJbWgkLPHY2kcNupIZ1lkHmH6JooqG23dn9h5JkWT8NZXhMkyDLMHlGU4CkqODy/AUIYfDUIXbfLTgknOcm51KkuapVqSlUqTlOUpMooopHqhX803/By18LZtd/Z5+BnxatLZpX8AfEzVPC+pzRxg+TpfjnQzcxzTyY3LEmpeFrO3jBIXzb0ADLV/SzXzF+2T+zhoP7Wf7NfxW+A+upbK3jbw1cRaBqFyuV0bxbprpqnhbWFdUd4xYa5aWUsxjVmktfPhZXjleNvHz/L5Zpk2YYCCTqV8PL2Seidam41qKb6J1acE30TvqfR8JZvDIuJMnzWq2qOFxkPrDW6w1aMsPiZJdXGhVqSS62t1P8wZACyg9CwB+mea/vI+FFvYWnwr+GVrpZjbTbf4feDIbBoRiJrOLw7pyWxj+Z/kMIQr87cY+Zup/h8+Lnwn8d/A74i+LfhZ8StAvvDXjLwZrF5ous6ZfQyROs1rKyJc2zypGLmwvYgl3YXsQMF5aTQ3MLNFIrH+zD9jrVL7Wv2Uv2edT1OQy31z8J/CInkK7SRb6ZFawAr2220EKjPUAN3r+KOO6NSnRwftFKnKjia9GpTmnGcZygm1KLSacHSkmnZps/uzAVadZKrRnCpSq0Y1adSElKFSnLlcZQkrqUZRkpJrRrU+ka4n4l6preh/Df4g614Zt3u/EekeCfFGpaDbRqXebV7LRb2409EUA7nN1HEVHc4BBGRXbUfUAg5BBAIIPBBB4II4IPUV+cQkozjJxUlGUZOL2kk03F+UkrPyZ6Po7efbzP5Qv2Uvir+xAnwj/ao8Sftgf8LU1z9qTVrGfUfgH4t0DVPEq3lt4lfTNRa1ltdT0y6Sy0zVbfxQ9pd6hd+IVezOlRrbWalhNBL/AEhfss+JvFvjP9nD4J+K/Hay/wDCW698OvDmoa1NcArc3lxLYxiLUbpSqFLrUrVYL+5QqCs9zIOcZPmup/sAfsgav45b4iX3wU8OyeIZb86rcW8dxqkHh661FpDM93c+Gob5NEmeSVjLLE1l9mlkJeWB2wR9gwww28MNvbxRW9vbxRwW9vBGkUEEEKCOKGGKMKkUUUaqkcaKqIihVAAAr6/iXiXC53g8twuHwMMO8CqnNXdGjTrVFUlOXJUqU5TdZR51CMpciVOlSioNps8rA5a8Hisfini8VX+vVKdR0K2InVw+GdOlCly4SlKKVCE+T2lSKcuarKUk0mkpKKKK+OPVCqWpzXVvpmp3FjF599b6dfT2UGN3nXkNrLJaw7erebOqJgEE7uCOou0oJBBBwRyCKFo07Xs9u/kB/KV+xj+0n+yv4X+OHx18f/8ABQ74beOPjRqXiXQ9VXwbNa3d3dXvhfxvDq91cXCfZzrGkS2FwyeRY6PqEMzQ6ALR1iswJI3h/oE/Yc8aeKviD+yv8JvFnjJb/wDtfUtL1RLWbVHebUbzw9Y67qdj4au7+4kLPdXcuh29gs145D3robxlBnyaXjX9gv8AZK+IPjiX4ieKfg7od54mu786pqcltd6ppumazfs4lkutW0fT762028lnmHm3LPbgXTM5uRNvbP1jp2nafo+n2Ok6TY2mmaVplpBY6dp1jBHa2VjZWsaw21ra20KpFBBBEixxRRqqIihVAAr7LiHibC5zlmW4ChgYYeWCq1arrewoU6tqsFGVF1qbdSvTi4x9l7TlVOMUoxXM1HysHlccJj8fj1XxNSePhhoTo1MRVqYaisLGcYSw2Hl+7w8qinJ1nBN1ZWk2mtbledfGHWB4d+EXxU19nSMaN8OfG2ph5CVQPZ+G9SniDFWRgGkjVflZW5+VgcGvRa+Rf29tV1HRv2OP2hLzS4JJ7qTwBeae6xKzPHZ6reWWmahcDaQVW2sLu5nd84RYy54U18tg6ftcZhKWn73E0Keu1p1YRd/vPTm7Qk3soyf3Jn8Wh3SyHAy0jnAHdnbgfiTX+jT/AMEi/wBn24/Zy/YL+CXhTVbJrHxP4t0m4+J/iqGQbZ11Tx5MNYs4LldqlLiw8PvoumyxsoeJrMxuC6sT/Bb+x54I+H/jv9pH4OaP8XPEOmeEfhQfH3h248feItaZotKtvDtlfxXt9ZXMyo4hGrRW50sTuFt7Q3guruSG1illT/Sy0H4q/CG88N2WseHfiH4CuvCsdnAthqWm+KNCl0hbKOLZbiC5gvDbCFYo9sYVgoVMKMCv7M8MsFS9tmGZVKlNThTjhKMJThzqM3GrWq8jfNGPuU4RnazvUinoz+bPG7NqjoZVkVC
M5xnVnj8XKEZOMZQi6OEouUU488va1qsqbfMkqM7WlFv0yvG/j78ffhV+zL8KvFfxm+M3iuw8H+A/B9g97qep3r5luJT8lppmmWibrjUtW1K4aO00/T7SOW5urmWOOOMk14J4/wD+Ckf7D3w0u5bDxR+0d8OzewM0c1p4f1KXxZcxSLjMcsPhiDVpIm+YYMiqp5w3ytj+K7/gtv8At0/Ez9vT41p4J+Faaif2YvhXM0PguGK4j09PHXiF4Quq+ONV0+6uoLh9rNJpvh+2u7YSWdhHPdIqS6nMifqNbOMrotxlmWAjUTtySxmHUk721i6l1Z90tdD8FjlGazipxyzMJQkrxmsHiHFrTVNU7NarVaHnv/BTP/guR+0L+2xruv8AgH4X6vrnwW/ZvW4ls7Dwhod+9j4n8a2McgCX/j3XLCSO4mS78sSr4csJo9Jto5PIu/7TljFxX4Vu7OxZ2ZmYklmJJJPUknkk9z3ru7r4YePLRPMl8N6gyc8wLFcsNo3ElLeSVwMA4JXBIwOeK5i90LWdNUtqGl39kqsELXNpPApchiFDSRqpb5W+UHPysMZU4zp4vD4l3pYmhXe/7qrTqOy8oSdremhzVsJisPrXw1eitNatGpTWqVvjjHe6+89q/ZQs4tQ/ae/Z4sZmkSK8+Nfwvt5GiKrIqTeNdFRijMrqGAJKllYZ6qRxX+rd4j8JeHvFVstrrumQXqxgeTMwMdzATGyBobiMrKmPMZgm4xl8MyMQMf5W37D+mR6z+2V+ytpE7vDFqX7QnwfspJEA8yNLnx9oMRdAwILLuyAeCRg1/q15zg9Mge/YfStp4TDY7D18LjcNQxeErxUKuHxNKnXoVVq5RqUqkZQmtY6Si1daaoMLi8Tgq9PFYPEV8JiaMualiMPVnRrU30cKlOUZxe97NXTs7o888K/C7wh4Pu3v9JsZXvmUpHdXsxuZYEb76wAqiRlujOEMmPlDhSQf44P+DqUE/Hn9lIDkn4SeNv8A1L7Kv7Yq/jH/AODnrRJ9S/aM/ZDlkt5Tp/8Awq3x4HnKfuZHsvFelzzwb2G1mVLm3LoASVlXjBJHPTwWU8PZZiI4DBYTLcBh4VsVUoYOhSw1K8Y89WpyUowi6kowScn70rRTdkj0ZV834nzbC08XjMVmOY46vhsDSrYutUr1L1akaVKHNNycacZTvyxtFXlK122/5gvCvgeIxRajrMZdpFWSGyYFQqnlWuFIBJOAREeNpO8ZwB/d5+xJBDbfsg/s3w28UcMKfCTwoEjiRURc2QJwqgDJJJJ6kkk5JJr+JjgYAAAAAAHQAcAD0AHAHpX9tn7Ff/Jov7OP/ZJfCf8A6QrX8heKGeY7O6OGrYqrL2Sx0nQwyk1Rw8HRqJRhBWTko6SqSTnN3u7WS/tTg7hTKeFcIsJl+Hh7Z4eKxeOnGLxWMqqVPnnVqW5lByu6dGLVOmnaMb3k/pyiiivxw+zCiiigAooooAKKKKACiiigAooooA/Cz/gqJ+0x8c/2Yv2i/gZ43+B/xH8R+Atai8A38t3HpN9J/ZGtR2/iWdktNf0SbzdK1qzOWU2+o2lwi72aPY+HH7uf8EpP+Cpvh39vbwhf+DvG1ppfhH9oLwPpttd+JNBspfK0rxfpGVt38VeGLe4mkuUjiuDHHrGmM87abLc20iTywXC+X/Nb/wAFwP8AkrfwV/7J1q3/AKkc1fnT+xP+0Frv7MH7T3wg+MmiX09nH4Y8X6Z/b0MUkqx6l4X1CYad4k0u4jjyJYr7Rrq9t8Oj+W7pMgWWNHX+kPDPiXF5Hgsog6055bVvDFYacnKnCFTETUq9KLdqdWnfnbjy+0ScZ3TTX5d4g8FZfxRgMbUjh6VPOsPRlUwONhCMa06lKHNHDV5JJ1aFW3s7T5nSclUp2aal/p31j+ItNh1nQNb0i4QPBqek6hYSoV37o7u0lgYbD97h+FyMnjI61b0zULXVtOsNUspVns9SsrW/tJkOUmtruBLiCVD3V4pFZT3BFXevWv6n0kujTXqmn+aaP4xV4yT1UovzTTT+9NP5n+VD8T/DE3gr4k+P/B1zE8Fx4V8Z+JvDs8MilJIptF1m906SN0YBlZXt2DKQCpBBr2r9iz4t23wR/ad+EfxB1Gf7Pomn+J4dM8QyHOxNC8QQTaHqczjK5W1ttQa8XJwr26MQ23afZ/8Agqx8N5fhb/wUB/aY8OtaR2dvqHxBu/F9jFEoWM2fja0tPFUbqFVVyz6tJvKgr5gcAnGa/PQEg5BwRyCOoPrX8nZrgY06+Y5dVXuwq4rCVFa3uqc6T0eztsf3zlGNWNy3LMwg7rFYLB4uMt/41CnWT89ZH+gLFNDcxRXNtKk9vcxR3FvPEweKaCZBJFLE6kq8ckbK6OpIZSCDg0+vwm/4Ja/t82/iDTtE/Zn+MOsiPX9Pi+w/C7xXqc8aJrFhEFFv4N1G5k2k6laJuTRJ5nLXluiaeWa5jt1m/dkjHBr8IzLLq+V4qeFrq9vep1ErRrU38NSPra0o7xknF7H0cJxqR5o+jT3T7MKwtX8U+F/D8kMOv+JdA0Oa4RpIItY1jT9NlmjU7WkijvLiFpEDfKWQFQeCc1u1/MH/AMFw/AniPTfjh8OPiOxv5PC/ibwCnhy1l3zNZWeteHNV1C5vbYHPlQTXVnq9lOsaqpmEUsmXKybOrIcrp5zmNPAVMT9VVSnVlGp7NVHKVOPMoKLnTTckpP4tovRnuZDldPOcxp4CpifqvtKdWUanIqjlOnHnVOMXOmm5RUn8V7RejP6R/wDhZXw4/wCig+CP/Cq0P/5Oo/4WV8OP+ig+CP8AwqtD/wDk6v8AP486X/nrJ/323+NHnS/89ZP++2/xr7v/AIhxS/6G1T/wjj/80n3f/EOKX/Q2qf8AhHH/AOaT/QH/AOFlfDj/AKKD4I/8KrQ//k6tjSPFPhfxBJNDoHiXQNcmt0WSeLR9Y0/UpYY2O1ZJY7O4maNC3yhnAUngHNf58HnS/wDPWT/vtv8AGv22/wCCH3gTxHq3x98ffESNryLwv4R+H13o2oTCSVbW81fxLf2a6dYSLny5mjtrC+vthBeJoIXG0Nz52bcD0cry7FY95pKf1enzxpywsYKpJyjGMOZYiTTnKSimovV7M87N+B6OV5dise80lN4eCkqc8LGCqSlOMIw5lXk05OSSajLW2jR/UNRRRX54fnghIALMQqgEszEKqgDJLE8AADJJ6Cv5Q/8Agqd+1anx4+Mo+HnhLURdfDf4SXF3pVrNbSq9nr3i5iYdd1qNk+WaCzZTo9hIC6Olvc3MTGO7BP6Wf8FN/wBviz+Enh3U/gP8JtYhufij4ktJbLxdrOnzhz4C0S6jVZbZZonwniTVIZGjghXL6dal7qby5pLVW/mEnuYYQ8tzPHGAw8ySaVV+ZySCzOwO5jkgnljnqc1+i8IZJUi/7UxFOXPKLjg6bi3JQlpOu42uuaPuU+8JSla0oN+fjMTThFqVSEIJ+/Ocowje6suaTS0e/nbsfqR/wRm8KP4u/wCCjn7ONkII7iLS/EGt+JJ0kOAsfh
vwtrespKoCvuaGezilAxjCksyqGYf6Klf5I3h/9oH4ifCLxxonjf4K+N/EPgLxj4anln03xd4X1KbTdThe4t5LW4ghngYebaT280kFzBKJILmN2jkjeNiG+/Zv+C8//BUqbQo9B/4aXu440hW3fU4/Anw5GszRICP3moP4UeTzWGA9wipcNt3GXczs39K8GZzhMhyyth8ZRxLr18VLE3oxpSioSpUacIS5qsGpL2cpNWa9626Z/L/idhqnE2f0sRl2Iw88LgsFTwSnOdRRqVY1q9arUpuFOalTvVjTUl8TpuUeaDjJ/wCl5kDqcVC1zboSr3EKsOqtLGpGeRkFgeRzX+Vl4r/4Kd/8FBPGrzP4g/a7+OkonkaR4tO8eazodurM6PiG20S40+3t0DRqVjgjjjQFwqgO4bxa9/a1/an1Kb7RqP7SPx2vrjaE867+LPju4l2KSVTzJdddtqknC5wMnAr6ifG+CUv3eCxUo95ypQlsr+6pVFvf7WyT0vZfn0eD8Y4pzxeGjLqoxqzS9JOMG9P7qs+61f8ArcJLFIMxyRyDOMo6sM8cZUnnkce4qSv8l/w9+21+2J4Umjn8O/tSfH/SXiYvGtr8W/HSwq5ZHZvIbXGgJcoofdGd6gq2VJB+4Pg7/wAF2f8Agpd8H7uB0+Pt58RdLiVEk0P4oaHpHiu1nRDlRJqbWtn4jVgpZd8WtRkhhuyVXF0uNsvk7VsNiqSdvej7Oql3crThJW6csZN9kRV4Qx0Y3pYjDVXZ3i/aU29rKN4yi7635pRSS3bdj/TAor+Sr9kj/g6L+HfirUNM8L/tffCO5+HE11LDbSfEj4bTXXiLwvAzqqtd6v4VvB/wkGmWqvku2mXniGYA7hAFGB/Q/wCMv2x/gjb/ALKPxI/au+H/AMQfC3j74beDPh14k8aQa94c1e3v7K5l0nRp7+20yRrdpJ7TUp7n7PaGwuIY72K4mWGSBZflr6TBZngcxg54TEQq8qvKGsasF3nSmlUSvopcvK/stngYzL8ZgJKOKoTpJ/DO3NTl/hqRvBvvG/MuqR/Ff/wck/tn3Hxx/azsv2cfDOpSv8P/ANm20fTdUhguN1jq3xM163t7zxBePHGBHK+hacbDQYDKXktbtdZSMoJn3fze123xK8eeIPij8QfG3xH8V3smo+JfHXirXvFuu3sru73Oq+INTudUvpS0jO+DcXThAWO1AqjhRXE0Sbk23u3/AEvkcQUUUUgCiiigAAJOACSegHJP4UV/Q/8A8Ee/+CcMXjq5t/2ofjv4YS48D20VxF8K/COt2ivb+K7+VZLW48WajY3CkTaHpqGVNIjmQR6jfsLxN8FmjS/n9/wUm/Yc8RfsefGrU30nTb26+C3jy/vNZ+G/iIQu9rZpcSNc33g6/nVTHDqegSSGK2SRw97pP2W7jDMtyIvjsHxzkGO4oxnCdDExlmGDoKftOaHsK+Ii5fWsFQnf38ThIck6sFd61opc2HqpfY4zgfPsDwvg+K6+GccBi67h7Pll7ehhpxh9VxteFvcw+KqOpCnJ2tajJ+7iKZ+cNFFFfYnxwUUUUAFfpl/wSC/aMuv2Zf8AgoF+z544N3LbeHte8YWnw88YKrbYJPDXjth4cvprpcESQ6bJfQasqkEiaxjdcMoI/M2ui8IatdaF4q8OazZTPbXela3peoW1xHJ5UkE9pewTxzRyjPlvG8YdZAMoQGHIpp2afZp/cB/sSqQyhgchgCD6gjIP5Utee/CPxOPG3wq+GvjIHI8V+AvCPiMHdu41rQNP1H7wZt3/AB8/e3NnruPU+hV3rVJ9wCiiigAooooAlTofr/QUUJ0P1/oKKAI26n6n+dJSt1P1P86SgAooooAKKKKACiiigAooooAKKK4/x78QfBHwu8K6x44+IninQ/BvhLQbSW+1fxB4i1K10rTLG1gQvJJNdXcsUS4A+VdxZjgKCTQB2FFfyyftf/8ABzv8Cvhvfar4T/ZT+HmofG3WrOd7RfHfiWe48K+AN0bFXudMtTC/iLXIwy7FWS00e3kDedFeSoAr/gf8Yv8Ag4P/AOClHxVm1GPSfiloPwq0i9kcw6V8OvCOk2bWkBkcxxxatr0evauJFicRSTRXcPmbdwjjJwMnWgurfov87fhcdvP5f1p/Vt72/wBIWa4t7cbp54YF/vTSpGO3d2Udx+Y9axrrxX4WsQDe+JNBtAWKA3OsafAC45Kgy3C/MByV6+1f5Qfj39uj9sf4nXM1z45/ab+N+v8AnsWktbj4keKYNOBZdjCPTLPUrbToUZeGSK1RWyxIJYk+BX3xC8d6mxfUfGXii/dmDs17r+q3TF1XYrEz3bksqfIGPIX5QccVHt/7v4/8ANPP8u3r5/gf7AFv4h0C8VXtNc0e6R13I1vqVlMrr03KY5mDLnuMj3rWR0kUNG6up6MjBlP0Kkg1/j9aT8WvijoEsdxoXxF8caNPDs8mbSvFWu2EsRjDLGYntb+JkMaswTaRsDELgE19b/Cj/gqJ+398F5oH8D/tU/FyOCB4WTT/ABJ4mufGumBIF2Rwrp/jAa3bRwhCV8uFIwAcjDYNNV11i16O/wCiDTz/AD7enn+B/oo/tpf8E1f2X/25bFLn4r+FrjSfHljZGx0f4m+EJ4tJ8X2UCLN9mtb2doLiy1zT7eWZpIrPV7S6EG6RbOW182Qt82WfwXtv2etC8M/BvT57u90n4feHdI8M6RqV7DHDcappul2UVra6lLHD+6Et7HGLidYspHPJJECShNfzv/s1f8HRH7SHgu6sNK/aT+Fvg74veH1aOG61/wAJNL4I8YRR/KJLySBzqegapOqh2FpFb6FFKxUfaIcMX/f74df8FJP2Kv8AgoD4QtdY+EfxGsvD/wAVdFhja++GHjprfwz43ks5ngSW3tbK6uPsmvrZ3FwrRXOg3WpxANLG22SVEX8b8YeDqOfcOYjNcuoU4ZrlMpZjWUIKE8bhKVKaxUJculStSpN16bs5z9lKkrucbfsnhLxxisozvCZJmOLrVcozG2Bw1OrUlOngMZVqQ+rTpc1/Z0atT9xUhFqnF1VVaXJJvpKKKK/jI/rsKKK+Vv2hf20P2ef2XtS0TRfi/wCNH0bXPEFm2pafoum6Rqeu6n/Zgmmt11G6tdLtrhrSzmuLeeCCa4MYnlhmWIN5T7dsPhsRi6saGGo1cRWlflpUYSqTaSu2oxTdktW7WS3NsPhsRi6saGGo1cRWnflpUYSqVJKKbk1GKbskm27WSV2fVNFfmun/AAVq/YedFf8A4WZrC7lVtreBfF4ZdwB2sBpBAYZwQCQCDyad/wAPaf2Hv+inat/4Q3jD/wCU9eh/YOdf9CnMP/CSv/8AIef9WZ3f2HnP/QpzL/wixP8A8rP0mor82f8Ah7T+w9/0U7Vv/CG8Yf8AynrvfhN/wUe/ZL+NXj3R/hr4F+IV3ceLPEFy9lolnq3hnXtEt9TvFSSRbW2vtRsYLUzzLE3kRSSRvM21I1ZmAqJ5JnFOE6tTK8fCnTi5znLC1oxhCKvKUm4JKMVq29Er32Yp5Nm9O
E6k8rzCFOnFzqTng8RGMIR1lKTlTSUYrVvZK7dkmz7poo6V4B+0h+0N4O/Zs+HV7448UMLy/nZ7Dwt4chlVL7xFrRjLRWsWcmG0gGJ9QvCrJbW4OA8zwxScWHoVsVXpYehCVStWnGnThHeUpOyXZJbtuyik22kmzx69alhqVSvXnGnSpRc6k5OyjFbt9+ySu22kk20j1nxb408JeAtGufEPjTxHo/hfRLRS1xqetX9vYWqYUttV53TzZWAOyGIPLIflRGJAr8yfjh/wU6/Zyj0jxH4G0Xwnrnxf07WtO1HQ9YieKLRPCmp2F9C9pd2r3eoB9RmgnhldVmh0ojI3xOP3ch/FH45ftB/Ez9oLxXc+J/iBrk92nmyf2ToNvJJDoehWjMfLtNNsAxjTamFlupBJdXLDfPM5wB4jX6nlXAmEoRp1cyrVMRiFyy9lQnKjRpyTTS9pG1WpKL+1GVNX2i7Jv82zLjbFVJyp5bShh6KulWrRVStP+8oNulTW9otVG1Ztp+6rWpW3heHWtcufB2i3nhzw/qWpTXlhoV7qza3Jpds5Pk2C6k9pZSXcVsCywyzwG4KHE0szgyM1bu6WMxLczrETkxiWQITx1UNjsO3YVXor72MVCMYRuoxSSu3J2Xdybk33bbberbZ8VXr1cTVnXr1JVKtRpznK15NJRWySSUUkkkkkkkkkKzM7FmYszEksxJJJOSSTySTSUUVRkFRSwQzxtFPFHNG4w8cqK6MM5wysCrDIB5B5qWimm4tOLaaaaabTTTummtU00mn0eomlJOMkmmrNNJpp7pp6NPsyT4b2vhz4a/Fr4afF/SvDWnya/wDDHx54W8f6VbxNJYWl9qXhXW7HXLS0v1tNoe1uJ7COKchPNWKSQxMjkMP7wP2Gv+CjPwb/AG19Dey0Zl8EfFXSLRJvEPw11m+gkvmjUYl1TwzdHyjr2jhhmSSKFLyx3Kl9bRBopZf4O67r4a/Erxr8IfHHh34i/D3Xr3w34u8LahDqWkarYSmOWGaI/NFKv3J7W5jL291bSq8NxbySRSoyORX1WScWZjlVanGvWq4zAtqNXD1pucoQbSc6E53lCcErxhzezlqpJNqUfl864Uy7M6NSVCjSweNs5U69GCpxnOztGvThyxnGTspTt7SO6btyv/SWr+Sb/g5svtL/AOE4/Y00pFhbV08KfGzVZ5dqCdbO41P4fWUNuWC72iSSykmALbFa4JCbizH93v8Agnz+3H4T/bV+EFprwksdI+KfhaC1034leEIpAr2epbCkWuabC7GSXQ9a8tri2kXeLWYy2MzCWEF/wp/4OQ/ht458Y/Gv9jC58KeG9V8RHV/Bvxi8L6faaPZXGoX11rNvqvgXU3tIrW2ikkZvsdwk4IGSiStjZGzV+m5/jMPjeFczxeFrQqYerl9aaqKSSUYxbmpNu0ZQcXCpGVnGSlFq58Jwhh6uX8c5BhsZSdOrSzjCU5QlHVSlUSpzje3NFtxnCaveNpRvofy4V/bZ+xX/AMmi/s4/9kl8J/8ApCtfxz+P/gd8YfhVHBN8R/hp418FwXQ/0e58ReHtT0y1mOQMR3NzbpAzZYDaJC2TjHWv7GP2K/8Ak0X9nH/skvhP/wBIVr+OONJwqZfgp05xnF4t2lCSlF/uamzi2n95/b2FTVSd1b929/8AHA+nKKK8/wDiT8UvAXwh8M3fi74ieJdP8NaHaK3+kXshM93MFLLaafZxLJd395LjbFa2kMs0jEAJzX5xCE6k406cJTnNqMYQi5SlJ7KMUm230SVzubSTbdktW3sl3Z6BRX5J6n/wVv8Ahta600GmfA/42az4WW7WBfFkPh+K0t5oDJsa8g0+6dLloyv72KKV4Z3QgNGj/LX6P/CP4v8AgP43+DLHx18PNYGraJeExTJNBNZajpd8iqZ9N1bTrlI7qwvrcsBJBNGpwQ6FkZWPbissx+ChGpisNUpU5NRUm4ySk1dRlySlySa+zPle+l0yI1YTbUZXa3Vmu2uqV1qtUem0UUVwFhRRRQAUV8H/ALRn/BQn4Jfs86+/gq4t/EfxE8eQBDfeF/AtlHqM2ktINyQ6tfyyxWdndyR/vEst8tzsw0kcakE1/gT+394L+NerWuhz/CX41eBL+9lSG2utd8CatqOhvJLsMQk1jRLe9jtVcNuM15Db26ptJmzIor0VlOYvD/WvqlX2DjzKb5U3H+ZU3JVHGzvzKDjbW9tTP2tPmUOdczdkul27Wv8ACnfSzdz74ooPH/1qK840P5tv+C4H/JW/gr/2TrVv/Ujmr8R7aRobmCVcbo5o3XPTcrhhnGOMj1r9uv8Agt6jyfF74Jxxo0kj/DzVUREUs7s3iSYKqqoJZiSAAASTwK/MXw/+yr+0XrmmWXibT/gl8S7/AMNzGK4/tS28I61JbPaB1LzIy2m94wh3b0UgrypIr9i4eq06WSZa6lSFNShJJ1Jxgm/bVNE5Na+mp5ldXq1NG+6SvpZH+kN+yFrNz4g/ZV/Zy1q83G61H4JfDK5uGZ/MZ5m8H6SJJHfYm5pGUux2L8zHivouvHv2evCkngX4DfBjwZNC1vP4W+FvgPQbiBohA8VxpfhjTLO4R4QkYicTRPvTYpVshhuzXsNf2dhE1hcMpfEsPRUvVU4p/if5+Y6UZY3GSh8EsViJQttyutNxt8rH8QH/AAce/C+Twr+2H4E+I8UDJYfE34T6cjz7GCT6z4Q1bUNNvxvLENJFpt7oilUVVWMxZJctX881f2yf8HIfwVfxh+y98NfjJYwebe/CX4g/2ZqbpEGePw946tFsZJWlCF1ji1vTdGjKF1RjchyCYxX8Tdfz7xxg3g+JMfpaGKdPGU33VanF1H/4OjVXyP688LsxWY8F5VeSlUwKrZdVSd+V4Wo/Yxeit/s06DtrvuyxaXd1YXVve2NzPZ3lpPFc2l3bSvBcW1xA6yQzwTRlZIpopFV45EYMjqGUggV/Sv8A8E8f+ClelfEmw0f4LfH7XLbSviJZxRaf4U8canNHbaf41giQJb2GsXUjJDaeJVVRHHNKUh1jAIZb4lLj+aCnI7xOkkbtHJGyukiMUdHUhldGUhlZSAVYEEEAgg1+f5plWFzbD+wxEbSjd0a0UvaUZu13FveMrJTg/dkktpKMl+iwqSpyvF+q6Ndn/nuj/QFBDAMpBUgEEEEEEZBBHBBHII4Ir51/al/Zs8F/tU/CPWvhb4y3WbzsNS8M+IIUL3fhnxJbRSpYarAoZDNCPNkgvbRnEd3aSyxttfy5I/59P2Sf+CrvxK+C9rpvgj4w2t98Ufh/ZiK2s9Ua53eONBtFIURxX13KItctIIxtitb+SK5jUBY74oFiH9A/wJ/az+A37RtkJvhf4703VNVSBZ7zwxfMdM8T6erDLfadGu/LuXRDlTPbC4tiyttmODj8wxmT5tkeIjiYKbjQqKpRxuHUnCLi04ynZN0nqk41FyvWKc43b9XCYyVKrSxFCo6OIozjUpyTtOFSD5k4/wAyTXazWklZtH8dP7R/7I3xs/Zf8V3fh34j+E79NLE039i+MtNt
ri88KeILJZGWK60/VUi8hJHTa81hdGC+tWbZPAvBPzLtb0P5Gv8AQs1nRNF8R6dcaP4h0jS9d0q7Qx3Om6xYWupWM6kYxJa3kU0L8dNyHGTjFfOj/sVfslSak+rv+z18MTqDzNO0o8PQrEZGBBIs1YWQTBx5QtxEBxsxX2WD8RYqjGOPy+pKvFJSq4WpBQqNW9506vK6beraU5rqrXUV+o4PxEiqMY4/ASlXjFJ1cNUiqdRrq6dRXpt7tKc1e9lFWS/jl/Z9/Za+M/7S3i6w8K/DPwhqd/DcTINS8TXVpc23hfQLMsPOvtW1loTawRxIdywI8l1cNiO3hkcgV/ZD+yd+zH4O/ZP+EGkfDHwqy6hfNINW8YeJngEN14m8TTwRRXd+6FneGzhWJLXTbQyMLa0jUEmWSVm9+8P+G/DvhLS4dE8K6Bo3hrRrfIg0rQdMs9J0+IE5Oy0sYYIFPqQgJ7mvHPjv+0z8Gv2cPD8mvfFPxhp+jSPDJJpmgQyLeeJNbkTAEOlaNCxu7gs5VGnZEtYcl5540VmHzmecSY/iSpTweHw86WGVRSp4Sk5VataotIzrSjFc3Le8YRioQbcm5tRkvl+IuK8TnajR5FhMDTlzqgp88qk1flnWnaKk4q/LCMVGLbfvStJe9MyIrPI6RoilnkkZUREUEszuxCqqgEsxIAAJJr8W/wBvr/gqP4c+E9lrfwt+Amrafr/xB8q4sfEPjqGSO78P+CVKMk8emTIxg1bxFECwRo3ksNLmUNcG4nja1X87P2vf+CovxU+Pg1TwX8NhdfDL4W3Ba3litZseLvElqQysNY1SFsafaTAgnTtNZSQNtxdzqxjX8VPiBrjy3K6RBIfKiAlvCDkyTSBXRHPX92MOeSGZ85yMD3+GeCXWr062aW91e0+qxtKMErWdeWsZyu0lSjeN9ZSnFSg/zDiPiCnk2XVMTGPPVlJUcPFtpVa002k7WcYRjGc5u6bjFxjaUkyfxr8UvEPirWNU1OfU729vdTu57rUda1C4lu9V1S6uHL3F3c3U7PKzzsxJLkydG3ISVHmUtxPO5knmlmkOAXlkZ3OOBlmJJwOBk8VDRX7NSo0qEFTpQjCMUkklbZJavdvTVs/AsdmGMzGtKvjMRUr1JNtKTfJBdI06atCnFbKMEl3u22yiiitTiCiiigAooooAK9Y8G/Gv4s+BfBPj74c+EPiJ4p8P+BfibplvpHjrwbYavdQeHvFNja3lvqFuup6c0hsnmtrqzgaO6WJLryfOsxN9muriKTyeiunB4qpgsTSxNKznSkpKLclGS6xlySi3FrRq9n1TV0+bGYWnjMNVw1VtQqwcXKKi5Rb2lHnjJKS6O110aeqidGRiGHPr1BzzkEcEHsRweo4plXN5ZBE5zGDwSu5kyRkryp6Z+XcFyc9a9F8H/BL4wfEG3ivPAvwx8deLrOeWSGC88PeFtZ1a0lliYJJGlzZ2csDvGxxIqyEpglsBSR+l5bn2Cx1CVSpUp4SrSS9tTrVYwjFO9pwqT5Izg7NPaUHZTSUoSn+aZjkONwOIjSpU6uLp1W/Y1KFKU5Sta8J04c8oTV9FrGa1i3aUY+XUV95+BP8AgmR+3H8QZYk0n9nzxtpUMkkaG88WW1v4Sto/MBIeRvEFxYSLGApLP5RCjGeWUH9Fvgn/AMECvjBrs9pqHx1+JvhTwBpRKyXOheEhL4u8SOmfmg+1stloVrKQCDKlzqCKCrKshyg87NOO+EMnhKWO4hyyMoK7o4fEwxmJfksNhHXru+1+RLu0ehlnAvF+bzjDBcP5k4ydlWxGHngsOttXiMX7Cjpe9lNtrZM/ALT9O1DVr2207S7K71HUL2aO2s7Gxt5bq7uriVgkUFvbwI8s00jsqpHGjOzEBQSa/oY/4J3/APBG3XvE9/oPxl/a00i58P8AhG2eDVPDnwhu1e317xM6FJrS58ZICsujaG3yyHSMpqmoLtS5Flbl0m/br9mX/gnt+y9+ymlvf/DzwJBrHjOJFEnxA8ZeTr3igSBNryadLNCtnoiyZY7dKtbaRVbY08gGT9tkknJJJPc1+C8a+NuIzGjWy3hSjXwGHqxdOrm2ItDHThJWksJSg5LCKSulXlOddRd4RoTSkv3fgvwVw+XVqOZcU1aOPxFNxqUsqoXlgadRNSi8XVnGLxbi/wDlzGEMPdWnLEQfKU9P0/T9I0+x0nSbG00zS9MtLew07TrC3itbKwsrWJYba0tLaFUhgt4IUSOKKNFREUKoAFeb/Gf4L/Df9oD4ea78Lvir4btPE3hHX4ds9rOPLurC8jDG01bSbxMT6dqthI3m2l5bsrxvlW3xPJG/qdFfglHE4jD4iniqFerRxNGrGvSxFOpKFanWhJTjVhUi1ONSMkpKSd763ufu9bD0MRQqYWvRpVsNWpSo1aFSEZ0alKcXCVOdOScZQlFuLi1a2lj+Lj9uD/gkp8av2ZL7U/GPw2s9U+LvwZMktzDrWjWEtz4p8K2wzIbfxZolnHJIILdCANbsFmsZUR5LpLAqUP5IyRyRO8cqPHJGxR45FKOjKSGVlYBlZSCCCAQQQa/0tSAQVIDKwKsrAMrKQQyspyGUgkEEYIJB61+bH7UH/BKv9lP9ph9Q12Twy3wu+IF8Xnk8Z/D6G104Xt4wY+frfh9o/wCx9TMkh3TypDZ3spALXhI5/ovhHx09nTpYLi7DVKjilBZxgacXKSVkpYzBx5bySu51cLdydksLe8n/ADzxb4HKpUq43hLE06Sm5TeUY2clCLerjg8Y+ZpN6RpYlWiv+Ym1or+G6iv2X+NX/BEL9rrwBrdxH8M7TQPjJ4YbzJLLVdD1Sx0PVljDny4L/QtburaWO6MeCzWdzeWpYMBMvAr4e8afsIftg/D8XD+KP2ePijZwW2POu7XwtqGrWKjGeL3SY762fHQ7ZTzkV+65fxdwxmsKc8Bn+VYj2sYyjTWNoQrrmslGeHqzhXpzu0nCdOM09Grn4bmHCPE+VzqQx2Q5rQ9m5KVT6lXqUPd3ccRShOhONtVKFSUWtU7JnyTUsLmOaKRfvJIjDPqrAj9a3Nb8JeKfDcrQeIfDmu6HMpKtFq+k32myBlwWUpeQQsCNwyMZGR61hQo0ksaIMs8iKoyBklgAMnAHPrX0EZRmlKEoyi1dSi1JNd01dNeh8/KMoNxnGUZLeMk4teqdmj/Wu/Ywvp9T/ZA/ZZ1G6Km5v/2d/gzeXBRdiGa5+Hfh2aUqvO1S7thc8Divpavmb9iyzuNP/Y9/ZWsLyPybuy/Z1+C9rcxbkfyp7f4deHYpo98bPG+yRGXcjsjYyrEEGvpmvRjsvRfkSFFFFMAooooAlTofr/QUUJ0P1/oKKAI26n6n+dJSt1P1P86SgAooooAKKKKACiiigAoorhviZ8SPBfwf8A+Lfid8RNfsPDHgnwPoWo+I/EmualMkFrYaZplu9zcSFnZfMmdU8q3t03TXNw8cEKPLIikA8c/a3/a3+DP7FnwZ8R/Gz42eI4dG8P6NC8Ok6TC8T694u194nfT/AA14b09nSS/1S/kTaAu
ILSAS3l5LDawSyr/nGf8ABRn/AIKm/tCf8FDvH13eeMNXu/CPwe0nUZpfAfwe0a+kHh/RbZXkS21DW3jWA+IvEklu4Fzql7H5cJZ4tOtrSFnEkv8AwVN/4KQ/ED/goh8f9V8WXd1qGi/Brwjd3mjfCHwAbiZbPS9BinkjXxDqtqJGt5fFXiFAt3qlyFb7LE0Gl28jW9oJJfzBrkqVHN2WkV+Pm/0QfrbdK/6/g9QooorIAooooAKKKKACtLSNZ1bQNSstZ0LU7/RtX024iu9P1PS7uexv7G6hcSQ3Npd20kU9vPE6h45YpFdGAZWBGazaKTSaaaTTTTTV009Gmno01o09xpuLUotqSaaabTTTummtU09U1qmf0pf8E7P+CyeoNqOg/BX9rnV47m2vJLfSfCvxnnVYp7e4lbybTT/HyxgRSQyu0UMXiSJIzEx/4miFCbtf6YoZobiGK4t5Yri3niSaCeF1lhmhlUPHLFIhZJI5EYMjqSrKQQSDX+aX161/SV/wR4/4KQ3NreaJ+yX8dfELT2F2yWHwa8aa3e5ksbkjbB8P9WvruXL2s+3Z4XnkZmhlP9kMwjeyCfzh4p+FeHjh8RxLwxhY0Z0Yyr5plWHhalUpK8qmMwVKKtTnTV54jDQSpzpp1KUYThKFX+jPC7xSrzxGH4b4mxLrRrONHK81ryvVjVbUaeCxtWX8SNR2jh8TN88Z2pVZTjOEqX9N1fzi/wDBc34N6yPE3wo+PNja3FzoN1oEvw48QXKBng0rUtN1C/1vQvNAUiJNTg1TVUSV2CtLYGPIZow39HRGODXnPxa+FHgj43fD3xL8MfiJpEWs+FfFFi9neQMFFxZzgFrPVNNnZXNpqmnXGy6sbuMeZDMikZUsrfz/AJDmn9j5phsc4udODlTrwVuaVCquSpy3+1FNTirq8opNpNs/qvIc0eT5phsc4udODlTrQXxSo1YuFTlu0ueKanG7ScopPRs/gJAUodoBIAZixClcMV2p8+HDBlJ+XcMHGACS5lOXG2IZdiMSA7RGCWVcyHKkEbScliAEYnNfpV+1d/wTE+Pv7O2tahqPhXw/qfxX+FrTPcab4s8M6dLf6hYWal2W28T6PZiW/wBMnt4zie8ijbTJwDNFcJho4vzsfw54gikaCXw/qsUyAq6S6ffRyKykudyugIfaNgUjkYAXeQa/fMHmGCx9GOIwmJpV6UknzQkrx292cXadOSvZxnGMk9Gkz97weY4HH0Y4jCYmjWpSSd4zScW/szg7ShNWacZpSTT00MgrknKois6HeCzJEHBIXKl+MHJBDONmOoIr9Rv+CRvwJ1T4q/tWeHvG8lrMPCXwbgn8Z6xf7GFs+s+TLY+GtKE3AFxc6hcfbxGDk2um3JYYwD4L+zt+wX+0Z+0lrllaeFvAmqeHPDEs6tqnjrxVaXej+GNMtCyb3jmu4Un1K5jVmMdlp6XU8rDYwiAZx/Wr+y9+zT8O/wBkL4Q23gDwpIkvkLLrvjfxjfRR2954j1lLYG+1e+YE/ZrG0giMOn2hkaOxsowC7yvPNJ8rxbxHhcFga+Aw1aFbH4unOhyUpKf1enUThUqVHFtQm4NxpQvzuTU7csXf5Xi7iPC4LAYjA4avTq47FU5UHGnJTWHpVE41Z1XFtQm4OUacG+fmkptcq1+mgNxwB1PAAz19AOtfy8f8FEPjjc/F/wDaD8QaTZX7z+EPhrLceDfD9ukm61e8sZtniDU0UMymW91WOWHzAzBrWztACAuK9k/ax/4KQfETxn4o1vwh8Etbm8HeANMu59Ph8R6Z+78ReKBbu0UmoLfOC+madO6lrKGzWK6aAJLPc/vWgT8rp557qea5uZpbi4uJZJ555naWaaaVi8sssjkvJJI7M7uxLMxLEkkmvO4R4ZxGXVP7Sx6pxrVKHLh6GrqUFUac51HZRjUcFyKMXJxjKak020v5G4p4ioY+n/Z+Cc5UoVuavW0VOt7P4I09XKVNTbm5NRTlGDimtSKiiivvz4cKKKKACiiigAooooAKKKKAPdP2df2iPiX+zB8UdB+K3wu1h9N1zR5RHe2Uu6TSvEGkSuhv9D1q0DKt1p99GuxxlZYJBHc20kVxDHIv9vt5448F/tLfsl/AD9pJPDllBrWthPEOiNOY7u98Jajrmk3eheKNMsrxQp2GWyuLGViN00UaSMsbnan8B1fpl+wX+2z8SfhV4z8F/Brxd8Q9Xf4A61q0thL4T1KS3utE8Paxq7yrp2tWMtzC93pUMOq3PmXqWVzBayR3NzLcxNzIvRis1xFDhjibKVGrXpZllWJp0KMGvcxTjFqaTa0qU4yhNR1k+TR2ObD5VQxHEnDeaylSo1MtzXDVatWcX7+G5mnByinrCbUoOS5Ypz1inc/oo8U+FfDXjjQ77w14x0HSfE/h/UoWgvtI1uxt9QsbiNxg74LiN1Vx1SVAssbYeN1YAiLwf4R8PeAfCvh/wT4TsBpfhnwvpkGj6HpqyyzLY6bbZFvapLMzyvHCp8uMyO7hAoLHGa6PIIBUhlYBlZSGVlIyrKwJBUjkEEgjoaK/mHnnyez55ez5uf2fM+TnSceblvbms2r2vbTY/pHS97K9rXsr23tfe19bbXs+gViah4Z8N6tf2Oq6t4f0XVdT0xZE06/1LTbS/ubBZTmUWUl1FKbVpekjweW7qArMVGK265nxrf6ppXg3xbqmhwi51nTPDOu6hpNueRPqVlpl1cWUOON3m3McabcjduwCM0Q5nOKjJxcmopp2+L3Xd6aNOz6WvcHon/W2v6HA+MPjn8B/h5rFv4Y8a/Eb4ceFtcumRYdD1nWdFsb5mfb5YeylkWSPeCCgljXcoLDKqSPU9LOjT2sepaGmltZapFFdR3ukx2gttQhYFoZ1nswI7lCrsY5NzjDHacGv4+v2N/EH7MHjb9q9/EX/AAUM1fxVf/C7X7XxXfeJdXs5temv38Y3MZbSW1WTQxLriack5mi2WKloZUtI2VLZJBX7l/8ABL/xppviHRv2jPDPw+1DxNq/7P8A4J+M2rWXwGvvFX2p9StfA2oS39xY6SZbtVuCkFlFp94YLjE1tJeyiWKGSR41+2znhD+yslhmsMxo15e3WHxGE54xrpv2X76NH440ZTqqMJycud06l+WUUpeVQzNVsxrZf9TxkPZYaliFjJUbYGoqs6kVQp4i9p4in7Pmq0lFckZ05aqWn6l0UUV8OeqFFFFAHkvjPxh8EPg/A2t+OtW+HHgCPVbqSVtR1z+wdFm1K8k3PNM006Qz3c7YZ5ZiXbAZnbAzXbeD/G/hPx3olvr/AIG8S6J4o0C4LLb6n4d1G01LTmZeGjWaykliV0zhoyQy5wQM1/Kf+1j4y0Dxl/wUY1my/aY1HxH/AMKc8MfFTRfDPiG00oXS32kfC+yuLEXTaHaxLJMkl5pjyai8tnE9xcyXLzQBnMQH6Qfsr+Mf2edK/wCCivjrwB+whqXizUf2VNd+FK6nqmn65Prraba+LdHgs/P1nTofEUceqpELqS2sEnvUW5mnvb2ON2s0gRfua3B6jw9Uzp
5lRdejToVZYSckqlSNWlKtejDWcoUoR5ZVW+X2rpw5VzpryXmqjmdHLvqeMar0KtdY2FFvA0/ZVKdP2FbEKVoYmpKp7SlS5Xz04zlzLl1/bSiiivhj1jwDxr+zZ8M/iN8ZvBvxq8caVF4l1r4f+GLrw/4W0PVLeC60KyvLvUn1B9fntZVdbzUbYN5Nik6mC1YtchHnETRfQ1sxSSFEACI0apGqgKFXAVFUDAUABVUDAAAAxxUFdV4L8NX3izxJpWiWMbtJeXUau6qSsECsGuLiQjokMQd2JK5xtB3EV2UKeMzLEYPA0VUxNepUpYPB0I3k+etVUadKnFbOVSd3Zat3b6mGIxFDBYfEYzEThRoYelUxGIrSajGFKlBznOUu0Yxb9FZH6f8AgqSSbwh4ZklBEjaHphcNkHItIhznnnGecn1JPJ6eqtlaRWFnaWMAxDZ20FrEMAYjgiWJBgcD5UHA4q1X+kWCozw+CwmHqS5qlDC4ejOW/NOlShCUr9byi38z/O/GVoYjGYuvTjyQr4mvWhH+WFWrOcY/JSS+R8s/ts/A2L9pD9lX44fBswxzah4w8Ba3b6AZAhEHiaxtm1Hw5cgsj7DBrFpZyFlG/aGVT83P+Y5qumX2i6nqGj6nbTWWo6Xe3Wn39ncIY7i1vLOd7e5t542AaOWGaN45EYAq6kEAjFf6xBAYFWAKsCCCMggjBBHcEcEV/nyf8Fsv2WLr9m79trxvrGm6cbXwF8bGk+J/hOaGF1tIr3V53XxZpQl2iEXVp4hS7vjbRsTBYappxZVEig/mvibljqYfA5tTjd4eUsJiGltTqvnoSl2jCqqkL7c1aK7X/cPBHPI0cZmfD9aaSxcI5hg03a9eglSxUI951KDpVEl9nDzevT8g6KKK/Gz+jwre8MeKPEXgvXdN8TeFNa1Hw/r+j3UN7puraVdzWd7aXMDiSOSOaFlbAZRujbdHIuUkVlJBw0R5GVI0Z3chVRFLMzE4AVQCSSTgADJNeleGfhF478VBJbHRpbSzkTzFvtTJsbZkP3ShlXzZQ/VfJikBUhjhSCcMRXw+HpSqYqtRo0bcspV5whTaenK3NqLve1uu1j08qyXOM9xUcFkuV5hmuLla2Hy7CV8ZWSbspShQpzlCKe85WjFJttJNn9fX7CX7QuoftL/s4+EviDryRJ4tsLi98JeL2gXZFd63oQhRtUjQALEuq2c9pfPEo2RXE08KEpGpP2FX5f8A/BKDRdP8Ffs8674FbUrS88S6f461LXdbhtRMojttYtLGHTZB5yIZBt0+4iZ0BUGMAtkhV/UCvxPMvqv1/FvBOMsI603QcHeHs5O6UP7ibaj/AHUr2ei9TM8ozTIsdXynOsFXy/M8G4QxWExEVGtRlOnCrDnSbXv0qkKkWm04zTTaZ8lfttftFyfswfs/+KPiTp1tFeeJpZ7Pw34QtbhC9q3iLWTJHa3N2gI3W2nW8N1qMsZKiYWotwwaZa/jh+IPxF8bfFXxXqnjX4geI9T8UeJdYnee81LU7h5nG92dbe2iJ8mzs4S5W3s7ZIreBPljjUV/Ur/wVe8M2Pjn9nbSPCKarbWHiKfx9ousaJb3BYi5TTbLVItQMqR7pUhS2vDtm8toxcmCJmUygj+VD4p+D/E3wx0+ebXLQQ/aIpItPvrZhc2c07Hy12SgAb0LLI0bhJVRlcoOQPveCPqE6X1eFWh/aeIrzUqcpQVf2KUOXki3zukknOXL1u5L3UznzTIOIMHw5U4wrZNmT4Yoqsnm9PDVJ4FToPkqQnWinCnKVa1CEqrhCpWapQm5pxXmWteNNM0qR7aINe3SHa6RHEcTdw8hGGI5BEe7BGDjOR4nfXkt/d3F5OcyXErSN7bjwo/2UXCqOygDtVd3eR3kdmd3Znd2JZmZiSzMxJJJJJJJJJ5NNr9lw2Eo4WLVNPmkkpzk7ylb7kl5JLzuz+Ys5z7H51UUsVOMaNOc5UMNTio06XMkrt6zqT5Uk5zk7Pm5FBScQqWOCabIiiklwCT5aM+AMAk7QcAEj86+pP2MP2ZNb/a3/aB8GfB3S7mXTdN1OabVfFuuRxiQ6F4S0lVuNa1BFYFGuTDstLGOTCSX11bI3ysa/t4+E37In7N3wT8I2HgvwL8IPA0Wn2dnDaXeo6z4c0nXdc1uSOMJLfazqup2lzc3l1dNukm+dIAW2xQxxqqD5riTjDBcO1KOHnRqYvF1YKq6NOcaap0XJxU6lSSlZzcZKEIxk3yty5VyuXp8O8I43iCnVxEK1PCYWlP2SrVISqOpVSUpQp04yhdQjKLnOU4pOSjHmfNy/wCe+VZThlKn0IIP5Gkr+8L48f8ABNj9j74/6bdw6/8ACbQ/B3iCeIrbeMPh3a2/hPWrOUsjCV7fT4k0e/UFBuiv9OnJUsqSR72J/C/46f8ABBf42+F577UvgV498M/E7RQzSWmh68w8I+KkjYuVhZ7mSfQryRfkXzF1C03jexiQqofnyvj3IcwtCvWlltd29zGJRpN/3cRFypW/6+uk/I3zPgPPcBedClDMaKv7+EbdVLpz4edql32pe1S6s/A2ivszxl/wT0/bS8CXMtvrv7OnxMdYpXhW60fw/ca/ZzlCwL29zon2+KWNth2OG2uCm3O9M8bYfsX/ALWep3KWln+zn8Y3nk+6snw/8SW69QOZbjT4ol5I+84r6qOYYCceeGOwkoPVTjiaLi135lO34ny8suzCEnCWBxkZJ2cZYaspX9HC58y0V+qPwm/4I4/tufEy5tm1bwDZfDDSJivm6r8QdXtdLkhTzEVyuj2jX2sysqM8igWSxyCMqsu8qrftF+zT/wAEPPgD8Lp7PxF8cddvPjb4kh8uZNCSGfw/4FtJ1XJWW1huDq2tKkn3Wu7qygcIGeyIYqPCzLjLh/LYSc8dTxVWO2HwTjiKjfZyi/ZQf/XypD8j28u4Pz/MZR5cDUwtJ2vXxqlhoJO2qhNe2no9OSnK+qurO38j1to+rXsTz2emX91BGCXmt7S4mjQDJJZ442VcAEnJHANZzKyMVdSrKSGVgQwI4IIPII7g1/o5eGfhT8LvBejL4e8JfDfwL4c0JYhB/ZOkeFNEsrJ4Qu3y5oYbJVnXb8p8/wAwkcEmv55P+Cz/AOwP4D8LeDbb9qP4N+E9N8Jvpup2ekfFPw/4es47HRru31WYW+l+KrbTLSJbayuo9QkisNUNukMM63drO0YeKR28bJvEHBZrmNPAVcHUwX1ifs8LWnWjVU6j+CnVioQ9nKo/dg4yqR52ot63PZzjgDGZXl1TH0sbTxjw8faYijGjKk40klz1KcnUn7RU9ZSTjB8icldrlf8ANPX9Sn/BDL9r7wrd+Arz9knxNeRaR4w0PVdb8U/DySZ0ih8S6Nqsn9oa3o0LFV3arpV8brUEiZ2e5sbuQwg/Y5BX8tddp8OvGGsfD/x14U8Z6DreseHNW8Oa5p2qWet6BMsOsaa9tco73Ng7ssTXCRbwsUzCCbJim/dO4P1OfZNQz7La2ArT9k5ONShWScvY14X9nUcVdyjrKE4pczpzmo2k0
18xkOcVsizKlj6UPaxSlSr0bqPtqFRx9pBSbSjK8YzhJvlU4Rcrxun/AH6ftBftO/Df9nLTdAk8XvrOu+KvGeqJovgf4d+DrA67468Z6o3zSW+haHFJHJNHbx5kuryd4LSBcLJMHdEb4w1f/got8Zra7kTR/wDgnp+1JqNiCfLub7SNP0ydlwCDJbRm/RDyQcXL9O2cV498Mv2Z9K0j9tX9j39obUPjl48+O8PxO+FHxC17QLr4mz6Y+oaTeWng3TtW0q+0Gy06O3s7C0ay1+7+0WVtbL9nvYt7yytkr7X/AMFffiF8TPh/+yrDdfDvVNb0Rde8d6LoPizW9BmubW/sPD89lqlz5IvrNknsYdQ1G1srSadXTcjmDzFExz+aZZkfDsc24byClg6Gf4rPqcZVs0x2JzPLMLha8sfjMFUw+HweHnhq8o4Z4GouavP2mJrTtCNOLhFfpGY51xBLKuIs9q4qtkWGyOpKNHLcDhstzLE4qjHA4LGwr18XiIYmhGWI+uw92hFU8PSjecqklJmZpX/BRX486rIwt/8AgnT+0zLDH/rpbW3066ZCQdoCCFASWABzIu0ck52q3mHxY/4LGH4F3WlWXxg/Y++O/wAOrrXIJ7nSIfFY0XSm1GC2kWKeS0+0TKJRC7KsgUkqWXIAIJ5/9kL4/wD7LtxL+xJ8PP2VI/jHefHbWYtQtf2t4vFmseJtQ8K3WlweEtTuPEOrzRavd3Xh8SxeLY7HUvDd3oFvFPDpqizv7gTSyxz+p/8ABZj4WRfFz4BfCjwhplnbT+O/EH7QHgHwl4GuZYwZYb/xWNR0u/h80AyLaPYl7u7RdwZbFJNpeJCPrc14V4VyHjjKOF8zyHAV8DmlGnWeOweaZ1Rr4anU+sQU69CvjsRGmqdTDudT33F0G5xmpRaPlcr4n4oz3gvNeJcrzzHUcdltWdFYHG5Zk1ajiKtNYabp0a1DBYeU3Vp11CHuqSr+7KLjJN/OS/8ABwJ8HXOE+A/xKY9fl1fw4f5XFNP/AAcDfBtThvgR8SQRxg6v4bH/ALcV+m/7OH7Af7Nn7N/gLR/CekfDTwf4q8Rw6fbx+KPHHivw/pmv654j1YxJ9uuzLqsF4tjYvcBjZ6dZCG3t4AilZJN8r+O/thf8Eyf2Xfjz8P8AxPqeneBNE+F3xA0jRdT1bRvGfgTTLbRs3VhaTXa2+taLZLbaZq1jcNF5coeGK7jDmSC5RlAPh0Mz8HKuarBS4YzmngJ11QpZn/amLnzJzUI16mDWJVWFFv3vdqVKyg7+x5/cXtVsu8XqWV/XIcS5LUx8aHtqmW/2XhoJNQU5UIYx0HTnWjZxvKnTpSndKrytTfhXgb/gsNc/E3RG8R/Dz9jX9oPxpoiXs+nSan4asLLV7KO8t44ZZbd7qzWWASqlxGTGJC65+dV4z10v/BUf4gQoZJv2Cf2oooxjLyeH4kUZIAyzRADJIA56mvUv+CQMFjYfsB/B5NNa3Ez6n8RDqktptRrjUIvH/iO2Et2FO77R9hgsUAkw4gSEEYwT8zf8FUvi1P4O+O/7KPh/4mal46tP2arvWzrvxW0/wVqmpaXe+JdNsNe0uHWNPa502a2mlmt9Elma2sxcCRzPJLCqyBHHTl2UcK5tx1iuDsJwrhsPChmGZYOOOxed5xKcqeW+256nsKdWClUrKhJwpxnFJTTc7RbXPj834oyrgfC8X4vifEYidbAZbi5YLCZLlEYKpmP1ZRp+2qUpONOk61qlSUG3yu0LySO21P8A4Kq/ESw0+TUYv2B/2m3t4Qss802irFbxWoG6ScyRWk5YIuCF2gEHJdQM199/sp/tUfD39rb4Xx/E34d/2pp6W2pz6B4m8N67AlrrvhnxDZxRS3Ol6jFHJLE/7uaOW3uIJHinibIKyJJGnx/+x98bPhT8Q/2s/jt4Z/ZBb4iz/seaX8OvCl9pdp491DxBqEOh/EQXCWuoL4cPii6vNYs9J1awdlbT7u4Znu7G6u1jSF4jXyd+yR+z18UfjZ8fv2yPHPgr42ePfgD8Ak/aF8UaRb+GfhpDpNjqXjDxbocr2Wr3yX2q6beJpllaM0nnG2t5Elubvy/IJt2YTxTwjw3ga3EeVSoYPIsRw9TyvGQzijjszx+HxtPMZwgsDXwFZYmtTxbjP2lFUZNRdKUqk1Qbmq4a4r4hxtDh3NI18XnmHz+eZYOeU1cDluAxGEqZfGUnj6GOpPDUamFjKnyVnWim1WhGlB1o8j/dDxz8OPAPxL0HUvDPj7wb4b8WaLq9tJa31lrej2N+ssckZj3rJPC8sU0YO6KeJ0licB43VgDX8DX7Wfwg0z4EftU/Fz4SaDI8+h+EPHs9loO8l5U0i+W11TTbV2LM0klpbX8Vm0jYaVoN7DLV/Zq/7E2pPJFKf2wf2uw8Ik2bfH3hRIz5i7W8yJPAqxzYA+XzVbYfmQqxzX8mn7XnwB1TwV+3/wCJfgtbeOPEPxO1jWvHvg1Y/F3iZluPEur33jKPSLxYNVmjLxXeoWJv0smlhWOOUQxqlvBgQr9P4HVMNh82zfA4XPp4+lVy2OI+ofVMZh6UKlHFUaf1qMq69kpRhWVKUY8s5qrqmqV18v43U8RiMoynG4nIo4KrSzJ4dY/63g61SVOthqs/qso0f3soznS9om+aFN0nqnVs/wDTG/ZYilg/Zl/Z5gmjaKaL4I/CyOWNwVeORPBGiK6Op5VlYEMDyCCDzXvNcz4K0C38KeDvCfhe0RY7Xw54b0PQrZFxtSDSdMtrGJFwFG1UgUDCqMDgDpXTV/UiVkk+iSP5sCiiimAUUUUASp0P1/oKKE6H6/0FFAEbdT9T/OkpW6n6n+dJQAUUUUAFFFFABRRRQAV/EZ/wclf8FJLrxn40X9g74T6+R4O8E3Gn6z8db7T5B5ev+MlWHUdD8FtcRufO07wxFJBqerQAGOXXZbWCTbNpDqf6sv29v2pND/Y0/ZM+Mv7QOsSQm88HeFbmDwlYTZP9r+OdcK6N4P0sIpDvHca9e2b3ZTmGxiurhiEiZh/lP+NfGHiL4g+L/E3jnxdql3rfijxdrmp+Itf1e+laa71LV9XvJr6/vLiViS0k9zPI56AZAUBQAMK0rJRXXV+n/BYHMUUUVzAFFFLgjqCPwoASiiigAooooAKKKKACpre4ntLiC7tZpba6tZori2uLeR4Z7e4gdZYZ4JoyskU0UirJFIjB43VWUhgDUNFG+4JtO6dmtU1un3P9CT/gkf8AEhv27P2I/C3jUa+Lj4zfC6/ufhp8ULTUnTdrWq6Pbw3OheIRNHHC0MniDw5dabcXMssMkE+rw6mqzoYpAPrjxD4R8ReFrx7HXNKu7CZWKqZomEcwGcPDMAYpkODho3YV/Np/wa0/HaTwl+1L8ZvgLfXMiaX8XPhfF4n0uFpG8pvFPw31WOSKJIfueZceHfEevTSTffC6dGnKsSP7p9S0nTNYt3tNVsLTULZwVaG7gjmTBGDt3qSpx/EpDDqCDX4/xV4JcPZ/7TG5LU/1fzCblKpTpU/a5ZXnu3LCKUHhpS3csNKN
NXb+rybufsvCvjNn2Sxp4TOKf9u4GCUYzq1PZ5jRirK0cVyyjiIxW0cRCVTZe3UUkfkAGZc4JGcgj68EEfoayJfD/h6eVrifw9oE9w7b5LibRtNlnkfg+Y8z2zSO+QDvZi2e9fR/x88EaJ4N8UwpoUQtbTUbGG+NkrO6W0sk1zC6xlyxWNjbmRULEoXYDCbAPBq/k7PMoxfDucZhkuMnTlisvxEsPWnh5ylRm0ozjOEnGEnGcJQmlOMZRvaUVJNH9T5HnGHz7KcDnGCVaGGzChGvShWioVYJtxlCpGMpR5oTjKLcZSjK3NGTi0xFCoixoqxxooVI41VERR0VEUBVVRwFAAA6V8M/8FFfijP8Mf2YfFken3D2utePbyx8CadLGdssdvqomudckRgyuudDsr+1MkeTG91FnG4Gv0s+FXhey8X+ONE0PUSwsbqeZ7kIdrvDa2s928atg4Mwg8otjKhyw5GK/LX/AIOG7Dwz4V0j9l/wvoGn22mSX138StXu4rU7TcR6bb+D7C0mvFMheSSL+0LpbaVkc4mvBvTcwk+z4L4Gxmc5fX4rlXw1LK8nzGhQqUajnLEYvEReGqOnThGPJGEPrOHc5VJrmUpKMXytnyPF3GuDyjMKHDEaWJnmebZfXr068FGNDC0ZRxEIzqTclOVSf1esoRpxfK1GUpJNH8xlFFFfpx+cigE9ATjrgZpK/bf/AIJYfAXwj4y8B/Fjxp8QvBXh7xVpGt6tpHhPRY/EWkW2obE0u1ur7WpNPe6hdrdZpNT02GWe0dJC9qY2YbMV9C/Fr/gld8DPG802pfD7VdZ+F+pymWQ2NqBrnhySVw7ALZXkiXtlEJCu1be9dEQMiw4KeX8rieL8rweZYjLsUq1P2EoweJjD2tFzdOM5RcYN1Y8jlyNqE1zRd7I+mw/CuZYvL6GPwzo1PbxlJYeUvZ1VFTlCLjKS9nLmUeeznC0WrXeh/OHRX7OXf/BHrxbGpNl8ZfDFy+1iFm0DVrVdw6Ass1ycEdwhIIxt715R4g/4JNftF6bE0mia38PfExDkJFa63eabKYxn52GradaRhjjAjSSQk9SBXZS4oyCq7RzPDpt2/eKpR/GrCC/H0OWpw1nlL4surPb+HKlW3/69VJ/Pt9x+XVFfVnjL9iL9qPwNDNc6z8IPFFzZwKXlvNCgh8Q2yIGC7y+jTXrKpPTcinAyRtIJ+atW8P67oNw9prejaro91G2yS21PT7qwnRsZ2tFdRROpwQcFRwQehr1qGLwuJXNhsTQxCte9GtTqq3/bkpWPKr4XE4aXLiMPXoSXStSnTf8A5PFGRRRgjqMVPbW095cwWlrE01zdTRW9vCgy8s0zrHFGg7s7sqqPUiujbcwIKK6LxF4R8U+Eb1tO8UeHtZ8P3ycm11fTrvT5sHIDKlzFGXQlWw65U4OCcVzuCOoxSjKMkpRalFq6lFppp7NNaNeaG04txknGSdmmmmn2aeqfqFHPYkHsQSCD2IIwQR1BByKKKYj7K/Yg/wCCt3jP4KeObf4FftG3t342+Fya22g6B46ZpLzxT4GtJZxDp0WpSlpZ/EOg2aNHA4lLarYxDKy3SxmFf6o9Pv7HVbCx1TTLu3v9N1OzttQ0++tZUmtryyvIUuLW6t5oyySQzwyJLG6kqyMGBINf50vxDvZ9E+KOparYS+Vd2Gr2OpW7oAphurdbW5jf5NuWEqLJkgMScsWJLH/QS/Zw1fwz4+/ZU/Zw+MHgSBbfwb8SfhnpWpxafC7S2vhzxTYPNpvjTwpbswJittC8SW2oW2n2rSyvb6YLNCwUKB53i3wNgqGUYHjDKMIsPOosMs7o0IqOHf1unD2WPVNJKlOWIkqNdwioVJVqVRxjNTlPo8KeN8biM3x/CObYp4mFF4mWS168m8Qo4Wq1UwDm23VhHDp1qHO5TpxpVaanKHs4w9To4OQQCCCCD0IPBB9QRwR6UUV/PB/QB+Y/xV/4JN/su/FDxzfeOUbxf4HuNYv21LW9D8KX9hFoV9dzSiW7ltbW9sLl9KN229pUs5PIWSR5I4EJxX3Z8Ivg/wDD74FeBNI+HHwy0GHw/wCF9H8ySOAO1xe317cENdanql9Jme/1C6ZVM1zKc7USONY4o4419Mortr5jjsVRp4fEYqtWo0rOFOpNyinFcsW+snFXScnJpN2auyVCEXeMUm76pa66sKKKK4igooooA+G/2m/+CfPwC/ak1yDxd4ut9b8L+No4YLS58VeE7mC3utVsrZdkFvq1ldwXFleNAmEhuhHFdJGqxGZogIx6B+zN+x78F/2UtJ1Gz+GmlXlxretxxQa94w1+eK98Q6pbQyGaGzM0UFvb2dhHKfMFpZwRJJIqSTmZ442T6kortlmOPnhY4KWLrPCxSSoOb5OWLTjF9XCLScYNuCaTUbpWnkhzc3LHm7217el/PfV6hRRR1riKOr8G+D9X8ba5aaHpEQae4YtJNISkFtBGN0s80gVtkaL3CszMVRVLMBX6KfDX4V6H8ObFhaZvdYuoo1v9TlGGcqPmitk58i335JUEvIcGRjtVV8z/AGa/A76NoV14qvoit3rgWGwV1AaPTYmDNKuRlRdTgdCQyQRtkgjH07X9heDfh/gcryfBcT5lg/aZ5mEZYjByxCv/AGfgaqccO6FN+7CviaL9tKs06qpVo04uC9op/wAmeLnHeNzPNsZw1l2L5MlwMoUMWqDS+v42naVdVqi96dDDVv3MaSapyq0p1JKf7vkKKKK/dj8SCvyO/wCCyf7FB/bA/ZT1m58LaYt58Wfg/wDbfHXgMwwxvfapb2to58ReFopGHmFdb0+ISW8KMok1Sx04uQiNn9caRlV1ZGAZWBVlYAqysMEEHgggkEHgjiuPH4KhmWCxOBxC5qOJpSpTta8eZe7ON7pTpySnB9JRTPRynM8Vk2ZYLNMHLkxOBxEK9O9+WXK/fpTtq6dWDlSqJbwnJdT/ACbJoZbeaWCeN4poZHilikUpJHJGxR0dGAZWRgVZSAQQQRmvUfh58JfEXj6VbmJP7N0NJNs+q3SOqSBSRJHYx7c3Uy4KnBWJG4eRSNp/db/grd/wT6+HHwc/a+j8beD9RsrTwX8XLC++IGqeArZwt3oniNtSMWppDEmFtPD2t3jzXtkqhPLlj1Cyto44IImT4/trW3soIbW0git7eFBHDBCixwxIoACoiAKoAxwBk5yOcE/xXxxntThfMsZkOH9lXzPC1HTrVr89DDxkozpSUVf2lWpSnGfsm17LmSqpyTpn+v30evBql4p5FlPH/EEsRguEcwputl+XQc6GYZvOhVlRxCnVaTwuXUsRSq4d4ik3Wxcqc3hp0afJiJ+deDvhR4P8GRxvZacl7qSKpfVdQVbi7LjOWhDL5dspyPlgVc4UsWPJ9Kx7Y+gwAO3HYelP/U+/TjPTt688g9ScbhRz757g8fz6d85znIJPJB/GsXjMXj6rr4vEVcRVk9ZVJOSje2kIr3YR1VoU4pLZRR/oLkfD+RcMYGnlmQZVgspwVNRSoYOjCl7RpJOpWnb2uIqys3OviJz
6xcTKmNvmT+KtdkJOABvKlfMIyGk3MGbOT93V8p/sN6e2m/sifs927BlMvwy8O32GZGONStv7QUgp8oUrcgqp+dVIV/nDV9WV/nBxnW+s8X8VYi6ft+I87rXT5l+8zLEz0fVa6Pqf6ecCUPq3BHB+Htb2PC+QUmrJawyrCxd0tLtpt+bYUUUV80fVhRRRQAUUUUASp0P1/oKKE6H6/0FFAEbdT9T/OkpW6n6n+dJQAUUUUAFFFFABRRRQAUUUUAfzXf8HLv7Ia/GT9krw9+0f4c0z7R4y/Zv1ppNaktbUy3l58NfGVzY6brSytEDK8Oha3Ho2rKXBis7N9XuCY1aQn+F34e/DDUvGMwvLvzdP0KI5kvGT95dkNtMFkGG124JeU/u4wpBJdgB/pl/wDBWH9oPwT8BP2L/ipH4s0rTfEt98WdC1T4TeGvCepsDb61f+L9Mu7G/uLiIHzHtNE0lr3V59hjLSW1vbpNDLcRSD+Am3t7e0gitrSCK2toEEcFvCoSKKNRhURRgAAACvzbjjiyWTSjl+X2eY16UalStJRlHCUpOUYOMXdSrzUbwU4uEI2m1Lmil/a/0U/oxUPFqdTjfjSVanwLlePngsNltCpUw+K4mzLCxo1K9GWIilPD5NhlVhSxlfDzjicTWc8JhqlCVKtXpZWheHdG8N2cdlo9jBaRqiLJIqL9ouGUYMtzPgSTSMSSSxwCSFUDArboor8MrVquIqTrV6tStVqScp1Ks5TnOT3cpSbk36s/2AyjKMqyDLsJk+SZdgspyvAUY0MHl+X4elhMJhqUFZQpUKMYU43teTS5pybnNyk23/TL/wAG6mqxtf8A7VOgSMMtYfCzVo4SVIkHn+N7K4fYfmIjBt0bO5B5oBCs3z/0V3/wc8AaheSXsmjeRLK5kdLWeSGBpGdnZ/J+ZAWZjlQAmMKFCjFfzH/8G8GreT8av2hdEz/yEPhd4d1PGU5/snxSLbOCPMOP7YPKnYM/OCfLx/WBX7pwflOUZ3wnltHNstwOZwwuIxrpQx+EoYuNGbxVWblSVaE/ZtxlG7jZvrdWP8YPpe4/NuH/AKRXG9fKswxuWSzDA8K15ywWJrYZ14LhjKaP732U4qolUo1OXmTtra13f+J7/g6B/Z40fwP8Q/2ZPjf4Z01bDT/GHhPxX8NNcWHcY21TwjqFr4g0ieVpJWZ557DxHqEJYJhY7KJQ2OB/KhX95n/Bz34astU/YX+GXiWZQL/wp+0N4cjs5MZb7P4h8H+MrG+hGeAJJbewlbGM/ZlPJUA/wZ19uqFDDRhh8NRpYfD0YRp0qNCnGlSpQilywp04RjCEYqyUYpJKx/JtetXxNWeIxNarXr15OpVrVqkqtWrOTfNOpUm5TnJtO8pNt9R8UbzSRxRqXkldY0RRlmdyFVVHcsSAB3JxX9VP/BI/9unwx+zH4F8Mfsz/ABT0mw0fwhqmu3+sWvj+zBjfS/EfiaaGW6j8UIXZZtPEyw2kWqQ7WtIY4kuIjDEZh/NV8G/D6a54ztZp0D2uixPqsobhWmhZEs0HB+YXUkUwXjcsLjPevtk85z361+a8Z8VY3J8yy/C5bVUKmHj9bxaa5qdZVG4UsNVjpzU3CM5zSaac6U4SjOCkv77+iT9G7hbxS4I434l46wVetg8zrrhnhivQqyoYrLcRg4UsbmWeYKesJYinXrYHB4aVWFWhJUcyw9alUhUkl/oMWl3a39rbX1lcQ3dneQRXNrdW8izQXFvOiywzwyoWSSKWNleN1JVlYMCQasV/NV/wTm/4KVRfDHw1cfBX44T6xrWgaPYSTfDXXLSB9R1O0W2Qu/g+7XgvaMgabSL24lSO0CS2lxIIjblP0y/4eefAz/oXfHv/AIL9I/8AlzX6Pw1mceJsthmGCozbjL2OKorV4fExjGU6TlopRtOM6c1bmpyjJqMuaMf4T+kRl2T/AEaPECtwF4j8SZZlmJxOEWccPYys6sKefcPVsTXwuFzXD06cK0qSlXw2IwmIo1Jc1DG4bEUVKpGEas/0jor83P8Ah558DP8AoXfHv/gv0j/5c0f8PPPgZ/0Lvj3/AMF+kf8Ay5r6D6hjP+gar93/AAfP8+zPwb/iPHhB/wBF7kf/AIFiv/mbzX3n6R15B4x+AXwc+IPj7wf8T/GXgDQNf8deAzKfDfiC+tFkubYOrCFLtM+RqMdjI73WnLfRz/YLvbcWpjcEn47/AOHnnwM/6F3x7/4L9I/+XNH/AA88+Bn/AELvj3/wX6R/8uaxrZRVxMFTxGA9vTU4VFCtShUiqlOSnTmozTSlCSUoyteLV01ZnoZb9I3wzyfETxeU+J2ByzFVMNicHPEZfjcxwdeeExtGeGxeGlVw9KnOVDFYerOjXpOThVpTcJxlF2P0jJzyaK/Nz/h558DP+hd8e/8Agv0j/wCXNH/Dzz4Gf9C749/8F+kf/LmtvqGM/wCgar93/B8/z7M8/wD4jx4Qf9F7kf8A4Fiv/mbzX3n6Ia1rWkeG9I1LX9f1Ky0bRNHsrnUdV1XUrmKzsNPsLSJprm7u7qdkigggiRpJJJGVVVSSa/lc/wCCg3/BaPxT4w1DWvhL+ybqd34V8JWs17pmtfFm3fytf8SFHa3kXwnlSdI0pgshTVGBvrtJFkthbIElbzL/AIKvf8FM9Y+P1xH8BPhM2r+Ffhhpgt7rxzPLcC21bxlrDLFcwaRfJZzPHBoek/JI9kJ5vt16266CC1WGvwqrz6nNGUoNOLi3GSur3WjTt2d01f11R+r4DFYXMMHhMxwdWOIwmOw9HF4WvGMoxrYbEU41aNWMakYzUalOcZxcoxdmnZdb2p6nqWtaheatrGoXuq6pqFxLd3+paldT31/fXUzF5rm7vLmSW4ubiVyWlmmkeSRiWZieao0UVmdoV/SH/wAGz/8Aydb8c/8AshkP/qb6FX83lf0h/wDBtACf2rfjmB1PwMiA+v8Awm+hYo6x/wAUf/SkNb/KX5M6n/goDcC4/bJ/aBdSCI/H1/aggMP+PK3tbPkNyW/cfMR8rNllAUgV9v8A/BFC6eL43/FS2CqUufhpEzE53KYfEuklduDjBDsGBBOdpBGCG/Pb9sy9Oo/tVfHy8LySGb4n+KwXmOZGMeqTxHPLYGUwozwuBgYwPuX/AIIw3ccX7Sni61YOZLr4XayYyANg8jWNDd95LAgkNhcKec5xX8IcIYi/jJgq6s/bcXY/VPT/AGjE4uN09br3/n36n+93jBgHH6FOaYFxfNhPCHhNuNndPAYHIazuvda5fYO99rN2drP8EP8Agsh/ykl/ai/7HDRv/UN8N1+Y9fpx/wAFkP8AlJL+1F/2OGjf+ob4br8x6/u8/wAEZbv1f5hRRRQI6vwl4sv/AArqK3NsxltJSqXtk7ssVxFnrwGEcyDJimCMUJIIZGZW+wNN1C21WwtNSs33295Ak8ZypZN6gtFJsZ1EsTZjlVWYLIrLk4yfhSvZvhF4pax1E+HLqQfY9Sd5LMMADHqRWMBRIWBC3MUXlCPa5acQBNpd9/7H4S8c1MizSnkOY13/AGNm
dVU6LqyfJl+PqyjGlVg5O1OhiJfusRFWipzhXbXJU5/5j+kd4S0OL+H6/FuTYSK4o4fw0q9dUYWqZxlFCMp4jDTUI81bF4OCeIwUnebhCrhYpurSUPpMDPpjqc4xgdevXjt3r4OuwRdXAJBImkBIBAOHPQEsR+JJ9zX6QXnw/wDHWnaXea1qHg3xRZaRZeXHeapd6Dqlvp9pJdLMbZLm8mtFt4XuBDMYFkkRpRFIUyEbH5wXn/H3c/8AXeX/ANDavpfHyqpz4XhCcZQ5M4k+WSkua+WJXs2rqL06q77nw30O8LOjh+PalajUpVZVuHIL2sJQbpxhnUvdU4p2cm7taNpJ6o+7f+CXv/J//wCyn/2WLwl/6XLXzF8e5Hl+OPxikkdpJJPif47eSR2LO7t4n1Nmd2YkszMSWYkkkkk5r6d/4Je/8n//ALKf/ZYvCX/pctfMHx4/5Ld8X/8Aspvjr/1JtTr+eOi9X+UT+1TyiiiikAUUUUAFFFFABTJYop4pIJkWSGVSkikKcqSD8pYHawIBDAZBp9FAH2r/AMEv/wBqyT9kL9pqz07xRfG3+FnxWFj4P8ZyzOyWmnma6B8PeKGAfyl/sW+uGW8k5aPT7q/XK4xX9s0E8F1BDdWs0Vza3MMdxbXMDrJDcQTIskM0MiErJFLGyujqSrKQVJBzX8PH7LH7JVn+1dp37SkcupXekal8Dv2aPiJ8c9CuLZEaK+1XwTdaCU0XUNyMfsmo2l7eKJFZGgmjjlDFQ8Uv6ff8EoP+Cnlpb6doH7L/AO0VrosZLBINJ+FnxA1i4CQTWocQWfhDX7udhskh3JDomoSMUkiX7FcMrxwSS/zz4z+H2IzSK4pyag62Mw1FU81wtKLlVxOFpL91i6cVd1K2Gh+7qxScp4dQlH+A4y/f/Bzj2hljfC2cV1SwmJryqZTiqslGlh8TVadXB1JysqdLETaqUZNqEa8qkXrWTj/SdXyh+238cvFX7OX7Nvj34s+CtJstX8SaF/Y9np0OpRzTafaSaxqtppjaleQwNHLPDYpcmbyRJGruEDyIm5h9WqyOqvG6yI6q6OjB0dGAZXRlJVlZSCrAkEEEV5x8YfhjoPxn+F/jn4WeJlP9jeOPD1/oV1KpYSWslzHm0votrKTLY3iQXcQyP3kK8iv5ewc6FLGYWpiqftcPTxFGWIptN89GNSLqRsmm7wUla6vt1P6qwU6FPF4WpiYe1w0MRRniKe/tKMakXVjZWvzQUla+t7H8sdl/wWR/bGtkdbjUPAV8zPuWSbwmqOi4A8sCG/jQqCMglN+SdzMNoW7/AMPmv2vv7/w9/wDCVk/+WdfBf7Q/7PnxC/Zs+Jmv/Dfx/pNzaXWmXcv9lasIJRpfiLSGdjY6xpN06CO4trqAK7KrF7eXzIJgssbCvC6/eqWR8PYmnTr0ctwFSlVhGpTnCjDknCSTjJcqtZr9U+p+8Ush4cxNOGIo5bl9SlVjGpTnTpQ5JRkk01y6Watol3ur3P1o/wCHzX7X39/4e/8AhKyf/LOvqz9hv/gpn+018d/2nPAfwt8Z23hPV/C3jKbUrfUoNN0STTrrSLfTtF1DU5dTtLqKedsRGzVpYboPFIGaNJIiyAfz2qrOwRFLMxCqqgliScAADkkkgADqTiv6Y/8Agjx+xnrnw8sNS/aV+JGkvputeKtGOj/DbSr6Ex31poF6+7VPEM8MsayWx1VIobXTzkNLYtcTYMc8bN43EeX8P5XlGMrPL8FTrVKM6GFUaUVUliKi5acqeqlek7VZyjrGEZd2n43EeXcO5VlOLrPLsFCvVpTo4RRpqNWWImuWEqbvzfum1Vm42tGL197X93DwSPQmkor45/bG/bX+En7G/gC58TeONSi1DxZqFtOngzwFYTI2u+JdSVcRKIgS1lpkMjK99qU6rDBCG2+ZM0cb/kWXZdjs2xlDL8uw1XF4zEzVOjQoxcpyk927aRhFXlOpJqFOCc5yjFNr8Lx+YYLK8HXx+YYmlhMHhoOpWr1pKMIRX4ylJ2jCEU5zk1GEZSaTwP29/wBs7wj+xr8GdU8U3l1aXvxE8Q293pPw28KNKputV12SIouozwKTMmj6OXW71C5KeXhEtlYyzIp/ic1DVfEXi/xH4g8feONSuNd8aeMdWvNf1vVr6Qy3c17qUr3Fw8hIAR5Gkx5QLLbwJDbosQR0r0n45fHn4m/tU/FDVfi/8XNRe+vLt5IvDegK8g0fw1o/m77TTNLs3JSGygQKWJXztQnBurpnUqJfNq/0x+j34NU+CssjnebU4Vc3xqp1+dq6Ukm6apc0VJYbDczdCTs8RWlPFOMYQw1v8sPpPeO0+MswnwtkVadPKsG6lHE8srNRbjz06jjJxeKxXLbFQWmFw8YYNylUqYtBRRRX9RH8ZBRRTkXeyrkDcwGScAZOMk84FKUlGMpSaUYpyk3skldt+SSuVCEqk4U4RcpzlGEIreUpNRjFebbSXmX9PurvTJ4tRsLy6sL6Ft1rc2c01rcxEhkaWO4hKOmVLR/I4JywJwCDHPPPczS3FzNLcXE7tJNcTyPNPNI3LSSyyM0kkjHlndmZjySaa7biABhVAVRnOAPpxkn5mIABYk45plfzfxJnVTPM0rYqTf1em5UMFBpLkw0ZtxbsvjqNupNu75pct+WMUv7Z4K4YocK5FhsBCK+uVowxOZ1k2/bY6dOKqJNt/uqKSo0lFRThDncVOpNsooorwD60K/rf/wCDYg40X9rg+l/8KD+Vv45r+SCv63f+DYshdC/a7Y8Bb34UsSeAALbxzkk9APc8U1uvVfmVHf7v/Sonxx+0rqLat+0H8atQYuxuPih44OXVEfEfiPUYlBWPKDAQAYPIwTyTX6qf8ER74p8SvjbpzTbUn8CaHeJCd5Ektr4gEJZcHywRHeNksC5AAU4LV+Q/xmvf7R+LvxR1DCj7d8QvGN4Ajb0H2nxDqE2Ebncg34U9xg1+nP8AwRfvxF+0d4203zQjX/wr1WbYfL/eLY6/4fYgbj5mVM4b5BjaG3nAAP8AA3h1ilDxVyWum2q2fYuCadrrFrFUrvVaNVbta32s9j/oC+kblntPolcY4BRUXguAsgqqNrqCyutkuJaXKmvdjhmk0rK11Zar8g/+DgK8iuv+CjvjWONXDWPw1+GVpKXCgNINHuLjcmGbKbLhBlgp3BhtwAT+KFfs7/wXz/5SR/En/sRvhl/6jUVfjFX98Ntu7P8An8krNpBRRRQI9p+AX7QXxY/Zm+JWg/Fb4OeLtU8I+LdBuEkS5sLiSO21Gz3g3GlataA/Z9R0y8QGO4s7qOSJwQwUOqsP7drT9vrwF/wUN/4Jc/HXxdpb2vh34l+FvA62vxO8EK7Sy6PrNhcWV8t/p6yGOWfQdbFuz2N38wt5GmtZC8tswf8Aggr6Y/Ze+P3jf4F+M9Zg8M6tc2vh74oeGdS+Gvj3R1YNZ654a8SKtvJDcQvFMoms7sW17aXKItxBJCRDLH5jGvoOE8XDA8UcN4yrONOlhc+yfEVakmlGnToZhh6k5ybTSUIxcm2mrJ3R85x
jhqmN4R4pwVGEqlbF8O53hqVOCvKdWvluJpQjGN1eUpSUUrq7a1Ptiv6uPDmnx6j/AMEUpoZQhWH4DeJ74eYiyKJNP1zV7yIhGGN4lhQxuCGifbKpJQKf5R8g8joeR9K/qUk8V2nhD/giRZXt3OsP9q/Ce48MWys6I0914m8Y32iRQorqxkY/bGkKIpYJG77owhlT+/fHanVrU/DWnQTlWqeJWQRpKOrc2sRy2+f4H+dvgDVo0Z+KVXENRoQ8LOI5VW3ZcnPheZXut1otVq90fy10UUV+9n8+BRRRQAUq9R9R/OkpRwR9RQB/T5+1rpqaT/wRw+HdlGyuq+EPg3OWSMRBnvNc0O8kYoGb5jJO299xMjZc4LED+YRRllHqwH5mv6YP2lPFNr4w/wCCMHw91i1mgnSPw/8AC/R5ZLcts+0aB4t0zRLhXDBSkyy2DCdMAJNvVflAr+avTEWTUrBHUMj3lsrKwDKytMgIYHggg4IPBFfhvgRGtSyDjKOJTVen4i8URrpqz9rBYBVE1ZJPmT0srdlsfvPj7KjWz7geeHcXQqeGPCDoyjrF05Rx3I15crR/ev8AssWA0z9mz4D6eEKfZPhN4DhKsjRkMvhzTyxMbEmPcxLBM4QHaMAAV73XmXwVtPsHwd+FVjv837H8OfBVr5mzZ5n2fw5psW/ZubZu2btu5tucbjjNem1/nlnFT22bZpWvf2uY46pfvz4mrK+qT1vfVL0P9KshpKhkeTUY6Ro5Vl1JLyp4OjBfggooorzj1QooooAKKKKAJU6H6/0FFCdD9f6CigCNup+p/nSUrdT9T/OkoAKKKKACiiigAooooAKKKKAP5Cf+DiD4sajrXx8+EfwciuCNE8D/AA7fxfPbJK5WXXfGWsX1o8k8QHlB7bS/D1iLdiTKq3lxkKkgL/zu1+3X/Bfi3eD9ui0kcqVu/gz4GuIwpJKoupeJrUh8gAN5ls5wpYbCpzklR+ItfzFxdVnW4lzmVSXM44ydJf3YUVGlCK/wwgl5u76n++v0aMBhMu8B/DChg6cadKrwxhcfVUUlz4vM6tfMMbVk+sqmLxNaTb1StG9ooKKOnWivnD9zP3Z/4N+9US0/a6+IemPLs/tj4H64qR+YV897DxX4UudhiAO/YjNJvYqE27ckvg/2D1/Fn/wQn1aPTf279OtpGAOs/Cn4h6bGrMVDyLFpWogBQD5jIlhI6qSoUr5uSYwrf2mV+/8AhpNy4clG6fs8xxUF5J08PUs9FrebfXRrXov8YPp1YVUPHWrWUWvrvCPDuJbe0nTeOwV46vRRwkYvZc0Xpu3+DH/BxxFpVx/wTf1y21BWF23xZ+Hlzo0gIxHqFi+r3coKn7wk02LUIeCCokMnITa3+etX99n/AAcjai0v7I3w08IqzD/hIviTrWqKuMxSTeGvA2tpD5vJI8uXXIyvyNy29fmjBH8CfHYgjsR0I7Eexr6Wni/bZhmmFc7vBVcLFR09yOIwdGsvN80nN3fotmfzLnvD08s4a4Hzz2EoUeJcuzusq7UuWviMq4hzDLq0YvWLdGjTwqla0lzrmSXK5fW/wC0lLbw7qerkL52pagLdWxlhbWMYwMlQRmeeUkKxDBULDKgD3mvPPhTBFb/D/wAOrEuBLb3FyzZzve4vJ5N2fTaVUYyNqjFeh1/PHE2Jli8/zatJ81sbWoQd7/u8NL6vTS3suSlHQ/3O+jlw/R4a8DvDPLqNJUniOFcvzrEK0VKWL4hg89xM6jj8UvaZg4Ju8lThCDfum54Zums/EGjXKyPF5Wo2hZ0ALrGZkEm0FW+bYWx8pIOCBnFferKyMysrKykqysCrKQcEMp5BB4IPIPFfnvZzSW93bTxY8yGeKSPIJG9JFZcgEEgkDIzyOK/QZVukSL7bFcQ3TQxSTx3aPHcrJJGrt5ySAOJCWy27kkkmv2PwNry/4yTDOS5F/ZVenDm15pfX6daSj2tGgnLXonbS/wDh1+3ryCnHFfRo4pp0IqrXw/ihkGKxUaa55U8LU4IzHAUKlVK7jCWLzKpRpyk0nUxEopNzbWiiiv6AP+d8KKKKACkZlRWd2CIqlndgxVFUEszBFZyFAJIVWYgcAnilrk/Hl6NO8EeML8hT9j8Ma7cgMSFYw6ZdSKpxz8zKFwOTnA5qZyUIyk3ZRjKTb2SSbbf3HoZRgnmWbZZlyck8fmOCwScVeSeKxNKheK6yXtNF3PyC8VaxN4g8S69rk+BLq2rX9+yh2kVPtVzLMsaO/wA7JEriNC3OxVB6VlWNlcale2mn2cZlu765gtLaIdZJ7iRYokHu8jqo9zVZjkk+pJ/M19D/ALJ3g9vHf7RXwl8NKgkF14usLt0Mckqumk+Zqzh44gWaPbZEydFEYYuQgavzptttvVttt+b1Z/unh6FLDUKGGoQjTo4elToUYLSNOlShGnTgr3ajGEUlq9EfPBBBIPUEg/UcUlS3AxPOPSaQfk7VFSNQr+kT/g2e/wCTr/jl/wBkNh/9TjQq/m7r+jb/AINpp5I/2vPjJACvlXPwFvpJAVyway8aeFpI9rZAAbz3VgQex7Yo6x/xR/8ASkVH4l8/yZzf7SMzz/tCfHCSRi7n4tfENSzdSE8Wasi5Pc7QoJPJIJJJJJ+8f+COl08P7Wd1AJVSO7+GHi1HRtmZTHeaHKiqWG7cCpbCEEhTnIBr86Pi9e/2j8V/iXqGHH27x74uvMSNvcfadev5sO38TDf8zfxHJ7196/8ABI68a2/bD0GFUVhf+C/GVoxJIKKthHeblA6sWtFTB42sx6gV/nzwTXi/E7IK6fu1eLMPZ66rEY9xjtrr7Rfrpc/6IvGzBSh9FvjrBOKcsL4T14tNRsngsjpVG0m2k4ujeNm7NLld7M/Fb/gsh/ykl/ai/wCxw0b/ANQ3w3X5j1+nH/BZD/lJL+1F/wBjho3/AKhvhuvzHr/QY/53Jbv1f5hRRWjpCq+q6ajAMrX9orKwBVlaeMEEHgggkEHgigRQZWRmR1KupKsrAhlYHBDA8gg8EHkGuo8C6s+g+NPCmtR436V4i0bUELKJFDWeo29wpaM8OAYwShxuHy5Ga+4/+CqXwr8PfB/9ub40+E/CtlZaZoN1c+FvFdjpmnW6WtlpzeMfB2g+Jbu0treNI4oYo77U7kpHEojjVgiDaor8+7OTyru2kB2mOeJwxxwVdWB544I78etA9n6M/wBTv4s6Z4Z8b/swePlvdF0u70TxP8G9buprGWxtZbWW31LwhcSr+6aExEpHN+7bZlCAVxiv8sO7YvdXDEAFp5SQBgcyN0HYe1f6gPgnWz4l/YG8H+IjIZTrv7Kvh3V2kZkdnbUPhdZXbMzx/u2YmUlinyk5xxX+X5cf8fE//XaX/wBDatak5yjCMpScU5TUXJuKlNQU5JN2TkoQUmtWoxTb5VbKNKlCUpwp04TlGMJTjCMZShBzcISkkm4wc5uMW7Rc5tJOTv8AoD/wSqjST/goJ+y2JEVwvxR0WRQwBAeMTMjgHjcjAMp6qwDDBAI+T/jx/wAlu+L/AP2U3x1/6k2p19Z/8E
p/+Ugv7Lv/AGU7R/8A0Gevkz48f8lu+L//AGU3x1/6k2p1n0Xq/wAolrd+v6I8oooopDCiiigAooooAKKKKAP2w/4Iu61aaJrn7dN9cPHmz/YK+Od4IZAr+altL4bdsxskgeMHasgMcg+ZVKMXVG/EbVrKHVAySkxPHIWtp4gA9sRJuHljghFySqKyAMSQRk5/Tb/gmp4uHhXWP2v4RKUfxF+wp+0tpKRKxR7j7F4Rj8StEjr8yMo0Hzdyn/lnhgyllP5pk5JPqSfzoeqs9tf0/wAh7JNb3f6f5n6rfsT/APBW74n/ALNltpPw1+PlrqvxS+FNu4tdL8TQzG88ZeFbMOqJHHPcOh1zTLWJGK2d4w1CGMoIZ3Ty4a/p8+CP7Rfwa/aJ8M2/iv4RePND8XafLHE9zbWV0iatpkkqCT7NqukzGO/sLhAcPFcwRsrAggEED+Vn/gkP8OPA3xb/AG/fgd8O/iT4X0jxl4J8UTeMLDXvDeuWqXmmalanwT4glVJ4X6NFNHFPBKhSWCeOOaJ0kRWH9HH7Rf8Awbz6L4d8UX3xi/4J5fGXxN+zr8RYpDf2/gu/1a+ufBd7PGXl+w299GJb23srghIRYaxDq1mCzbpoYcqPxzjPwayXiSVbMMplHJc2qOU6kqVPmwOLqPVyxGGjy+ynJ35q2HcW23OdGrO7f6/wb4v51w9CjgM1jLOcqpqNOCqT5cfhaUUoqNDEO6rQgvho4hOySjCtTjZL3f41fAD4SftC+GD4T+LPg/T/ABNp8Zd7C7cG21jSZnVlM+l6pBturVvmy0W97aUgefBKvFflJ4n/AOCG/wAFNR1Wa78MfFjxr4f0yWYONLvdJ03V3t4izNIkV8LmxaVhlViEluoCr85diTX2P+yx4p/bv8M/EXxL8Af23Pgw+gaz4c8M3Gv+F/jRoEZk8F+ObbT9WstJmtFvbZX0ubU5Uvor9WsrkOYQwuLC0bAP3xX8x5rR4n4EzPE5HXxcsNXoqFR06FWGJws6daKqU61KNWElD2kXd3p0qqd1Uimf1zwjxzUzTKKOacPZhi6WBxLqRVKrT5HTq05clWEqNVVKanGS5XOm5QlvGclqfmd8Av8AglL+zB8ENV0/xLqFjqnxQ8T6bNHc2l74x8gaRbXURDRzxaBbBraRkdVkVLye8iEi5MZG0L+lUstrY2zSSvb2VnaQ8s7RW1ra28KYAySkMEMUagAfKiIoAwBXD/Fbx0Phj8MvH3xF/se/8QnwR4R17xOug6XG8uo6zJo2m3F9HplkiJIzXN7JCtvEFRm3yDCseD/PV4/8Ff8ABZH/AIKK6Dqd7pvwq8QfAb4Bf2XqmsXCTzSeBtMvNBsrWW9kOr6zqU9trfiKR7aJU+xafAsMk5MTWzIXKe1wnwXxJ4k4qriHjYxweDqwo4rH4ubqexdRc/ssNhKdnObglLlXsaN3FTqxbR8t4g+J2F4aVKpnFXG5pmeKo1KmCwcW7ShCSg5VK07UcLQ9o1F8kZTdnyUZWdvpv9uP/gsf8MfgYurfD34CNpvxS+KMQmsrvW4ZluPBHhK6AKFrm8gkH9t6hA3XT7B/JRxsurqBhtP8unjL4neP/wBoDx5rfxH+LfizUfGXiu8kE0k2pTF44oZJXZbXT7NSLfT9KsmCJFYW0aW4MsZkDM0hl8PlVklkRySyO6sScksrEEknk8jrXSeEg51Ntp+UW0pflgCuUwMKQGO7YcNleN2NyqR/YfhjwHw3wjmmUYfC4GGNrYnG4TD43G4xKeJxSrVoU7ScUo0qEZyjUWFpKNKfJGNZVXeT/iDxa494m4ryDP8AFVcwnl0MHlePxWAwuBvGhhJYfD1K3NFSvKriJwhKn9aqN1abm50PY2il6RRRRX9pJJJJJJJWSWiSWyS6JH+c7bk3KTbk22222227ttvVtvVt6thRRRQIKtWkcskjtFE0vkwySyFRxEirjzWbGECuyAEkZdlVTuZQatWLYMTIEGSYmJHByq4ZiQWUEKoLfdfGNwUEBh5We1HSyXNqkXaUMtxrT7P6vUs/lv8AI+g4Soxr8UcO0Z6xqZ3lakujX12i3F76SSs9Ho9mFSKq7Gdsn5ljABxguGwxODkDb93jOeoxzHUo/wBSf+u8P8nr+fMgw1DF5rhqGIpqrRnHEuUG5JNww1acbuLi9JxjLR9NbrQ/sbivGYnL8hx2LwdV0MRSlglTqxjCTiquPwtGpZTjKL5qdScdYuyldWaTUVFFFeOfRBX9bP8AwbJceGP2xj6P8Lf/AEn8cV/JNX9YP/BtnfDTPhz+3DqRdoxYaZ8Pr0yINzoLXS/HsxZF7soTKjuQKzqzVOlUqPanTnN9NIRcnr00R04OjPEYrD0KavOtXoUYKzd51K0IRVlq9ZLRanwL4rn+0+KPEVyCjfaNc1WfdGcxnzb6d8oQWyh3ZU5ORjk9a/TL/gj3Ldx/tcbbaCSaOb4aeMIrxkYqttbCfRpRPMACHjNzFbQBSVHmzxtnKhW/LrUJFmvryZchZbqeRdwAbbJIzDIBIBweQCRnoTX6zf8ABGBQ/wC1Z4nBzgfBfxc3HqniHwe49eMqM+2cYPNf57eGyc/EHhT3nFvPcJNtWbfLUc3HXpJJxb3Sbas7H/RT9JRxwv0dPFNciqKHAmYUFGTaS56FOhGemt6bkqkVs5RSejaPyC/4L5/8pI/iT/2I3wy/9RqKvxir9nf+C+f/ACkj+JP/AGI3wy/9RqKvxir/AENP+c+fxP5fkgrqfA+i2XiPxl4W0DUZ5baw1rxBpGl3lxAN00Frf39vazzRKVbc8cUrOq7TlgBg5rlq9q/Zv0mPXf2gPgrosrbY9V+KXgTT3Y5AC3fibTYSSV+YDD8lfmAyRzigk6D9rX4IWP7N37R/xe+B2meJB4usPhv4vvvDtr4gNm9hJfwwJFMv2izdnNvd2wm+yXkaySRC6gmMMkkRR2+frOaW3u7aeFzHLDPFLE6lwySRurIymPDgqwBBQhgRkHNfdP8AwVAGP+CgX7WI9PjJ4rH5XQr4Ri5lj7/OvGM9x22tn/vlvoacdJRa3uvzE0pRaaupKzT1VmtU76PQ/WS0mFxa204bcJreGUNgruEkavuwQCM5zggY6YFfqZ+05+0lY3v7EH7JH7M/h29inn0/w6/jrx4beVHNtONX8Q23h3RrnaWKuYrufVJoiVZf9CdlwVJ/KnRv+QRpf/YPs/8A0njrUZ3cguzOQqoCxLEKihUUEk4VVAVR0AAA4r/WypkeEz+nwhmWPvOeR1cPnWGp2vCeOnlVbCU5zcm3ah9cnXp7v21OlJv3Xf8Axzw+f43h+XGGW5e1CHEGGrZHi6rdqkMBDNsPjatOFko3xH1KnQq7L2NSrFL3lZtFFFfVnygUUUUAFFFFAH6X+Cv2irDXP+CcHxn/AGddavoo9d8I+L/BvizwnBL5UT3ug6p4x0ZdXtbfLKbh9Ov1S7ZUV5fKvpncLHDvb85tBi+0a5o8G7b52qWEW7GdvmXUSbsZGcbs4yM4xkVk5IyATg9R68559eefrXYfD6wbU/HXg
6wVQ5uvE2iQsrMUBRtRtxISykMAE3ElfnwPlBbAr57LskwXD8OIq+DbhDOM0xmf4iDUVGli8RgcJRxTh3jVqYN4qXNa1StOPwpH0eZZ7juIp8N4fG2nPJcpwfD2GqXk5VcHh8fjK+FVTqpUaeNWFjy3vSoU38TaP9A7whaGw8J+GLFiWaz8PaLalihjJNvpttESYyWKElMlCSVPGTjNdFVe0Upa2yEYK28KkdcFY1BGe/SrFf5J1JudSpOWspzlOT7uUm3tpu+h/shQpqlRo0ltTpU6a9IQUV1fbu/UKKKKg1CiiigAooooAlTofr/QUUJ0P1/oKKAI26n6n+dJSt1P1P8AOkoAKKKKACiiigAooooAKKKKAP4yf+DhG1EP7ZPgefy41e7+BnhmQyKqh5ETxX40hTzGA3HY0UgUMTgdODX4WaXZy6hqen2ECl5ry9tbaJAGYvJPMkaKAisxLMwACqzHPAJr+gH/AIOJ7IRftOfBu+EKKLv4JW0BnATzJXs/Gvi0sjkfvNsSXMRQNhP3jbMtvr8NPhDpw1f4rfDXSiEI1Lx34TsCJGdIyLvXLGAh2jBkVSJCGZAXAyV5Ar+ZuLKd+Ks2ppW58ckt1/EhSk383J6pH+9X0b8byfR58OMa3f6rwnUk2mpaYHE4+lbe3uqgotXVmrOzTPdP27PhNafA/wDar+LXw1sLM2NjoeqaRcWsJbcGj1zw5o+vGYEMyr58mpPK8Su628jPb7iYjXyNX7P/APBd/wAJjw7+3VeaugYr41+FngDxE5w2wTWlrfeFWjUkYDBPDkcjKDn94HIBkyfxgrzc8w0cHnOaYWCtCjj8VCmrWtT9tN09FZL3HHbTsfc+EOf1eKfC3w94gxFR1cVmnB+QV8ZVk3KVTGxy3D0cbOUpNuUpYunWcm3du7ep+o//AARp1I6d/wAFCPguvneTHqVh8QdLlO7aJBP4B8RTwxHg7vMurW3RU43Oy84BVv7ma/ge/wCCWmpNpn7f37MkiyPGLvx+dLco7puXVdD1fTwrbAS6F7hd0bDZIBtYqCWX++Jup+p/nX6/4WzTyXH076wzScmuq58JhEvl7jt53P8AMj9oDhfZ+LfDGKSSWK8P8BC6+1LD8QcRJt6atRqwW+yjouv80f8AwcVawF0P9l7w2z5FzqXxN1ox7kI8u3tPCenufLYFsv8AbANw/dkKQ2TgV/Dl4z0J/DfifWNIZGWO2vJDallA32cx860cdVOYHj3YJAcMucqa/s7/AODiTVhN8Uv2bNCDqTp3w/8AGuqmMEblGr+IdKtQ5GwEBv7FKqS7AlGwqEMX/lU+NvgmTWdPh8S6bA0t/pURhv44lUvPpwLOsuBh3e0djwu4mGRjghBjgpZ3TwPiBnOGr1FHD5i8HheeV1GGKoYTDxw1220otyq0W9uepGT5UpHr594Q4ri/6F/hlxRk2DlXz7gepxTntejSjzV8Vw5m3EmbLOFTjGnz1auFWFy3M1G/LDCYbHcinUlHm7z4Xgf8ID4WRQTt01I8Z3HKTSoRkAZ5GM7V+g6V9VfFT4DeNPg94U+FnibxnHFYzfFbRNU8R6RoxDfbrDSLG5sobO41A58tH1SG9ivLeBSXit2TztkjGNfB/wBijw3rfxZ8S/Cv4d+HrSHU9d1H4haP4dtbK4liiglW81q2u8XTzBYktktp5jOX3qIInyG+7X7Wf8FkYRb+LvgVAqLGsHhjxjCsaABI1i1PQkVFC8BUChQBwAMCvyDiipWwPFNTL/Z2hicwzaUpyT1hRarU4xdrXlHEUqjd7xjypq07n9teEPHscw4Q8BMmwE8LOGc+HuDq5o4TjUrUv7AyCOWKjGFualCea4DGQlVaTdTAVKMHJRrH48+F5LWLxL4dlvV32UWu6TLepgtvs49Qt3ul2ggsGgWQFQctnaOTX9Gn7R3wc+HPxZ8H6j8fvhV4i0aH+ztLnu/EdnEY4LTV49Otxl44/wB09lrsSCOGW1njU3a+WECzqBP/ADj+Hokm13R4ZBlJdTso3AJBKvcRqwBGCOCeQc1+hUGr6rZ2d1ptrqd/b6dekG8sYLueK0uin3TcW6OsMxXA2mRGx2xX6d4WcOY7NMzxGc5XnmIybHZHXy6FSnGhTxeCzPK8wqYiWZYDG4acqbcqscDQeFxEKilhasXUUJycJU/8vv2z3ivwzwhlPhT4f8ZcB4XjjIOO+HvEzGUJ/wBo18ozbhrinI5cJUuF8+yrMKUK6jDD1s0x0MywjoJ4zCzhRlWlh/b4TFZtFFFf1Yf8vYUUUUAFcB8Vv+SZ+P8A/sUPEH/psua7+uA+K3/JM/H/AP2KHiD/ANNlzWOI/wB3r/8AXmr/AOkSPq+BP+S34O/7KnIP/VthD8dq/QP/AIJeaPb61+2n8Mre5ClLXw98XdVQMpYedpHwd8ealb4AdMMJ7aMqxJCMAxR8bT+flfaf/BPTxtbeAP2uPhXrt5PHbW1x/wAJh4bmmlZ1RR4v8C+JfCqjKI5y0msIoztBJAZgpNfnp/t8t16rbf5Hxlcf8fE//XaX/wBDaoamuP8Aj4n/AOu0v/obVDQIK/oq/wCDbKRIf2uvjFLIwSOP9nzX3dycBVTxb4RZmJ7AAEk9gK/nVr+hX/g3PlFv+058f7lnKC2/Zi8Z3O8Ddt+z6/4Ym3bcNuA2ZKhSTjAGTWdWfs6U578kXO21+XX9DowtN1sTh6KverWp01ZXd5yUVZJNt66Kz9DzLxZOLnxR4juQyMLjXNVmDRnKMJb6d8ocnKndlTk5GOT1r79/4JSSrH+2n8PFIJM2ieOYlxjAYeENXly3I42xMOMnJHGMkfnZeyNNeXUrABpbiaVgMhcu7MQMknGScc8DqeCa+7v+CZN8LH9tP4PMztHHdXXiKwcIMl/tvhXWreOMkDKq8zxhyu3C7gfkLA/5zcE1OTjbhSrLZcT5LJ+n9p4bm19Hv8z/AKSPHDCur4EeKmDp6v8A4hhxdCCWrvT4bxziklq2+TRdXpdXPyX/AOCyH/KSX9qL/scNG/8AUN8N1+Y9fpz/AMFkf+Uk37Uf/Y46P/6hvhuvzGr/AEdP+bCW79X+YVpaN/yF9L/7CFn/AOlEdZtaWjf8hfS/+whZ/wDpRHQI/Uz/AILW/wDKQr4r/wDYr/CT/wBVX4Qr8ol+8v1H86/V3/gtb/ykK+K//Yr/AAk/9VX4Qr8ol6j6j+dA5bv1f5n+lt+y9qz63/wS1+D2oOZC0n7IegwZl27yLL4dJZKSE+ULttxsUcKm1e1f5p1x/wAfE/8A12l/9Dav9Ij9iyWSb/gkz8Hnkcuw/ZakjBPUJF4a1CKNR7JGioo9FFf5u9x/x8T/APXaX/0NquW0PT9EI/QT/glP/wApBf2Xf+ynaP8A+gz18mfHj/kt3xf/AOym+Ov/AFJtTr6z/wCCU/8AykF/Zd/7Kdo//oM9fJnx4/5Ld8X/APspvjr/ANSbU6novV/lES3l6/ojyiiiikMKKKKACiiigAooooA+lf2XPGJ8H+M/HrtcG3h8Q/AT9oHwpM2cLKNf+Dvj
OxhgkAyWSW4eFdqgkvsPIBB+az1P1P8AOtnw/cyW2pxNHNLAJoL20meEqJGtb2zntLqJd3ykzW88sWG4O/FUbHT77Vb6307TbS4vr68nS3tbS0hknuLieVtscUMMSs8juxwqqpJ9KEm2kk227JLVtvZJdWxSkoxcpSUYxTlKUmlGKSu5NvRJJXbeiS1P1W/4Ihf8pMf2b/8AsJeL/wD1BvEdf6LWq63o+hWst7rOqWGl2kKNJLcX91DaxIijLMWmdBgAEn0AzX8CP/BPz9lzx/8AATx94U/aL8QatP4a8caFbahP4U8N28TC80x9Y0u60xr3W5WZDDdRWl7MyaYqNtmaMXr7Y5bWX9XfE/xE8deM5vP8U+K9d1x/mwuoajczwqXXa7LA0nko0i8SMsYLj7xNfe5PwNj8bRhXxtVYCnOV1SlTc8S6f8zg3GNJyt7qm3Kz5nBaJ/y9x79KThLhfHYjKuHsDV4sxuGvTrYuhiqeFyeFdO0qVPGezxFTFuntOeHoug5e7CvJqXL+03x7+Ofws8cjTvBvhXxZp+s6/p2pTXNzb2m94RClq6SeXdlRbyMryxAxiTe5b92r+XL5fzdX5hWV9d6ddwX9jcS215bSrNBcRMVkjkQ5DA9/RgcqykqwIJFfT/h/9om0jsUi8R6PeS30SxobjTWgMdxhTvlkjnkhMLMcARp5g4LF8nA/nLx3+j1xVmWd0eJuC6NTiClisNhsLmOXSrYWjj8LiMNCNGGIoKrLD0sRhK9NR5oRm6+HqRk5KpSmpU/3v6MH06uAqmS47hbxar4PgbH4PHYnF5JmtHD5jislx+X4n99LCYqrShjK+EzLC1lUUZ1KdPC4yhUpKm6eIpzhV+tdB8QaJ4V1nS9f8RXcVjoulajY3uo3M0ckqR2lvcxyzkxRJJLKRGjHyo43d8EKrHiv0c8OfFr4Q/FDS59N0Dxj4c1q11axubSfTlvbdJpbS6hMFxDJaTMrDdHNskiZdyGQJIiudtfzr/EX4n6l45ljto1ew0WDa8dkGG+abau6W5ZSQ4VgfKTcVH3z820J5pbXd1ZyrNaXM9rMhBSW3lkhkUhgwKvGyspDKrAgghlB6gV+leC/gZmPCXCuIfEWNeFz3OMVDG1cDh/ZYihl1GFH2VDD1qkZWrYp3lUxDpVPZU7xowlNxlUl+Q+Pn06Mqz/xCo0eBMho59wTkmB+oxzXHSxeV5hnGNqVpVsVjMDCcKn1bL6a9nQwsMXglia0oVq9RU4VKVOH8v3/AAU0/ZTn/Y1/bP8AjN8FYgG8NWXiCTxL4EuBjbceB/FRbWPDoyoCtJZWs7aZcMmV+02MwGPuj4t8KOo1VEJYGSGdVIJCkiJpMOOSVKo2MAnfszhckfY//BSL4p+Ifir+1t8R73xDq9/rMnhIaR4G0+51G6uby4Sx0DTYC8Jnuj5zAajeahJg5UNKwR3Ta7fIPhG0El7LdsxxaxEIgx88k6vEC3fYieaeCD5nl5ypYV7+WYSrQ4owWDw8o1quFzmjThNrlhN4XFJyqWbbUOWnKeru4ra+h+nY7OaGa+HlfPMbh6mAoZvwm8dVwjqKtWwyzTK1UjhnVjCEalWEsRGjzRjGLnqvd1PQqKKK/pc/i4K92svgzfXf7NfiD4/p5z2Gi/GXwx8K5kUxmGKbXvB/ijxOkzKrmbLDQhEZHjWJCYkR2eVlHhNf0K/DT9n9dR/4N7/jb8Ro4C+pt+0xpfxGhZUukdtN8O3nhb4eSZ+V1uI4ItR1qYmJRbpy8rI9vK48zM8csDDBSbt9azLBYK9r/wC81eV+eyevTd6XPf4fyr+1qmaQtf6jkWaZml1vg6KnG1+vNJabvZatH89VaOntsN4S5UfY5VKB0US73iRVYPNFvCMyzBVSd98SsIgFM0WdWhYNtW+G7busyuN+zd/pFuduPtNvv6btmy5+7u8j5fOhnPlfJM4T/wChZjvL/mGq9i+EW48VcNNbrPcqtdJ/8x1Do7r07PValepR/qT/ANd4f5PUVSj/AFLe00R/JZD/AEr8C4ZaWdYVvRKnjG35LB4hs/rzjVOXDWYRSu5VMuSXdvNMEktdN+5FRRRXgn1QV/U3/wAG9F0bL4E/8FC7teGt/BXhiVPm2Henhr4jMu1xyr7gCjDJDYIBIr+WSv6cf+CD17/Z37Lv/BSm9+XFt8OvD8jF1ZlCjwz8Rg5KoQxwpJAXnIHB6Hzs4qexynNK3/PrLsbU/wDAMNVl59j6LhHDvF8VcNYVK7xPEGS4dLu62Z4Wmlprq5W0PjqU5lkPq7/+hGv1k/4IzXsdr+1nrEDgltQ+D/jS0hOVGJI9S8N3xyCQW/dWcgwuSPvY2qxH5Mv95v8AeP8AM1+jX/BKjxNF4c/bN8AxTTCJPEei+MPDaqf+W8194dvp7eHO5QM3FpE+TnlANpJFf59+HeIjhuO+Eq03aP8Ab+W05N9Pb4mFC7v2dS7P+iL6R+AqZl4CeLeEpQc6n+oXEGIjCKd28DgKmOaSWuiwz020s9Ln5sf8F8/+UkfxJ/7Eb4Zf+o1FX4xV+zv/AAXz/wCUkfxJ/wCxG+GX/qNRV+MVf6Kn/N7P4n8vyQV9A/sn/wDJzn7P3/ZY/hx/6luk18/V9A/sn/8AJzn7P3/ZY/hx/wCpbpNBJ7l/wVB/5SB/tZf9ll8V/wDpWK+D4iBIhOMB1znpjI65Vv8A0E/SvvD/AIKg/wDKQP8Aay/7LL4r/wDSsV8HJ95f94fzFCdmn2dxR1UV5I/VzRv+QRpf/YPs/wD0njrSr93fgT+yX+z5r/wQ+Duuav8ADrT7zVda+FvgHVNTvZL/AFjzrq/1DwrpV1eXTn+0Nvmz3EskzYUIHcgKFAWvzq/a4/ZtHwX8fQp4Vink8FeIrMX+kS3kwBsrpZHS80tridYUlMGI5YsO7mKUFgoAFf2J4V/Te8FuNcznwnjsVmPAuLyrLV/wpca1clyrI8bUwHssLiaGHzOnm+IpU67k1UoUsXHDSxFJTcP3kHTP4r8SPoF+PHDGCo8QZDlmG8SlnGZSUMk8PsJn+e8R4aljIVcXRxFXJ1ktHEVsPCEZQxVXBvExw0+WVT91L2q+N6K2P7C1H+5b/wDgbaf/AB+j+wtR/uW//gbaf/H6/ev+I+eB/wD0d7w1/wDE14e/+eHn+fZn4/8A8SmfSi/6R28bP/FY8ZeX/Un80Y9FbH9haj/ct/8AwNtP/j9H9haj/ct//A20/wDj9H/EfPA//o73hr/4mvD3/wA8PP8APsw/4lM+lF/0jt42f+Kx4y8v+pP5ox6K2P7C1H+5b/8Agbaf/H6gl0q9hYK8SkkbsxyxSrjJHLRO6g8dCQcYOMEULx78EHt4veG3/ia8Pf8Azw8yofRI+lNUfLT+jn43Tla/LDwv4zk7aXdlkzdldXfS5nV7P+zpb/bPjz8H7Tds+1fEfwfb79u7Z52u2Ue/bkbtu7dtyM4xkda9z/Y6/Zitfjf4q1HU/GsOpQeBPDEMUt6lrutv7d1CdisGkx34dZIFVQ1xdSW6vKIlEavE8iuP198H/sv
fAPwR4n0Hxb4f+HGlWut+HNTsdX0m7kudUuPs9/YTx3FvO0E181vPtljBaKaN4ZFJV0IOK/nrxa+nR4K8C4/F8L4GvmnHOLxGUTnPM+DJZPmWRYStjaFRYfD1c0rZrhqVetGEoVsQsBDGRw8Jxpzk8Qp0Yftfhh9AHx44qpUc74iy3CeGrwGcUqVTh/xAw2e5HxPWo4SpRq1sR/YqyiticHSqJyhhf7ReCnieX21OP1adKvU/oHQYRR6Ko/IAU6uY8GeIofFfhnR9ehwDf2cbzIucR3KDZcxjIBwkyuBwPlwa6ev5GwmKoY7C4bG4WoquGxdCjicPVjtUo16catKa8pQkn5XP7PxOFrYLE4jB4mDp4jCVquGr03vCtQnKlUi/8M4tedgoooroMAooooAKKKKAJU6H6/0FFCdD9f6CigCNup+p/nSUrdT9T/OkoAKKKKACiiigAooooAKKKKAP5Iv+DjOzVPjP+z3qHksr3Pw28Q2huDv2SLZ+JTKsS5Pl7oTes7bRvxMu8kbMfhb+zfBFc/tCfA23mXfDP8W/h3DKmWXfHJ4s0lHXcpDDcrEZUgjOQQa/oA/4OQLKKPxb+yxqIZzPdeH/AIo2ciEr5axWOo+DJoWRQoYOz6hMJCXZSqxhVUhi34Ffsv2815+0n8ALS2jMtxc/Gb4Z28ES4DSTTeMtGjijUkgZd2VRkgZPJFfzlxbG3GmOTS1xuCdunvYfCy/G+p/uN9G6v7X6LXCVXmt7PhfiunzNtcv1bOeIqN7t6KPs9NUklpZH7Sf8HE1nDF+0Z8ELxARLdfBnyZRxtK2ni/xAYmHG7fi4ZWJYgqkYAGCW/nnr+g3/AIOItXtbr9pn4OaNEwN1pPwWt7i6A3ZX+0vF/iUwocoE4S035WRifMwypgF/58q4uMbf6z5xb/oJin/iVCipfNNNPzVj6b6LkakfADwxVVSUnkNeS5t3Tnm+ZSov0dJwcf7rR9b/ALBOrDRP20v2X9RYArH8avAduc7uP7Q1y204MNvzbg10Co4G4DcVXJH+hUQQSD1796/zjP2Z9TOi/tG/ALVw/lnS/jR8MNQ3/N8os/G2iTljsBfAEZJCgsRkAE4Ff6OjnLE+uD+YFfofhVN/U84p3Vo4rDVEut50akW7drUo28736H8R/tDcKocX+HON5WpYjhvN8K5fZaweaUq0Utb3i8dJvS1pRtrc/kJ/4ODdQa4/an+FGnearJpnwPsSYVkDeXJe+MvFM5kdBzG00axKA33lgDDuB+CTKrqyMAyupVlYAqysCGVgeCGBIIPBBINftf8A8F69QNz+3BZWIkR00v4M+A4sIYyUlu73xHeyLJsUMHAnQbZGZtmxhtVgo/FGvzfiublxLnUtmswrRVnt7NqmmmnpdQT30fbZf3N9GzCRw/gL4YYeUYThV4UwteUXFOMo4+riMXKMouKUk1iHGSaalrrJO7+3f+CQ/wABNT8Uf8FA/hTbeEdkWi22o33jvWLRnWNdNPhHS7/UC9uzI6i3v5nhsjEMFZZYQuE2mP7o/wCC3XhvW/DHxE+CFhrmnXOnXUvhXxhexpcRlVkt7nXNOhjeOQbo3xJZzBlDblXy3K7JY2bzH/gh94o03w5/wUB+HlpqL+W3inwt488N6exOB/aEvhy71SFCApLeamlSwgZGHkVicAivvT/g4/00L4q/Zc1dSxM+gfEvTpVJXYgtdQ8JXMBUffLS/bLgP1VREnQtz24rJqGd8OLivF4mvPNclxtXCJRdPkxFLFU8qwkZYvmg6k6lKjTjyTjOMptuVVzabPw7DRoeH30t+DPDzIsBSwHCOccHZvnOWYL3lQyvE4uHG2bZhh8oStGhl9bM1iaqwUueGFqVJ0sK6OHdKhD+bHwwCfEehAck6vp4A9zdRAV96Hkn6mvjv4K6LH4j+MPws8PzI8kWufEPwdpEkccixO6aj4g0+0ZElbCxsyzEK7EBSQxOBX6F/GLwbJ8PPit8RvA0lubX/hE/GviTQYoCWYJb6Zq11a2xVmJZo5LeOKSNySXjZWyc5r9U8DUlDiR63nLK105bUljnp1verr0tY/yq/b14OtLNvo149crw9LLvE3B1Gr80K1fE8GVqXM/h5atPD1uTrejUvpa3m9FFFfvx/wA84UUV9d/DX4Sab4h/ZG/aO+K15pqXOp+CPF3wk0/Qr4CQz2I1TUNatNbCbdi+RNb39gJ1YzDckEhjj8sSGKlSNNRcvtTp01b+apNQX3Xu/JM9TKMpxWc4mvhcJy+0w+WZvmtTm5rLDZNlmLzXFfCm+Z4fB1I07pJ1HFNpO58iVxnxGWST4feOo4t3mSeD/EsahS4LF9GvF2HYQzK+droMh0JQghiD2dZWvWz3miaxaRFBJdaXf28ZkJCB5rWWNS5VWYKGYbiFYgZwpPFFSKlTqReilCUW/Jxa8u/c24brrC8RZDiW+VYfOsrruV3Gyo46hUb5km42Ub3SbW9mfiaep+p6dPwq3p2oXuk39nqenXMtnf6fcwXlndQttlt7q2kWaCaNuzxyIrqfUCqjZ3Nng5OQORnPNJX50f7nCkkkknJJJJ9SeSaSiigAr9+P+Dfi9j0747/tPX0rtHHbfsk/EaZnRgrIE1Pw8dwZmQKVxnO4Y65r8B6/cL/giBdrYeOv2y753WNLT9iz4r3Lu2dipDd6DI5cL8xTap3AclcgV5+bVfY5XmNZa+ywOKq/+C6E5foe9wrhvrvE/DuD1/2vPMpw2iu/3+PoUtF1fv6LqZUuDLL/ANdHx/30f8/TnPGD9nf8E7rr7L+2d8AyX8tZ/GkdpJ8u7cl1p1/B5f3SV3u6JuGCuc7lAJHxeeWYjoWOCPrxj/PXHQ4I+rP2Gbw6f+1z8AbneYyvxF0OMMq7jmeVoQoGDy5kChuAud25QNw/zk4Xqey4m4dq3tyZ7lM27/y4/Du+6tbe916qx/0oeK9BYjws8SMM1f23AXFtJR3fv8P4+NtU+rWlpPXrdI/OH/gsj/ykn/aj/wCxx0f/ANQ3w3X5jV+nH/BZD/lJN+1H/wBjho//AKhvhuvzHr/Ss/5kpbv1f5hWlo3/ACF9L/7CFn/6UR1m1paN/wAhfS/+whZ/+lEdAj9TP+C1v/KQr4r/APYr/CT/ANVX4Qr8ol6j6j+dfq7/AMFrf+UhXxX/AOxX+En/AKqvwhX5RL1H1H86By3fq/zP9Jb9he0+3/8ABKX4K2XmeV9p/Zjlh8zbv2b/AA/qQ3bNy7sem5c+or/Nzv4vJvryHdu8q6uI92MbtkrrnGTjOM4ycetf6TH7CWla7e/8EsvgVpOhwiTxBqX7M8EGiwFoys1/qPh69OmoxLFNszzwblY8ByrgHIH+eL48+AXxt8C+JNY0Txl8MPG+h6xY6pfWd7a3/hvU4XjvLe5miuIjttjHvWSOQFVb+EkDAqa1ajSVJVatKk5p8iqVIwc+VR5uVSa5rXV7XtdX3NKdGtVUnSo1aqhZzdOnOagpNqLk4p8t2mle12nY+of+CU//ACkF/Zd/7Kdo/wD6DPXyR8c5Y5vjT8W5om
Dxy/ErxvJG4zhkfxJqTKwyAcFSCMgHmvuf/gl/8PPHuhftw/s6+L9W8E+LrLwxoPxB07UNZ1yTw5q39nabZwCVJLq9ujarDbW0cjos1xLIsUIJeRgFNfnp8QbhLvx540uowwjufFWvzoHwGCS6rdSKGwWG7DDOGIz0JHNTTr0ayapVaVXkk1P2dSE3BtRaUlFvlbV2k7aIVShWo2dajVpKpeUHUpyhzpWTceZLmSejavZnIUUUVoZhRRRQAUUUUAFFFFAGx4f0DXvFOtab4d8Madeav4g1m7i07SNM0+CS5vb2+uWEVvb28EKvJLJI7ABVUk88Yr+h39lX9grUP2d/hx4L+Inxa8L2h+K3im0lu743bLdv4Rae4a4sdG+yPEsFjrENlDBNdTwmaWOZpYVnQo6nwP8A4IhfCbSfFfxu+IXxQ1exivJPhj4Ts4NAadFdLTXPFd1cWf22IOrBbq30yxv0hkUiSLz2dSK/pZ8e6DB4m8J61plwu5jaSXds4DM8V3aKZ4ZI1Ugu+VK+XkCRWMZZQ2R+XYzxjjwb4oZBks8Fha+UUsTgqOfYitB1cRSjmkYqnVwauo0Z5fCtQxkp2nOsvaUF7O/M/f4w8CMX4meBPGuJy/NcxwHENfLcfjeGqGBryw1DGVcklKtPAZg4XnicNnLw+Jy2VF8tOjKdLEyVV01A/OWilIwSPQkflSV/eO5/hA1ZtPdaMKKKKACiiigD+S39tCCa3/an+N8U8bRyf8JzqEwVipzFdQ213A4KkjEkE8TgHDDdhgCCKxP2XPhRr/x3+OPgL4M+G7y10/U/iJq40OLUb6Oeaz01Fgmv59RuIbcNNJHZ21nPK4jUyeX5iqQWr1n/AIKK2Men/tdfFCGNI0NwnhbUX8tVUO1/4W0mUs+1U3SEgmRmBJbqzda97/4Iv6Eutft8/DSd0DDQPD3jzXgzAERyWvhbULaJh3Dl70KrDG0knI4z/LHGmbY3hf8A1nzvBzVHH5LHNsxwk5RhUjDE4WNevh3KEk4VIucYc1OScZxbhKLTaP8AaHwqyrL+M+F/DrJswp/Wcs4iyLhXAY6lCU6TqYTHYDA0cTCE4yVSlJU5zUKkJqdOSU4SUopmX+2N+w98UP2Ntf0Kx8Y3dh4m8NeJ4Z20HxloVteRaRd3VoVN1ptwl0gkstQhjkjlFtO26aPzZLcyxQvJXxZX9zv7a3wE0n9o79nH4j/D+8tIp9ah0a88R+DLt03Tad4s0O3mvdKlhdSroLpkfT7lUZfNtbuaJiUcg/w0XEEtrPNbTxvFPbyyQTRSIySRyxOUdHRgGR1ZSrKwBUggjIr7r6PfixivFLhTFzzn2K4lyDFQwmbSoQhRpYyhio1KuX5hDDwSjQ9tCnXw9WnBcnt8JUqw5IVY04flH0mfBnBeEXGOCjkPt/8AVTiTCVMZk1PEVKletgMRg5UqOZZbPE1XKeIjRnWw+JoVaknV9hjKdKq5zoyq1Ia/vA+BnwNml/4N677wFLaSW+q+Iv2ZviZ4/aORwksl9f33iLxvpMsp222CtnHpqiKYqFWJYJpmRWc/whwrvliTGd8iLgdTuYDH61/qT/Bn4XW+k/sU/DX4RT2jWsVv+zl4Z8G3lnJGm9JpPh/aaffRSxSm4Rna4kmEiTNcBmZhK8pLO33nH+LeGo5NZ25cyWL/AL3+yRVrejra6XvY+V8G8tWOxPFEmrqeRyy53+C2YVG/evpr9V020Uvl/lqsCpKnqCQfqDinxMyvlSVO1xkEg4ZGBGR2IJBHQgkHitbxHpU+heIdd0W6GLnSNY1LTLgfJ/rrG8mtpfuPIn34j9x3X+6zDBORH98fRv8A0E19bnTTyXNmtU8rx7XmnhKp+a8MKUOKOHou6lHPspi11TWYYdNffoS1KP8AUt/11T/0CSoq9s+B/wAMNb+KGseMbbR7W7uIfBfwz+InxE1uW1iMi2WjeEfCOq389zcuEkENu1ybS1EhRg89zDADG0wlT+e8hqwo5lTq1Hywp4fHyk9dEsDiXsk230SSbb0SbP7J4poVcVklfD0YudWtisqpwirK8pZtgVu2kkt220kk22keJ0UrdT9T/Okrxz6EK/pH/wCCKl2LL9jb/gp5OZGiJ+HHhWBHRgjCS60bx7bRgMWX7zyqpAOSCQAxIU/zcV/Q1/wSIuFtf2If+Cl0r4w2gfCSAZcIN11qHia2Xkg5JaUBV4LMQoOSK8Himp7HhjiOr/z6yLN6n/gGX4iS79ux9/4U4dYvxP8ADrCu9sTx1wjQdkm7VuIMupvRtJ/Fs3qeG9a+uv2DJ3t/2vvgLLGJi3/CdWKYgBMm2W3uYnwAQdmx28znHl7s5GRXyLX23/wTjihm/bY/Z/SdS8X/AAl9w7qOp8rQdXlBHYkMgIB+U4wwKkiv89OFIOpxRw3CL5ZTz7J4qT1s3mGHSdvLc/6LvF2pGj4U+JtWUeeNPw/4xnKN7c0Y8O5jJxvZ2ula9n6HwB/wXdv573/gpZ8ZoptmzT/D3wzsbcIuD5I8D6TdfOcnc5lupSW4+XaoHFfj3X67/wDBdP8A5SYfHX0/sr4af+q/0CvyIr/Sk/5k5b/JP70gr6B/ZP8A+TnP2fv+yx/Dj/1LdJr5+r6B/ZP/AOTnP2fv+yx/Dj/1LdJoEe5f8FQf+Ugf7WX/AGWXxX/6Vivg5Pvr/vL/ADFfeP8AwVB/5SB/tZf9ll8V/wDpWK+Dk++v+8v8xQEfs/I/vY/Zx/5N7+BH/ZHfhp/6h2j18F/tQftNT23iL4t/BTxV4J0bxHo0Jt7LwzqPmvaX+iXM+j2t0uoO7R3JuLq3u7gyRvbtZAw74JA6Mc/Xmj/Fj4XfAL9nT9lxfGXiaSHU/FP7NXwr8YWmlQ2E9xeSW1x4YsrNIoPIDwMTNaPEu+ZHDpI0yxLt3fiv8aPH0HxP+KHjHx1a2slnaa/qfn2dvL/rktLa3gsrVpwHkAmkgto5Jgrsqys6ocBWH+cHFWFxWVZ3muBxUVQxdHMMQq1JShUlBVJSqxTcHKDvCpBvXd2T0lb/AGj+ixwVS4reF4nzPJMZX4bXD+DxGR5vVhjcupyzbDZhgJUK+XYmE8LWqSpLCYuNSdGU6fJzU6jaqqMvL8fz7c4B9/pnk46Z5ByDb9f8B3z06epx+edqj278+nT/AOuOeMZGSAo5Ofpx265/nnr79RkDO35e39X9Otrde+nXz/0Ku7729bfle+99Enfo9U0m36/4Dvnp09Tj887Tb9f8B3z06epx+edq9O3J4wPUf/r6jscZJJITI7A8/wA/w7/THBwAM5o0/p+nl6/0tROX/BsvLXf8r311V1Y2/X/Ad89OnqcfnnaYGf6ew6+n58Y7452mR2B5/n+Hf6Y4OABnNHHXj1/Lr7fpjPVduCDT8uvp5eu1/nbV+91v6WV+m9nZeevfVXVv05/ZY/aD17WfG/wr+D3g/QNP0Hwbp+l3UfiRFhhuL/Xb2DT7i6vdYmuCqSQN9oXzAqPK/lqFdzGuwfq90bjsePz9/wCtfzw/s2fEzR/hL8XfDfjPX4p5NHtVv
rG+e1jMs1vb6pZzWMtykI+aX7OkzSeWvzvtO0F8qf6G/wBmbxN8Pv2l/FeoaL4F8Uw6jD4ctbbVfEDJbXcUkFhNciBRC00KRvNI+I1UsCpbeVKKxH0GSZfjs8xuEyzAU3icfja6w+HoKUIylLlT15muWnCClOVST5YwjKTdou3+ev0muGqPCOb1OJcLkmJwXC2GyNY/OM7oYbF4qjPMa2aY6WLrZljf30pYytUxOFpw+sTjObqUaNK8YwjH9KfgBBcwfDPRzcgqZri/nhBGP3D3LhCMAAglW5HU5HOK9pqjpmnWmkafZ6ZYRCGzsbeK2t4xj5YolCLnAALEDLNj5mJJ5NXq/wBF+H8slkuRZPlM6iqzy3LcHgp1Ve1Sph8PTpTnG+qjKcW4p6qNkf4459mUc4zvNs1jTdKGYZhi8ZCm7XhCvXnUhGVtOZRklJq6cru7CiiivYPJCiiigAooooAlTofr/QUUJ0P1/oKKAI26n6n+dJSt1P1P86SgAooooAKKKKACiiigAooooA/l9/4OPtODWf7LuqeaQyTfEyyMO3IYOvg+YSB9wxt2FSu07twO4bcH8A/2LbCfUv2uv2ZbW2SSSY/Hb4VzKkcbSu32bxrotww2qQcFYjuboi5c5Cmv6I/+Djy13+Av2Yb3fjyPFPxItfL25DfatM8JS79+eNn2PaF2ndvJyNmG/mR+DPxN1H4NfE7wh8UNHtkutZ8E6n/buixyMqpHrNpBM2k3Tl45VK2WoG2uyuws3kYUqxDD+eONHTo8a4mpUvGlGvllWo1q+SOFwkqjSWt7KVl36H+1/wBFqji8z+ipkmBwPLUx9fK+PsvwUajUIPFVs94hhhKc5uyUHVrU1Kbdoxer93T7l/4K8/GW3+M37d3xgvdOvBe6L4CutP8AhfpLo5eOP/hC7YWGtxRsZHUp/wAJQ+uyAxiNCZCdm4s7/mXV/VdUv9b1TUdZ1W7nvtT1a+utR1C9upXnubu9vZ3uLq5uJpCXlmnmkeSWRyXd2ZmJJJqhXyOPxc8fjsXjamk8Xia2Ikv5XWqSnyryipcq8kf0pwbw1heDeEuGeE8FLnw3DmRZXktKrblddZdgqOFliJJ6qWInTlWnfVyqNvW51vgC+bTPHfgnUkLB7Dxd4bvVKkhg1rrNlMCpDIQ2U4IdTnow6j/Sut5Vnt7adc7J7eCZNwwSksSOpIycEhgSM8V/mW2cxtry0uQATb3VvOoOcFoZkkXIBUkZUcAjPTNf6WXgm8fUvBfgzUJGDPqHhPw3fOwCrl7zR7O4YFVyqsDKQyqSFYFQSBX6l4UztPPKeusMvmuy5XjIvpu+Zdem2h/nh+0Uwv8AyafGq3/NZ4WW13b/AFYqx63tG8/s2TlvrY/iv/4LbaydV/b++IFsW3f2H4N+HOk4KBCufDNtqZXIJ3/8hLcH44YLjKmvyUr9Jv8Agrtqw1f/AIKFftDSrt2WWq+ENJXAYHOlfD7wpYybtyrlvNgfJVQuMbS4+dvzZr854gm557nMm73zTH2d76fWqqj+CR/cfghhVg/Bvwsw6jy8vh9wjUlFqzU6+RYGvUutNeerJu+t99bnqfwQ+KmvfBD4vfDn4t+GpTHrPw/8XaJ4mtVDFEul0y+hnubCdlKt9m1C1WexugrKWt7iVQwJzX9EX/BfDxf4a+LHwV/Yu+MPg+4ttT8N+NrXxlquk6pCyOWsdY0fwbqltB0EiMu6dLiPgxz25jmVJEUH+Yqvsbxd+0rN49/Yw+HX7PPie5v7/Xvg/wDFvV9e8EXl1J9pjg8BeLfD841LRI5ZQZoItK8Q2MVzbQrIYjDqzRKqR2kKL05ZmqoZPnuUVX+6zCjh69C70hisJiaNVpJ6L21CM4trVyp0ory8fjzw7lnHib4Q+JWX0r4/grNs5yvN1CPv1+HeIsjzHBQqSa9+ayzNquHqQhflhRzDG1ZJqJw/7HtiupftX/s2WTxJNFP8dPhWs0MgJSaAeNtFaaJgAxbzYlZAuMMWAJAOR+uH/BS3w43hr9tH4yweU8UWqahoOvQ7o2jWRdZ8MaNfTSRMQBKhupZw0iDaJVkjJLIxP5nf8E79Oi1X9uD9l2ymIVG+M3gi43GNZMNZazbXqFVYgB99uoR+sb4kAJUA/td/wWW0I6b+1XpOrLEyxeIPhf4auWlKkLLdWOpa5p0u1gNpMdvb2isCxcAqWUI0Zb9i8Ely0s3nZfva8Kd3a7VKlCordb3m9O12up/i9+3QoLGZF4WNK8slm8wcnfSlmmJzTLpxXT3qlDDt6/Y1Xwn5KUUUV+/H/NyFfuP+yx8NP7R/4JTftU6lLbpcTeItY17XrUCMu4t/Aun+F7+D5Xj/ANbFd2F9IkkTPxIqqVkVxX4cV/Wb+xJ8NAv/AATa0nwrLaxy3Pj74ffEzUJ4Svnrct4quPESafujkjCOW097BTGUZDt25kHzN5ma1fZ0aOurxFJ26tQbm7dndR1/zP6F+jfw3/rLxXxZhnT54U/Dji6nazbdTNMLRyaEY6r35xzCokr3aUrbH8mVMkVHjdJAxR0ZXCMFcqykMFYq4VsE4JVgDglT0qeZdssq9Nsjrj0wxH9Kj616bV009noz+fqU5UqtOpFuM6VSE4yVrxlCSkmr6XTSeuh+IN8gjvLuMKUCXEybCCpXbIw2lSAQRjBBAIPB5qrXZ/EWy/s7x940shEYUtvFGuxRRFixSFdSufIG5mZmHlbCGZmLDBJJOa4yvzmaUZyir2UpJX3sm1r59/M/3Zy/FQxuAwWNp2dPF4TDYqDTUlyYijCrG0k2muWa1Tae6bWoUUUVJ1hX7H/8EgNQbTL/APbcuU3Zk/Yy+Idh8hCnGp+IfCmmtkkH5cXR3jqybgOTX44V+tX/AASu1GLTrX9tgyMFa7/ZUvbGLcCQ0lz8Vfhmm3AIYkpvwQCFOGcbFYjwOK6jo8L8R1k7OlkWbVE+zhgMRJP5WPv/AAow6xfij4b4Vx51ieO+EcPKKV+aNbP8vpyVut1J6dTvxjHfn8B/nn8M8jA+b6H/AGSrprL9pz4Dzqzq4+KvgmNWTG4NPr9jApO4gFcyfvAc5TcCrfdPzyOnXp1+n+fwOSO/zerfAi9bTfjb8I79JJIms/iV4JuVljmNu8Zi8Saa4cThSYwpGS2B8oPKnp/nPlNRUs1yyo3ZU8wwVRtaNcmIpSbTu3fT5Ptsv+kvjLD/AFzhDizCWT+tcN57hkmrputlmKppNNJW99adV1sj43/4LIf8pJf2ov8AscNG/wDUN8N1+Y9fpz/wWSAX/gpP+1IoJIXxlpABPJIHg7w2ASe5PevzGr/Tc/5dnu/V/mFaWjf8hfS/+whZ/wDpRHWbWlo3/IX0v/sIWf8A6UR0CP1M/wCC1v8AykK+K/8A2K/wk/8AVV+EK/KJeo+o/nX6u/8ABa3/AJSFfFf/ALFf4Sf+qr8IV+US9R9R/Ogct36v8z/Tw/4JsRSQfsDfslxSoUdPgX4CDKwwedEtyCOxDAgqwyGUggkEGvnPx1DBN4u8Ria3gnA1nUdomhjlC5u5idu9
Wxkkk47knqTX1L/wT+JH7C/7KBBwR+z38LyCOoP/AAh2mc18teMiT4q18k5J1W+JJ6k/aJOTX87fSIk1l/CyV0/rWZO6f/TjCI/oXwAjfG8SPdfVcvVn/wBfsS/0OTe0tprW4sfKSC2vIJbadLZRb7op42jkAMIVlYq5AdcMp5UgjNfwI/tf+B9c/Z//AGlvjF8K76xmtLXw74z1WTw8brdJJeeFtVnbVPDl55zxg3Hm6Vd2ySXCkrJcRzchlZV/v0JABJIAAJJJwABySSeAAOST0r8jf24P2Nvgd/wUc8L6j4i+EXj3wfJ8b/hq91oEHiPRdRtNSs71rSSVn8I+L0spWmh23AkFhfODJYys6gS27Og/NPCPjCHCmb43+0FXjkuZUsPQxuLhCpUoZfiY1X9SxWJ5YyUaUuatQk37yVTnSlGnJH6H4scI1OKcowf9nug85y6pXr4HCznThWx2GlTj9cwuH5mnKpHkoVopXhenyS5XUjJfx1/8Jdef3U/Je3/Af/19qP8AhLrz+6n5L3/4D/8Aq71H498EeI/hr418U/D/AMX2DaZ4o8G65qHh7XtPZ0kNpqemTtb3UIljLRyqrqSkiMVdSGB5wOSr+y4VVUhCpTqKdOpCNSnODUoThOKlCcZK6lGUWpRkm1JO6bT1/jmpCdKc6VSEqdSnOUKkJpxnCcG4yhKLs4yjJNSTSaaaep2P/CXXn91P++V/+Jo/4S68/up+S9v+A/8A6+1cdRVc0u7/AK/4b8+7IOx/4S68/up+S9/+A/8A6u9H/CXXn91P++V/+JrjqKOaXd/1/wAN+fdgdj/wl15/dT8l7f8AAf8A9faj/hLrz+6n/fK//E1x1fYn7E/7HPjn9tL4tr8OfCN9ZaJpej6f/wAJB4z8SagS0GheHoru3tJJ4rdcSXt9cz3EdvZWkf35W3yMsUchHJj8xwuWYPE5hj8RDDYPCUpVsRXqO0KdONrt2TbbdoxjFOU5NRinKVn14DAYvM8ZhsvwFCeJxmLqxo4ehTV5VKk3ZLWyjFK8pTk1GEU5Saim1/Q7/wAEFLBLn9nr4w+LZ9Ne2v8AVvita6MNQbIS/wBO0Twzp9zbpCpVV22l3rF8sjJuBebaxBQCv3XwrZVgrKwKsrAMpBGCGB4IIOCDxivDP2d/hB8MP2ffhhoPwV+FZsxo3ga2ig1LZcW8+q3msXyme+1nXvIYsupavMslwwkVQsapDEPKhQD3Ov4J4yzqGf8AFOcZzRhWp0cZjFVw0a141vq9OlSpYec07uDqUqcKiim1BTUU2km/7v4QyaeRcMZRk1edKpWweD9liZUbOk69SdSriIxa+NQq1Jwc2k5uLk0m2l+Z3iC2Fnrus2gVFFtqt/bhYl2xqIbqWMLGoC7UAXCjaMDAwOlZFegfFKy+wePvEkIjREe/NxH5abEdbmOOYuAFUFmZ28xgMNLvOSck+f1/sZwxj1mvDXD2aJ3WY5JlWOve9/reAoV736/xN+p/y58f5K+HOO+NeHnCNP8AsLiziLJ/Zx0jBZbm+MwajFdIxVGy8kFFFFe4fIhRRXn/AMVvGsHw4+Gnjzx5chTH4S8Ka5roRiqiSbT9Pnnt4iWKr+9nSOMAnJLADLEAxUqQpU6lWo1GnShKpOT2jCEXKTfkops6cFhK+YYzCYDCwdTE43E0MJh6a3qV8TVhRowXnKpOMV6n8yv7fniew8WftZfFfUdOkSa3sL7R/DjSxsGje48OaHp+k3m0gnmO9t7iFjkgtEWGAdo+8v8Agg14eOp/tgeJNb2kp4Z+EHii53/LtWbU9U0DSUjOQW3SRXU8ilcAeSwY8gH8WdZ1W+17V9U1vUpXuNR1jUb3VL+Z2LtLeahcy3d1IWPJ3zSu30PpX9O//BAH4Ma7onhn42/G7WtE1HT9O8UyeHPBXhPUL6ymt7bVbXTnvdW1u60y4miUXMEVzJp1tNJA7RGRShO+IgfxP4w5uv8AVHi3H1Jcs8whKhBO128wxdKgoKys2qVWTdltFtvdn+7fgVw1HK854C4eo/vKXD+BwVB1NbOOS5Zb2ru21z1MPFq7dpTS1P6LiqurxuAyOjo6kZDKykMpB4IIJBB4I61/n/fGCzt9O+LPxNsLSNYrWy8f+L7S2iUALHBb6/fwwooGAAsaKoAAGBwK/v5vZhb2V3cNjbBa3Exz0xFE7nPtha/z8PiFef2h498a35Lk3vivxBdkucuftGq3UuXJZssd/JLNznk9a8/6FMZ/XvEKd3yLCcORa+y5utnDi35pRkl5SfY5/p6zprK/DWm1H20sw4mnF/aVOOHyaM0tPhcp029d0tH06n4C+FF8dfG/4QeC3hknj8VfEvwR4fmiiModoNW8SabYzkNDHLKgWKZ2aRI3aNQXCttxX+rnb2kcWnw2KDZDHZR2iKP4Y0gWFQOOygAcdunav8zD/glj4Vi8Z/8ABQz9knRJohNCvxk8Ma3LE2Nrx+GJpPEjg5lhBATSiSpZgwG3yZ8+TJ/pt1/T3iTVbxuWUb6U8NXqr1rVYwb/APKC+4/m7wMw6jlee4vrWx+Gw7fdYbDyqLz0eKf3n+WR+214Jf4c/tfftMeCWGF8P/G74k2dvhi4+xnxXqk9l8xZ3b/RJYeZCsp6ypHJuRfmGP74+jf+gmv09/4LM+GW8Lf8FKP2oLMqETUfFmja/GuJQ2Nd8I+H9TkZxKWIaSe4mkGxmiKOpi2oVRfzQ09VYX25QwWydslQwU+fAA24xtsOSFDedbElggklL/ZLn7vEVvbcJ1sRdy9tw9Uqtvd+0y5ybd+ut/U/IsJhfqviLhsGkoLD8ZUaCS2So5zGKSt0tFJW2WxWr9kP+CVWnWM/ws/4KX6lNawSX1h+wp8SorO6dFaa2jvL7ShdLC5GUE4hiWTHLKgHTOfxvr9nf+CUv/JHv+CnX/ZjHxB/9L9Pr+co7r1X5n9pH4xt1P1P86Slbqfqf50lIAr97/8AglrqI079hj/gosDII/t1x8A9PyUL7/tHiLxA3ljCttLmMLuOAoJ5BwR+CFfuF/wThmlh/YZ/bxEbMom8Yfs4RShQMvGfEHilyucZAJRd2CMqCpypIPyfHk3T4J4skm0/9Xc3V1ulLA14vbyb9D9c8AsPHFeN3hLQkk4z8ReD207f8u8/wFTqpK94aXTV7XtuYtfa/wDwTonjt/21f2f5JThW8YywA5A/eXOiarbRDLEDmWVBjOTnChmIB+KK9r/Zu8UX3gv4/fBvxRp0UlxeaP8AEnwfcx20RUSXS/27ZRzWilxtzdQvJByVz5nDIcMP8/OHcVHBcQZFjJpuGEzjLMTNJNtxoY2jUlZK7b5YuySbb2TP+h/xKyyrnfh1x7k1BxVfNuDOKMtouUlGKq43JMdhqblJ2UY89WN5NpJXbaWp8x/8F0v+UmHx0/7BXw0/9V/oFfkRX68f8F1Bj/gpj8dhkHGl/DUZByD/AMW/0DocDP1wM1+Q9f6Xn/MLLR27Jfkgr6B/ZP8A+TnP2fv+yx/Dj/1LdJr5+r6B/ZP/AOTnP2fv+yx/Dj/1LdJ
oEe5f8FQf+Ugf7WX/AGWXxX/6VivhGBQ08Sno0iA/QsPTFfd3/BUH/lIH+1l/2WXxX/6VivhO1/4+YP8Arqn/AKEKBR2XovyP60P28PDn9jfBX/gnncg+Ylx+yJ8PdPF0YFiaZrDw/wCH7103hmLJE2sfIhJWPzGIYs74/NLPoDxjn2/L6YP0OPuhf2p/4KIeGQf2I/8AgnZ4oe2dLjTvg94O8OzyujBoxqPwy8F36W7FnAjffpMzMnlFm25Zl8oK34r8++PUd/8ADPOOuegPJJ/gfxkwrw3iPxEle1eeAxUXrr7fLcFKWu1lU5kumlr3Vl/0E/QuzSGZ/Rt8OZac+Bo5/llVX0jLBcS5tCnp3lh3Rkl2layTDP1H+Rj+mPw56FUx6k+/9Bj+nt252rk9v/15/wAnnPfg5OWPw5GOh6+2PT279AQNoP5hp1t077adu2v+b0P6l6XWl/NN3du+yel9+jtuHpknPofU/wA+4xzznPJ2k+v5A8enJ9OevQZ53EtRjuR2+pOf5+nv2IGCqfTgcj6/5556+u0EYevTfTv/AHbbr83bt0utHq/n5dWk0/0bab13sv17f557nr06gkAhixyZ98fTnv8Ar7fic7gcnpn06Y/z0Hb0O7gkkJx14x09z/L/AB5z8oIKnzfR/fy9k9Pn0WiejLK/dbLTTTorvq/J7vXdi9/rz68d/wD65wc9+jKf3r/4IZWhfxj8cr75sQeG/Dlr1UL/AKTqVzLyMly2LUbSAFA3ZJJGPwUGO309z/ntxnPOPurX9HP/AAQz8PRxeEfjn4odAZrvXPCmjwyFEDrFa2mrXVwokWVnkR3mtziSKPayMULbmNfqXgvQ9v4j8O22oyzGu99PZZZi3fa3xNLbe2uzP5V+mvj4YD6NfiFzNqWMXDmAprrKeI4nyhctvKlCpJu9kotpXTT/AHsooor+/D/n9CiiigAooooAKKKKAJU6H6/0FFCdD9f6CigCNup+p/nSUrdT9T/OkoAKKKKACiiigAooooAKKKKAP50P+DjG2dvgV8AbwFPLh+KWt2rKSd5e58LzSoVGNuwLaSBiWByUwCCSP5Fq/u//AOCtn7Lg/ar+Cfw/8Hw+KLfwlfeHviSniSDUZ9LfU/PhTwxr+nz2IWO5tngSaW9tZnlHm/8AHsB5ZxkfzT/FT/gk18VPBnhPUfEvgrxjovj680iyuL+78OwWV3pmq3cNsqyyLpXnGWC8uFhWd1tmeKadkSOFHd8D+ZvErNMvw3GOKwtfFU6WIlh8A+Saml7+GgoXm4+zXNb+bTS7TaR/sJ9DrxE4RynwW4byDNc5o4DMqWdcQU1RxFLERppYrN61ejKeJVF4anCftkuepWjGFnzuKi2vyYop8kUkTvHJG6SRsyOjKVZHUlWVgRkMpBBB5BGKbg+h/I18wf3JvsJnb83pz+XNf6PX7OGrHXf2ffgbrLMHOp/CT4dXxdQoVjceEtJlLKEZl2ksSu1mG3GCa/zhwCCDg8EdjX+hh+wfqo1v9jH9l/Uirqj/AAT8A2oDoIz5elaBaaWhIVnGStlywc7jkkKxKj9T8LJtZjmlO+k8FSm10vTrqKffT2jt6vyP87f2h2FU+EvDfG8uuH4izrC81tljMsw1Vxv0u8CnbrbyP4sP+Clerf21+3j+1Jeh1dE+K2uWERV2kXydLjtdNhALElcRWqjy+BGQY1ACgD4cr6B/av8AEp8ZftPftCeKA5mj1r4zfEi8t5cH57M+LNVjsj1bpZxwKMEjAr5/wfQ/ka/N8wqe2x+Nq/8AP3F4mp0+3WnLo2uvRtdmf3NwFgXlnA3BeWyjyyy/hPhzAyjZrllhMnweHkrSSkrOm1aST7pPQSilwfQ/ka+xP2W/2LPif+1HJqOpeHpdP8OeDNEvYrDVfFeted9na9dEmfT9LtYI3m1C9it5EmmVTHDbpLEZpkMiK3nV8RRwtKVfEVYUaULc05uyV2kl3bbaSSTbeiR7Wb5xlmQ4Cvmeb4yjgMDh0nVxFeTUU5NRhCEYqVSrVnJqMKVKM6k3pGL1O1/4Jdacmqft+fsxWrqrBPiLbX6h2dQH0rTdR1RGzH825WswyKfkZwqyfIWr98v+C4fh+WDxp8CPFAVfI1Lw14t0MsAu8S6Pqel3wDHAYqU1vKjLAFW4Un5vE/2Af+CZV58Cv2svg38WLv4p6fr48J6n4hnn0WLw9ParcHUfB/iLSIRFfTXzAPHNfRTAtbA5QhMsFJ+8f+C2mgLd/A74WeIgqmbR/iTNpu/YxYQax4e1GZ1DhSBmXSoSEYru2sQSVwf33wOx2ExuX4vEYSqq0P7XrUJS5ZwcZRwGGbVpxi3ZVLppOLvoz/An9rzneS8fcP1q2Q4xY/DZLwNkmMVZUq9GNPE4fjTGV8TDkxFKlJtYJttqLi+eOt4u38z1FLg+h/I0YPofyNf0Sf8ANuKi7nVf7zKv5kCv7kP2a/CqeE/2c/gx4UaMf8S34WeDrO5jK7Q00nh+zkuwyksQXmll3AsxyTudjlj/ABJeDtFk8ReLfDGgIG3634g0fSU2oXbdqGoW9ouE3x7zmUYXzE3Hjcucj+9TSrKLTtL07T4ESOGxsLSzijjXZGkdtbxwoiL/AAqqoAq9gAK8DPJ6YeCfWpNr05VF3t5yWj9Vsf3N9CjLVUzHj/NpLShgcjy2Da3WMr5hiaqTa6fUaN11urrRH8KHxm0NfDPxd+KHh2O2FnHofxA8X6VFaqoVIIbDX7+2iiQBVXYkcShNqhSoBUAEV5rX2B+3zoMfh39sL4+afBEIopvHd7q4RIvKXdr9ta65IQgVR80moM24DDklwW3ZPyBg+h/I17VCXPRoy196lTlrq9YJ6vq9T+NeKMB/ZXEvEOV2Uf7OzzNsByq1o/VMfiMPbRJaeztordtD8mPj9YHT/i740hxxLqFveqfL8oN/aFhaXrELk7sNcMpkz+8ZTJhS20eO19RftaaZ9k+JdreojAar4c0+4diB801vcXtk20qg4ENvAMOzODk5EZjA+XsH0P5GvhsdDkxmJj2rVGtLaSk5LT0a8nuj/Y/wszNZx4bcCZjzKUsRwpkaqtNNfWKGX0MPiVdaaV6VRW3VrPVMSilwfQ/kaMH0P5GuU+9Er9Wf+CXXhvW/FDftfWGgwTXV7afs0XmszW0CzPNNp2jfEjwBqOp7I4UdpPKsbee4dSNpjhcsRjNflPg+h/I1/RX/AMG2NrBeftd/GC0u4I7i1uv2fdet7i3njWWGeCbxd4RjlhljcFJI5EZkdGBVlJBBBrzM6y2OcZPmmUSqOjHNMvxmXSrRV5UljcPUwzqKN1zOn7Tm5bpStZ6Nn03BfEdTg/jDhbiylh44urwzxDk+f08JOXJDEyyjMMPj1h5T5Zckazw/s3NRk4KXMk2kfPo4yD2PII546/l7g4wc4zg+0fs5eDtZ8ffHf4TeFNBgmm1HVfH3hhFaGEzm1toNXtbm9vnQK/7mxs4Z7uYsNojhYscA1+lP/BQr/gnLe/Ce7v8A4wfBHS73Uvh5qN683iHwrbJLd33g+9
vrk7ZdPjUPNc6BNPMsUSAPNprmONi9uyvD9uf8Evv2Ipfg94ft/jr8SNPVPiJ4v0wp4Z0e6hYTeEvDl6AWnnSWMGPV9XhCuxBBtrJ0iALySMf4kyLwp4lq8dUeGMfhJ4eGAr0sdmGPipPCPKoVotYrDV7RVX62oujhoLlqKs5QqRpulW9n/uXx/wDSz8M8N4D43xL4ezahmGK4hwWMyLh/h6rKnHN4cUV8HKFTLs0wKlOWGjlCrRxmY1pc2HqYRUp4arWjjMG638kP/BZI5/4KT/tSHnnxjo554P8AyJ3hvqPX1r8xq/TD/gsPP9o/4KS/tU4Qr5PjjS4DzuDGPwf4bG4cDGRjI7HIycZP5oYPofyNf3duf4HvVv1YlaWjf8hfS/8AsIWf/pRHWdg+h/I1paMD/a+l8H/kIWfY/wDPxHQI/Uv/AILW/wDKQr4r/wDYr/CT/wBVX4Qr8ol6j6j+dfq3/wAFqmWT/goV8WdjK+zwz8JUfYQ211+FfhDcjYJ2sO6nBHcV+Uqg5HB6jsfWgct36v8AM/1C/wDgn0Af2G/2TQRkH9n74Wgg9CD4Q0sEH615B438H2+ofFvXfD+kOxtzeTXUj7S3koLMaheZbowjPnqmOAFAIODXrX/BPK4iuf2Gf2TZIW3ovwE+GcBO1lxLbeF9Ot5lwwU/JNE6ZxtbbuUlSCdnSPAfiG1+Inj7xRq9hNBYJp2uNY3cqqYrp7wyJAYHU/OVs5HU4GIwCrfNtx+U+LGRS4go8J4COBq4mm8/jUxmJpU5SeEy+lg61TFxnVimqMMRGMFzSai504Je8on6t4V51HIqvE+NljaeHqLIvZ4PDVakY/W8fVxlCnhXCnJr2s6EpyfLG8lCpN2te3xZJpksthfXJXNpHItjKxyRuu47japOBkMkEoJHQgZ61/n7+OfiF8W/2T/2m/jnpnwj8eeJ/h/qWi/E3x5ockujai8X2zToPEmoLZrf20vnWt6Hs/Ilie7glkRZA6MjndX+jBY+F9T1P4Y395YWFxcufGVtFJ5ELyMYIdOuzuYRqz7UmvIVB2gAs3UdP8/P/gq94KvvAn/BQT9pLQr20Wznk8W6XrDW8a4CDXfCfh/V1LKI02yyC88yZSCRKz5d87j+ceCeVV8Pis1o4vByrZbm2QZdjqrxFFTwtSo8biqeHpONWHs6vNRdWdrTS5G21zJH3njRmlGtg8rr4XGRo5jlWfY7BUlh67hi6cPqOFqV6qlTkqlPlrRpRbbi7yilezZ8B69r2s+KNa1XxH4i1O81nXdcv7rVNX1W/lae91DUL2Z57q7uZW5eaeZ3kc4Ay2FAAAGTTtj/AN1v++T/AIUbH/ut/wB8n/Cv6UjGMYxjGKjGKUYxikoxilZRilZJJJJJKyWiP5tlKUpSlKTlKTcpSk25Sk3dyk3dtttttu7erG0U7Y/91v8Avk/4UbH/ALrf98n/AApiG0U7Y/8Adb/vk/4UbH/ut/3yf8KAG16j8KPjV8VfgZr914o+EnjnXfAmvXunT6ReajoVwkMt1p1w0cklrcJNFNDLH5sUUyb4maKWNJI2VgSfMNj/AN1v++T/AIUbH/ut/wB8n/Cs61GjiKU6GIpU69GpFxqUa1ONWlUi94zpzUoTi+qkmvI1o1q2Gqwr4etVoVqUuanWo1J0qtOS2lCpBxnCS7xafmf1Tf8ABA/xJ4h8XeGv2ofEXinW9V8Ra7qvjTwHd6jq2s39zqN/d3E2l+JmkkmubuSWViT0XdtUcKoUAD+hCv51f+De1WHgL9pIEHP/AAlfw/4wf+gV4lr+ivB9D+Rr+H/FeMYeIHEMYRUYxqZfGMYpRjGKynAJJJWSSWiSVktj+2vCuUp8BZBOcpSnKnjpSlJuUpSeZ41uUpO7bb1bbbb1Z8WftBWUsHjWC9ZX8q/0m1KO5XDSWzSQyhFGGCKvlHcQQXZgrEqyr4TX0t+0hA66r4ZnJBWTTr2IKCxZTFco5yMYAImGMEk4OQABn5qwfQ/ka/0y8CcbPH+EfAteo1KVPJY4JNXtyZdicRl8E79Yww0Yy6XTtpY/58Ppe5RTyX6Sfi5g6UeWFbif+1rWS9/PcuwGd1Xpf4quYTlfd3u9biUUuD6H8jRg+h/I1+tH83iVxnxD8A+G/ih4L8QeAfF9rNe+G/E1idP1W2t7mazmkg82OZfLuIGWSNllijcYJVtuyRXRmU9pg+h/I0YPofyNTOEKkJ06kVOnUjKE4SV4zhNOMoyT0alFtNdUzfDYnEYLE4fGYSvVw2KwlelicNiKE5U61DEUJxq0a1KpBqVOrSqRjOnOLUoyipJpo/Pzwz/wTP8A2XfD+qxalcaD4h8QrDJui07W9fuJdOOGUxmeG0jtZpmQggj7QkUgOHhIAr+oC18A+Hfh5+zz+zl4f8LaLpOgaTafD6DydO0bT7bTLFI5ktL6No7S0RYlkd72aWZyd8ssryyfvXcn8hEUl0GDyyjoe5Ffur8YLM6Z8O/glpjCMPZeBrCzIgB8kG00rQ4GEY2oRHlcJlF+XGQvSv5f+k7l2XYDwux1PB4LC4bnxuBqt0qUITcoZjgIJ8ySk1apa17apdT/AEw/Z/cVcW8WeMOZY3iLiHNs5jgcjeEo08fja1ejRWMo5hVk6dCU1RhKSwXvVI0+d8qTlbR/J3i+7Fh4R8V3p3Ys/DWuXXyBWf8AcaZcy/KrEKzfLwGIUngkCv8APeuDm4nPrNKeevLt1r/QY8eQyz+BfGsESFpZvCXiKKNcqu55NIu1QbnKqMsQMkgDqTiv8+edW8+bg/62Tsf77V8P9ChL6v4jP7TrcLJ97KHEDj57uX426n9I/T4cnW8LVryqlxk1o7Xc+F0/K+i89u6P2C/4IOeHIPEX/BTT4FNPsI0Kw+IfiCJXiWQtNYeAvECx7GZh5LqZzIJAGJCGLbiQkf6KFfwI/wDBuVozal/wUTt7wLIRoHwY+I2qsVeNFUSy6BowMiyAtIhOrBQkOJBIUcnykkB/vur998Q5qWfU4p39nl+Hg9b2bq4ipa3TSaeu90z8b8F6bhwhWk017bOcZUV18SWHwVK67q9NrTTTve/+fh/wcNaD/Y//AAUf8XX27P8Awkvw1+HOt7cg+X5emXGh42+VHt3/ANi+bjdNu8zf5g3eVH+JGnDi+OPu2MhzjO3M0C53eWdmd2zPnW27d5fmTb/slz/RP/wcyeHzZftq/DHxAFQDXfgJoNtkNKZGOkeLvF6ZZDmJYwLwKjIQ7ssgkUBUZv52dOU4vjtPy2MhztJ25mgXO7yzszu2Z8623bvL8ybf9kufu8LU9rwI5duH8VT3v/Cwtal0/wAG3TZ6n5HmFD2Hi3GG9+MMBVvZr+PjsNX89nVs/MqV+0f/AASdt57r4Rf8FOoreJ5pP+GFPiNLsQbm8uC6sppWAHJCRRu5xzhTX4u4PofyNft//wAEgLeZvhb/AMFOrkRsYIv2EvifDJJxtWWeEvEh5zl1glIwCMIckcZ/AluvVfmf1ytWl3aPxAbqfqf50lOYHJ4PU9j60mD6H8jSEJX9Q/8Aw
b9fCLw98dvgj+3T8LPE2U0/xXD8K7SK8VEkl03Uoo/G1zpmpwLIGXzbG9hhuF4ydhUYzkfy84PofyNf1wf8Gw4P9j/tb8H/AJCHwnPQ9PI8cc1hisJh8fhsRgsXSjXwuMo1cNiaM1eFWhXhKlVpyXacJSi7a66NM9HKM1zDI80y/OspxVXA5plONwmZZdjKLSrYXHYLEUsThcRTbTSnRrU4VI3TV4q6auj5J/aL/Zs+JX7NPjm98H+PdHuIbcyyPoXiGGN5NG8QafubyLvT71QYmZkAM1szLc2z5jniRhiv0N/4Jb/sV6v8Q/G2jftB+P8AS5LT4e+C9QTUPCNpewlf+Eu8SWjSC2uYo5ACdK0S7RLt7lcie+ghgjBRZmX+gX4z/Av4bfH7wjJ4K+J3h2LXNHNzDeW0gJt9Q067gkVxcadfIpmtJJFUwzeWdssTMjqflI9C8OeG9G8JaFpXhnw5pltpGhaJY2+naXptlCsNtaWdtGI4oo0UAcKuXc5eRy0jszszH8EyLwEwGU8Zf2tiMasbw9gpUsbleAqpvFyxqqOUKGOkoqlUw2DlGNSM4WeLbpwqU4RhVVT/AED48+n1xFxd4Mx4QwWTvJfEHOqOIyXiniHCuKy2OSyw9OlXxuSU3UlXw2ZZzCpWw1enUjKnldONerhK06tbDSwv+fZ/wXIuWuf+CmH7QBZQv2eH4dWq4JJZYvhx4Xk3sT1ZnmfpgBQowSCzfknX6zf8FwEYf8FL/wBofKsMv8PmGQeQ3w08IlSPYggg9wc1+TWD6H8jX9BvRv1Z/nVFtxTeraTd+9hK+gf2T/8Ak5z9n7/ssfw4/wDUt0mvn/B9D+Rr6B/ZPB/4ac/Z+4P/ACWP4cdj/wBDbpNIZ7j/AMFQf+Ugf7WX/ZZfFf8A6VivhS0/4+rf/rtH/wChivuz/gqAD/w8D/ay4P8AyWXxX2P/AD9ivhewQtfWalSQbmAEYPIMigj8qBR2XovyP77f25PC0Ovf8EpP2fdXaLfP4P8AB/wA1u3kWPe8f2zwPZeH5xuCsyRGDWHeQ5Vf3SlicAH+bH8On5HPBPp6557HBxgr/XN+0P4Zttb/AOCVz6WlsPK0z9nT4b6vapHEj/Zv+Ec0Dw1qyMgcERqkdi0bOMOsTMEIYiv5F8N6H8j/AJ9PyHpX8ZfSGwiocZ4DFRVljchwkpO29TD4rF0Xd9WqcaXystmf7Z/s6c4eP8FM9yqc5OWR8e5pThFy+DDZhlWS42CivsxliJ4t26ycpbjvUdP8f88YxnnB5JDHtx9euT3+v6nnHGdhTDdMH8j/AJ/ye5OUw3ofyNfgt/X+rbdnpuf35y+a6dL7W311Wm3nu9zqvA/g7XfiH4x8M+BfDFsLzxD4s1vTtA0e2Z9iTahql1FZ2yySYYJH5kqmWQhgkYZgGx82Xrmjaj4d1rV/D+r27WmraHqV7pOpWrcvb6hp1zJaXcDEHBaK4ieMkHqDgkn5vrj/AIJ66Zd6p+2R8Bre0gaaSHxtZ6hIAhbZbabDcahdykiKTaI7a2lcsWiAxgyYOx4P+CgPhmHwn+2N8etKtbX7LbTeNZ9YijEflxk+ILGy1yeSJV48p7jUJim0AY4AUjav0byNPhGPEilUuuIpZLODS9k4yy6GNpyi0r+0TjUU9WnGUNE1735ouO5f8RhqeGcqeHdN+G9LjejWUn9bVaHE1TJMRRqR5lH2E4VMNOlaKkp063vcrSXx1znsMenPXt+ftjOCo5G49/r1Hb9fxPJOcHPAY5HAB+oH+Qf17gHBxRg56E89SD/h/T8OSD84n/Wq7dvTXRd7t2P0q1/+Gu7Lf4trva7forMO3GDgfp/n8z9fn/q4/wCCNPhBdA/ZW1DX2MjT+LviBrN7udFRfsunWOm2ECRYLM6LMl0S7EZdnAQAAt/KOByBtPXqAe/+fUj687v7WP8Agnx4MTwL+x58ENHERinu/CzeILsvGI5pLnxHqN7rTNMPvF1jvY4k3EskMcUYJVFr96+jzgHieNcXjGrwy7JMVNSs2o1cTXwuGgrvZypSrW8oSS0P4F/aKZ8st8FcmyWM2qvEfG+WUpU/dTnhMry/Mswqyel+WnioYK6XL71RN3sz7Nooor+1T/EwKKKKACiiigAooooAlTofr/QUUJ0P1/oKKAI26n6n+dJSt1P1P86SgAooooAKKKKACiiigAooooAxta8O6J4ihht9b0y11OG3kM0Md1GJFjkKlC6g9GKkrn0Nc2fhf8PyCD4T0Ygggg2q4IPBB56EV3tFedicnynGVXXxeV5diq8lFSrYjBYatVkopKKlUqUpTaiklFNtJaLTQ9DD5vmuEpKhhMzzDDUYuUo0cPjMRRpRcnzSap06kYJylrJpavV7s+N5/wDgnv8AsU3M0txP+zZ8LpZ55ZJppX0BC8ksrF5HY+ZyzuxYn1JqL/h3h+xJ/wBG0/Cz/wAJ+P8A+OV9m0U/7JytbZbgNP8AqEw/S3/Tvy/q7PoV4ieICSS454wSWiX+s2dWSVrL/ffL+rs+Mv8Ah3h+xJ/0bT8LP/Cfj/8AjlfT/hPwF4N8CeGdI8G+D/Dml+HfC2g2Sabo+h6VbLa6fp1jHu229tAmFRMszHqzOzOxLMSeuorahgsHhpOeHwmGoTcXBzo0KVKTi2m4uUIxbi3GLabtdJ7nl5txRxLn1Glh884izzOcPQq+3o0M1zbH5jRpVuVw9tSp4vEVoQq8kpQ9pGKnyylG9pNP44uf+CfP7Fd5cT3d1+zf8Mbm6uZpLi4uJ9DEs088ztJLNLI8peSSSRmd3YlmYliSTUP/AA7w/Yk/6Np+Fn/hPx//AByvs2isv7Kyv/oXYH/wkof/ACvy/Puz1F4h8fxSjHjnjBRikklxLnKSSSSSSxtkkkkltY+Mv+HeH7En/RtPws/8J+P/AOOV6/4G/Zt+BPwz0Z/D3gL4XeEvCuiSXs2ovpukaaltate3EcMU1yYwxzLJHbwozZ5EajtXt1FZzyXJ6seWrlWW1I3T5amBw043WztKk1dd7dX3Zz4rjnjXHUvYY3i/ijGUOaM/Y4rP81xFLnj8M/Z1sXOHNHW0rXV3Z6u/H6f4A8G6VeQahp3h3TLO9tXMlvcwQBJYnKshZGByCVZh9CRSeOvh94K+Jnh+bwt4+8NaT4s8PT3Ftdy6TrNql3Ztc2knmW0/lv8AdlhfJR1IYBmXJVmB7GiurCYLB5fB08BhcNgoSn7SUMJQpYeLqWjH2jjRjBOdoxXO1zWilfRHzOZVKmc050s3nPNaVWi8PVpZlJ46nUw8r81CpDEurGdGXNLmpyTg3Jtxd2fL/wDwxZ+yp/0Qn4e/+CSL/wCKo/4Ys/ZU/wCiE/D3/wAEkX/xVfUFFd3tq3/P2r/4Ml/n5L7j5T/Ujgz/AKJHhj/ww5V/8y+X592fNmm/se/sxaPqNhq2mfBLwFZalpd5bahp95DosSzWt7ZzJcWtxE2TtkhnjSRDg4ZQcV9J9OlFFRKc5255Slbbmk5W9Lt2PWy3JsnyeNWGUZVluVwryjKtHLsDhsFGtKCa
hKqsNSpKpKKbUXNNxTdrXd/DPG/7M/wD+JHiC58VeOfhR4N8TeIryKCG71jVNJhnvrmO1jEVus83ymQxRBY1ZssEVVzgCuS/4Ys/ZU/6IT8Pf/BJF/8AFV9QUVSq1UklVqJLRJTkkkrWSSelrL7kefX4Q4SxVaricTwvw7iMRXqTrV69fJctq1q1WpLmnVq1amGlOpUnJuU5zk5Sk25Ntu/xh4h/4J3fsSeK7qG98Rfs1fC/VrqCAW0M11oSs8cAkeURrtlUbQ8jt0zljzjFYH/Dsb9gT/o1b4S/+CD/AO3193UVnJuTcpPmk7XctW7Kyu3d6JJLslY9rCYTCYDD0sHgcLh8HhKEXGjhcLRp4fD0YuTk40qNGMKdOLlKUmoxScpNvVtnwj/w7G/YE/6NW+Ev/gg/+30f8Oxv2BP+jVvhL/4IP/t9fd1FKy7L7kdB8I/8Oxv2BP8Ao1b4S/8Agg/+316v8If2OP2X/gH4hvfFnwc+CfgX4eeI9R0mbQr3WPDmlCzvrnSLi6tL2awkmMjk28l1Y2k7oMZeCMk8Yr6Xoosuy+5AUZdM0+eNop7O3mifAeOWNZEYAggMjAqcEAjI6gGpBY2gGBbxgDgALwB6Vaoosr3sr2tfrbtftqxWVlGysm2lbRN2u0tk3ZXfkuyPj/x5+wF+xn8TvFut+PPHv7Onwz8T+L/El0L7Xdf1PQ0k1DVLwQxQfaruRJEEs7RQxiSQrvkK73LOWY8j/wAOxv2BP+jVvhL/AOCD/wC3193UUcsVtFfchnwj/wAOxv2BP+jVvhL/AOCD/wC31JF/wTK/YIgljmi/ZY+EySxOskbjw+Mo6MGRhmcjKsARkEcV91UUWXZfcgPkDx3+wH+xt8TvFOq+NviB+zz8OfFnizW5IpdV17WdJe71C9eCCO2g86Z58lYbeGKCJAAkcUaRooVQByH/AA7G/YEH/Nq3wl/8EH/2+vu6ikoxSsoxS7JJIbbk25Nyb3bbbfTVvXbQ5vwh4Q8M+APDGh+C/Bmi2Hhzwr4a0620jQdC0yEW+n6XptogjtrO0hBIjhhQBUXJwK6CaGK4ikgmRZYZkaOWNxlXRwVZWHcEEg1JRTaTTi0nFppppNNNWaaejTWjW1gTaaabTTTTTs01qmmtU09U1sZmk6LpWhWn2DSLC30+z8x5vs9tGEj82TG9yo6s21cn2HpXyx8Tf2Cf2OfjL401f4ifFH9nf4Z+N/G+vm0Os+Jdd0GK61TUTY2cGn2hurjepkaCztbe3QkZEcSAk4r66orOlRo0KVOjRpU6VGlFQpUqcIwp04R0jGEIpRhFJKyikl0RdWrVr1J1a1SpWq1JOdSrVnKpUnOW8pzm3KUn1lJtvqz4I/4dc/8ABPj/AKNL+Dn/AITMX/x2j/h1z/wT4/6NL+Dn/hMxf/Ha+96Kvlj/ACx+5f10X3GZ8Ef8Ouf+CfH/AEaX8HP/AAmYv/jtH/Drn/gnx/0aX8HP/CZi/wDjtfe9FHLH+WP3L+ui+4D4I/4dc/8ABPj/AKNL+Dn/AITMX/x2j/h1z/wT4/6NL+Dn/hMxf/Ha+96KOWP8sfuX9dF9wHwR/wAOuf8Agnx/0aX8HP8AwmYv/jtH/Drn/gnx/wBGl/Bz/wAJmL/47X3vRRyx/lj9y/rovuA+bPhb+x7+zF8ErfV7T4T/AAU8B+A7bX5rW41mHw9o8dkmozWKTR2kl0AzGRrdLidYsnCiV8feNer/APCsPh//ANCpo/8A4Cr/AI13lFedXyTJsTVnXxOU5ZiK1Rp1K1bA4WrVm0lFOdSdKUpWjGMVdu0UktND0qGc5vhqUKGGzXMcPRppqnRo43E0qUE5czUKcKsYRTl7zSSTbb3bPJNY+A/wf8QSQzax8PvDd/Nbo0cUs1ivmrGxDGPzFZWKBgWVWJVWZioBZs43/DM/wH/6Jh4X/wDANv8A45XulFevhK1bAYelhMDVq4PC0U40cNhak8Ph6SlJzkqdGk4U4Jzbm1GKvJuT1bZ8tmOQZFnGNr5lm2S5TmmY4lwlicfmOXYPG43ESp04UoOvisTRq16rhSpU6cHUnJxpwhBWjFJeF/8ADM/wH/6Jh4X/APANv/jlH/DM/wAB/wDomHhf/wAA2/8Ajle6UV0/2jmH/Qfjf/Cqv/8ALPJfccX+pvCH/RK8Of8Ahkyzy/6hvJHhf/DM/wAB/wDomHhf/wAA2/8AjlH/AAzP8B/+iYeF/wDwDb/45XulFH9o5h/0H43/AMKq/wD8s8l9wf6m8If9Erw5/wCGTLPL/qG8keGD9mj4Dggj4YeFwQcg/Y24I/7aV6hqvhHw1rcVlDq2jWN/Fp0XkWMdxEJFtotsaeXECflXbFGvrhRzXR0VxY3/AIU6Lw+Y/wDChQdm6GN/2qi7SjJXp1/aQdpQhJe7vGL3St6+T5bl3D1aeIyDAYLJMRUUVUr5RhaGW1pqEZxip1cHCjOSjGpUilKTSVSaWk5X8+n+FPw5uoJra58HaHcW9xFJBPBNZRyRTQyqUkiljfKvHIjFXRgVZSQQQa+Vz/wTM/YJYlj+yx8JSWJJP/CPLySck/67ua+6aKyy2hQyf239kUaWV/WPZ/WP7PpwwXt/Y8/svbfVlT9p7P2lT2fPzcnPPltzyv3Zo3nfsP7Zbzf6t7T6t/abeP8Aq/tfZ+19j9a9r7L2nsqXtOTl5/Zw5r8qt8z/AAi/Y3/Ze+Aviibxr8Hfgl4E+HviufSrrQ5td8OaStnqD6TezWtxdWJm8xz5E81laySKANzQpzgV9MUUV01a1avP2lerUrVGknOrOVSbUVZLmm27JaJX0OfD4bD4SmqOFw9HDUk21SoUoUaab3ahTjGN31drvqfPHxk/ZN/Zw/aE1bStd+NXwe8FfEfWNDsJNL0nUfE2lrfXVhp8s7XMlpBJvQrC1wzSlDn52J7142f+CZX7A7Ag/sr/AAlIPUf8I+B0Oe03qK+66K1jjMZGk6EcViY0HGUHRjXqKk4T+OPs1Lk5ZXfNG1nd3WrMJ5ZltTEfW55fgp4pTjUWJlhaEq6qQ5eSarODqc8OSPLLmvHlVmrHwj/w7G/YE/6NW+Ev/gg/+316Z4A/Yq/ZU+Fum+NdH+HvwM8BeE9L+I3hu78IeOLDSNKMFt4m8NX8ckV5o2qRGVluLK4jlkSWMgFlYjOK+oqK5bLsvuO4+Ef+HY37Ah/5tW+Ev/gg/wDt9H/Dsb9gT/o1b4S/+CD/AO3193UUWXZfcgPhH/h2N+wJ/wBGrfCX/wAEH/2+vcvg1+y3+z3+z1/b3/ClfhL4O+HB8T/YBr7eGdNWybVBpn2r7ALtt7s62pvboxLkAGZyQTjHvlFFl2X3AVfsVp/zwj/Kj7Faf88I/wAqtUUyeWP8sfuX9dF9x8nfEv8AYW/ZE+MfjDUfH/xP+AHw68aeMtXSzj1PxFrejC41K+SwtIbGzFxOJU8z7PZ28NvGSMiKJFyQorgv+HY37An/AEat8Jf/AAQf/b6+7qKVl2X3Ir/hvktEvktD4R/4djfsCf8ARq3wl/8ABB/9vrV0P/gnD+w
14a1nSvEOg/sy/C7Stb0PULPVdJ1Kz0MxXVhqNhPHdWd3byCfKTW9xFHLG3OGUHBr7Zoosuy+5AfHHjP/AIJ8/sW/EPxRrfjXxt+zl8NfEvivxHfSalrmu6roxudQ1O+mAEl1dztODJK4VdzEc4Fc0n/BMj9gaN1dP2WPhMjowZHXQSGVlOVZSJ8gggEEcg192UU2k221dvVt6tvu29yYxjCMYQiowhFRjGKSjGMVaMYpaJJJJJaJKyOXvPBXhS/8Hy/D+90LT7nwZNoK+GJfDk0O/TJNAWzXT10toGJ3WoslW3CFs+WAM55rwH/hiT9k7/og/wAPf/BMn/xdfU1FcGMyrK8wnCpj8twGNnTi4U54vB4fEzhBy5nGEq1ObjFy1cU0m9bH0OT8V8UcPUq1DIOJM+yShiKiq16OUZxmGW0q1WMVGNWrTweIowqVFFKKnOMpKKUU7aHyz/wxJ+yd/wBEH+Hv/gmT/wCLo/4Yk/ZO/wCiD/D3/wAEyf8AxdfU1Fcf+rXDn/QgyX/w14Hy/wCnHkj1/wDiJfiN/wBF/wAa/wDiU555f9R3kjwrwL+zJ8Avhl4ht/FngL4U+D/C3iO0gube21jSdMS3vYIbuMw3McUu5igmiJjcgAlCVzgnMXjf9l79n34keI73xf46+E3g3xP4m1JLZL/WtV0tJ767WztorO1E825S/kWsEMCEjIjjRc4UV71RW/8AYmTfVlgv7Iyz6mqyxCwv1DC/VlXUeT2/sPZey9tye57Tl5+X3ea2hwLjbjJZk85XFvEyzh4RYB5r/b2af2k8CqirLBPHfWvrTwntYqr9X9r7H2iU+Tm1Pln/AIYk/ZO/6IP8Pf8AwTJ/8XR/wxJ+yd/0Qf4e/wDgmT/4uvqaisP9WuHP+hBkv/hrwPl/048kd/8AxEvxG/6L/jX/AMSnPPL/AKjvJHyz/wAMSfsnjp8B/h7/AOCZP/i6+l9K0vT9D0zT9G0m0hsNL0qyttO06yt12QWllZwpb2ttCuTtjhhjSNASSFUZJPNX6K7MHlWWZdKcsBl2BwMqiUaksJhMPhpVIx+GM5UacHJReqUm0ndrVnjZxxTxNxDChTz/AIiz3PKeGlOeGp5vm+PzKGHnUjGNSdCONxFaNKU4wjGcoKLkopSbSQUUUV3nhBRRRQAUUUUAFFFFAEqdD9f6CihOh+v9BRQBG3U/U/zpKVup+p/nSUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQBKnQ/X+gooTofr/AEFFAEbdT9T/ADpKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAIKKnwPQfkKMD0H5CgCCip8D0H5CjA9B+QoAgoqfA9B+QowPQfkKAGp0P1/oKKfgDoMUUAf/2Q==)** Import what we neeed for this work ::
###Code
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
import seaborn as sns
import scipy
import random
import math
import dabl
from scipy.stats.mstats import winsorize
from tqdm import tqdm
from sklearn import preprocessing
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import fbeta_score, make_scorer
from sklearn import svm
import plotly.graph_objects as go
import plotly.express as px
import plotly.figure_factory as ff
%matplotlib inline
from IPython.display import clear_output
!pip install dabl
clear_output()
###Output
_____no_output_____
###Markdown
**Mount Google Drive in Colab:**
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Loading the CSV data file from Google Drive:
###Code
data = pd.read_csv('/content/drive/MyDrive/machine learning Lab Project/customer.csv')
data_copy = data.copy()
###Output
_____no_output_____
###Markdown
Displaying the dataset:
###Code
data
###Output
_____no_output_____
###Markdown
Showing the head of the data:
###Code
data.head()
###Output
_____no_output_____
###Markdown
Showing the tail of the data:
###Code
data.tail()
###Output
_____no_output_____
###Markdown
Inspecting dataset information and summary statistics:
###Code
data.info()
data.columns
data.describe().T
###Output
_____no_output_____
###Markdown
Data cleaning:
###Code
data.isnull().sum()
dabl_data=dabl.clean(data, verbose=1 )
types = dabl.detect_types(dabl_data)
types
Target ="Response"
ID="ID"
X = data.drop([ID,Target],axis=1)
Y = data[Target]
data.head()
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2240 entries, 0 to 2239
Data columns (total 29 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID 2240 non-null int64
1 Year_Birth 2240 non-null int64
2 Education 2240 non-null object
3 Marital_Status 2240 non-null object
4 Income 2216 non-null float64
5 Kidhome 2240 non-null int64
6 Teenhome 2240 non-null int64
7 Dt_Customer 2240 non-null object
8 Recency 2240 non-null int64
9 MntWines 2240 non-null int64
10 MntFruits 2240 non-null int64
11 MntMeatProducts 2240 non-null int64
12 MntFishProducts 2240 non-null int64
13 MntSweetProducts 2240 non-null int64
14 MntGoldProds 2240 non-null int64
15 NumDealsPurchases 2240 non-null int64
16 NumWebPurchases 2240 non-null int64
17 NumCatalogPurchases 2240 non-null int64
18 NumStorePurchases 2240 non-null int64
19 NumWebVisitsMonth 2240 non-null int64
20 AcceptedCmp3 2240 non-null int64
21 AcceptedCmp4 2240 non-null int64
22 AcceptedCmp5 2240 non-null int64
23 AcceptedCmp1 2240 non-null int64
24 AcceptedCmp2 2240 non-null int64
25 Complain 2240 non-null int64
26 Z_CostContact 2240 non-null int64
27 Z_Revenue 2240 non-null int64
28 Response 2240 non-null int64
dtypes: float64(1), int64(25), object(3)
memory usage: 507.6+ KB
###Markdown
Splitting the data into training and test sets:
###Code
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.4, random_state=45,stratify=Y)
print(X_train.shape,X_test.shape)
print(Y_train.shape,Y_test.shape)
train = pd.concat([X_train,Y_train], axis=1)
train.head()
###Output
_____no_output_____
###Markdown
Fitting and inspecting a baseline classifier with dabl:
###Code
Test = dabl.SimpleClassifier(random_state=42).fit(train, target_col=Target)
Test.current_best_
dabl.explain(Test)
Test.est_
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
data = data.set_index('ID')
data['Age'] = datetime.now().year - data['Year_Birth']
data_copy['Age'] = datetime.now().year - data_copy['Year_Birth']
print("Columns with string datatype are:")
for col in data.columns:
if data[col].dtypes == object:
print(col)
data_Edu = pd.DataFrame(data['Education'].value_counts()).reset_index()
data_Edu.columns = ['Education', 'Count']
data['Education'] = np.where(data['Education'] == '2n Cycle', 'Master', data['Education'])
data_Edu = pd.DataFrame(data['Education'].value_counts()).reset_index()
data_Edu.columns = ['Education', 'Count']
fig = px.bar(data_Edu,
x='Education',
y='Count',
color='Education')
fig.update_layout(width=800, height=400, title='Education ')
fig.show()
fig = plt.figure(figsize=(10,6))
plt.hist(data.Year_Birth, color='#adc987')
plt.ylabel('Number of Person')
plt.xlabel('Birth Years')
plt.title('Customers Birth Year Distribution')
data_Mar = pd.DataFrame(data['Marital_Status'].value_counts()).reset_index()
data_Mar.columns = ['Marital_Status', 'Count']
data_Mar
mar_stat = ['Single', 'Widow', 'Alone', 'Absurd', 'YOLO']
data['Marital_Status'] = np.where(data['Marital_Status'].isin(mar_stat), 'Single', data['Marital_Status'])
data['Marital_Status'] = np.where(data['Marital_Status'].isin(['Married', 'Together']), 'Relationship', 'Single')
data_Mar = pd.DataFrame(data['Marital_Status'].value_counts()).reset_index()
data_Mar.columns = ['Marital_Status', 'Count']
data_Mar
data_Edu = pd.DataFrame(data['Marital_Status'].value_counts()).reset_index()
data_Edu.columns = ['Marital_Status', 'Count']
fig = px.bar(data_Edu,
x='Marital_Status',
y='Count',
color='Marital_Status')
fig.update_layout(width=800, height=400, title='Relation ')
fig.show()
data['Dt_Customer'] = pd.to_datetime(data['Dt_Customer'], utc=False)
print(f"The youngest customer is {data['Age'].min()} years old and oldest customer is {data['Age'].max()} years old")
data.isna().sum()[lambda x: x>0]
data['Income'] = data['Income'].fillna(data['Income'].mean())
age_data = data.groupby(by = ['Year_Birth']).agg({'Income':'mean'}).reset_index()
age_data['Year_Birth'] = 2021 - age_data['Year_Birth']
fig = px.bar(age_data, x = 'Year_Birth', y = 'Income')
fig.update_layout(height=400, width=700, title_text="Age Vs Average Income")
fig.show()
###Output
_____no_output_____
###Markdown
Shopping product sales analysis:
###Code
from plotly.subplots import make_subplots
import plotly.graph_objects as go
def create_interval_column(age_data, interval):
inter = []
interval = interval
j = 0
while (j<100):
j = j + interval
inter.append(j)
interval_column = []
for i in age_data['Year_Birth']:
for j in range(len(inter)-1):
if inter[j]<i <=inter[j+1]:
interval_column.append(str(inter[j]) + '-' + str(inter[j+1]))
break
return interval_column
interval_you_want_to_plot = 10
columns_to_be_analyzed = ['MntWines', 'MntFruits' ,'MntMeatProducts', 'MntFishProducts', 'MntSweetProducts', 'MntGoldProds']
age_data = data.groupby(by = ['Year_Birth']).agg({'MntWines':'sum','MntFruits':'sum' ,'MntMeatProducts':'sum',
'MntFishProducts':'sum', 'MntSweetProducts':'sum', 'MntGoldProds':'sum' }).reset_index()
age_data['Year_Birth'] = 2021 - age_data['Year_Birth']
age_data.drop([0,1,2], axis = 0, inplace=True)
interval_column = create_interval_column(age_data, interval=interval_you_want_to_plot)  # bucket ages into intervals of interval_you_want_to_plot years
age_data['Interval_column'] = interval_column
fig = make_subplots(rows = 3, cols = 3, subplot_titles=columns_to_be_analyzed)
cnt = 0
for i in range(2):
for j in range(3):
fig.add_trace(go.Bar(x = age_data['Interval_column'].to_numpy(),
y = age_data[columns_to_be_analyzed[cnt]].to_numpy()),
row = i+1, col=j+1 )
cnt+=1
fig.update_layout( title = 'Columns Vs Amount of quantity',font=dict(
family="Courier New, monospace",
size=12,
color="#7f7f7f"),
showlegend=False,autosize=True,
width=1200,
height=800)
fig.show()
Edu_data = data.groupby(by = ['Education']).agg({'MntWines':'sum','MntFruits':'sum' ,'MntMeatProducts':'sum',
'MntFishProducts':'sum', 'MntSweetProducts':'sum', 'MntGoldProds':'sum' }).reset_index()
fig = make_subplots(rows = 3, cols = 3, subplot_titles=columns_to_be_analyzed)
cnt = 0
for i in range(2):
for j in range(3):
fig.add_trace(go.Bar(x = Edu_data['Education'].to_numpy(),
y = Edu_data[columns_to_be_analyzed[cnt]].to_numpy()), row = i+1, col=j+1 )
cnt+=1
fig.update_layout( title = 'Columns Vs Amount of quantity',font=dict(
family="Courier New, monospace", size=12, color="#7f7f7f"), showlegend=False,autosize=True,width=1200,height=800)
fig.show()
Marital_data = data.groupby(by = ['Marital_Status']).agg({'MntWines':'sum','MntFruits':'sum' ,'MntMeatProducts':'sum',
'MntFishProducts':'sum', 'MntSweetProducts':'sum', 'MntGoldProds':'sum' }).reset_index()
fig = make_subplots(rows = 3, cols = 3, subplot_titles=columns_to_be_analyzed)
cnt = 0
for i in range(2):
for j in range(3):
fig.add_trace(go.Bar(x = Marital_data['Marital_Status'].to_numpy(),
y = Marital_data[columns_to_be_analyzed[cnt]].to_numpy()), row = i+1, col=j+1 )
cnt+=1
fig.update_layout( title = 'Columns Vs Amount of quantity',font=dict(
family="Courier New, monospace", size=12, color="#7f7f7f"), showlegend=False,autosize=True,width=1200,height=800)
fig.show()
interval_you_want_to_plot = 10
columns_to_be_analyzed = ['NumDealsPurchases', 'NumWebPurchases', 'NumCatalogPurchases', 'NumStorePurchases', 'NumWebVisitsMonth']
age_data = data.groupby(by = ['Year_Birth']).agg({'NumDealsPurchases':'sum','NumWebPurchases':'sum' ,'NumCatalogPurchases':'sum',
'NumStorePurchases':'sum', 'NumWebVisitsMonth':'sum' }).reset_index()
age_data['Year_Birth'] = 2021 - age_data['Year_Birth']
age_data.drop([0,1,2], axis = 0, inplace=True)
interval_column = create_interval_column(age_data, interval=interval_you_want_to_plot)  # bucket ages into intervals of interval_you_want_to_plot years
age_data['Interval_column'] = interval_column
fig = make_subplots(rows = 2, cols = 3, subplot_titles=columns_to_be_analyzed)
cnt = 0
for i in range(2):
for j in range(3):
if cnt == 5 :
break
fig.add_trace(go.Bar(x = age_data['Interval_column'].to_numpy(),
y = age_data[columns_to_be_analyzed[cnt]].to_numpy()), row = i+1, col=j+1 )
cnt+=1
fig.update_layout( title = 'Columns Vs Number of Purchase',font=dict(
family="Courier New, monospace",size=12,color="#7f7f7f"),showlegend=False,autosize=True,width=1200,height=800)
fig.show()
Edu_data = data.groupby(by = ['Education']).agg({'NumDealsPurchases':'sum','NumWebPurchases':'sum' ,'NumCatalogPurchases':'sum',
'NumStorePurchases':'sum', 'NumWebVisitsMonth':'sum' }).reset_index()
fig = make_subplots(rows = 2, cols = 3, subplot_titles=columns_to_be_analyzed)
cnt = 0
for i in range(2):
for j in range(3):
if cnt == 5 :
break
fig.add_trace(go.Bar(x = Edu_data['Education'].to_numpy(),
y = Edu_data[columns_to_be_analyzed[cnt]].to_numpy()), row = i+1, col=j+1 )
cnt+=1
fig.update_layout( title = 'Columns Vs Number of Purchase',font=dict(
family="Courier New, monospace",size=12,color="#7f7f7f"),showlegend=False,autosize=True,width=1200,height=800)
fig.show()
PALETTE = sns.color_palette("Set2")
num = data.filter(regex='Num[^Deals].+Purchases').sum(axis=0)
sizes = dict(num)
plt.figure(figsize=(12, 8))
plt.title("Shopping types proportions")
plt.pie(sizes.values(), labels=['Website', 'Catalog', 'Store'], autopct="%.1f%%", pctdistance=0.85, shadow=True, colors=PALETTE)
plt.legend(title="Purchased at", labels=['Website', 'Catalog', 'Store'], bbox_to_anchor=(1, 1))
plt.show()
fig = plt.figure(figsize=(10,6))
plt.scatter(data.Age, data.NumWebPurchases, color='#88c999', alpha=0.4, label='Web Buys')
plt.scatter(data.Age, data.NumStorePurchases, color='#5f79c9', alpha=0.4, label='Store Buys')
plt.legend()
plt.ylabel('Web and Store Purchases')
plt.xlabel('Customers Age')
plt.title('Web vs Store Purchases According to Age')
###Output
_____no_output_____
###Markdown
Customer Offer Acceptance Prediction
###Code
# imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
import warnings
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import LinearSVC
# Selected Columns
features=['Income', 'Kidhome', 'Teenhome', 'Recency', 'MntWines', 'MntFruits', 'MntMeatProducts', 'MntFishProducts', 'MntSweetProducts', 'MntGoldProds', 'NumWebPurchases', 'NumCatalogPurchases', 'NumStorePurchases', 'AcceptedCmp3', 'AcceptedCmp4', 'AcceptedCmp5', 'AcceptedCmp1', 'AcceptedCmp2', 'Education', 'Marital_Status']
target='Response'
# X & Y
X=data[features]
Y=data[target]
# Data Cleaning
def NullClearner(value):
if(isinstance(value, pd.Series) and (value.dtype in ['float64','int64'])):
value.fillna(value.mean(),inplace=True)
return value
elif(isinstance(value, pd.Series)):
value.fillna(value.mode()[0],inplace=True)
return value
else:return value
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
Y=NullClearner(Y)
# Handling AlphaNumeric Features
X=pd.get_dummies(X)
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
columns=X.columns
X=MinMaxScaler().fit_transform(X)
X=pd.DataFrame(data = X,columns = columns)
X.head()
# Data split for training and testing
X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
#Model Parameters
param={'C': 1, 'loss': 'squared_hinge', 'tol': 0.05092207964551096, 'penalty': 'l2'}
# Model Initialization
model=LinearSVC(**param)
model.fit(X_train,Y_train)
# Confusion Matrix
plot_confusion_matrix(model,X_test,Y_test,cmap=plt.cm.Blues)
# Classification Report
print(classification_report(Y_test,model.predict(X_test)))
###Output
_____no_output_____
###Markdown
###Code
!wget http://download.tensorflow.org/models/object_detection/faster_rcnn_nas_coco_2018_01_28.tar.gz
!tar -zxvf /content/faster_rcnn_nas_coco_2018_01_28.tar.gz
!rm -rf $OUTPUT_PATH
!python -m object_detection.model_main \
--pipeline_config_path=/content/faster_rcnn_nas_coco_2018_01_28/pipeline.config \
--model_dir=$OUTPUT_PATH \
--num_train_steps=$NUM_TRAIN_STEPS \
--num_eval_steps=100
!cp /content/models/research/object_detection/protos /usr/local/lib/python3.6/dist-packages/object_detection/ -r
import tf_slim as slim
###Output
Requirement already satisfied: tf_slim in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied: absl-py>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from tf_slim) (0.9.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from absl-py>=0.2.2->tf_slim) (1.15.0)
###Markdown
Logistic Regression
###Code
log_model = LogisticRegression(solver='liblinear', penalty='l2', random_state=42, C=10)
log_model.fit(x_train, y_train)
log_train_predictions = log_model.predict(x_train)
log_accuracy_train = accuracy_score(y_train, log_train_predictions)
log_predictions = log_model.predict(x_cv)
log_accuracy_cv = accuracy_score(y_cv, log_predictions)
print(f"[Logistic Regression] Training Accuracy: {log_accuracy_train * 100}")
print(f"[Logistic Regresion] Cross-Validation Accuracy: {log_accuracy_cv * 100}")
log_report = classification_report(y_cv, log_predictions)
print(log_report)
train_sizes, train_scores, test_scores = learning_curve(log_model, x_norm, y_, cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.ylabel('Score')
plt.legend(loc="lower right")
plt.grid()
log_cm = confusion_matrix(y_cv, log_predictions)
sns.heatmap(log_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
log_test_preds = log_model.predict(x_test_norm)
log_test_accuracy = accuracy_score(y_test, log_test_preds)
print(f"[Logistic Regression] Test data accuracy: {log_test_accuracy * 100}")
log_test_cm = confusion_matrix(y_test, log_test_preds)
sns.heatmap(log_test_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
###Output
_____no_output_____
###Markdown
Decision Trees
###Code
#tree_model = DecisionTreeClassifier(criterion='gini', max_depth=2, max_leaf_nodes=2, min_samples_leaf=1, min_samples_split=2)
tree_model = DecisionTreeClassifier(max_leaf_nodes=10, random_state=42, criterion='entropy', max_depth=7)# max_depth=7
tree_model.fit(x_train, y_train)
tree_training_predictions = tree_model.predict(x_train)
tree_training_accuracy = accuracy_score(y_train, tree_training_predictions)
tree_predictions = tree_model.predict(x_cv)
tree_accuracy = accuracy_score(y_cv, tree_predictions)
print(f"[Decision Tree] Training Accuracy: {tree_training_accuracy * 100}")
print(f"[Decision Tree] Cross-Validation Accuracy: {tree_accuracy * 100}")
train_sizes, train_scores, test_scores = learning_curve(tree_model, x_norm, y_, cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.ylabel('Score')
plt.legend(loc="lower right")
plt.grid()
tree_report = classification_report(y_cv, tree_predictions)
print(tree_report)
tree_cm = confusion_matrix(y_cv, tree_predictions)
sns.heatmap(tree_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
tree_testing_preds = tree_model.predict(x_test_norm)
tree_testing_acc = accuracy_score(y_test, tree_testing_preds)
print(f"[Decision Tree] Testing Data Accuracy: {tree_testing_acc * 100}")
tree_test_cm = confusion_matrix(y_test, tree_testing_preds)
sns.heatmap(tree_test_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
plt.figure(figsize=(25, 20))
plot_tree(tree_model, feature_names=df.keys()[:-1], class_names=['0', '1'], filled=True)
plt.show()
###Output
_____no_output_____
###Markdown
KNN
###Code
knn_model = KNeighborsClassifier(n_neighbors=15, p=3, weights='uniform', leaf_size=1)
knn_model.fit(x_train, y_train)
knn_train_preds = knn_model.predict(x_train)
knn_train_acc = accuracy_score(y_train, knn_train_preds)
knn_preds = knn_model.predict(x_cv)
knn_acc = accuracy_score(y_cv, knn_preds)
print(f"[KNN] Training Accuracy: {knn_train_acc * 100}")
print(f"[KNN] Cross-Validation Accuracy: {knn_acc * 100}")
train_sizes, train_scores, test_scores = learning_curve(knn_model, x_norm, y_, cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.ylabel('Score')
plt.legend(loc="lower right")
plt.grid()
knn_report = classification_report(y_cv, knn_preds)
print(knn_report)
knn_cm = confusion_matrix(y_cv, knn_preds)
sns.heatmap(knn_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
knn_testing_preds = knn_model.predict(x_test_norm)
knn_testing_acc = accuracy_score(y_test, knn_testing_preds)
print(f"[KNN] Testing Accuracy: {knn_testing_acc * 100}")
knn_test_cm = confusion_matrix(y_test, knn_testing_preds)
sns.heatmap(knn_test_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
###Output
_____no_output_____
###Markdown
Random Forest Classification
###Code
forest_model = RandomForestClassifier(max_depth=7, random_state=42)
forest_model.fit(x_train, y_train)
forest_training_preds = forest_model.predict(x_train)
forest_training_acc = accuracy_score(y_train, forest_training_preds)
forest_preds = forest_model.predict(x_cv)
forest_acc = accuracy_score(y_cv, forest_preds)
print(f"[Random Forest Classification] Training Accuracy: {forest_training_acc * 100}")
print(f"[Random Forest Classification] Cross-validation Accuracy: {forest_acc * 100}")
train_sizes, train_scores, test_scores = learning_curve(forest_model, x_norm, y_, cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
plt.ylabel('Score')
plt.legend(loc="upper right")
plt.grid()
forest_report = classification_report(y_cv, forest_preds)
print(forest_report)
forest_cm = confusion_matrix(y_cv, forest_preds)
sns.heatmap(forest_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
forest_testing_preds = forest_model.predict(x_test_norm)
forest_testing_acc = accuracy_score(y_test, forest_testing_preds)
print(f"[Random Forest Classification] Testing Accuracy: {forest_testing_acc * 100}")
forest_test_cm = confusion_matrix(y_test, forest_testing_preds)
sns.heatmap(forest_test_cm, annot=True, linewidths=0.1)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()
log_all_scores = [log_accuracy_train, log_accuracy_cv, log_test_accuracy]
tree_all_scores = [tree_training_accuracy, tree_accuracy, tree_testing_acc]
knn_all_scores = [knn_train_acc, knn_acc, knn_testing_acc]
forest_all_scores = [forest_training_acc, forest_acc, forest_testing_acc]
column_titles = ['Training', 'Cross_validation', 'Testing']
rows_titles = ['Logistic_regression', 'Decision_tree', 'K-Nearest Neighbor (KNN)', 'Random Forest Classifier']
scores_df = pd.DataFrame(data=np.row_stack((log_all_scores, tree_all_scores, knn_all_scores, forest_all_scores)),
index=rows_titles, columns=column_titles)
scores_df
###Output
_____no_output_____ |
Compare1.ipynb | ###Markdown
Compare1.eps instruction This notebook compares the model with regime switching against the model without regime switching. The model with regime switching is computed by our pricing formula with two sets of parameters, as given below (regime_VS, discrete). The model without regime switching is computed by equating regime two to regime one (regime_VS2, discrete2). The first set of parameters corresponds to the 'good' economic state (State I), with a higher interest rate, mean-reversion rate and jump intensity. We call it 'good' because it leads to a higher variance and, consequently, a higher fair strike price. The second set of parameters is obtained by equating both regime states to the 'poor' economic state (State II). Two identical inputs eliminate the effect of the regime, and our model degenerates to the SVJ model without consideration of regime switching. The 'poor' economic state is the one compared in this programme. As a result, we can see from the figure that, with the two regime states acting together, the fair strike price is decreased.
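For orientation (our own sketch, with notation inferred from the code below rather than stated in the original), the discretely sampled fair strike being approximated, under the simple-return payoff convention and with $N = AF \cdot T$ sampling dates per starting regime, is
$$K_{\mathrm{var}} \;=\; \frac{100^{2}}{T}\sum_{k=1}^{N}\mathbb{E}^{\mathbb{Q}}\!\left[\left(\frac{S_{t_{k}}-S_{t_{k-1}}}{S_{t_{k-1}}}\right)^{2}\right],$$
where each expectation is assembled from the characteristic function evaluated at $-2i$, $-i$ and $0$ through $\mathbb{E}\big[(S_{t_{k}}/S_{t_{k-1}}-1)^{2}\big]=\mathbb{E}\big[(S_{t_{k}}/S_{t_{k-1}})^{2}\big]-2\,\mathbb{E}\big[S_{t_{k}}/S_{t_{k-1}}\big]+1$.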
###Code
# Parameter of the 'good' economic status (Benchmark)
from IPython.display import Image
Image("set1.png")
# parameters of 'poor' economic status. !!!Benchmark
from IPython.display import Image
Image("set2.png")
def Phi1(T,AF,Q,Delta,Stock1,Stock2,Regime,Jump1,Jump2):
###############################################################################
# PARAMETER INPUT #
###############################################################################
#Stock1 = Stock(100,0.087**2,AF,0.06,0.14,3.46,0.006704,T,-0.82)
#Stock2 = Stock(100,0.087**2,AF,0.03,0.14,3.46,0.002852,T,-0.82)
#Regime = Regime2(Q);
#S0, y0, AF, r, sigma, a, b, N, rho, mu,sigma_J,lambda_
#Jump1 = Jump_Merton(100,0.087**2,AF,0.06,0.14,3.46,0.006704,T,-0.82,0.05,0.086,0.)
#Jump2 = Jump_Merton(100,0.087**2,AF,0.03,0.14,3.46,0.002852,T,-0.82,0.06,0.086,0.3)
#####################################################################################
# ###############################Numerical Integration########################
n = 10 # time step of integration
X = np.linspace(T-Delta,T,n+1)
phi1_1_2j = [];phi1_1_1j = [];phi1_1_0j = [];
phi1_2_2j = [];phi1_2_1j = [];phi1_2_0j = [];
for i in range(len(X)):
x1 = Jump1.L(-2j,X[i]); x2=Jump1.L(-1j,X[i]);x3=Jump1.L(0,X[i]);
phi1_1_2j.append(x1); phi1_1_1j.append(x2); phi1_1_0j.append(x3);
y1 = Jump2.L(-2j,X[i]); y2=Jump2.L(-1j,X[i]);y3=Jump2.L(0,X[i]);
phi1_2_2j.append(y1); phi1_2_1j.append(y2);phi1_2_0j.append(y3);
phI1_1_2j = np.trapz(phi1_1_2j,dx=Delta/n);phI1_2_2j = np.trapz(phi1_2_2j,dx=Delta/n);
phI1_1_1j = np.trapz(phi1_1_1j,dx=Delta/n);phI1_2_1j = np.trapz(phi1_2_1j,dx=Delta/n);
phI1_1_0j = np.trapz(phi1_1_0j,dx=Delta/n);phI1_2_0j = np.trapz(phi1_2_0j,dx=Delta/n);
#################################Diagonal Matrix#########################################
phi1_Matrix_2j = np.diag(np.array([phI1_1_2j,phI1_2_2j]));
phi1_Matrix_1j = np.diag(np.array([phI1_1_1j,phI1_2_1j]));
phi1_Matrix_0j = np.diag(np.array([phI1_1_0j,phI1_2_0j]));
#######################Phi1_characteristic function#####################################
Phi1_2j = Regime.character(phi1_Matrix_2j,T-Delta,T);
Phi1_1j = Regime.character(phi1_Matrix_1j,T-Delta,T);
Phi1_0j = Regime.character(phi1_Matrix_0j,T-Delta,T);
return Phi1_2j, Phi1_1j, Phi1_0j
def Phi2(T,AF,Q,Delta,Stock1,Stock2,Regime):
###############################################################################
# PARAMETER INPUT #
###############################################################################
#
#Stock1 = Stock(100,0.087**2,AF,0.06,0.14,3.46,0.006704,T,-0.82)# S0, y0, AF, r, sigma, a, b, N, rho
#Stock2 = Stock(100,0.087**2,AF,0.03,0.14,3.46,0.002852,T,-0.82)
#Regime = Regime2(Q);
#
###############################################################################
n = 10 # time step of integration
X = np.linspace(0,T-Delta,n+1)
phi2_1_2j = [];phi2_2_2j = [];
for i in range(len(X)):
H1 = Stock1.H(X[i]);H2 = Stock2.H(X[i]);
x = Stock1.a*Stock1.b*H1;y = Stock2.a*Stock2.b*H2;
phi2_1_2j.append(x);phi2_2_2j.append(y);
#print(H1,X[i],T-Delta)
phI2_1_2j = np.trapz(phi2_1_2j,dx=(T-Delta)/n);phI2_2_2j = np.trapz(phi2_2_2j,dx=(T-Delta)/n);
phi2_Matrix = np.diag(np.array([phI2_1_2j,phI2_2_2j]))
Phi2 = Regime.character(phi2_Matrix,0,T-Delta)
return Phi2,Stock1.H(0)
def regime_VS(AF):
###############################################################################
# PARAMETER INPUT #
###############################################################################
Delta = 1/AF
Q = np.array([[-0.1,0.1],[0.4,-0.4]])#transition matrix
#Stock1 = Stock(100,0.087**2,252,0.06,0.14,3.46,0.006704,1,-0.82)# S0, y0, AF, r, sigma, a, b, T, rho
#Stock2 = Stock(100,0.087**2,252,0.03,0.14,3.46,0.002852,1,-0.82)
#S0, y0, AF, r, sigma, a, b, N, rho, mu,sigma_J,lambda_
#Jump1 = Jump_Merton(100,0.087**2,252,0.06,0.14,3.46,0.006704,1,-0.82,0.05,0.086,0.)
#Jump2 = Jump_Merton(100,0.087**2,252,0.03,0.14,3.46,0.002852,1,-0.82,0.06,0.086,0.3)
Regime = Regime2(Q);
################################################################################
U = np.array([0,0])#initialize
T = 1
for k in range(0,AF*T):
t_k = (k+1)*Delta
Stock1 = Stock(1,0.05,AF,0.05,0.1,2,0.075,t_k,-0.4)# S0, y0, AF, r, sigma, a, b, T, rho
Stock2 = Stock(1,0.05,AF,0.03,0.14,3.46,0.002852,t_k,-0.82)
Jump1 = Jump_Merton(1,0.05,AF,0.05,0.1,2,0.075,t_k,-0.4,0.03,0.086,0.3)
Jump2 = Jump_Merton(1,0.05,AF,0.03,0.14,3.46,0.002852,t_k,-0.82,0.05,0.086,0.)
R = np.diag([np.exp(Stock1.r*Delta),np.exp(Stock2.r*Delta)])# matrix of interest rate
Phi1_2j,Phi1_1j,Phi1_0j = Phi1(t_k,AF,Q,Delta,Stock1,Stock2,Regime,Jump1,Jump2)
Phi2_,H1 = Phi2(t_k,AF,Q,Delta,Stock1,Stock2,Regime)
if t_k == Delta:
M = Stock1.M(-2j,0)
uk = Phi1_2j[1]*np.exp(M*Stock1.y0)-2*Phi1_1j[1]+Phi1_0j[1]
#Uk = np.matmul(R,uk)
Uk = uk
else:
uk = np.multiply(Phi1_2j[1],Phi2_[1])*np.exp(H1*Stock1.y0)-2*Phi1_1j[1]+Phi1_0j[1]
#Uk = np.matmul(R,uk)
Uk = uk
U = U+Uk
K = (U/T)*10000
return K
def discrete(AF):
Kvar = []
for t in AF:
K = regime_VS(t)
Kvar.append(K)
return(Kvar)
def regime_VS2(AF):
###############################################################################
# PARAMETER INPUT #
###############################################################################
Delta = 1/AF
Q = np.array([[-0.1,0.1],[0.4,-0.4]])#transition matrix
#Stock1 = Stock(100,0.087**2,252,0.06,0.14,3.46,0.006704,1,-0.82)# S0, y0, AF, r, sigma, a, b, T, rho
#Stock2 = Stock(100,0.087**2,252,0.03,0.14,3.46,0.002852,1,-0.82)
#S0, y0, AF, r, sigma, a, b, N, rho, mu,sigma_J,lambda_
#Jump1 = Jump_Merton(100,0.087**2,252,0.06,0.14,3.46,0.006704,1,-0.82,0.05,0.086,0.)
#Jump2 = Jump_Merton(100,0.087**2,252,0.03,0.14,3.46,0.002852,1,-0.82,0.06,0.086,0.3)
Regime = Regime2(Q);
################################################################################
U = np.array([0,0])#initialize
T = 1
for k in range(0,AF*T):
t_k = (k+1)*Delta
Stock1 = Stock(1,0.05,AF,0.05,0.1,2,0.075,t_k,-0.4)# S0, y0, AF, r, sigma, a, b, T, rho
Stock2 = Stock1
Jump1 = Jump_Merton(1,0.05,AF,0.05,0.1,2,0.075,t_k,-0.4,0.03,0.086,0.3)
Jump2 = Jump1
R = np.diag([np.exp(Stock1.r*Delta),np.exp(Stock2.r*Delta)])# matrix of interest rate
Phi1_2j,Phi1_1j,Phi1_0j = Phi1(t_k,AF,Q,Delta,Stock1,Stock2,Regime,Jump1,Jump2)
Phi2_,H1 = Phi2(t_k,AF,Q,Delta,Stock1,Stock2,Regime)
if t_k == Delta:
M = Stock1.M(-2j,0)
uk = Phi1_2j[1]*np.exp(M*Stock1.y0)-2*Phi1_1j[1]+Phi1_0j[1]
#Uk = np.matmul(R,uk)
Uk = uk
else:
uk = np.multiply(Phi1_2j[1],Phi2_[1])*np.exp(H1*Stock1.y0)-2*Phi1_1j[1]+Phi1_0j[1]
#Uk = np.matmul(R,uk)
Uk = uk
U = U+Uk
K = (U/T)*10000
return K
def discrete2(AF):
Kvar = []
for t in AF:
K = regime_VS2(t)
Kvar.append(K)
return(Kvar)
# final main()
from VS_class2 import Stock, Regime2, Jump_Merton, Jump_Kou
import matplotlib.pyplot as plt
import numpy as np
import math
from scipy import linalg
AF = range(5,251,5)
X = np.linspace(5,250,50)
# calculate discrete sols based AF
Kvar_d = discrete(AF)
K_d = list(zip(*Kvar_d))
# calculate discrete sols based AF
Kvar_d1 = discrete2(AF)
K_d1 = list(zip(*Kvar_d1))
# calculate continuous sols and copy to len(AF)
# K = Continuous()
# Kvar_c = [K[:] for i in range(len(AF))]
# K_c = list(zip(*Kvar_c))
# graph and compare discrete and continuous sols
fig = plt.figure() # an empty figure with no axes
fig, ax = plt.subplots(1)
ax.plot(X, K_d[0], color='darkblue', marker='o', fillstyle='top',\
linestyle='solid', linewidth=2,ms=5,label='Regime Switching Model with Jump')
#ax.plot(X, K_c[0], color='green',label='Continuous Kvar without jump')
#ax.plot(X, K_d[1], color='darkblue', marker='o', fillstyle='top',linestyle='solid', \
# linewidth=1,ms=5,label='Model with Regime Switching')
#ax.plot(X, K_c[0], color='green',label='Continuous Kvar without jump')
ax.plot(X, K_d1[1], color='violet', marker='v', fillstyle='top',linestyle='solid', \
linewidth=2,ms=3,label='Jump diffusion model without Regime Switching')
#ax.plot(X, K_c[1], color='cyan',label='Continuous Kvar without jump')
#ax.set_xlim(20, 250)
#ax.set_ylim(105, 130)
plt.xlabel('Observation Frequency', fontsize=13)
plt.ylabel('Kvar', fontsize=13)
ax.legend(fancybox=True, framealpha=0.5)
#plt.title("Simple Plot")
# print(K_d1)
# print(K_d)
#plt.savefig('Compare1.pdf', format='pdf', dpi=1000)
Outfile=open('Kvar_regime1.txt','a+')
Outfile.write(str(K_d))
Outfile.close()
Outfile=open('Kvar_noregime.txt','a+')
Outfile.write(str(K_d1))
Outfile.close()
plt.show()
import numpy as np
X = np.linspace(5,250,50)
print (X)
AF = range(5,251,5)
print(AF)
print(K_d1)
for i in AF:
print (i)
###Output
_____no_output_____ |
01_Binary-Classification/Binary-Classification.ipynb | ###Markdown
Binary Classification Task 1: Logistic Regression Importing the libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Loading the dataset
###Code
X_train_fpath = './data/X_train'
y_train_fpath = './data/Y_train'
X_test_fpath = './data/X_test'
output_fpath = './output_{}.csv'
# parse the csv files to numpy array
with open(X_train_fpath) as f:
next(f)
X_train = np.array([line.strip('\n').split(',')[1:] for line in f], dtype=float)
with open(y_train_fpath) as f:
next(f)
y_train = np.array([line.strip('\n').split(',')[1] for line in f], dtype=float)
with open(X_test_fpath) as f:
next(f)
X_test = np.array([line.strip('\n').split(',')[1:] for line in f], dtype=float)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
###Output
(54256, 510)
(54256,)
(27622, 510)
###Markdown
Defining data preprocessing functions
###Code
def _normalize(X, train=True, specified_column=None, X_mean=None, X_std=None):
"""
This function normalizes specific columns of X.
The mean and standard variance of training data will be reused when processing testing data.
Arguments:
X: data to be processed.
train: 'True' when processing training data. 'False' when processing testing data.
specified_column: indexes of the columns that will be normalized. If 'None', all columns will be normalized.
X_mean: mean value of the training data, used when train='False'.
X_std: standard deviation of the training data, used when train='False'.
Outputs:
X: normalized data.
X_mean: computed mean value of the training data.
X_std: computed standard deviation of the training data.
"""
if specified_column is None:
specified_column = np.arange(X.shape[1])
if train:
X_mean = np.mean(X[:, specified_column], 0).reshape(1, -1)
X_std = np.std(X[:, specified_column], 0).reshape(1, -1)
X[:, specified_column] = (X[:, specified_column] - X_mean) / (X_std + 1e-8)
return X, X_mean, X_std
def _train_dev_split(X, y, dev_ratio=0.25):
"""
This function spilts data into training set and development set.
"""
train_size = int(len(X) * (1 - dev_ratio))
return X[:train_size], y[:train_size], X[train_size:], y[train_size:]
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
# Normalizing the training and testing data
X_train, X_mean, X_std = _normalize(X_train, train=True)
X_test, _, _ = _normalize(X_test, train=False, specified_column=None, X_mean=X_mean, X_std=X_std)
# Spliting the data into training and development set
dev_ratio = 0.1
X_train, y_train, X_dev, y_dev = _train_dev_split(X_train, y_train, dev_ratio=dev_ratio)
train_size = X_train.shape[0]
dev_size = X_dev.shape[0]
test_size = X_test.shape[0]
data_dim = X_train.shape[1]
print('Size of the training set: {}'.format(train_size))
print('Size of the development set: {}'.format(dev_size))
print('Size of the testing set: {}'.format(test_size))
print('Size of the data dimension: {}'.format(data_dim))
###Output
Size of the training set: 48830
Size of the development set: 5426
Size of the testing set: 27622
Size of the data dimension: 510
###Markdown
Defining some useful functions- Some functions that will be repeatedly used when iteratively updating the parameters.
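For reference, `_sigmoid` and `_f` below implement the standard logistic regression hypothesis, and `_predict` simply thresholds it at 0.5:
$$f_{w,b}(x)=\sigma\!\left(w^{\top}x+b\right),\qquad \sigma(z)=\frac{1}{1+e^{-z}},\qquad \hat{y}=\mathbb{1}\!\left[f_{w,b}(x)\ge 0.5\right].$$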
###Code
def _shuffle(X, y):
"""
This function shuffles two equal-length list/array, X and Y, together.
"""
randomize = np.arange(len(X))
np.random.shuffle(randomize)
return (X[randomize], y[randomize])
def _sigmoid(z):
"""
Sigmoid function can be used to calculate probability.
To avoid overflow, minimum/maximum output value is set.
"""
return np.clip(1 / (1.0 + np.exp(-z)), 1e-8, 1-(1e-8))
def _f(X, w, b):
"""
This is the logistic regression function, parameterized by w and b.
Arguments:
X: input data, shape=[batch_size, data_dimension]
w: weight vector, shape=[data_dimension]
b: bias, scalar
Output:
predicted probability of each row of X being positively labeled, shape=[batch_size, ]
"""
return _sigmoid(np.matmul(X, w) + b)
def _predict(X, w, b):
"""
This function returns a truth value prediction for each row of X by rounding the result of logistic regression function.
"""
return np.round(_f(X, w, b)).astype(int)
def _accuracy(y_pred, y_label):
"""
This function calculates prediction accuracy
"""
acc = 1 - np.mean(np.abs(y_pred - y_label))
return acc
###Output
_____no_output_____
###Markdown
Functions about gradient and loss
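Written out (the sums run over the examples handed to the functions below, which sum rather than average), the loss and its gradients are
$$L(w,b)=-\sum_{n}\Big[y^{(n)}\ln f_{w,b}\big(x^{(n)}\big)+\big(1-y^{(n)}\big)\ln\!\big(1-f_{w,b}(x^{(n)})\big)\Big],$$
$$\nabla_{w}L=-\sum_{n}\big(y^{(n)}-f_{w,b}(x^{(n)})\big)\,x^{(n)},\qquad \frac{\partial L}{\partial b}=-\sum_{n}\big(y^{(n)}-f_{w,b}(x^{(n)})\big).$$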
###Code
def _cross_entropy_loss(y_pred, y_label):
"""
This function computes the cross entropy.
Arguments:
y_pred: probabilistic predictions, float vector.
y_label: ground truth labels, bool vector.
Outputs:
cross entropy, scalar.
"""
cross_entropy = -np.dot(y_label, np.log(y_pred)) - np.dot((1 - y_label), np.log(1 - y_pred))
return cross_entropy
def _gradient(X, y_label, w, b):
"""
This function computes the gradient of cross entropy loss with respect to weight w and bias b.
"""
y_pred = _f(X, w, b)
pred_error = y_label - y_pred
w_grad = -np.sum(pred_error * X.T, 1)
b_grad = -np.sum(pred_error)
return w_grad, b_grad
###Output
_____no_output_____
###Markdown
Training the model - We'll use the gradient descent method with small batches for training.- The training data is divided into many small batches. For each small batch, we calculate the gradient and loss separately. Then, update the model parameters according to the batch.- When a loop is completed, that is, after all the small batches of the entire training set have been used **once**, we will break up all the training data and re-divide them into new small batches. Then, proceed to the next loop until finishing all loops.
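In symbols, with step counter $t$ and base learning rate $\eta$ (0.2 in the code), each mini-batch update in the loop below is
$$w\leftarrow w-\frac{\eta}{\sqrt{t}}\,\nabla_{w}L_{\text{batch}},\qquad b\leftarrow b-\frac{\eta}{\sqrt{t}}\,\frac{\partial L_{\text{batch}}}{\partial b}.$$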
###Code
# zero initialization for weights and bias
w = np.zeros((data_dim, ))
b = np.zeros((1, ))
# some parameters for training
max_iter = 10
batch_size = 8
learning_rate = 0.2
# keep the loss and accuracy for plotting
train_loss = []
dev_loss = []
train_acc = []
dev_acc = []
# calculate the number of parameter updates
step = 1
# iterative training
for epoch in range(max_iter):
# random shuffle at the beginning of each epoch
X_train, y_train = _shuffle(X_train, y_train)
# Mini-batch training
for idx in range(int(np.floor(train_size / batch_size))):
X = X_train[idx * batch_size : (idx + 1) * batch_size]
y = y_train[idx * batch_size : (idx + 1) * batch_size]
# compute the gradient
w_grad, b_grad = _gradient(X, y, w, b)
# gradient descent updates
# learning rate decay with time
w = w - learning_rate / np.sqrt(step) * w_grad
b = b - learning_rate / np.sqrt(step) * b_grad
step += 1
# compute the loss and accuracy of the training set and development set
y_train_pred = _f(X_train, w, b) # float
Y_train_pred = np.round(y_train_pred) # bool
train_acc.append(_accuracy(Y_train_pred, y_train))
train_loss.append(_cross_entropy_loss(y_train_pred, y_train) / train_size)
y_dev_pred = _f(X_dev, w, b) # float
Y_dev_pred = np.round(y_dev_pred) # bool
dev_acc.append(_accuracy(Y_dev_pred, y_dev))
dev_loss.append(_cross_entropy_loss(y_dev_pred, y_dev) / dev_size)
print('Training loss: {}'.format(train_loss[-1]))
print('Development loss: {}'.format(dev_loss[-1]))
print('Training accuracy: {}'.format(train_acc[-1]))
print('Development accuracy: {}'.format(dev_acc[-1]))
###Output
Training loss: 0.272853645611585
Development loss: 0.29365746202333487
Training accuracy: 0.8830841695678886
Development accuracy: 0.8744931809804645
###Markdown
Plotting loss and accuracy curve
###Code
# loss curve
plt.plot(train_loss)
plt.plot(dev_loss)
plt.title('Loss')
plt.legend(['train', 'dev'])
plt.savefig('loss.png')
plt.show()
# accuracy curve
plt.plot(train_acc)
plt.plot(dev_acc)
plt.title('Accuracy')
plt.legend(['train', 'dev'])
plt.savefig('acc.png')
plt.show()
###Output
_____no_output_____
###Markdown
Predicting the testing labels
###Code
import csv
predictions = _predict(X_test, w, b)
with open('output_logistic.csv', mode='w', newline='') as submit_file:
csv_writer = csv.writer(submit_file)
header = ['id', 'label']
print(header)
csv_writer.writerow(header)
for i in range(len(predictions)):
row = [str(i+1), predictions[i]]
csv_writer.writerow(row)
print(row)
print()
# Print out the most significant weights
ind = np.argsort(np.abs(w))[::-1] # Arrange the array in an ascending order and take it from the end to the front
with open(X_test_fpath) as f:
content = f.readline().strip('\n').split(',')
features = np.array(content)
for i in ind[0 : 10]:
print(features[i], w[i])
###Output
;, 0]
['26687', 1]
['26688', 1]
['26689', 0]
['26690', 0]
['26691', 0]
['26692', 0]
['26693', 1]
['26694', 0]
['26695', 0]
['26696', 0]
['26697', 0]
['26698', 0]
['26699', 1]
['26700', 0]
['26701', 0]
['26702', 0]
['26703', 1]
['26704', 0]
['26705', 0]
['26706', 0]
['26707', 0]
['26708', 0]
['26709', 0]
['26710', 0]
['26711', 0]
['26712', 0]
['26713', 0]
['26714', 0]
['26715', 0]
['26716', 0]
['26717', 0]
['26718', 1]
['26719', 0]
['26720', 0]
['26721', 0]
['26722', 0]
['26723', 0]
['26724', 0]
['26725', 0]
['26726', 0]
['26727', 0]
['26728', 0]
['26729', 0]
['26730', 0]
['26731', 0]
['26732', 1]
['26733', 0]
['26734', 0]
['26735', 0]
['26736', 0]
['26737', 0]
['26738', 0]
['26739', 0]
['26740', 0]
['26741', 0]
['26742', 0]
['26743', 0]
['26744', 0]
['26745', 1]
['26746', 0]
['26747', 0]
['26748', 0]
['26749', 0]
['26750', 0]
['26751', 0]
['26752', 0]
['26753', 0]
['26754', 0]
['26755', 1]
['26756', 0]
['26757', 0]
['26758', 0]
['26759', 0]
['26760', 0]
['26761', 1]
['26762', 0]
['26763', 0]
['26764', 0]
['26765', 0]
['26766', 0]
['26767', 1]
['26768', 0]
['26769', 0]
['26770', 0]
['26771', 0]
['26772', 0]
['26773', 1]
['26774', 0]
['26775', 0]
['26776', 0]
['26777', 0]
['26778', 1]
['26779', 0]
['26780', 0]
['26781', 0]
['26782', 0]
['26783', 1]
['26784', 0]
['26785', 0]
['26786', 0]
['26787', 1]
['26788', 0]
['26789', 0]
['26790', 0]
['26791', 0]
['26792', 0]
['26793', 0]
['26794', 0]
['26795', 0]
['26796', 1]
['26797', 0]
['26798', 0]
['26799', 1]
['26800', 0]
['26801', 0]
['26802', 0]
['26803', 0]
['26804', 1]
['26805', 0]
['26806', 0]
['26807', 0]
['26808', 0]
['26809', 0]
['26810', 0]
['26811', 1]
['26812', 0]
['26813', 1]
['26814', 1]
['26815', 0]
['26816', 0]
['26817', 0]
['26818', 0]
['26819', 0]
['26820', 1]
['26821', 0]
['26822', 0]
['26823', 0]
['26824', 0]
['26825', 1]
['26826', 0]
['26827', 0]
['26828', 1]
['26829', 1]
['26830', 1]
['26831', 0]
['26832', 0]
['26833', 1]
['26834', 0]
['26835', 1]
['26836', 0]
['26837', 0]
['26838', 1]
['26839', 0]
['26840', 0]
['26841', 0]
['26842', 0]
['26843', 0]
['26844', 0]
['26845', 0]
['26846', 0]
['26847', 0]
['26848', 1]
['26849', 1]
['26850', 1]
['26851', 0]
['26852', 0]
['26853', 0]
['26854', 0]
['26855', 0]
['26856', 0]
['26857', 0]
['26858', 0]
['26859', 0]
['26860', 1]
['26861', 0]
['26862', 0]
['26863', 0]
['26864', 0]
['26865', 0]
['26866', 0]
['26867', 0]
['26868', 1]
['26869', 0]
['26870', 0]
['26871', 0]
['26872', 0]
['26873', 0]
['26874', 0]
['26875', 0]
['26876', 0]
['26877', 0]
['26878', 0]
['26879', 0]
['26880', 0]
['26881', 0]
['26882', 0]
['26883', 0]
['26884', 0]
['26885', 0]
['26886', 1]
['26887', 1]
['26888', 0]
['26889', 0]
['26890', 0]
['26891', 0]
['26892', 0]
['26893', 0]
['26894', 0]
['26895', 0]
['26896', 1]
['26897', 0]
['26898', 0]
['26899', 0]
['26900', 0]
['26901', 0]
['26902', 0]
['26903', 0]
['26904', 1]
['26905', 0]
['26906', 0]
['26907', 0]
['26908', 1]
['26909', 0]
['26910', 0]
['26911', 1]
['26912', 0]
['26913', 0]
['26914', 0]
['26915', 0]
['26916', 1]
['26917', 0]
['26918', 0]
['26919', 0]
['26920', 0]
['26921', 0]
['26922', 0]
['26923', 1]
['26924', 0]
['26925', 0]
['26926', 0]
['26927', 0]
['26928', 1]
['26929', 0]
['26930', 0]
['26931', 0]
['26932', 1]
['26933', 0]
['26934', 0]
['26935', 0]
['26936', 0]
['26937', 1]
['26938', 0]
['26939', 0]
['26940', 0]
['26941', 0]
['26942', 0]
['26943', 0]
['26944', 0]
['26945', 0]
['26946', 1]
['26947', 0]
['26948', 0]
['26949', 1]
['26950', 0]
['26951', 0]
['26952', 0]
['26953', 0]
['26954', 1]
['26955', 0]
['26956', 0]
['26957', 1]
['26958', 0]
['26959', 0]
['26960', 0]
['26961', 1]
['26962', 0]
['26963', 0]
['26964', 0]
['26965', 0]
['26966', 1]
['26967', 0]
['26968', 0]
['26969', 0]
['26970', 0]
['26971', 0]
['26972', 0]
['26973', 0]
['26974', 0]
['26975', 0]
['26976', 1]
['26977', 0]
['26978', 0]
['26979', 0]
['26980', 0]
['26981', 0]
['26982', 0]
['26983', 0]
['26984', 0]
['26985', 0]
['26986', 1]
['26987', 0]
['26988', 0]
['26989', 0]
['26990', 0]
['26991', 0]
['26992', 0]
['26993', 0]
['26994', 0]
['26995', 1]
['26996', 1]
['26997', 0]
['26998', 0]
['26999', 1]
['27000', 0]
['27001', 0]
['27002', 0]
['27003', 1]
['27004', 0]
['27005', 0]
['27006', 0]
['27007', 0]
['27008', 0]
['27009', 0]
['27010', 0]
['27011', 0]
['27012', 0]
['27013', 0]
['27014', 0]
['27015', 0]
['27016', 0]
['27017', 1]
['27018', 0]
['27019', 1]
['27020', 0]
['27021', 0]
['27022', 0]
['27023', 0]
['27024', 0]
['27025', 0]
['27026', 0]
['27027', 0]
['27028', 1]
['27029', 0]
['27030', 0]
['27031', 0]
['27032', 0]
['27033', 0]
['27034', 0]
['27035', 0]
['27036', 0]
['27037', 1]
['27038', 0]
['27039', 0]
['27040', 0]
['27041', 0]
['27042', 0]
['27043', 1]
['27044', 0]
['27045', 0]
['27046', 0]
['27047', 0]
['27048', 1]
['27049', 0]
['27050', 0]
['27051', 0]
['27052', 1]
['27053', 0]
['27054', 0]
['27055', 0]
['27056', 1]
['27057', 0]
['27058', 0]
['27059', 0]
['27060', 0]
['27061', 1]
['27062', 0]
['27063', 0]
['27064', 0]
['27065', 0]
['27066', 0]
['27067', 0]
['27068', 0]
['27069', 1]
['27070', 0]
['27071', 0]
['27072', 0]
['27073', 0]
['27074', 0]
['27075', 0]
['27076', 1]
['27077', 0]
['27078', 0]
['27079', 0]
['27080', 1]
['27081', 0]
['27082', 0]
['27083', 0]
['27084', 0]
['27085', 0]
['27086', 0]
['27087', 0]
['27088', 0]
['27089', 0]
['27090', 0]
['27091', 0]
['27092', 0]
['27093', 0]
['27094', 0]
['27095', 0]
['27096', 1]
['27097', 1]
['27098', 0]
['27099', 0]
['27100', 0]
['27101', 0]
['27102', 0]
['27103', 0]
['27104', 0]
['27105', 0]
['27106', 0]
['27107', 0]
['27108', 0]
['27109', 0]
['27110', 0]
['27111', 1]
['27112', 1]
['27113', 0]
['27114', 0]
['27115', 0]
['27116', 0]
['27117', 1]
['27118', 0]
['27119', 0]
['27120', 0]
['27121', 1]
['27122', 0]
['27123', 0]
['27124', 1]
['27125', 1]
['27126', 0]
['27127', 0]
['27128', 0]
['27129', 1]
['27130', 0]
['27131', 0]
['27132', 0]
['27133', 0]
['27134', 0]
['27135', 0]
['27136', 0]
['27137', 0]
['27138', 0]
['27139', 1]
['27140', 0]
['27141', 0]
['27142', 0]
['27143', 0]
['27144', 1]
['27145', 1]
['27146', 0]
['27147', 0]
['27148', 0]
['27149', 0]
['27150', 0]
['27151', 0]
['27152', 1]
['27153', 1]
['27154', 0]
['27155', 0]
['27156', 0]
['27157', 1]
['27158', 1]
['27159', 0]
['27160', 0]
['27161', 0]
['27162', 1]
['27163', 0]
['27164', 0]
['27165', 1]
['27166', 1]
['27167', 0]
['27168', 0]
['27169', 0]
['27170', 0]
['27171', 0]
['27172', 0]
['27173', 0]
['27174', 0]
['27175', 0]
['27176', 1]
['27177', 0]
['27178', 0]
['27179', 0]
['27180', 0]
['27181', 0]
['27182', 0]
['27183', 0]
['27184', 0]
['27185', 0]
['27186', 0]
['27187', 1]
['27188', 0]
['27189', 1]
['27190', 0]
['27191', 0]
['27192', 0]
['27193', 0]
['27194', 0]
['27195', 1]
['27196', 0]
['27197', 0]
['27198', 0]
['27199', 1]
['27200', 0]
['27201', 0]
['27202', 0]
['27203', 0]
['27204', 0]
['27205', 0]
['27206', 1]
['27207', 0]
['27208', 0]
['27209', 0]
['27210', 0]
['27211', 0]
['27212', 0]
['27213', 0]
['27214', 0]
['27215', 0]
['27216', 0]
['27217', 0]
['27218', 0]
['27219', 0]
['27220', 0]
['27221', 0]
['27222', 1]
['27223', 0]
['27224', 0]
['27225', 0]
['27226', 1]
['27227', 1]
['27228', 0]
['27229', 0]
['27230', 0]
['27231', 0]
['27232', 0]
['27233', 0]
['27234', 1]
['27235', 0]
['27236', 0]
['27237', 0]
['27238', 1]
['27239', 0]
['27240', 0]
['27241', 0]
['27242', 1]
['27243', 0]
['27244', 0]
['27245', 1]
['27246', 0]
['27247', 0]
['27248', 0]
['27249', 0]
['27250', 0]
['27251', 1]
['27252', 0]
['27253', 0]
['27254', 0]
['27255', 0]
['27256', 0]
['27257', 0]
['27258', 0]
['27259', 1]
['27260', 0]
['27261', 0]
['27262', 0]
['27263', 1]
['27264', 0]
['27265', 1]
['27266', 0]
['27267', 0]
['27268', 0]
['27269', 0]
['27270', 0]
['27271', 1]
['27272', 0]
['27273', 0]
['27274', 0]
['27275', 0]
['27276', 0]
['27277', 0]
['27278', 0]
['27279', 0]
['27280', 0]
['27281', 0]
['27282', 0]
['27283', 1]
['27284', 0]
['27285', 0]
['27286', 0]
['27287', 0]
['27288', 0]
['27289', 0]
['27290', 0]
['27291', 1]
['27292', 0]
['27293', 0]
['27294', 0]
['27295', 1]
['27296', 0]
['27297', 0]
['27298', 0]
['27299', 1]
['27300', 0]
['27301', 0]
['27302', 0]
['27303', 0]
['27304', 0]
['27305', 0]
['27306', 1]
['27307', 0]
['27308', 0]
['27309', 0]
['27310', 0]
['27311', 0]
['27312', 0]
['27313', 0]
['27314', 0]
['27315', 0]
['27316', 1]
['27317', 0]
['27318', 0]
['27319', 0]
['27320', 0]
['27321', 0]
['27322', 0]
['27323', 0]
['27324', 0]
['27325', 1]
['27326', 0]
['27327', 0]
['27328', 0]
['27329', 0]
['27330', 0]
['27331', 0]
['27332', 0]
['27333', 0]
['27334', 1]
['27335', 1]
['27336', 0]
['27337', 0]
['27338', 0]
['27339', 1]
['27340', 0]
['27341', 0]
['27342', 1]
['27343', 0]
['27344', 1]
['27345', 0]
['27346', 0]
['27347', 0]
['27348', 0]
['27349', 1]
['27350', 0]
['27351', 1]
['27352', 0]
['27353', 0]
['27354', 0]
['27355', 0]
['27356', 0]
['27357', 1]
['27358', 0]
['27359', 0]
['27360', 0]
['27361', 0]
['27362', 1]
['27363', 1]
['27364', 0]
['27365', 1]
['27366', 0]
['27367', 0]
['27368', 0]
['27369', 0]
['27370', 1]
['27371', 0]
['27372', 0]
['27373', 0]
['27374', 0]
['27375', 0]
['27376', 0]
['27377', 0]
['27378', 0]
['27379', 0]
['27380', 0]
['27381', 0]
['27382', 0]
['27383', 1]
['27384', 0]
['27385', 0]
['27386', 1]
['27387', 0]
['27388', 0]
['27389', 0]
['27390', 1]
['27391', 0]
['27392', 0]
['27393', 1]
['27394', 1]
['27395', 0]
['27396', 1]
['27397', 1]
['27398', 0]
['27399', 0]
['27400', 0]
['27401', 0]
['27402', 0]
['27403', 0]
['27404', 0]
['27405', 0]
['27406', 0]
['27407', 0]
['27408', 0]
['27409', 0]
['27410', 0]
['27411', 0]
['27412', 0]
['27413', 0]
['27414', 0]
['27415', 0]
['27416', 0]
['27417', 1]
['27418', 0]
['27419', 0]
['27420', 0]
['27421', 0]
['27422', 0]
['27423', 1]
['27424', 0]
['27425', 0]
['27426', 0]
['27427', 0]
['27428', 0]
['27429', 0]
['27430', 1]
['27431', 0]
['27432', 1]
['27433', 0]
['27434', 0]
['27435', 1]
['27436', 0]
['27437', 0]
['27438', 0]
['27439', 0]
['27440', 0]
['27441', 0]
['27442', 0]
['27443', 1]
['27444', 0]
['27445', 0]
['27446', 0]
['27447', 0]
['27448', 0]
['27449', 0]
['27450', 0]
['27451', 0]
['27452', 0]
['27453', 0]
['27454', 0]
['27455', 1]
['27456', 0]
['27457', 0]
['27458', 0]
['27459', 0]
['27460', 0]
['27461', 0]
['27462', 0]
['27463', 0]
['27464', 0]
['27465', 1]
['27466', 0]
['27467', 0]
['27468', 0]
['27469', 0]
['27470', 0]
['27471', 0]
['27472', 0]
['27473', 1]
['27474', 0]
['27475', 0]
['27476', 0]
['27477', 0]
['27478', 0]
['27479', 0]
['27480', 0]
['27481', 0]
['27482', 0]
['27483', 0]
['27484', 0]
['27485', 0]
['27486', 0]
['27487', 0]
['27488', 0]
['27489', 0]
['27490', 0]
['27491', 0]
['27492', 0]
['27493', 0]
['27494', 0]
['27495', 1]
['27496', 1]
['27497', 1]
['27498', 0]
['27499', 0]
['27500', 0]
['27501', 0]
['27502', 0]
['27503', 1]
['27504', 0]
['27505', 0]
['27506', 0]
['27507', 0]
['27508', 0]
['27509', 0]
['27510', 0]
['27511', 1]
['27512', 0]
['27513', 0]
['27514', 0]
['27515', 0]
['27516', 0]
['27517', 0]
['27518', 0]
['27519', 1]
['27520', 0]
['27521', 0]
['27522', 0]
['27523', 1]
['27524', 0]
['27525', 0]
['27526', 0]
['27527', 0]
['27528', 0]
['27529', 0]
['27530', 0]
['27531', 0]
['27532', 0]
['27533', 0]
['27534', 0]
['27535', 0]
['27536', 0]
['27537', 0]
['27538', 0]
['27539', 0]
['27540', 0]
['27541', 0]
['27542', 0]
['27543', 0]
['27544', 1]
['27545', 0]
['27546', 0]
['27547', 0]
['27548', 0]
['27549', 1]
['27550', 0]
['27551', 1]
['27552', 0]
['27553', 0]
['27554', 0]
['27555', 0]
['27556', 1]
['27557', 0]
['27558', 1]
['27559', 0]
['27560', 0]
['27561', 0]
['27562', 0]
['27563', 1]
['27564', 0]
['27565', 0]
['27566', 0]
['27567', 0]
['27568', 0]
['27569', 0]
['27570', 0]
['27571', 0]
['27572', 0]
['27573', 0]
['27574', 0]
['27575', 0]
['27576', 0]
['27577', 0]
['27578', 0]
['27579', 0]
['27580', 0]
['27581', 0]
['27582', 0]
['27583', 0]
['27584', 0]
['27585', 0]
['27586', 0]
['27587', 0]
['27588', 0]
['27589', 0]
['27590', 0]
['27591', 0]
['27592', 0]
['27593', 1]
['27594', 0]
['27595', 0]
['27596', 1]
['27597', 0]
['27598', 0]
['27599', 0]
['27600', 0]
['27601', 0]
['27602', 0]
['27603', 0]
['27604', 0]
['27605', 0]
['27606', 0]
['27607', 0]
['27608', 0]
['27609', 0]
['27610', 0]
['27611', 0]
['27612', 0]
['27613', 0]
['27614', 0]
['27615', 1]
['27616', 0]
['27617', 0]
['27618', 0]
['27619', 0]
['27620', 1]
['27621', 0]
['27622', 0]
Child 18+ ever marr Not in a subfamily -5.782176465923575
Ecuador -2.8707286542081873
Iran -1.98172355006101
Ecuador -1.8666016702909662
Vietnam -1.590774267693511
Alabama -1.4712455468856531
Philippines -1.3303691602915284
Unemployed full-time 1.1478789357340293
2 -0.8389458377729352
High school graduate -0.8368910242736142
###Markdown
Task 2: Probabilistic generative model- Implement a binary classifier based on a generative model Loading the dataset
###Code
with open(X_train_fpath) as f:
next(f)
X_train = np.array([line.strip('\n').split(',')[1:] for line in f], dtype=float)
with open(y_train_fpath) as f:
next(f)
y_train = np.array([line.strip('\n').split(',')[1] for line in f], dtype=float)
with open(X_test_fpath) as f:
next(f)
X_test = np.array([line.strip('\n').split(',')[1:] for line in f], dtype=float)
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
# Normalizing the training and testing data
X_train, X_mean, X_std = _normalize(X_train, train=True)
X_test, _, _ = _normalize(X_test, train=False, specified_column=None, X_mean=X_mean, X_std=X_std)
###Output
_____no_output_____
###Markdown
Calculating the Mean and Covariance- In the generative model, we need to calculate the average and covariance of the data in the two categories separately.
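Concretely, for classes $C_{0}$ and $C_{1}$ with $N_{0}$ and $N_{1}$ training examples, the cell below computes
$$\mu_{k}=\frac{1}{N_{k}}\sum_{x\in C_{k}}x,\qquad \Sigma_{k}=\frac{1}{N_{k}}\sum_{x\in C_{k}}(x-\mu_{k})(x-\mu_{k})^{\top},\qquad \Sigma=\frac{N_{0}\Sigma_{0}+N_{1}\Sigma_{1}}{N_{0}+N_{1}}.$$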
###Code
# compute in-class mean
X_train_0 = np.array([x for x, y in zip(X_train, y_train) if y == 0])
X_train_1 = np.array([x for x, y in zip(X_train, y_train) if y == 1])
mean_0 = np.mean(X_train_0, axis=0)
mean_1 = np.mean(X_train_1, axis=0)
# compute the in-class covariance
cov_0 = np.zeros((data_dim, data_dim))
cov_1 = np.zeros((data_dim, data_dim))
for x in X_train_0:
# np.transpose([x - mean_0]).shape -> (510, 1)
# [x - mean_0].shape -> (1, 510)
# np.dot(np.transpose([x - mean_0]), [x - mean_0]).shape -> (510, 510)
cov_0 += np.dot(np.transpose([x - mean_0]), [x - mean_0]) / X_train_0.shape[0]
for x in X_train_1:
cov_1 += np.dot(np.transpose([x - mean_1]), [x - mean_1]) / X_train_1.shape[0]
# Shared covariance is taken as a weighted average of individual in-class covariance.
cov = (cov_0 * X_train_0.shape[0] + cov_1 * X_train_1.shape[0]) / (X_train_0.shape[0] + X_train_1.shape[0])
###Output
_____no_output_____
###Markdown
Computing weights and bias- The weight matrix and deviation vector can be directly calculated.
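Under the shared-covariance Gaussian assumption, the posterior $P(C_{0}\mid x)=\sigma(w^{\top}x+b)$ is a logistic function of a linear score, so the parameters follow in closed form (this is what the cell below evaluates):
$$w=\Sigma^{-1}(\mu_{0}-\mu_{1}),\qquad b=-\tfrac{1}{2}\mu_{0}^{\top}\Sigma^{-1}\mu_{0}+\tfrac{1}{2}\mu_{1}^{\top}\Sigma^{-1}\mu_{1}+\ln\frac{N_{0}}{N_{1}}.$$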
###Code
# Compute the inverse of covariance matrix
# Since the covariance matrix may be nearly singular, np.linalg.inv() may give a large numerical error
# Via SVD decomposition, one can get matrix inverse efficiently and accurately
u, s, v = np.linalg.svd(cov, full_matrices=False)
inv = np.matmul(v.T * 1 / s, u.T)
# Directly compute weights and bias
w = np.dot(inv, mean_0 - mean_1)
b = -0.5 * np.dot(mean_0, np.dot(inv, mean_0)) + 0.5 * np.dot(mean_1, np.dot(inv, mean_1)) \
+ np.log(float(X_train_0.shape[0]) / X_train_1.shape[0])
# Compute accuracy on training set
y_train_pred = 1 - _predict(X_train, w, b)
print('Training accuracy: {}'.format(_accuracy(y_train_pred, y_train)))
###Output
Training accuracy: 0.8548363314656444
###Markdown
Predicting testing labels
###Code
import csv
predictions = 1 - _predict(X_test, w, b)  # flip labels to match the convention used for the training accuracy above
with open('output_generative.csv', mode='w', newline='') as submit_file:
csv_writer = csv.writer(submit_file)
header = ['id', 'label']
print(header)
csv_writer.writerow(header)
for i in range(len(predictions)):
row = [str(i+1), predictions[i]]
csv_writer.writerow(row)
print(row)
print()
# Print out the most significant weights
ind = np.argsort(np.abs(w))[::-1] # Arrange the array in an ascending order and take it from the end to the front
with open(X_test_fpath) as f:
content = f.readline().strip('\n').split(',')
features = np.array(content)
for i in ind[0 : 10]:
print(features[i], w[i])
###Output
'26688', 0]
['26689', 1]
['26690', 1]
['26691', 1]
['26692', 1]
['26693', 1]
['26694', 1]
['26695', 1]
['26696', 1]
['26697', 1]
['26698', 1]
['26699', 1]
['26700', 1]
['26701', 1]
['26702', 1]
['26703', 0]
['26704', 1]
['26705', 1]
['26706', 1]
['26707', 1]
['26708', 1]
['26709', 1]
['26710', 1]
['26711', 1]
['26712', 1]
['26713', 1]
['26714', 1]
['26715', 1]
['26716', 1]
['26717', 1]
['26718', 1]
['26719', 1]
['26720', 1]
['26721', 1]
['26722', 1]
['26723', 1]
['26724', 1]
['26725', 0]
['26726', 1]
['26727', 1]
['26728', 1]
['26729', 1]
['26730', 1]
['26731', 1]
['26732', 0]
['26733', 1]
['26734', 1]
['26735', 1]
['26736', 1]
['26737', 1]
['26738', 1]
['26739', 1]
['26740', 1]
['26741', 1]
['26742', 1]
['26743', 1]
['26744', 1]
['26745', 0]
['26746', 1]
['26747', 1]
['26748', 1]
['26749', 1]
['26750', 1]
['26751', 1]
['26752', 1]
['26753', 0]
['26754', 1]
['26755', 1]
['26756', 1]
['26757', 1]
['26758', 1]
['26759', 1]
['26760', 1]
['26761', 1]
['26762', 1]
['26763', 1]
['26764', 0]
['26765', 1]
['26766', 1]
['26767', 0]
['26768', 1]
['26769', 1]
['26770', 1]
['26771', 1]
['26772', 1]
['26773', 0]
['26774', 1]
['26775', 1]
['26776', 1]
['26777', 1]
['26778', 0]
['26779', 1]
['26780', 1]
['26781', 0]
['26782', 1]
['26783', 0]
['26784', 1]
['26785', 1]
['26786', 1]
['26787', 1]
['26788', 1]
['26789', 1]
['26790', 1]
['26791', 1]
['26792', 1]
['26793', 1]
['26794', 1]
['26795', 1]
['26796', 1]
['26797', 1]
['26798', 1]
['26799', 0]
['26800', 1]
['26801', 1]
['26802', 1]
['26803', 1]
['26804', 0]
['26805', 1]
['26806', 1]
['26807', 1]
['26808', 1]
['26809', 1]
['26810', 1]
['26811', 1]
['26812', 1]
['26813', 1]
['26814', 1]
['26815', 1]
['26816', 1]
['26817', 1]
['26818', 1]
['26819', 1]
['26820', 1]
['26821', 1]
['26822', 1]
['26823', 1]
['26824', 1]
['26825', 0]
['26826', 1]
['26827', 1]
['26828', 1]
['26829', 0]
['26830', 1]
['26831', 1]
['26832', 1]
['26833', 1]
['26834', 1]
['26835', 0]
['26836', 1]
['26837', 1]
['26838', 1]
['26839', 1]
['26840', 1]
['26841', 1]
['26842', 1]
['26843', 1]
['26844', 1]
['26845', 1]
['26846', 1]
['26847', 1]
['26848', 0]
['26849', 1]
['26850', 0]
['26851', 1]
['26852', 1]
['26853', 1]
['26854', 1]
['26855', 1]
['26856', 1]
['26857', 1]
['26858', 1]
['26859', 1]
['26860', 0]
['26861', 1]
['26862', 1]
['26863', 0]
['26864', 1]
['26865', 1]
['26866', 1]
['26867', 1]
['26868', 1]
['26869', 1]
['26870', 1]
['26871', 1]
['26872', 1]
['26873', 1]
['26874', 1]
['26875', 1]
['26876', 1]
['26877', 1]
['26878', 1]
['26879', 1]
['26880', 1]
['26881', 1]
['26882', 1]
['26883', 1]
['26884', 1]
['26885', 1]
['26886', 1]
['26887', 0]
['26888', 1]
['26889', 1]
['26890', 1]
['26891', 1]
['26892', 1]
['26893', 1]
['26894', 1]
['26895', 1]
['26896', 0]
['26897', 1]
['26898', 1]
['26899', 1]
['26900', 1]
['26901', 1]
['26902', 1]
['26903', 1]
['26904', 0]
['26905', 1]
['26906', 1]
['26907', 1]
['26908', 1]
['26909', 1]
['26910', 1]
['26911', 1]
['26912', 1]
['26913', 1]
['26914', 1]
['26915', 1]
['26916', 0]
['26917', 1]
['26918', 1]
['26919', 1]
['26920', 1]
['26921', 1]
['26922', 1]
['26923', 0]
['26924', 1]
['26925', 1]
['26926', 1]
['26927', 1]
['26928', 0]
['26929', 1]
['26930', 1]
['26931', 1]
['26932', 0]
['26933', 1]
['26934', 1]
['26935', 1]
['26936', 1]
['26937', 0]
['26938', 1]
['26939', 1]
['26940', 1]
['26941', 1]
['26942', 1]
['26943', 1]
['26944', 1]
['26945', 1]
['26946', 1]
['26947', 1]
['26948', 1]
['26949', 0]
['26950', 1]
['26951', 1]
['26952', 1]
['26953', 1]
['26954', 0]
['26955', 1]
['26956', 1]
['26957', 0]
['26958', 1]
['26959', 1]
['26960', 1]
['26961', 0]
['26962', 1]
['26963', 1]
['26964', 1]
['26965', 1]
['26966', 0]
['26967', 1]
['26968', 1]
['26969', 1]
['26970', 1]
['26971', 1]
['26972', 1]
['26973', 1]
['26974', 1]
['26975', 1]
['26976', 0]
['26977', 1]
['26978', 1]
['26979', 1]
['26980', 1]
['26981', 1]
['26982', 1]
['26983', 1]
['26984', 1]
['26985', 1]
['26986', 0]
['26987', 1]
['26988', 1]
['26989', 1]
['26990', 1]
['26991', 1]
['26992', 1]
['26993', 1]
['26994', 1]
['26995', 0]
['26996', 1]
['26997', 1]
['26998', 1]
['26999', 1]
['27000', 1]
['27001', 1]
['27002', 1]
['27003', 0]
['27004', 1]
['27005', 1]
['27006', 1]
['27007', 1]
['27008', 1]
['27009', 1]
['27010', 1]
['27011', 1]
['27012', 1]
['27013', 1]
['27014', 1]
['27015', 1]
['27016', 1]
['27017', 1]
['27018', 1]
['27019', 1]
['27020', 1]
['27021', 1]
['27022', 1]
['27023', 1]
['27024', 1]
['27025', 1]
['27026', 1]
['27027', 1]
['27028', 1]
['27029', 1]
['27030', 1]
['27031', 1]
['27032', 1]
['27033', 1]
['27034', 1]
['27035', 1]
['27036', 1]
['27037', 0]
['27038', 1]
['27039', 1]
['27040', 1]
['27041', 1]
['27042', 1]
['27043', 1]
['27044', 1]
['27045', 1]
['27046', 1]
['27047', 1]
['27048', 0]
['27049', 1]
['27050', 1]
['27051', 1]
['27052', 0]
['27053', 1]
['27054', 1]
['27055', 1]
['27056', 1]
['27057', 1]
['27058', 1]
['27059', 1]
['27060', 1]
['27061', 0]
['27062', 1]
['27063', 1]
['27064', 1]
['27065', 1]
['27066', 1]
['27067', 1]
['27068', 1]
['27069', 1]
['27070', 1]
['27071', 1]
['27072', 1]
['27073', 1]
['27074', 1]
['27075', 1]
['27076', 1]
['27077', 1]
['27078', 1]
['27079', 1]
['27080', 0]
['27081', 1]
['27082', 1]
['27083', 1]
['27084', 1]
['27085', 1]
['27086', 1]
['27087', 1]
['27088', 1]
['27089', 1]
['27090', 1]
['27091', 1]
['27092', 1]
['27093', 1]
['27094', 1]
['27095', 1]
['27096', 0]
['27097', 1]
['27098', 1]
['27099', 1]
['27100', 1]
['27101', 1]
['27102', 1]
['27103', 1]
['27104', 1]
['27105', 1]
['27106', 1]
['27107', 1]
['27108', 1]
['27109', 1]
['27110', 1]
['27111', 0]
['27112', 1]
['27113', 0]
['27114', 1]
['27115', 1]
['27116', 1]
['27117', 0]
['27118', 1]
['27119', 1]
['27120', 1]
['27121', 1]
['27122', 1]
['27123', 1]
['27124', 0]
['27125', 1]
['27126', 1]
['27127', 1]
['27128', 1]
['27129', 0]
['27130', 1]
['27131', 1]
['27132', 1]
['27133', 1]
['27134', 1]
['27135', 1]
['27136', 1]
['27137', 1]
['27138', 1]
['27139', 1]
['27140', 1]
['27141', 1]
['27142', 1]
['27143', 1]
['27144', 1]
['27145', 1]
['27146', 1]
['27147', 1]
['27148', 1]
['27149', 1]
['27150', 0]
['27151', 1]
['27152', 1]
['27153', 0]
['27154', 1]
['27155', 1]
['27156', 1]
['27157', 0]
['27158', 1]
['27159', 1]
['27160', 1]
['27161', 1]
['27162', 1]
['27163', 1]
['27164', 1]
['27165', 0]
['27166', 0]
['27167', 1]
['27168', 1]
['27169', 1]
['27170', 1]
['27171', 1]
['27172', 0]
['27173', 1]
['27174', 1]
['27175', 1]
['27176', 0]
['27177', 1]
['27178', 1]
['27179', 1]
['27180', 1]
['27181', 1]
['27182', 1]
['27183', 1]
['27184', 1]
['27185', 1]
['27186', 1]
['27187', 0]
['27188', 1]
['27189', 0]
['27190', 1]
['27191', 1]
['27192', 1]
['27193', 1]
['27194', 1]
['27195', 1]
['27196', 1]
['27197', 1]
['27198', 1]
['27199', 0]
['27200', 1]
['27201', 1]
['27202', 1]
['27203', 1]
['27204', 1]
['27205', 1]
['27206', 0]
['27207', 1]
['27208', 1]
['27209', 1]
['27210', 1]
['27211', 1]
['27212', 1]
['27213', 1]
['27214', 1]
['27215', 1]
['27216', 1]
['27217', 1]
['27218', 1]
['27219', 1]
['27220', 1]
['27221', 1]
['27222', 0]
['27223', 1]
['27224', 1]
['27225', 1]
['27226', 1]
['27227', 0]
['27228', 1]
['27229', 1]
['27230', 1]
['27231', 1]
['27232', 1]
['27233', 1]
['27234', 1]
['27235', 1]
['27236', 1]
['27237', 1]
['27238', 0]
['27239', 1]
['27240', 0]
['27241', 1]
['27242', 1]
['27243', 1]
['27244', 1]
['27245', 1]
['27246', 1]
['27247', 1]
['27248', 1]
['27249', 1]
['27250', 1]
['27251', 0]
['27252', 1]
['27253', 1]
['27254', 1]
['27255', 1]
['27256', 1]
['27257', 1]
['27258', 1]
['27259', 1]
['27260', 1]
['27261', 1]
['27262', 1]
['27263', 1]
['27264', 1]
['27265', 0]
['27266', 1]
['27267', 1]
['27268', 1]
['27269', 0]
['27270', 1]
['27271', 0]
['27272', 1]
['27273', 1]
['27274', 1]
['27275', 1]
['27276', 1]
['27277', 1]
['27278', 1]
['27279', 1]
['27280', 1]
['27281', 1]
['27282', 1]
['27283', 0]
['27284', 1]
['27285', 1]
['27286', 1]
['27287', 1]
['27288', 1]
['27289', 1]
['27290', 1]
['27291', 0]
['27292', 1]
['27293', 1]
['27294', 0]
['27295', 0]
['27296', 1]
['27297', 1]
['27298', 1]
['27299', 0]
['27300', 1]
['27301', 1]
['27302', 1]
['27303', 1]
['27304', 1]
['27305', 1]
['27306', 0]
['27307', 1]
['27308', 1]
['27309', 1]
['27310', 1]
['27311', 1]
['27312', 1]
['27313', 1]
['27314', 1]
['27315', 1]
['27316', 1]
['27317', 1]
['27318', 1]
['27319', 1]
['27320', 1]
['27321', 1]
['27322', 1]
['27323', 1]
['27324', 1]
['27325', 0]
['27326', 1]
['27327', 1]
['27328', 1]
['27329', 1]
['27330', 1]
['27331', 1]
['27332', 1]
['27333', 1]
['27334', 1]
['27335', 0]
['27336', 1]
['27337', 1]
['27338', 1]
['27339', 0]
['27340', 1]
['27341', 1]
['27342', 1]
['27343', 1]
['27344', 0]
['27345', 1]
['27346', 1]
['27347', 1]
['27348', 1]
['27349', 1]
['27350', 1]
['27351', 0]
['27352', 1]
['27353', 1]
['27354', 1]
['27355', 1]
['27356', 1]
['27357', 0]
['27358', 1]
['27359', 1]
['27360', 1]
['27361', 1]
['27362', 0]
['27363', 1]
['27364', 1]
['27365', 1]
['27366', 1]
['27367', 1]
['27368', 1]
['27369', 1]
['27370', 1]
['27371', 1]
['27372', 1]
['27373', 1]
['27374', 1]
['27375', 1]
['27376', 1]
['27377', 1]
['27378', 1]
['27379', 0]
['27380', 1]
['27381', 1]
['27382', 1]
['27383', 0]
['27384', 1]
['27385', 1]
['27386', 0]
['27387', 1]
['27388', 1]
['27389', 1]
['27390', 0]
['27391', 1]
['27392', 1]
['27393', 1]
['27394', 0]
['27395', 1]
['27396', 0]
['27397', 0]
['27398', 1]
['27399', 1]
['27400', 1]
['27401', 1]
['27402', 1]
['27403', 1]
['27404', 1]
['27405', 1]
['27406', 1]
['27407', 1]
['27408', 1]
['27409', 1]
['27410', 1]
['27411', 1]
['27412', 1]
['27413', 1]
['27414', 1]
['27415', 1]
['27416', 1]
['27417', 1]
['27418', 1]
['27419', 1]
['27420', 1]
['27421', 1]
['27422', 1]
['27423', 1]
['27424', 1]
['27425', 1]
['27426', 1]
['27427', 1]
['27428', 1]
['27429', 1]
['27430', 0]
['27431', 1]
['27432', 0]
['27433', 1]
['27434', 1]
['27435', 0]
['27436', 1]
['27437', 1]
['27438', 1]
['27439', 1]
['27440', 1]
['27441', 1]
['27442', 1]
['27443', 0]
['27444', 1]
['27445', 1]
['27446', 1]
['27447', 1]
['27448', 1]
['27449', 1]
['27450', 1]
['27451', 1]
['27452', 1]
['27453', 0]
['27454', 1]
['27455', 0]
['27456', 1]
['27457', 1]
['27458', 1]
['27459', 1]
['27460', 1]
['27461', 1]
['27462', 1]
['27463', 1]
['27464', 1]
['27465', 1]
['27466', 1]
['27467', 1]
['27468', 1]
['27469', 1]
['27470', 1]
['27471', 1]
['27472', 1]
['27473', 1]
['27474', 1]
['27475', 1]
['27476', 1]
['27477', 1]
['27478', 1]
['27479', 0]
['27480', 1]
['27481', 1]
['27482', 1]
['27483', 1]
['27484', 1]
['27485', 1]
['27486', 1]
['27487', 1]
['27488', 0]
['27489', 1]
['27490', 1]
['27491', 1]
['27492', 1]
['27493', 1]
['27494', 1]
['27495', 1]
['27496', 1]
['27497', 0]
['27498', 1]
['27499', 1]
['27500', 1]
['27501', 1]
['27502', 1]
['27503', 0]
['27504', 1]
['27505', 1]
['27506', 1]
['27507', 1]
['27508', 1]
['27509', 1]
['27510', 1]
['27511', 0]
['27512', 1]
['27513', 1]
['27514', 1]
['27515', 1]
['27516', 1]
['27517', 1]
['27518', 1]
['27519', 1]
['27520', 1]
['27521', 1]
['27522', 1]
['27523', 0]
['27524', 1]
['27525', 1]
['27526', 1]
['27527', 1]
['27528', 1]
['27529', 1]
['27530', 1]
['27531', 1]
['27532', 1]
['27533', 1]
['27534', 1]
['27535', 1]
['27536', 1]
['27537', 1]
['27538', 1]
['27539', 1]
['27540', 1]
['27541', 1]
['27542', 1]
['27543', 1]
['27544', 1]
['27545', 1]
['27546', 1]
['27547', 1]
['27548', 1]
['27549', 0]
['27550', 1]
['27551', 0]
['27552', 1]
['27553', 1]
['27554', 1]
['27555', 1]
['27556', 1]
['27557', 1]
['27558', 0]
['27559', 1]
['27560', 1]
['27561', 1]
['27562', 1]
['27563', 0]
['27564', 1]
['27565', 1]
['27566', 1]
['27567', 1]
['27568', 1]
['27569', 1]
['27570', 1]
['27571', 1]
['27572', 1]
['27573', 1]
['27574', 1]
['27575', 1]
['27576', 1]
['27577', 1]
['27578', 1]
['27579', 1]
['27580', 1]
['27581', 1]
['27582', 1]
['27583', 1]
['27584', 1]
['27585', 1]
['27586', 1]
['27587', 1]
['27588', 1]
['27589', 1]
['27590', 1]
['27591', 1]
['27592', 1]
['27593', 1]
['27594', 1]
['27595', 1]
['27596', 0]
['27597', 1]
['27598', 1]
['27599', 1]
['27600', 1]
['27601', 1]
['27602', 1]
['27603', 1]
['27604', 1]
['27605', 1]
['27606', 1]
['27607', 1]
['27608', 1]
['27609', 1]
['27610', 1]
['27611', 1]
['27612', 1]
['27613', 1]
['27614', 1]
['27615', 0]
['27616', 1]
['27617', 1]
['27618', 1]
['27619', 1]
['27620', 0]
['27621', 1]
['27622', 1]
Manufacturing-nondurable goods -1.073486328125
37 -1.04461669921875
31 0.96826171875
Armed Forces 0.88555908203125
Group Quarters- Secondary individual -0.827392578125
Holand-Netherlands -0.8167724609375
Ireland 0.784332275390625
Wholesale trade 0.73870849609375
Grandchild <18 ever marr not in subfamily 0.708740234375
Not in universe -0.674560546875
|
Finbert_Sentinment_Analysis.ipynb | ###Markdown
Sentiment Analysis with Transformers: The HuggingFace Transformers library is presently the most advanced and accessible library for building and using transformer models. As such, it will be what we primarily use throughout these notebooks.To apply sentiment analysis using the transformers library, we first need to decide on a model to use - as we will be applying a pretrained model, rather than starting from scratch. The list of models available can be found at:* https://huggingface.co/ProsusAI/finbert ![huggingface.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABR0AAAKJCAIAAAC5zhlmAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAP+lSURBVHhe7P0JsGXXlR0GIqIHu9XtaLfklsOtoRRSO8LldlnhbltsuWzJZcmlkixZBRQZCg8th9yEwhLVUaTVkhhtt4oA4SiyJgCWHa5il2ogCBZRAAFCBBNFlTiAABIjQSIxkgSLGDKRmUAmkInMP/+fv9cezrn7DPfec9979+P/xF6x/7rrnXvufO9+e73zXuY1+w6Hw+FwOBwOh8PhcDgWhftqh8PhcDgcDofD4XA4Fof7aofD4XA4HA6Hw+FwOBaH+2qHw+FwOBwOh8PhcDgWh/tqh8PhcDgcDofD4XA4Fof7aofD4XA4HA6Hw+FwOBaH+2qHw+FwOBwOh8PhcDgWh/tqh8PhcDgcDofD4XA4Fof7aofD4XA4HA6Hw+FwOBaH+2qHw+FwOBwOh8PhcDgWh/tqx+qxtb1z4eKls2++der0mx4eHh4rDCQWpBckGU03DofD4XA4HIcA7qsdKwZK3jNvnH/7wjvvXEpw2eFwOBaCJhEGEgvSC5IMUo0mHYfD4XA4HCvCVTM8dvAfxLuvdqwS5966eP6ti1IKrzHWHQ6HY0WQrCIZBqkGCUdTj8PhcDgcjqUhw2OXLq/v7OxeuaKNwJHTmG5v7+JADvKDePfVjpUBd62YapS/Gxsbm4wth8PhWBEkqyC9iLtGwjmwN0uHw+FwOK5unHvr4tsXLl0xPvUq0HiBgzqYD+LdVztWg63tnTNvnEexC1Mtdnp7e3snYNfhcDiWgKaSnR0kFjHYMnyNtOO/tXY4HA6HY0lcuHgpmuqrj9++8M4BfBDvvtqxGuBmRYipRuErpfCew+FwrBSSW5BkxFpL5tE05HA4HA6HYzpkeAwW9CrGAXwQ/y756pc/vX/8p/bv+l8k8eX/y/4z/5/9t7+pfRxHCmfffOvy5bVoqlH+6l3scDgcK4W4a7HWSDtIPpqGHA6Hw+FwTMeFi5feubQm77DM+m57NWkc4NwfxB+4r379n+w/8B+Si/7Gv7v/rZ/YP/Gn9p/9kf0T/+b+iT+7/9h/uP87/2ea9c3/ev+dF7S/44jg1Ok3NzY2tra2rKnWeQ6Hw7EiSG5BkkGqQcJB2kHy0XkOh8PhcDim4+ybb21t7aB+56BC/urTW1vbc38Qf7C++rU7yTY/9hf3v/kn9k/80Xo8+aP7D/yZ/d/9t/Z33tGlHEcBKG3jYLUUvjrD4XA4VgrJMHHI2n21w+FwOCbh0uV1hL5wcBkv42L8G068w16deu6C4QB99XM3kKn+9p/PjXQ1HngfdXYcHYiv3tnZodvWTbXD4ZgTSDJINUg47qsdDofDMRXuqzOQr6Yvgu1d3Xy1+OqX/ifyyd/9ydw/D8ST1+5/5f+uizsOPXCnbm1tua92OBwHgOirkXbcVzscDodjEtxXZyBfzb/k5P/CBxb06tRXha9eP7V/3x/ef+a63DmPxjf+LI1yH3qc/ux111xzzXWfPa2v35MQX8237rivfvnV1596+gUEhDY5HA5HM8RXI+EcZl+9vb2D0BeOALlw8mahTQ6Hw3GwcF+dgX21JufOjl51+qrw1c/fuH/8J6xh/tKv/MixT/2IbUE8dde/dsctf/LSY3+sa3z6z+5/6Y/ubzRYr9duJ2ubmluxu8BNj2jLPDh+k2zmmpuOa8sqEI4ow8zHsjgaffW5ty587p4v333fVx954gQCAi/RqLMdDoejAYfcV+/s7L594Z3TZ95EQOClzngPY2Nz6623L75+5s0s0IhZ2snhmBMvv/r6p++471dvuwcBcfg+2T99+weuikGaR1AXD5bEVOJed/tr+updgfvqDHgnxVsVqvjF+fFbfuiHb3liuM+7zVeFrz72x/ef/SvRLX/ulj/57/7of4R44LZ/PTbCTkvjP/jbfzo2Ujz4H+x/7x/pegbwbvrq/eM38mZuXKWtvip9Nfzzpz+Xv5Ph5efunmCtzz18v7wpUtz/orbOg+/cP/smCrxx/K577nj4DX1VR0ufUbx47LZ7jj2jLxyOo4VD66uR/N65tCaO2gb//yXapxec9vOqmirU6aX2aF0LtPRZHU6dfiNz1DEwSzv1IL6bC64G4zETisrh0NYM7wp+92uP3n3fV+Wug8BLnTGK5MQ2PDXUf7pvlK2stphcHbTWjejfT+45dPjyRL+7N+fqffViF30QdCYP6n4QX90Yj3/yfT/0wxp/886zpvHDnz+Z9Fwk2J8/njR++9Yfft+tj9uWBePo++rTX9r/4r+sJvm+/w3iS7/yIz/+5//cx//un3rqrn8t+mf4ajjqv/7X/gza95/4V/Y//7/c/+q/SLO++Rf2v/qndVUDCCnvXfHVs6B2RIcZLb4a/rn68bBYa30xCDbVD3xHX63EXg7hPe+raUMr9t7PPPCrdz1xTl84HAvicPrqCxffeeON85mjjnHm7Lmzb5zXrlVI2v/A7TbpSy17Ffjq7770yrdOvPjci99/6fde+72XTyIg8BKNmKWdekDv5t1p4e+IpWepClpqxSUpNn0Qg2z3/dMHvzQW2jVDVtnzHfUet9aXLq89+8JL8tOzvkAHdNMFSqSnMb0be5BdiF4c0B21EqQej4bWWx7DQwv31RnwTrq9A+xu02R3QJN/vv6e17X927ew4836LKXJV98MX23aw1ZiH9t/ij76vvrpv7v/9fepf2ZfDfHdY/+qtqTx+tf+BH0PXHz119hXf+v/Rv/g2fZFXVsfOOtlxUfuq4s+Uq+YvMBpIkcoO8LiCeh2D0vpevQlNhp3ANtMnjQef8iQl021I+qQriF/10x21WzaLrXqbDjqq+XjYX1RQD5C1he9mN1IZ3Bf7b7acThxOH01vOL3fu/V10+/kTlqxKsnzzzw8De/8o3HtWsVSN0fuOmmD9j3Cyq7b7rx
uqvDV2dmJsZEXw2QtR49J0fXV8uA6nBo1wz07p/s4UHW5YcTL4d/z2U4qh/6C5qMdIbiQvTg6Prq9mM8pHBfnYF9NZlP2E/mPk0W95bHh/sspx+7Gb76saT9W7TRx0wf23+KPvq++tG/tn/8z1nzPC2+9UfJV198TtfWB7qbCfaNdqKvpvfpGrjsqDlhQr+vzhGSsjHbCfISoXZEAt3tBN2TXFk/P5OV9qnvE4MY9dXffPqFR544oS8KYBY66It+kNHtdWVkFOX74dZz0iKhvbOI4u7A1K4D4Kbn/cf5HVZ8dffN85rHRh9sLi7LmyY7atejeP2JO2Q96R4C3SbueuI4r1Bn2L3qDjzx1eaL8enmaujWdv8T1jPbb9frms3exsZKN0I83vTqmDXIhrqtU8hp767a8EcYtGmzclpV6N9zBmr3A+3SA9/RHePOeg9QrPgTBMecOLS+GgX6t5958Qcvn7Sm+omnnvvN3/oi7rFxX33NTbd/1rhovPV84Pbbb0zeCMxbQOaKu3exmz5Lq+rmhjcUoPscNvHV5h1wnjJupb6aT0JsMUcXGtM3YmmsdCOYN0dbE3cnRE9+WgbwaTRbSXfvXQMdY1LZJ3V59TboufT12yz9LCa5LnyvntazpH3MSuJeFSeWMOOZlKdyNNBNFyhBB9Xrl+onyl6IvpOm50oR7yhzWnqeSjnV3dXMLpDCXOLVILmXgO4YabdvekQvIu1/esjJoydrsOeHNDrHg01OdVG79l6IqThIX92TZPJ7AMiumj3n0Di33arstVgFqIyH+dzeId5hruuzn7/+fT/0yW9V+ogfDu0nf/vD8bviHB++6zXuc/09p07ec702dv371sPt6qtJ07I3P/ZaXENoL9dT00ffVz/4E/uP/cXcLZ/4ox//u3/q9a/9ifjy0mN/7I5b/uSXfiX/x8z2n/4j5KvPjd099qEtoLdp6BNzlmbDNMFJZ71xQ34PeVPShMkd5mXoHF6GVYVl5VnKOmseqaS/6hHpU3T69ht1x+IaTKoSaEY7/Rp3DO1hQ7oble0uioPx1dGqFRbIjr52thOOK3oqdl/hO+TipoyRY8sX5z5xx8M0S3ygroE3XVov6aPtatKMLY9WMFmc9ra+Y9wtzsrdo67N+GrqH8wktLW1BZL94V3tdjueCmqP7tSe1d5uZsfMCbc7ZtZjOwP2AKG7bRXoXbB+Bur3A3e25t92gw5XwXHocZh9tcRzL7z06munwZ//4lc4LVC0+OrjYPPWgzcaYW7gN5Qwl9+nYvFK7wXpu1uYRauNxRx108xval/qb6q3Fb41RLzw3R/Ek5PFd78/2VebnccR9Z+ErvTs6WZPTnfmzVnit8tK53SvoON2lwfeDbNTVIZ2zZDuoRQ2tZ3vDrB66amxepuZewZIrosUUd0J55V0C95+3Wch6yd2vjMJDHygY2Pww51QuZmjE/SeKHu2B05adr3szWbvUtmBuHU51boSO8ueXuhuoyuBvVUItBuy83J+shvPnoruQI7feBN1swfOHWL/vlOanLdV4MB8dXJjdCet1JXrZc85P1D2cY7XejUQX90U5HvVJxftNz/Kmk210dffc7LrQ18j55dn72KLrovHMOsJ0fnqLXHUYYW8oWJP+uPo++on/qv9B/9M7pZP/FH5V8r+y7/2Z279//7bH/obP/rjf/7P4eXnbvmTWbf9p/44+eq1H+ja+qCPZR1654U+MWXLPaoPapgrnfkxyJ5tmsl3t2bYJN8BNsd1L+Oy+vAkG+UkCFSejeoRmYwWVhvAs2JjtsK8c0A8Fctj1Fe//Orr99w39D3w5n+6rBsdTaxptHwAXJ/xYAprwMgWGhOVmMAO1vUl9swg7VMY0bCVtJudlS6Sm0Dr9NBTdtL0yQ5kAPkxZtuNsO19fYBuVn5oDDTacxUvUGqP66e0il5fXTsDvfdDdhJ6rrvj8OPw+2qJ6Kglmnw1vYnI+wXeIEjgXcPUUqEOI9DbjcyiJB/edAimSusWZ1BPeSvp+nTrmQ8PPvLU57/4FfADx7/50KPfQkDERu3Ug/zogMYy1LxpWnSzauvpTpEAfWTr6fkfWP+SgK8eDe2aoawcwnnruQ1ql77/NstOV3JdsjOZr4TQd2LnO5PAff/0wc//k3/28GPfzp7NGJiFDuimC/SAdj49pUMnys4aOGn9a0i6AS0rrJ3zFSK9TFy+6sviLjJ7WL+4dlez3e5bdtVHd1C+2n7YQYgHlV7ivJvAnoH0TNae3OXAZfw2nOcm85h+CkZXhovhgLU9+uGt7Uc/+b4f+sRT2k5OmK1v2gdz1XLrOivrCe3qq0mHtYX+Z2DOr//tM7rs0D4TH31f/cx/u/+Vf2v/d/63FE/8K/vf/sP7j9E/Y/YP/vafhpH+q//+j//5f+/H/9M/+xfEZtMI9jf/kHTYP/4H94/9vv2v/hD56r1NXVsf6G4m2JssJkG9U4s+dI8C4bbWlwZmbWqADeLDQzc3QdeTvYy7YTNIBvNWFFE7IkVlDbq5fFsB8VRkWOEzOeqrgYF/t2zgp9d9IGcVrLXoJDIPFtrVJWZmrMedduaNsIyvLpaNji63dpln7vacQ3oma9OjKz9HyJAfY+6ZyYuGDYWVV3x1XzfzEqA9jN00Kr5aDp/mZlsp0eura2dAW2zI3MJIh8OpXH3HYcZV7aspY1NyxktO7Hhv0lyd1tNArLTSksv2DG9JFtLTri28rVQ+5F0RvvvSK09++/njjz/9wMPfjIGXaFzge+D8pmZOhX1bDD3zcwL0dTMvAW2xkLm0uH171cJghe+ky8LsIY7C7Fj/bVBe+v7bLJuVXJdsqWIlQO+JnfNMwjDHBxD++d5jX4+Bl3HWqK9W0HGF3R44UfZWGThp+R3VmaVuVQrju/pXyM8FkJ/5lSC7fNndlVy7bg+LWQJ74NlJsEdndHLeVoED8tVFY3cgdlZ6TSPsbZDeEj0ndgmor+YgC9qkydOqtcbLR8kPPyIdjGbzfDeZ56xPNiuuP+3D7WTjb+GtbL56N/nqV6UdzPvwiacG97PTR99Xn/rC/uf/1/tf+n37X/8/7N/9v6KAT/7iPw8L/cEf/49O/pW/8s2/9Jff/k/+kzv/wn9MXwKHqZYOcNRf+Of2H/2XSX/lT+1vv61r6wPdndlzHvNLq6+O/QXp/Zq9J9mHJMzS9WQv42rjIplF70l/tSNixD2RFSabyw85oNiH1aPFV79+5s3SWuPlp++4b7SuqiJ4rbrjJYg1VQNmXGJqMtlczeqrqT1Zlv0k9aQ+dV+dW9AOlT1RJ9ntSY7iGM2uirnVbdmVp4fT201AnbEP3D9d0KB+UOquE8ebIVswPe0EcwbqV4pQ+GoBn5yGzyYchwZXt68m8YHbj4cvxOKtSgRn8uQtg97FKPnzG4GtwrsqzZTjGcpKTt93ZnmnGPg67gK+Ohw4IG+Cus+2Z1mGVrsJtB6g/tSzeOdl0PkpT46+p9dP8kLAe6X9fX41tGsGu4fxdiL03wYCc+n7b7P8nklOYzkru7sGTqxi9WcSsL56IFp9NRB
O8tCJshdi4KTZboR4ivh2tU80nxw9ewMrZPCOxQdkZUgfKIviynZ7aHbbwh54dhKSo+PzoEhO9fI4IF9Nh5M02ttG0w6hWJBhz3l6/kefpsnAO+nmJpvPifzIJ9jWQrMfflTbz9z5QR3QJpMc+yd9tk/ewb46zhUOzrlr2Xrq5tiSz+Xx6jvOJGvo5yPuq1/+DTLGiC//H2kIGtb6S/TvgZNnPv4HLz34xx/5u//e43/tJ47/V3/u+f/p/0rt//RfoD7ym+qn/jC1yOL3/oGRr4LT3cw3prnJNLPg/tUcrSlb70u61xmaemRuz3MrnfsTCkHXk72Mu6HPjLwcfxhqR8QYPIqwVHcgr51O2rs8e/p07RleGC2+Gjj31gVY63vu++ojT5xAQNx931dRVKGx/b+wjohese7Wcn9lzF42eNtjt1bnqwsfGGexpTQW1GylZ696fWN2UBlyA9/tav9hJocz6WzkxxvQc6WAnoMKGPXVhHAGerfSe0oHZzkOHw6nr75w8dKzzyf/o48t3D939++8Iv/gRR8oUUvqlveRrvbSNwLqYMuvrrSikqtL70ltmlZjBqaPwerLNcFKfXWTx0gOvL9bh9CnPhfIz3+H3pO8EODxRkO7Zkj30O5Vwx6GS99/mw2dxux2qp2r3hNrsNozCeBc2cewL3pPaQk6NC4sB06UnTVw0vrXkJ+rxhVG5GteAfovTZE0zB7Wl7K7l+2qWZYObaU3g8UB+WpOVvajou6E4EinPA7pmSzO+dIgX03mc4uZhnYbNfnqD9KY8+YjvyQWGu0n7/hpMdt5f9MH7TReLcsm64SLft/Nj5hlH+ERbNGvkK++85XYnzs/Gpe166noI+6rH/ix/Wf+6v6Jf3v/2L+4/+AfIJ8svvqxf3n/i/88CRvf/ENkob/9h8lRw12j5ct/YP+BP7T//J/a/9qP7D93g66zCrqbCfYmoweSEW9ouilLyG0d1pBCn5C4qgT6PNDN3f8yLqurqu9DPekQysdm6Chqu9rl6AxhkZWg0VcLXj/zpvxCLHpp8dsj1potqLFe5PqsA+yM1usvfocMElm10Mi6z1eLT4strz9xfMRwdkj79Ppq2fkwy+55umlapGfW/hvfeUbazZ4880Ds3JlJXkm3Gwo+A8FtUufQh7YS2nmLcQeSQ+7r9p37ox01/bOL9cyL5gC7Izp+V9Tx1CX72YEWDBviA9TTXj0D9ftB9sqYZ7yMfbods8v2ace7jMPpqwHs2Oun34iOkW5Ujie//Twyo3bqA6X9SkGJnB/fCCj/J9k+lNf8lhG6ydtQdRaKuePa3tWv6B86F1XgqrA6X80fLseXdBShliWdnp/Rbo/cFM+M6c+biO/Lrx0/rrVycnKO3xhr6NXXuAuCrnXcq/Rl/TaoX/rh20zPAOv0DMdVEbhW6Ra8nZaqn9hZz+QjT5yIj+FAPPvCS7pADn6a4mGmJ2fwRCVnvn7S0jsqPXyaFTTvQzxv6anu7lusvKfPSkAHG9efoLhqdut8yHHu8c+GXU3Oj7lpzbLJI7xqHJSvTm8MOrrQp+Ea2XOenv/VPyl4J91g80m8yVzVZIxhekP7q3d/kD0waRqL/iV4Zmg1zOV6TB+0vwb7/cG7X8v6SHsw0htssz94x2nt8wptEaZd+pOrv57XUG6rpo++r/7iv7T//E+SSf7GHyM7/c3/k3pmGOyv/O/VUSNgp+G0j//B/ad51tN/ZP8rv3//sR/af+Zf3X/qJ8hvv/wbus4q+LkF7E3GtzLB5CzO6QzqSfd0lhNLyE3PSa0E3eJhlq4nexlXGx+2bh8s8sejdkQB3c5grq7fph45LsV1N332uM4K6xSgPdQKK8AkX13FuLV+/Y1z4m1CdLaNYGfdf0wsKLs7aTn+Ohk2dZuJu1OIV5TO3b8H3vmoxGRGpH1oHzpDm22l25lsPewkZRb/t152rtmre2ACra3Fhr7zDP13WaFD2Ja1nQm6U4RNYCVhV7t2Wqf9J8dkVQhaW7Xbi995uDuuZKPmeH/1rvvlowpzsPcffxbLhvV350Q6VIaOu1OBrWDHeFv1M0Doux+6NZ975sXj3ekN7bLb4s/7tOPdxqH11YLNza0fvHIKjhG31tcefOLCxUs6YxiUoitlFsop+0ZA1ZUi7dxlfrzd4I3GzLXJ/wPXscMJjXjvgL35bPeuUXvTWQHW1zdefvV1uBfrqPESjZilnXqQvTtne9idELwj4yR074bx3ZZORbXb8UduN2/u9nzad+rrbnpEVxn35LrPfhEOodutrt59V0HXNKns+ajDcZW3Qf+l77vNumtBd073b9fz7Zf0BMxKruN/D5zaihOLfZjxTF66tPa7X3s05Pl6oAO66QI56Mt95kDScqv3RMlhakvvSUvuKLRlZsmcK3ta0lNNa+AVnn7k+O3mhJcGb0nQkdavTrbbxc2Q3Hg3UeVpb1SrgWTZrtxVrO72mMdXpwh7290AlcfTIL21AHvO0/NfnPOlIb56NF575fSGONsQ8Ng6lyw3PLO8PG2+B87BTjjtsxV9tby0wdZal4Wp7mbR1n/6zjvI3nN0a2uJI+6r4Ydf+E/2n/4r+w//2zQc/a0fJgsNzyzM3wZXX/1P/4X9L//vVGMW4qk/tv/kv0Yj1S/8VbLWsOizgp7k9ImVFn4G5NY3/jw8DMUzMAbJEfa5ClljdcniXcHyvhqAqcbbm75wLAMzinsEAWudf+rhcFgccl8tQKV+9o3z+sLhcDgcE0DlsS28cwe+HFbvqycCPiI1xvQxSnK8Bwu8k65vbMJ5roJfh6mG3+1aXv48zPBvv1z2nM7iq18Z7NPPR9xX3/v79x/+0/sv/jh9l/vJf19tc4z4bXD5BvjT6dyn/sT+szzQDWf+xX9p/4m/oeucB/phkvHJ6pzpYzO1vpUPESebYV2we3LC51ur/djp4LESX+1YEV4M/yPXkQR9nbsy2O5wdDgSvtrhcDgci4IKZlsbc6FuhsGXw7vtq9lZ5IN5K/vUYAGwrybzuQp+6pd4kDm2vEqDz790vLf/FA4WfahPPx9xXw23/OJ/TAPOj//Y/vP/TmKbEd/+w/Svf0PIv/ttZ0lgWSz44l+mEe+DGa8uIAY4eOwMCzzeYXQ6w+Rx70MH99WHBkd8sPdoj7Q7Dgjuqx0Oh+MqR/bN6pWWyu/6eHU3RKdY2UcGi4F89TqZzzXmZfXLyXfFf+iDn391sfWUmtYMX73QshtHfbz6ib9BQ83w1c/+67lnRnzlX9n/nT+yf+Iv73/zP9q/+5/bf/SH8g4IuHH5ffWpL+g650P+04jkFje/jmAs8bXtzKUf9ZFqgftqh8NxYHBf7XA4HI6FcQh89eEC+eqNTbKg65tsQa9OfcR9NfDyb+wf++NkjB//D+kfMHvxx/lr4T++/9C/tX/379t/9D/df/Av7j/614jv/Rf2v/lj2uHFv7z/7F/d//q/SQs+8GP7b3xd1+Y4rHBf7XA4Dgzuqx0Oh8PhWBXOvvnWO5fW2HxuXK186dI6DlMPeB7M76sFb3+b/qMsOOR7fz9ZZf
ATfyP/L6lPfYE6YC4CVvz4tfvfu2Xkv612HBrgTl3f2NjZ2XFf7XA45ob4aiQcpJ253yYdDofD4bi6ceHipbfevgjzucYWNMRVpXGArf89x6I4KF/tuNqBO/XiO5fdVzscjgNA9NVIO3O/TTocDofDcXVja3vn9NnzMJ+X18iFMl9tGgeIw9QDngfuqx2rAe7UM2+c397e3t3dRcnr1trhcMwEyTBINUg4SDtzv006HA6Hw3HV4+0Ll948fwH+86oMHBoOUA91NrivdqwM9B2SC+/En1gDOsPhcDhWBMktSDJINUg4PljtcDgcDsdKcO78BbHWly6vX02Mg0LoQc4J99WOVeLcWxcza+1wOByrRTTVSDiaehwOh8PhcCyNty9cOn32PN5eL75zGY70SAcOAQeCwzmAkWqB+2rHinHh4qUzb5x/59La1tY2yl+Hw+FYIZBYkF6QZHyk2uFwOByOlWNrewfvsGfffOvU6TePdOAQcCAH+WMx99WO1eOqeSA9PDwOWxz826TD4XA4HA7HKNxXOxwOh8PhcDgcDofDsTjcVzscDofD4XA4HA6Hw7E43Fc7HA6Hw+FwOBwOh8OxONxXOxwOh8PhcDgcDofDsTjcVzscDofD4XA4HA6Hw7E43Fc7HA6Hw+FwOBwOh8OxONxXOxwOh8PhcDgcDofDsTjcVzscDofD4XA4HA6Hw7E43Fc7HA6Hw+FwOBwOh8OxONxXOxwOh8PhcDgcDofDsTjcVzscDofD4XA4HA6Hw7E43Fc7HA6Hw+FwOBwOh8OxONxXOxwOh8PhcDgcDofDsTjcVzscDofD4XA4HA6Hw7E4rrl0ed3Dw8PDw8PDw8PDw8PDw2OxODLj1Vd0SrhiXoiWhiv8An8Uoq+kmoJepJomRgta9Aj2dEoodcYU/KKqVUSdsUSnQUarGNXEu53eKzVYol+rSGK3WYMlxvRObBnSALVwNOkaSwS906gL3tndHtTEopkl+vQSsVPX26Um3ur0jmpi1cTbnZZItYrhQCaiB9jhcDgcDofD4TiaOCy+mkwtWDVNtUX1ATEm46zWeh/edZx5kugFmSZRYxpaVDcyfGnCYqStbmSapHqMMc30AMOjZjow2jJdY7a4iW5kmnRMVjnVAww/mekeJu+a6mZm38saiC2ildFYYba7JcPTHgZ2X+1wOBwOh8PhONI4Yr+vhrONEK3Mk4wxhfO0WlmMsdUjvBTgWiNEDzDFXhBRN7KE1V2QoU31EMOR1pjcbKqJW0OsstWNLGF1LeBUG5hNsgYM6mIsETQb3UQ3MpneCbzSIEOb6oLZ9Fr7XVhxGmpOdTcEjc5Rj4b7aofD4XA4HA7Hkca746vJ7AbAu0bAW6oKWl6r5hdBr4AxGddirfd1/LlPGyZnS5onvZp4YASbJlWNaaplruqM4UV7tdjjFg3mpibdMflY0ZgO6N1Ew6xajWmiZe6QZrvbqzPm2VUttrnTMjfojOEt+zV51+mamSyuaBmL7tfEZHFVs93VuUETGw03G1tKPStLWO2+2uFwOBwOh8NxpOG/r2ZOtKBFjwDuNKLUGVPwi6pWEXXGEp0GGa1iVBPDhQZNTjXTYIl+rSIJNr1NGiwxpuFCY2O/ZjOs0aRrLBE0GdoWXTAZ2iFNLJpZok8vEWRoK5qNbqrV+gatZpjndsYYL6OWSLWK4XBf7XA4HA6Hw+E40vDfVyeMyTirtS5HqmvMk0QvyDSJGtPQorqR4UsTFiOdjU63ME1SPcaYZnqA4VEzHRhtma4xW9xENzJNOiarnOoBhp/MdA+Td011M7PvZZ2MVFtGY4XZ7pYMT3sY2H21w+FwOBwOh+NIw39f3cJLAa41QvQAU+wFEXUjS1jdBRnaVA8xHGmNyc2mmrg1xCpb3cgSVtcCTrWB2SRrwKAuxhJBs9FNdCOT6Z3AKw0ytKkumE2vtd+FFaeh5lR3Q9DoHPVouK92OBwOh8PhcBxp+O+rx7RYa/99daPumHysaEwHtP++uqaZyeKKlrHofk1MFlc1212dGzSx0XCzsaXUs7KE1e6rHQ6Hw+FwOBxHGv77auZEC1r0COBOI0qdMQW/qGoVUWcs0WmQ0SpGNTFcaNDkVDMNlujXKpJg09ukwRJjGi40NvZrNsMaTbrGEkGToW3RBZOhHdLEopkl+vQSQYa2otnoplqtb9BqhnluZ4zxMmqJVKsYDvfVDofD4XA4HI4jDf99dcKYjLNa63KkusY8SfSCTJOoMQ0tqhsZvjRhMdLZ6HQL0yTVY4xppgcYHjXTgdGW6RqzxU10I9OkY7LKqR5g+MlM9zB511Q3M/te1slItWU0VpjtbsnwtIeB3Vc7HA6Hw+FwOI40/PfVLbwU4FojRA8wxV4QUTeyhNVdkKFN9RDDkdaY3GyqiVtDrLLVjSxhdS3gVBuYTbIGDOpiLBE0G91ENzKZ3gm80iBDm+qC2fRa+11YcRpqTnU3BI3OUY+G+2qHw+FwOBwOx5GG/756TIu19t9XN+qOyceKxnRA+++ra5qZLK5oGYvu18RkcVWz3dW5QRMbDTcbW0o9K0tY7b7a4XA4HA6Hw3Gk4b+vZk60oEWPAO40otQZU/CLqlYRdcYSnQYZrWJUE8OFBk1ONdNgiX6tIgk2vU0aLDGm4UJjY79mM6zRpGssETQZ2hZdMBnaIU0smlmiTy8RZGgrmo1uqtX6Bq1mmOd2xhgvo5ZItYrhcF/tcDgcDofD4TjS8N9XJ4zJOKu1Lkeqa8yTRC/INIka09CiupHhSxMWI52NTrcwTVI9xphmeoDhUTMdGG2ZrjFb3EQ3Mk06Jquc6gGGn8x0D5N3TXUzs+9lnYxUW0ZjhdnulgxPexjYfbXD4XA4HA6H40jDf1/dwksBrjVC9ABT7AURdSNLWN0FGdpUDzEcaY3JzaaauDXEKlvdyBJW1wJOtYHZJGvAoC7GEkGz0U10I5PpncArDTK0qS6YTa+134UVp6HmVHdD0Ogc9Wi4r3Y4HA6Hw+FwHGn476vHtFhr/311o+6YfKxoTAe0/766ppnJ4oqWseh+TUwWVzXbXZ0bNLHRcLOxpdSzsoTV7qsdDofD4XA4HEca/vtq5kQLWvQI4E4jSp0xBb+oahVRZyzRaZDRKkY1MVxo0ORUMw2W6NcqkmDT26TBEmMaLjQ29ms2wxpNusYSQZOhbdEFk6Ed0sSimSX69BJBhrai2eimWq1v0GqGeW5njPEyaolUqxgO99UOh8PhcDgcjiMN/311wpiMs1rrcqS6xjxJ9IJMk6gxDS2qGxm+NGEx0tnodAvTJNVjjGmmBxgeNdOB0ZbpGrPFTXQj06RjssqpHmD4yUz3MHnXVDcz+17WyUi1ZTRWmO1uyfC0h4HdVzscDofD4XA4jjT899UtvBTgWiNEDzDFXhBRN7KE1V2QoU31EMOR1pjcbKqJW0OsstWNLGF1LeBUG5hNsgYM6mIsETQb3UQ3MpneCbzSIEOb6oLZ9Fr7XVhxGmpOdTcEjc5Rj4b7aofD4XA4HA7HkYb/vnpMi7X231c36o7Jx4rGdED776trm
pksrmgZi+7XxGRxVbPd1blBExsNNxtbSj0rS1jtvtrhcDgcDofDcaThv69mTrSgRY8A7jSi1BlT8IuqVhF1xhKdBhmtYlQTw4UGTU4102CJfq0iCTa9TRosMabhQmNjv2YzrNGkaywRNBnaFl0wGdohTSyaWaJPLxFkaCuajW6q1foGrWaY53bGGC+jlki1iuFwX+1wOBwOh8PhONLw31cnjMk4q7UuR6przJNEL8g0iRrT0KK6keFLExYjnY1OtzBNUj3GmGZ6gOFRMx0YbZmuMVvcRDcyTTomq5zqAYafzHQPk3dNdTOz72WdjFRbRmOF2e6WDE97GNh9tcPhcDgcDofjSMN/X93CSwGuNUL0AFPsBRF1I0tY3QUZ2lQPMRxpjcnNppq4NcQqW93IElbXAk61gdkka8CgLsYSQbPRTXQjk+mdwCsNMrSpLphNr7XfhRWnoeZUd0PQ6Bz1aLivdjgcDofD4XAcafjvq8e0WGv/fXWj7ph8rGhMB7T/vrqmmcniipax6H5NTBZXNdtdnRs0sdFws7Gl1LOyhNXuqx0Oh8PhcDgcRxr++2rmRAta9AjgTiNKnTEFv6hqFVFnLNFpkNEqRjUxXGjQ5FQzDZbo1yqSYNPbpMESYxouNDb2azbDGk26xhJBk6Ft0QWToR3SxKKZJfr0EkGGtqLZ6KZarW/QaoZ5bmeM8TJqiVSrGA731Q6Hw+FwOByOIw3/fXXCmIyzWutypLrGPEn0gkyTqDENLaobGb40YTHS2eh0C9Mk1WOMaaYHGB4104HRlukas8VNdCPTpGOyyqkeYPjJTPcweddUNzP7XtbJSLVlNFaY7W7J8LSHgd1XOxwOh8PhcDiONPz31S28FOBaI0QPMMVeEFE3soTVXZChTfUQw5HWmNxsqolbQ6yy1Y0sYXUt4FQbmE2yBgzqYiwRNBvdRDcymd4JvNIgQ5vqgtn0WvtdWHEaak51NwSNzlGPhvtqh8PhcDgcDseRhv++ekyLtfbfVzfqjsnHisZ0QPvvq2uamSyuaBmL7tfEZHFVs93VuUETGw03G1tKPStLWO2+2uFwOBwOh8NxpOG/r2ZOtKBFjwDuNKLUGVPwi6pWEXXGEp0GGa1iVBPDhQZNTjXTYIl+rSIJNr1NGiwxpuFCY2O/ZjOs0aRrLBE0GdoWXTAZ2iFNLJpZok8vEWRoK5qNbqrV+gatZpjndsYYL6OWSLWK4XBf7XA4HA6Hw+E40vDfVyeMyTirtS5HqmvMk0QvyDSJGtPQorqR4UsTFiOdjU63ME1SPcaYZnqA4VEzHRhtma4xW9xENzJNOiarnOoBhp/MdA+Td011M7PvZZ2MVFtGY4XZ7pYMT3sY2H21w+FwOBwOh+NIw39f3cJLAa41QvQAU+wFEXUjS1jdBRnaVA8xHGmNyc2mmrg1xCpb3cgSVtcCTrWB2SRrwKAuxhJBs9FNdCOT6Z3AKw0ytKkumE2vtd+FFaeh5lR3Q9DoHPVouK92OBwOh8PhcBxp+O+rx7RYa/99daPumHysaEwHtP++uqaZyeKKlrHofk1MFlc1212dGzSx0XCzsaXUs7KE1e6rHQ6Hw+FwOBxHGv77auZEC1r0COBOI0qdMQW/qGoVUWcs0WmQ0SpGNTFcaNDkVDMNlujXKpJg09ukwRJjGi40NvZrNsMaTbrGEkGToW3RBZOhHdLEopkl+vQSQYa2otnoplqtb9BqhnluZ4zxMmqJVKsYDvfVDofD4XA4DhIoPy5cvHT2zbdOnX7T4+oLXFlcX1xlvd4HAv99dcKYjLNa63KkusY8SfSCTJOoMQ0tqhsZvjRhMdLZ6HQL0yTVY4xppgcYHjXTgdGW6RqzxU10I9OkY7LKqR5g+MlM9zB511Q3M/te1slItWU0VpjtbslIN4eB3Vc7HA6Hw+E4MMBxnXnj/DuX1lA7obwHUFTzlOBacHQ1/nBlcX1xlXGtZdYBwH9f3cJLAa41QvQAU+wFEXUjS1jdBRnaVA8xHGmNyc2mmrg1xCpb3cgSVtcCTrWB2SRrwKAuxhJBs9FNdCOT6Z3AKw0ytKkumE2vtd+FFaeh5lR3Q9DoHPVouK92AJtb2xffufzm+Qunz57PPnX28DjkgZsWty5uYNzGekM7HI7DinNvXXz7wiVyCNEpuL56Na41rji/mh3+++oxLdbaf1/dqDsmHysa0wHtv6+uaWayuKJlLLpfE5PFVc12V+cGTWw03GxsKfWsLGG1++r3OC6vbZx98603zr399oV33nnnksVlh+MQQ29TBm5d3MC4jXEz45bWm9vhcBwyXLiIR/USVdKhknd91Wtc8YMZtfbfVzMnWtCiRwB3GlHqjCn4RVWriDpjiU6DjFYxqonhQoMmp5ppsES/VpEEm94mDZYY03ChsbFfsxnWaNI1lgiaDG2LLpgM7ZAmFs0s0aeXCDK0Fc1GN9VqfYNWM8xzO2OMl1FLpFrFcLivfs8C98+b5y8g4EnEqKwx1h2OIwW5b+Uexs0sdzVub73RHQ7H4QBKjjNvnKcKG3WyTBiuBVexxnXH1df7YDb476sTxmSc1VqXI9U15kmiF2SaRI1paFHdyPClCYuRzkanW5gmqR5jTDM9wPComQ6MtkzXmC1uohuZJh2TVU71AMNPZrqHybumupnZ97JORqoto7HCbHdLRoo5DOy++r2J9Y3NU6ffvHCRHLV46Y2NjU3GlsNxpCD3LW5g8di4pXFj4/bGTa63u8PhOATAg/nOpTWpivsqatdXpcZ1P4Aha/99dQsvBbqYAaIHmGIviKgbWcLqLuimSvUQw5HWmNxsqolbQ6yy1Y0sYXUt4FQbmE2yBgzqYiwRNBvdRDcymd4JvNIgQ5vqgtn0WvtdWHEaak51NwSNzlGPhvvq9yDgN06fOfcO3ujYUYuX3mbQZ0QMel4djsMNvVmRNhniscVd4/bGTe7W2uE4PDj75lt4q+ECVipegWvB1axx3XH19T6YDfP56uM3XVPDjccxj8xuALxrBLylKtWnP/OBa6777Glpl5lBr4AxGddirf331Y26Y/KxojEd0P776ppmJosrGhjUxGRxVbPd1blBExsNNxtbSj0rS1jtvvq9Btxsp06/eekSDVNvbGyIo6b7FY+Zw3FkgRsYt7G4a9zYuL1xk+NWx92tt77D4XhXgedR3mec34OMq6/3wWw4gPHq07ezN9ZX09AtSzY3AI40QrQ0wGGKphAN92t1MMOpponRghY9ArjTiFJnTMEvqlpF1BlLdBpktIpRTcz3nmhyqpkGS/RrFUmw6W3SYIkxDRcaG/s1m2GNJl1jiaDJ0LbogsnQDmli0cwSfXqJIENb0Wx0U63WN2g1wzy3M8Z4GbVEqlUMx1Hw1fRBYUseO37jNdd84PbF8t17B2+ev3Dh4iUZpraOmrKew3GUQW+ClNvJXcvANW513PB66zscjncV4qs93pvxHvLVZGrBqmnKlI1Xz86YjLNa63KkusY8SfSCTJOoMQ0tqhtZSteOxUhno9MtTJNUjzGmmR5geNRMB0ZbpmvMFjfRjUyTjskq
p3qA4Scz3cPkXVPdzOx7WZMTCS2ildFYYba7JcPTHgZesa9+hL8rw1+NsTj92evQvOhnfO6rV4bLaxvnzl9YW1sTU00PGPJCgHZioB3QFw7HoYfexPyGi4ws1hq3Om54/xfCHY7DADgrFEtlvdfHb719sWz5wSuvlT2dDz9frb7afkX8pq74lWqYcdMj0pQsK2Uxz6J2BZewUoxlLDbYamUxxlaP8FLoCsagB5hiL4ioG1nC6i6ATA8x1bkVJjebauLWEKtsdSNLWF0LONUG5kdLAwZ1MZYImo1uohuZTO8EXmmQoU11wWx6rf0urDgNNae6G4JG56hHYxZfbXMLQZPG0fLV5x6+/1dvu+eOh9/Q1wYDswq8cfyue371tvuPv66vge/cj5YQ97+orYLXn7gjzrrriXPaujKcffOtS5cub2xsyEg10gjnP8rVEWjb3Ny6vLaOgMBLneFwHHrI/Uzvj7tIm9u41XHDH8Dv+hwOxyjEV1OESmxAn3/7wk9c+1/8Nx+9wbY/dPyJX/wffkW0bXfdoL996w9/+PMny/YJ+vFPvu9v3nW2bG/RV6WvxsvrPvNa1Ndcc4NUv1S2fvy4FE/QNz1MZZYZr6ZamQplaDLYH/gM22nS191O1SL1mciYjGux1v776kbdMflY0ZgOaP99dU0zk8UVLWPR/ZqYLK5qtrs6N2hio+FmY0upZ2UJq1fvqz9w000fiB/PMV67/bprrrsu/4yvHQfvq188FsxtYZ4HZlUgDtz4arbZnWHmtUVrzaY6rpbs90qt9ebW9hvn3l5fX9/a2sINSrmAobMZW9vb4qhtoFFnv1t45oE5PmWwwNkeuqDz74BjVZC7Grc3bnLc6rjhcdvj5tfZDofjXQKcVaidRvj8W2Sqf+In//N/49/5c7DWsR2++hfYV2f9lU9+4fofft8PSXzy2/U+PQzHeMvjlfbIj33yfT90/RdOZS0/fMtjpmWcsYfX3/N63n7289eH3Z6+5838LfbVZfsExlm6/k7y1QN9+vhqHa/uwA6Za1Cuem9Xvx0Rlz3+cTP34Ru6L3lKPSZFmRRn+KMQDfdrdTDDqaaJ0YIWPQItGBmlzpiCX1S1iqgzlug0yGgVo5oYLjRocqqZBkv0axVJsOlt0mCJMQ0XGhv7NZthjSZdY4mgydC26ILpkR7SxKKZJfr0EkGGtqLZ6KZarW/QaoZ5bmeM8TJqiVSrGI4ZfPXtx5FbzFfBKdXceHuRi/ijPUWRfygpKW56pPTV1KIwG8p8tXzLRtBiywuQ6e3xWgOzDNgnH3sYnIxXW7DxfuA78gLmLWqAFu9dcAFcfIf+/6H4DXBKc5R6FZtbW2vrG5mjtrE2+GVaMwi/yn1WjNna8jMI3h9zMsfgvvpqgtzbuMnl2+C47XHz6zyHw/EugX21Vji22ik1zPPn7vonv3Drr0B85B987MHjT0j7g8cfR0vPst++5YfhjbUdphcOsG/9pUb/uGy1z2Of/PD119N4b2j/1i3Xf/j6H775sZ7+dX3ynuthzvN28tW0dW5XA18uu6zG+bH7X+0zovWsLrTsVeyrTUmqo9CxAJWRamrBsjxefRyMUvV1bQfDZhMWHqnuY0zGWa11OVJdY54kekGmSdSYhhbVjQxfmrAY6Wx0uoVpkuoxxjTTAwyPmunAaMt0jdniJrqRadIxWeVUDzCe2Ez3MB7yTDczJQjR8tmbtOjncMJorLAkl4LhaQ8Dz/A98JuOkyuOXwWnLHTTI1kuYlMdPTB/e7wb4mZTHTuTW7bGmOfan6tEa534alpntOtIeuVHh6NY0lfz0PT9Lw7b48QQig9/Rl4UNntpvHn+wqXLa3awWmcw4D3euvDOpcJOI9D66qkzr7wWL18O+nQgOYpV7jZh3Ffff8dd9iS/eOyu+yfthvvqqwzyHo1bHTc8bnv/18scjncdcFZUe/Bn+lqHDGpYaFhr206+Gi3V/q/dc/0P3/Jo2b69c+rOD+tQ8PX3nNL2s3fBJD/O49vXf+bXuuHiD9/12s42j3vf8liynkdhvD95y/W/Ta6SWh6/5fpP3gxfTVvkPqd+22xF9wFb0TXDjm49drN2QHzyW2Y/2Vc/FrdFS1F/0jtmKd0W9gSLy7JWn8QOkMbaPnzXYzgbslQ8J9+Cr6aj4+3WzondFveUdjqx0njzLfDVOANo7xq7dWr/Hn11+mqpULUklQqYpUDdtRamXLDSy+ukxrUl2JVgzrEqqc0yxlSKtqiVxRhbPcJLgSrHANEDTLEXRNSNLGF1F1TBpnqI4UhrTG421cStIVbZ6kaWsLoWcKoNzCZZAwZ1MZYImo1uohuZTO8EXmlQikl1wZyGhCUlWc1s0pZqYglJZI0xi69WL21bkjFnTjVJ8jGWOLXcBLtsYqQJxsNbX02bSFayAJby1d14ab+vlm+Jd0YaIC/NQ77PpB57FTh99vz6Ov2yGo8ipz1K0RHw1WfeOH/2TTLX1lSfPnvumRe+//Rz33u531f3m1I6UdkgNp8ZjuhUcdT3vyjtcsjh+/OhD9va74TGcltY9tj9D3TtzzxwB14aX52vUMAfZHD7A8fsIehVSPcw6G7/s9/GOw4T5A7HrY4bHrc9bn6d4XA43iWIr0bYKmVA03j1rb9i27/Bvlq0bWctJrazeRJkd6+/5yRrHgoWzV+9Du2yrBpptLC9JKPLcyVoQPsxWFM2t2zL2bve/BjPJU8b7DR5XdFYD28C63n0k7RgbJF1hvV3W5egfWa3zHvCC8omRPNKeFu0G9dfr/tAzp/2mc+Dbpc0nDDm0gA7nxxdf9iN7pzAKstG5RBUYyldg9XoED5i+NYtYux5QYmqvhp9NRWg8ffVua/WGkv6vApN49UoTGmkOvSUPpGpir3hYdvSzpiMa7HW/vvqRt0x+VjRmA5o/311TTOTxRUtY9H9mpgsrmokEW6xmthoSjShpdSzsoTV8/hqtrXsfpElOAUV3jgzvbSgDCknDpxhWpLBagHm6li09dXSM3PvE7GEryZjFpxkxVfLP2ZGdi4bUO08G8I6wFUA72ryJXDKAkiCKcRXS7x5/u2L71wGf+elV+CoJQZ8tRrR3GfSWdKPBnAS+HDgb8OHBWYuLx4/RGAPrGfm3DNPfAenjjvoCSddOW/HnsEKpR2nVz6YCCvBCq0rVm0votFm/Viwa5SlzGj8uYcfqH5c4jgkkDdo+Sr4AZR0DodjGOSrt7ap9mhj9tW/bFvYVyctOT+qI65k/6iFrCBZVu3zFHnLV6GDldX2M/TyUdF1fvQT1F9469W7r4cXBcPo0tz6VjonHNdDlvhuMrGxhTjfuo48c/v1v30mtMd9DkcBJ/yJp8hO07LYB9lWsjZa1SeeYh2Pve+cxBaznzifONLQznYa+8PWnfYwXWqQr1ZfHUpSW3RCx/Gfzmx3y14xY0QP3xC/Thn+YTN+IVUa/ihEw/1aHcxwqmlitKBFj4DKxoBSZ0zBL6paRdQZS3QaZLSKUU0MFxo0OdVMgyX6tYok2PQ2abDEmIYLjY39ms2wRpOusUTQZGhbdMF
kaIc0sWhmiT69RJChrWg2uqlW6xu0mmGe2xljvIxaItUqhmMmX80ZBiKa3sJX2zFngBbk7GTTlCL31SWkf+KrCbSgIF1hI6zvyjAwC0jn9o9XS89o+ayf1FndyxVAfPVO7UvggPXVEtFRj/tqQvFhAbxo57TZ66YnARY3d60EWk/02IqkA85MZVVYRFg9fGeA6UyaFYbFk3V2O6MrEYSPA7rOEL1X03G4QO/je3u44d1XOxyHAeKrN2G02vjnb/1luGjb8o2HH8taevipm8lan9lk03gnTKNph+fcZPMJGxraz9zJXrRYT8dw1NQfPvMTT8Gskg6+un8r0od8vm6LbOrdZFPNmsutqxnO24PN3tp+BPb+UepGx/jozcRk9WXNyVKyKtbYK/LPQ3vLy8oHE7KfYU90bdgun1XSNIt6yhmIa+vlq9FX47X+jpqrTy18918/fvx21KMKKoK54kr//+pXP0NL3vArxz9r6tobj+vcVTAm46zWuhyprjFPEr0g0yRqTEOL6kaGL01YjHQ2Ot3CNEn1GGOa6QGGR810YLRlusZscRPdyDTpmKxyqgcYfjLTPUzeNdXNzL6XNY9Fa4toZTRWmO1uyfC0h4Hn8tWcfG5CulCjW/jqvvFqds42a5W+us8kF75aQe2LWGvyY4v4arJeYczZhnFxHcj+3cNGjvxkus7BrUzH6bPnNzbm89WK+OkAC3sGgh3lQ5ZGPbox2zzaQc0wut3/Irar7rrz1bY/nVV0oN0zA+zBV8dPB0JkvppAa+h23nFYIW/TuOFx2/v3wB2Odx382e4WXNbmJtutMQ0L/fO3kIuGhqP+yD/4GBgtv3XnvbG9b1lyfR+EMwymUdvZUr4CfebOD9p2ttmPVNYTtVjZzc3Td17/4es/yGbyFfLVj1Af2gqcc+hPDvbOV+x6wnbhq2mvYrv0wTqx9dg/7hu1X38HGem0fUvs7qOf4HXCUX/w7kfhse84zetMjiX6anPsPeeEPgJg4432sJ8n7/hpMtjhPJCv7vYHzB806BHZ463oq8NXrxK2ChOtzJOMMZW6LWplMcZWj/BSoOIxQPQAU+wFEXUjS1jdBRWxqR5iONIak5tNNXFriFW2upElrK4FnGoDs0nWgEFdjCWCZqOb6EYm0zuBVxpkaFNdMJtea78LK05DzanuhqDROerRmM1X66d4wSFbX93w++pkNLvw5NlYd0Cfr8623gyyTz3eaWBWATJ4/SOcna8u1zllKw148/yFtbX1uX217HZpXAXUGAxqHCIuXSufEINGX82j4nfcxXY68dV2hbq43RPA+up860CyA4Keno5DA3pnZ1+N297/3TKH410H++rtDditTTZdYxrmGRHbP/z3P/YTP/lfIP7CX/3P33jzraz/yTs+/EM//EtwudxOFhQOUNth/LgdtjDo0+hA5rPoT+skw6xz4/qxrLS8BqsJp4r2V+7+IG8ROm4FOm7l5B2/BHfN62Tj/cj2xiO/9ENkxXWdYf20dcyNmhfnPqY/bTcuiz384M03f1COF8dy8820e92xBM3G+BNPhX34admf6jnpjsvuAx1j2DfWOEtk+D9x92u8/k2zh7ysbrfUV62vtsWUraxsmSVaXqvmF0GvgDEZ12Kt/ffVjbpj8rGiMR3Q/vvqmmYmiysaGNTEZHFVs93VuUETGw03G1tKPStLWD2fr2Y3G382kjnbdMg6fgmcIYY8vpTR5m7ZfED7+O1BW18N3fWJg+HTQH6sx9aWs2Scs+afra9mF51+OxpLif2TAd5uDdnL5XHxncsI3JqUBZAEU2xubb157u0+X/38d3/Q/z8VpWcDFrRmaM+9nrpWPhU1Xy0Hrudk/5kXSbT6al5WTm/nqxMLjZ6qeQfqOxO3vv/GOdlQ3IFnHghHRMfSc3s4DgXkDRo3vNz52upwON4lwFmJ0drY6EzXgBZfbdthra2pzvqztQ5fY1YzmbYHM5mZT1rPI/rDbDLPrGmuWT/7atMfLOPVoU++lY3TjzxyN+y0NAbTzpa16yPLksnXZRHYc7NdttYyq9uWriccI3njaLk3zEcGcuzklrU9brd2Tsj8c+NP3/nI3R+M7eHMoOcjd3z4g3ec3nzlqUfuiHvFY928TmLdh4r28eoOtv6yxZhoaZAqDX8UouF+rQ5mONU0MVrQokdAZWNAqTOm4BdVrSLqjCU6DTJaxagmhgsNmpxqpsES/VpFEmx6mzRYYkzDhcbGfs1mWKNJ11giaDK0LbpgMrRDmlg0s0SfXiLI0FY0G91Uq/UNWs0wz+2MMV5GLZFqFcMxp6+2KEeM2VorCtPLTltQ/B9dAK0t4robj+s8WYpGs48f5//1OmLSl8DZ14WvAUsEV9Y/izwkXkYb2YE8m/WB2jNdPIBMXZwb3d1qsLm1/ca5t/v+3TIB7ofSV58+e66vf4A9qPrB3nHXE+qQpYX/fe+qrwbI/Wq3+4/jlCYdhnx1B+Orge7C2Q1Vd8a234bG6NLJRX/nmSf4oxCOYjTecahA7+P875bhtsfNr60Oh+NdgvhqGK1Gfun7r7z0e69k7WffPJ+1OB8Jfg/5aqmXpGiS4klbVB8QYzLOaq3Lkeoa8yTRCzJNosY0tKhuZPjShMVIZ6PTLUyTVI8xppkeYHjUTAdGW6ZrzBY30Y1Mk47JKqd6gOEnM93D5F1T3czse1nzWLS2iFZGY4XZ7pYMT3sYeMW++r2Nirs7ZDj75lvr6xt4riidITPWgLnynXA46ldO0r9QojMcjqMDucNxM+OGx22vrQ6H490DnNX6xiZcFvEms+v3jPbx6hy2BBOtzJOMMZWiLWplMcZWj/BSgGuNED3AFHtBRN3IElZ3QYY21UMMR1pjcrOpJm4NscpWN7KE1bWAU21gNskaMKiLsUTQbHQT3chkeifwSoMMbaoLZtNr7XdhxWmoOdXdEDQ6Rz0a7qtXhmLQ9RDi8trGufMXdnq+Cm6xtb29tr6hLxyOowZ6W9/bw62OGx63vbY6HI53D+Kr19c3iSVcv2f0VeurbSVlyypbY4mW16r5RdArYEzGtVhr/311o+6YfKxoTAe0/766ppnJ4oqWseh+TUwWVzXbXZ0bNLHRcLOxpdSzsoTV7qtXA/rm8Iq/tj0T3jx/4dLlNTxUlMuQCh2Oqw5yb+Mmx63u/2KZw3FIcPbNt965tA6Ltba+wUx2y/V7QeO6H8D3hvz31cyJFrToEcCdRpQ6Ywp+UdUqos5YotMgo1WMamK40KDJqWYaLNGvVSTBprdJgyXGNFxobOzXbIY1mnSNJYImQ9uiCyZDO6SJRTNL9OklggxtRbPRTbVa36DVDPPczhjjZdQSqVYxHO6r32vAbSODBpQ+GDrD4bgqIHc1bm/c5LjVccPrDIfD8a7iwsVLb739DowWHJfze4px3XH19T6YDf776oQxGWe11uVIdY15kugFmSZRYxpaVDcyfGnCYqSz0ekWpkmqxxjTTA8wPGqmA6Mt0zVmi5voRqZJx2SVUz3A8JOZ7mHyrqluZva9rJORastorDDb3ZLhaQ8Du69+Dw
J+4/TZc26tHVcf5H7GjR1vcp3hcDjebaDkOH32PIzW5TWyW87vHcZ1x9XX+2A2+O+rW3gpaM3IED3AFHtBRN3IElZ3QbVrqocYjrTG5GZTTdwaYpWtbmQJq2sBp9rAbJI1YFAXY4mg2egmupHJ9E7glQYZ2lQXzKbX2u/CitNQc6q7IWh0jno03Fe/NyFDebj6yCScAgk6z+E4gtCbmN6F93Bjy5cydJ7D4TgcePvCpTf5nzzweO8Erjiuu94Bc8J/Xz2mxVr776sbdcfkY0VjOqD999U1zUwWV7SMRfdrYrK4qtnu6tygiY2Gm40tpZ6VJax2X/2eBe46vNude+vixuYW5QWH44gDtzFuZtzSuLFxe+uN7nA4DhPOnb8g1hrlh/NVz1RmHNQ/cuG/r2ZOtKBFj8AWiaXOmIJfVLWKqDOW6DTIaBWjmhguNGhyqpkGS/RrFUmw6W3SYIkxDRcaG/s1m2GNJl1jiaDJ0LbogsnQDmli0cwSfXqJIENb0Wx0U63WN2g1wzy3M8Z4GbVEqlUMBxIfPcCO9yrwtnf2zbfeOPf2xXcuw5Ps8D8V7nAcIeCmxa2LGxi3MW5m3NJ6czscjkOJty9cOn32/Lm3LuKxRRHicfUFriyuL67ywYxUC/z31QljMs5qrcuR6hrzJNELMk2ixjS0qG5k+NKExUhno9MtTJNUjzGmmR5geNRMB0ZbpmvMFjfRjUyTjskqp3qA4Scz3cPkXVPdzOx7WfNYtLaIVkZjhdnulgxPexgY6Q8PoOM9js2tbbwLvnn+At4FT51+08PjCAVuWty6uIH9/1p3OI4KUH5cuHjp7JtvZY+zx9URuLK4vrjKer0PBP776hZeCnCtEaIHmGIviKgbWcLqLsjQpnqI4UhrTG421cStIVbZ6kaWsLoWcKoNzCZZAwZ1MZYImo1uohuZTO8EXmmQoU11wWx6rf0urDgNNae6G4JG56hHw321w+FwOBwOh+NIw39fPabFWvvvqxt1x+RjRWM6oP331TXNTBZXtIxF92tisriq2e7q3KCJjYabjS2lnpUlrHZf7XA4HA6Hw+E40vDfVzMnWtCiRwB3GlHqjCn4RVWriDpjiU6DjFYxqonhQoMmp5ppsES/VpEEm94mDZYY03ChsbFfsxnWaNI1lgiaDG2LLpgM7ZAmFs0s0aeXCDK0Fc1GN9VqfYNWM8xzO2OMl1FLpFrFcLivdjgcDofD4XAcafjvqxPGZJzVWpcj1TXmSaIXZJpEjWloUd3I8KUJi5HORqdbmCapHmNMMz3A8KiZDoy2TNeYLW6iG5kmHZNVTvUAw09muofJu6a6mdn3sk5Gqi2jscJsd0uGpz0M7L7a4XA4HA6Hw3Gk4b+vbuGlANcaIXqAKfaCiLqRJazuggxtqocYjrTG5GZTTdwaYpWtbmQJq2sBp9rAbJI1YFAXY4mg2egmupHJ9E7glQYZ2lQXzKbX2u/CitNQc6q7IWh0jno03Fc7HA6Hw+FwOI40/PfVY1qstf++ulF3TD5WNKYD2n9fXdPMZHFFy1h0vyYmi6ua7a7ODZrYaLjZ2FLqWVnCavfVDofD4XA4HI4jjQPy1fA2m5vba+v0P3R7XF7boP+jdXeXLDF5ZB5kJtPbq9EZi8h/ce5xdQSu5jou6hY5TDbJldHpPr1E0OYqmo1uqonFALNWM8xzO2OMl1FLpFrFcOBsaKZwOBwOh8PhcDiOIA7CV7Oj3tzd3aMXV2gceGV8NLG3twcvIe6azLNY6H5GNzqB/BqL952MxdjxbiHeBnDX8KJqrYeZfS/rZKTaMhorzHa3ZOzAYWD31Q6Hw+FwOByOI43ZffX6xtbO9i55uDniiGNzcxueiswzOehudFoFazJd27vZca8qHIcBuMLr62ytyQb38kqDDG2qC2bTa+13YcVpqDnV3RA0Okc9Gu6rHQ6Hw+FwOBxHGvP6avhG1Nr7e3vk4ebgow+cotqoNU1A9DXh7Z09HC7bYBEr1I5DArLWG7jUY6PTfZqYLK5qtrs6N2hio7Gx2FLqWVnCavfVDofD4XA4HI4jjRl99e7e3tr6Jhng+eKqwOW1DXgkGZ0mOy1B/5rXLk4g3O8ujjU44dVqx+EBbgP9rTVZX40+vUTQJiqajW6qicUAs1YzzHM7Y4yXUUukWsVwuK92OBwOh8PhcBxpzOirNze3d3d2yf3uzsZXBeArNja3wkh1x2jcoX/men+XbfAc7Dg8wG2wvrGpI9V9zL6XdTJSbRmNFWa7WzI2ehjYfbXD4XA4HA6H40hjRl+9tr5B7nfWuCqwt7d3eW2D7HQYqRaBRjLAe/s7IVauHYcHchvAZxrDnPBKgwxtqgtm02vtd2HFaag51d0QNDpHPRruqx0Oh8PhcDgcRxoz+mrUyld2dsj9Ms+hdUtHHzhX6Xg1TdAI67uNY4UHZo76O6f3bFT7tGjHoQJd8eHR6T5NTBZXNdtdnRs0sdFws7Gl1LOyhNXuqx0Oh8PhcDgcRxrz+mpyvz2x+53vbPzGb1TjyoULWefeuFpAvjqaaok98tUwwBpshqOGl/5bv74p8dd/eTO2v3X5StQqBrXjUIGuuJpkjT69RJChrWg2uqlW6xu0mmGe2xljvIxaItUqhsN9tcPhcDgcDofjSGPm8ertbbjfKm/8+q9v/PZv73zzm9tPPml57ROfAPctlbFuaTJO3PpjH733NMsnP/W+Dx07x3L1aF45+Wox1YbFV2+xB85YfDX0a29dga+W9vOXr/yXv7L5ynmy1tWlSnYcKuCKJ+PVJbPvZZ2MVFtGY4XZ7pYMT3sY2H21w+FwOBwOh+NIY+bxarjfnoCv3oF/3t7effFFOOorb7+NgFj75Cc377xTWmz/evTh9LGP/Ni17+viUyf298/d/VERTb4a7bGPgFquvfVJfdWEKb6a7HQYqRaBRljfarx4eu8nfm5DQny1mOp7n9q13UZjDGfu/VA8h+nZWAInbu5Oo7koywAX9Nr33bz0ahYBbfojd5/RV8sBVxw+0xjmhFcaZGhTXTCbXmu/CytOQ82p7oag0Tnq0XBf7XA4HA6Hw+E40ph5vHprC+63yvDV208+uffaaxf/s/8MevfVV6HXf+3XLn3kI2s/+7Noufzf/Xd9y0bWLZUgXz3g1pp89Uc+9FHrl07c/NGPfGhGX52OV9NEfPXmDhngUf7U17Z/9Mb1v/0bmzHgtwf6Cw8iNavlBw2Lwvrq1QD7dvOnbl2BP3+XgSs+Mjrdp4nJ4qpmu6tzgyY2Gm42tpR6Vpaw2n21w+FwOBwOh+NIY+bx6q0tNcDgVMNC7zz5JAJGumyHyNrrug9VX9253Lbxali1btaJWz+El50hhDnUgdzYpxskr6yc+/caP/LV0VRL8O+rYX2r8cLr3e+rEdJ4wz1bN35hK/aBtX725F58WY0B0Ehyema6luT00ph2Ov7MJyEx5HJasEgxAJ6c/25u94kGXYgT8WxXR4bFqKd2HZf4UyfipuMmkp2Rl+FArDYHWB4RWj5y9wneVRyCOfy+Xe1uDI70rFrgisNtilWW6
NNLBBnaimajm2q1vkGrGea5nTHGy6glUq1iONxXOxwOh8PhcDiONOYfr4YN3txUM2z0xq/92vYTT1x5662dF16w7Xuvvrp3/jw0fLVtr2rdUonV+GrySGKZopWKL4OFo0Fd0egcrNSJW8WDhZWTN+t3UwD5ajHVhsVXb2yTAc4Yvvq//vVN8POn9v76L2/G9l+8f/vNd66Ihq9+9CXy1XFuyQMwhxMQz2qPr7aHGU6dsZ2nj91KKzQtQHf+qb07gdGXokP8+j3p4rKKhcaUL5k0yRrCy3iNyp3BLL0T6BJ/KFzWsKraEXFjvH/sOuu7Gree7WEFuOLJeHXJ7HtZJyPVltFYYba7JcPTHgZ2X+1wOBwOh8PhONKYfbx6HwZ4c1NtsNHw1TtPPAELvQtfbdr3Xnnlyvnz0OSrTXtd9yEdIewcmnqkzk2ZxhRigdQIwTulI5MGwX/S3Nw1ycqrJj8FzhXZ6TBSLQKNsL4bbIAzhp2Gr/7kfduI/wd8da3P34Kv/v5e2W65H3Q4E321nCVpjB6SXGW6nvQ0xvMPYc+Sbe8ukLlwEU9+Kqwfc+Makp7hGpU7Yy8fj2/zFTQfCpRHxL66u9DmcKq7as/V2J2AKw6faQxzwisNMrSpLphNr7XfhRWnoeZUd0PQ6Bz1aLivdjgcDofD4XAcacw8Xr2xoQYYnGoZr4a1vvThD5ft0Fl7VeuWSlQNTGd7jOkKjTwIab6mqyaKrRrWRjo1hLSJ1Ld3KwmbppV/lLsVbjAFzlU6Xk0T8tU7++swwNvMRj//+t6Xvr2r8fRutQ989SMvka/O2q0eQDCcBvGsJqc3nhZyrXoOJYL5xKropZ789DTGi9JdHQZeyiaSdnPhAnTlIcKak572WNKd4fVjP3FE1HKCv/kf7XT9iKb5atOBFuw6VIArPjI63aeJyeKqZrurc4MmNhpuNraUelaWsNp9tcPhcDgcDofjSGPm8eqNDbHBEJne+Nzn1j7xibWf/Vn6d8t+7df2XnkFAXHpIx/Zff55dICv7lu2031IjF9AZ3uM6Uq8kIHYLfZC4evBdYNU8Z9xnRDs9FoMFY1Ri6mW4N9Xw/oi1pitfu7U3m8/tnPHYztgRLWP+Oqy3eoBlPvctfT66tz0WtDidEqNEQWSc2UumW3vdqPYRHah0Vkdb9KzvEZhZ8IadNAb+/bRe58Ujw3Uj6hblmAOp2dXqb868+KeTIErDrcpVlmiTy8RZGgrmo1uqtX6Bq1mmOd2xhgvo5ZItYrhcF/tcDgcDofD4TjSmH+8Wmxwjbcff3zzc5975/rrdx5/fO/cuSvnzqFl97nnZC6NV/cvK6xbKrE6X21WFR2UsVI0VzzbmXtvDuvBsrJIt/LUTBYgXy2m2jAa4X7XttgGb5ETjvrZU3v/9a9t/o+/u/2P/il9D7zaB+2/98aVst3qQeAsmW+200FFk0mz5HDENHa6O5lnzlHn8FPzbi6dis7lpqcotNP6VXcdAHPhGKnFtRcr6Rl8dbkzALb70Vtv1l9Wo/3Wm6M5rx5RtlFzZeu7CjFipyNwxZPx6pLZ97JORqoto7HCbHdLhqc9DDyrr6ZnyeFwOK4KaF6bDboZh8PhOPrQvHaAmH+8en2Id559Fr4adnrniScypvHqwWWJ+zDiq8kLqe6McQp0toaNkDooGYH80LET9E+andk/feLE3aGxatp5EV28AM4VLn8cqRaBxsuwwVv7JT97knz1Y9/fe/z7e+Kr735y1/4L4YhfOLZdXdbyGPhEZQfFIG/J7Th2uNZ4XLEd/W998sz+kyfu7VrCFQlnj5ayp8hsrrsiSYfMV+NldkppDWqhS19d3Rnd52Tf7DrzI5rsq+0aKLpDK4ArDp9pDHPCKw0ytKkumE2vtd+FFaeh5lR3Q9DoHPVozOGr8UDt7u5ubG6trW9g/R4eHh5HPZDNkNOQ2VZeL3rC9PDwuMpivoQ5gJnHq2F915nFBtf0+j/+xxv/+B+XvP3YY6PL6paOPnCu0vFqmqAR7vfyJnlgjaDhq/+b27diVPu06Cacpo8eUvvqaAbOXme2M++dA1d8ZHS6TxOTxVXNdlfnBk1sNNxsbCn1rCxhNQ5cT8GKgMcHyfTy2sb6xtbm1nbcHI7d6Nm57/Qmurg0UZvLV1ximZvdBkO6ynsL6T2stlFH3q1qyEITl7rKe82aeG9U46/QxKINw4GoRqoe0IDRKYMadZXhggY1k7YM68jy1lNoyFwzD+sq76smatPEVvNo6n6vJraaWRJCi14pg2qapGpBr9ZpApxk5A1kNuQ323lJYFWeMFNd5TIZtmhNcS06MjJARUMWmrjUVZbk1qJD6hvW+Cs0sWjDlNBE4+ke0IDRKYMadZVrSdJqJm0Z1pHx1NQ0ZK6Zh3WVPWEe0oQ5jNl99ZW1NbHBc2jd0tEH+Wo8DHy/k2AtvvoSgm0wiVXrZtDIsAy3usGehtRIn7i5+EfjDXDF6S2W39ol+vQSoaVGrkM5YjSxKWW0wOK5ptgqTHKqVQzHan01HqL1jU1EtpUy7O71H0JymPbwIVLNPUlz9Gl72nv15Gi5bUQr75Jo1BSiI0uQprqqVVOUusoSVLI16sgSQ1qqPYl2HVliqsY7fK+mus1qQDTLPi09VQeW6Nf85jKo9Q0o0ZElUm3ftuqaZMKCFj0IetgDGjQnB23JtQQVXrnmnqQlgua5MiUYmWqdrhibm9tIcXYHFgZW4glzQCsjLzVrCtGRJUgjBTVrilJXWaKSGPt0ZIkhjbQTW9p1ZImpmrJSn/aE2asHQQ97QIPm5KAtuZag/JNr7klaImieK1OCkanW6YqxwoQ5ihl99dp6GFuGDV5bUzO8Uq1bOuLAE7q2tqEPleHLaxtrW/vvbJINnokdBwDy0vF74P2mGrcBrjje8+Ttuc7hzRslhWpi0cporLCUKQVLWfOu8wp9NR4c1IdSI+LYq5t7F9me9r6Lkl0+e1nNhbaXPtV9N08TVwZbWhibnsoooSoMWoa7QZUWppIr1cR4nelRRsbO9CIMWobLQZiUQVOZ34xKBmV6YS4HXnQ4ZZiJrOYHP9EFS3Kw+sAZlGlmItWTgEpx+UEYLO4JcwmWBGh1E2PTUxlZosKgZbhLgC1MKS7VXRq0epTxpGd6EQYtw54wPWHOgxl9NY5hd2MTBni+0C0dceA9Qy42bnqK8MkWGte29uB+5wvH4QFuAxQ39KbbvfEnvNKQssPqgk3hUpY4zLYIE00sgc5Rj8aqfDUeItRKl9c2svUPh91VPRDV5WEql6dF2Z66vhOrYfUs0Xcj9fIuiURPYgnSVHuFKPUijEKKRacn8XhIzWf1JJawui2AHpaaz2plolSPsYTVacibjtWTWAJ/UZOczCsCJYEA0YPMSSPRHZvBllR3TJRqZoHVM4E+it3ZkR1bAFjQE2YM5K5pjOyU6UksQRppKraUehFGLmLR6Uk8HkhNmZ7EEla3BdDDnjAXAyWBANGDzEkj0R0nSdLqjolSzSyweiYsmTAbMaOvxn28trZBI8yXLytfvrxarVs64sCVpszH
z0pgmiDhXl7bvLhBBngmdhwe4DbY3MI11zdp++aNkmJEE2vZQTqUKVYTG91X7thiaCaWsHpVvhoPDk7i+sZWuYmwA1bPzn2nN9HFpYnaXL7iEsvc7DYY0lWWwZOpWodQWnRkZLiKlpov1cSlrrIMnrToMJwyrPFXaGLRhmnARDRS9YAGjE4Z1KirXA62pJpJW4Z1ZHnrKbS+JVnNPKyrXA62jGliq6lWQ03Up4mtZkZmaNQrZVBNk1Qt6NU6HQJyCDIezry+nggs6AmzR1e5TIYtWlNci46MDFDRkIUmLnWVJbm16JD6hjX+Ck0s2jAlNNF4ugc0YHTKoEZd5VqStJpJW4Z1ZDytNQ2Za+ZhXWVPmIcrYTZiRl8NbG5ub69vwAPPFLqZowz5ZgJudAq+363GrLXNXRjgmcJxSIDbYH1jE++L9BbLLNGnlwgtNXIdyhGjiU0powUWzzXFVk9BFrSK4Vihr16jjyfo391pCbt7/YeQHKY9fIhUc0/SHH3anvZePTlabhvRyrskGjWF6MgSpKmuatUUpa6yBJVsjTqyxJCWak+iXUeWmKpxd/ZqqtusBkSz7NPSU3VgiX7Nby6DWt+AEh1ZItXZ21ZFk0xY0KIHQXVVQIPmOkxbci1BtVmuuSdpiaB5rkwJRqZap7MA141/RLa4r/aEOaqVkZeaNYXoyBKkkYKaNUWpqyxRSYx9OrLEkEbaiS3tOrLEVE1ZqU97wuzVg6CUFdCgOcVpS64lKL3lmnuSlgia58qUYGSqdToLcN2WSZiNmNdXA+sbW3AMNMJ86dLKWbdxZCFuSh+nHkYHWOsLbINXzo7DANwGa+ubeLeTN+YhDm/eKClUE4tWRmOFpUwpWMqad51X4qvxsKBewKrMyk2ddzjYnva+i5JdPntZzYW2lz7VfTdPE1cGW1oYm57KKKEqDFqGu0GVFqaSK9XEeJ3pUUauzvQiDFqGw8BLp1MGTWV+GyoZlOmFOR1skRbRg0xkNT/+iS5YUoTVB86gTDMTqV4AyHi4hWW1k4BFsKAnTGSAJVgSoNVNjE1PZWSJCoOW4S4BtjCluFR3adDqUcaTnulFGLQMe8L0hDkPZvfVANmGtfVdGbi+dGmFoRs4asADhveJy/wvv/ONrp82gSmKT7nQ7fLa5uVN+q01zDDFehDLace7iHgbZCPVfbzSkLLD6oJN4VKWOMy2CBNNLIHOUY/Gqnw1CiUpEyeF3VU9ENXlYSqXp0XZnrq+E6th9SwxfDtVeJdEoiexBGmqvUKUehFGIcWi05N4PKTms3oSS1jdFkAPS81ntTJRqsdYwuo05I3G6kksgb+oSU7mFYHyQIDoQea8keiOzWBLqjsmSjWzwOr5gIyHvCf7NglYxBOmDeSuaYzslOlJLEEaaSq2lHoRRi5i0elJPB5ITZmexBJWtwXQw54wFwPlgQDRg8x5I9EdJ0nS6o6JUs0ssHo+LJww23EQvhrAPc2Dchs4JI81dtTIKfpoVZgmViP5srv2E3j1BDvqLfObavkkW9+kicMbNkqKEU2sZQfpUKZYTWx0X7lji6GZWMJqnA3NFEuAH5OkTCw3h2M3enbuO72JLi5N1ObyFZdY5ma3wZCusr3l2rUOobToyEh3FS01X6qJS11lGTxp0WE4ZVjjr9DEog3TgIlopOcBDRidMqhRV7kcbEk1k7YM68jydlNoyFwzD+sql4MtY5rYanrMURD1aWKrmSU5tOiVMqimSaoW9GqdjgAZD8+5XbARWAQLesLs0VUuk2GL1hTXoiMjA1Q0ZKGJS11lSW4tOqS+YY2/QhOLNkwJTTSe7gENGJ0yqFFXuZYkrWbSlmEdGU9QTUPmmnlYV9kT5iFKmO04IF+9POw5sCdEtDTEy0AhGneM1d0NZDVNjBa06BHILS4odcYU/KKqVUSdsUSnQUarGNXEXYpBgiNKNFiiX6tIQlJniwZLjGlK3yH6NeV7tHA06RpLBB3e0sZ0weENu08Ti2aW6NNLhJYauQ7liNHEppTRAovnmmKrpyALWsVwzOSrR8PuXv8hJIdpDx8i1dyTNEeftqe9V0+OlttGtHLbbSmaQnRkCdJTHg2KUldZovdRLXVkiSE9JdV0OrLEVJ2mx1Tn6RQQzbJPS0/VgSX6dd/bRP2tpHi7kUh139tWp0kmLGjRg5j4ll15u49aArLQ3JO0RNA8V6YEI1Ot07mwQl89GpLugvaE2aspREeWII0U1KwpSl1liUpi7NORJYY00k5sadeRJaZqykp92hNmrx4EPfUBDZqzhLbkWoLSTq65J2mJoHmuTAlGplqnc+E95KvlGOVA4+Ux+oAYk3HmGwjMt/IY8yTRCzJNosY0tKhuZEorliXRWN3INEn1GGOa6QHuUnn3+agw2jJdY3ozSHUj06RjbCjTA8xvq4nuYXkDtrqZw5s3SgrVxKKV0VhhKVMKlrLmXeeV++qwclPnHQ62p73vomSXz15Wc6HtpU91383TxOUt2sTY9FQuHzdi0DLclxDqXKYUYrzO9CiXKW4RBi3DZUpPGTSV5S2mYFCmF+byLXLsLZWZyGr71ly+ZTNLirD6wBmUaWYi1QtgJb4aTzGzJ8ypLAnQ6ibGpqcyskSFQctwlwBbmFJcqrs0aPUo40nP9CIMWoY9YXrCnAdHZrxaYM+EnlxhnmSMabwkopXLG2uElwLfygrRA0xRfsrVyBJWd0EPdaqHuEs3CVMeTDVnxsaQNGp1I0tYXQtK6+NM7wnQHPx2tQhLBE1veyFEN3L3Bt/EKw0pO6wu2BQuZYnDbIsw0cQS6Bz1aKzcV08Ku6t6IKrLw1QuT4uyPXV9J1bD6lli+HaqsL0t+27aAZYgXXtMOr0I28e2eKjHeTzKVDOJJaxuC0mhNS5TrjJRqsdYwuo0yrePSSyBv6hJTuYVwb5ll2/lBZclQceVEkJ0x0SpZhZYPR9W4qsnhaRH1Z4wrZ7EEqSRpmJLqRdh5CIWnZ7E44HUlOlJLGF1WwA97AlzMVAeCBA9yJw3Et1xkiSt7pgo1cwCq+fDVeurk1NpXthD1UtiNb8IegWMybgONxDfyr3asNz05adKqSa2OmOaVDWmqZa5qjOmVNKnJdG0aDA3NemOKWmKxnRA29/egCmtdxrTRMvcIU1vDNJS0xnz7KrGChMtc4POmN9W+7S86U7VzOENGyXFiCbWsoN0KFOsJja6r9yxxdBMLGH1TL663ByO3ejZue/0Jrq4NFGby1dcYpmb3QZDusqL3aItt73qyPVHyT5iQROXusotj7xo4vE0gr9CE4s2PJ7WRANGpwxq1FUeS91M2jKsI/e8rUDmmnlYV3nsbbHUxFaPvB0TW23fyhv0ShlU0yRVC3q1TkewQl+NJzfTyCFGz869SdJqT5ipJi51lSW5teiQ+oY1/gpNLNowJTTReLoHNGB0yqBGXeVakrSaSVuGdWQ8QTUNmWvmYV1lT5iHKGG2w39fzZxoQYsegdziglJnTMEvqlpF1BlLdBpktIpRTdylGCQ4okSDJfq1iiQkdbZosMSYpvQdol9TvkcLR5O
usUTQ4S1tTBcc3rD7NLFoZok+vURoqZHrUI4YTWxKGS2weK4ptnoKsqBVDMdMvno07O71H0JymPbwIVLNPUlz9Gl72nv15Gi5bUQrt92WoilER5YgPeXRoCh1lSV6H9VSR5YY0lNSTacjS0zVaXpMdZ5OAdEs+7T0VB1Yol/3vU3U30qKtxuJVPe9bXWaZMKCFj2IiW/Zlbf7qCUgC809SUsEzXNlSjAy1TqdCyv01aMh6S5oT5i9mkJ0ZAnSSEHNmqLUVZaoJMY+HVliSCPtxJZ2HVliqqas1Kc9YfbqQdBTH9CgOUtoS64lKO3kmnuSlgia58qUYGSqdToX3kO+Wo5RDjReHqMPiDEZZ76BwHwrjzFPEr0g0yRqTEOL6kamtGJZEo3VjUyTVI8xppke4C6Vd5+PCqMt0zWmN4NUNzJNOsaGMj3A/Laa6B6WN2Crmzm8eaOkUE0sWhmNFZYypWApa951XrmvDis3dR7zq7dde81PffpV03LAbE87dNyf5NKkl89eVnOh7aVPdd/N08TlLdrE2PRULh83YtDC/PCN11zz8QdJcss4lymFGK8zPcpliluEQctwmdJTBk1leYspGJTphbl8ixx7S2Umstq+NZdv2cySIqw+cAZlmplI9QJYia/GU8ycJ8ztH9z2k9dc++kfmJaD5Sxhxv3xhKm8u/vQx6655mMPiZ7MXQJsYUpxqe7SoNWjjCc904swaBn2hOkJcx4ckK/Gber/z9a7GPJ/ZSMR4znE4z3K6IfO/j97eSwQuGfW1jc3t7ZRDDUGltJMsQSQKJEusaps5TbUx6aNKM46LZWlaqnnrO6p9pi/8bFrfvK217qWtNozLEGa9uf9t5GvniHwCE/jXRKJnsQSpKn2ClHqRRiFFItOV/jhj8NXP1SbOx5S81k9iSWsbgugh7m2S7QyUarHWMLqNLhiS/QklsBf1CQn84pAeSBA9CBz3kh0x1TAscx1x0SpZhZYPR+Q8RYrE7HIaMLc+sGn4WN/8wdJo6RH1UskTMp+H3uwa2lImJ2v7tpXFshd0xjZKdOTWII00lRsKfUIP8i+OmtHLmLR6Uk8HkhNmZ7EEla3BdDDnjAXA+WBANGDzHkj0R0nSdLqjolSzSywej4snDDbcRC+mh31Jh4hHEg8xZgurx2NQIbAW5q4a1yGLun0aHSjS0ZtdKJxyumc04lfVoMdVzdwD8nNto7byJRTkSWsnslXl5ubdbz6gZ+Brz5pW8pqMtPsq814tdSOQaPkCprKr1hlimZGFdWoqyyDJ1M1fejWqCMj/1e01HypJi51lWXwJOpsvNr2IaaSa1jjr9DEog3TgIloSpj9GjA6ZVCjrvJYGmfSlmEdGU9QTUPmmnlYV7kcbBnTxFbTY453kT5NbDWzJIcWvVIG1TRJ1YJerdMRrNBX48nN9Db76pnGq9lXP2Rbqkky0dl4tSfMHTNeLT2lvcqS3Fp0SH3DGn+FJhZtmBKaaDzdAxowOmVQo65yLUlazaQtwzoynqCahsw187CusifMQ5Qw2zG7r17f2Nre3sUhIPaYV6gdU7G5uQ23g+eQHntiCdHESB/ogES/t38FgdMsYlVa98PxHgDuo7V1stajMZOvLkN9dXiJQirOwj0/oFGuiea6LWiq26J+7dPvD+PVEtpe6FDtIdhXx/Fq22dyoKhq1Mq7JBo1hejIEqSprmrVFKWusgSVbI06jlfjpbb0aan2JNp1ZImpmjNsj6bEazUgmmWflp5JAg/Rr1F1jWgqy3IdWSLVwIgmmbCgRQ+CnvqABs1ZQltyLUHvULnmnqQlgua5MiUYmWqdzoUV+upKpOPVku6CXjJh0hd8rvkZHq+W0PZC28QYfH7ePj2Qlxq1MvJSs6YQHVmCNKWsVk1R6oTDeDVeUgtyTqOOLDGkkXZiS7uOLDFVU1bq054we/Ug6KkPaNCcJbQl1xKUdnLNPUlLBM1zZUowMtU6nQtH3lfDxcFU7+2RE56DHQsAF2Vg1JqHGHdggHfZBu/iYV+p1p1wvDcAa903am155b46rNzUeczqqx+68RpFMRTzYJylNZ+004IBNzzIFWHQdpbFDQ+la2aWejHq6v6EehF8El494oaHcHzUzozy66EbdA4Bll7bZaiEBm8DPvZQ1z7Ir32GfP5r2rK3/QPs38e/IXpnVws4sq8M6pkMtlCHgBsf6tp30gO58eEwtKJ88jYzl2tEqfyYXzbn9/23nYztzN0W3/+Z1/z31ZlOGTSV8UzVGJTphRlv5ayJROOdfZyJrObHP9EFS4qw+sAZlGlmItULYCW+Gk8xc54ww/hwl2qyr+T0JUzbjtQUkyR98tiluwT5mpmzhFndn+aEmcy95mMParuku5ex5gD+rFPbRzgkxgAkt9i+s3sq3Z/YLokxmaveOM59Gbk4AMkN2+N2ZvOG8ZmT/vvqBdkTpifMeTCjr8atyd8lhqCAr1q5diyGy2sbuLHwTOJRt4ysTZeMDPDeTKx74HjPADfb6G+tV+6r+0LLu5950LxElRY6UC144wOo3vglmWf60jhpFEPMVM/95PuvhZ0OLWFoZduMV0tLV+1lLEFa94d9b3gp1hrBVWCYxQUlVWb6UvaEakp5SZ27l1R33fggazzUdCBSKYaWOu/uvoYdCDUlGtnT3ginSnpXfPWNN8pv+cLuaU+ZC6dNWs3wjQ+z3o09qSaTepGsNXQ5N/u5IK3n2k+/TKUcXpLtJ8NOGi1czsJIkw5lqP++umQJq9Pgii3Rk1gCf1GTnMwrAuWBANGDzHkj0R3T2xXLXHdMlGpmgdXzYSW+ujdofBigrBhfIsvp3PGEefI3f4oSpiRGaQ8Jk/sv8vtq2p9vSCO/DHlvOGGmcyUrJp018SKbcR6mTWhi7GNOfZr3WHd5j+aGzCY96TNK+bSxnIs0FT61ZB3yHmvNe/SJIc9lUx168ix5qesRRi5i0elJPB5ITZmexBJWtwXQw54wFwPlgQDRg8x5I9EdJ0nS6o6JUs0ssHo+HG1fTYPVO7jL93d2yQPPwY7FgLc3eB1KNOkndmSA9nZ3rpABZt5budY9cLxngJttbZ1+WRCLLQmrZ/LV5eaoYKLKL+5MZ4ZFW8O8tf0gvGvZYmtEw1xBLvv7avtlciC2oxSjOkwqSGr/waep8PoBqqjQk6s31tQTtZ1pJxPOLdInsgyedDqMV4f2brya+rCPhc3WQRV27x+HeydNPWV/ugGZ114+JT3BKKGCptHpa287SZpqPv4MgG229CEO+qGP0YBM17LLy2qLLBj67/nvqxPNpC3DOjKeoJqGzDXzsK7yvmqiNk1sNT3mKIj6NLHVzJIcWvRKGVTTJFULerVOR7BCX40nN9Pl76tNCm1JmNQnGcc2/MDPkCG0LaMJszPA0r7T7c9IwtyhzwB4xDj0pHSkGh4bWcW007LcIn0i5wkTBpj25+Wk/dUfILnZ/pL6aHT62s9oStzmMWf5Uo/po4xMG3uCd+OykPR1IU7C0DzXf1+d6yrXkqTVTNoyrCPjCappyFwzD+sqe8I8RAmzHTP66r
X1jd1dMsDzhWMx4Hm+vLaBpzEEPeDSCOu7AxvMsT2D1j1wvGcg9xVKooGYyVeXoUVYeIlCimo7Hr6WEeAS+nVu7U99bnhINConWkPQ1hJzaDsiXbMM0XCwr05+X52MohS71I1IS09FN7QiS6FiK0E1JXfIvkCuYx28BtkfGnOWnjJezS/RR0dUWFOQj9UKL9GytkTnu4SaledSHx1yIXRDNBzp9yQDtL6kAWrtj6KK6jn/fTWxRL9G1TWiqSzLdWSJVAMjmmTCghY9CHrqAxo0ZwltybUEFVu55p6kJYLmuTIlGJlqnc6FFfrqSpS/r45j1E0J8+Sn6dNGGt/m1MRrCJoyWP331b0Js/L7avanOnw9mDApuSn4E8DQrkPZBbCs9BlImPGbQdoztFOQLvJe+DwRQV8OUugnktKuXwUqwDmTx67f/xkaHtf+8Qs+lNnQgpzTqCNLDGmkndjSriNLTNWUlfq0J8xePQh66gMaNGcJbcm1BKWdXHNP0hJB81yZEoxMtU7nwtH21dj77R1yv/OxY2Hg6kiKIQ6f2NENRzZ4d3s21s073kvgVKAjD1VGB+26BJAoY5kYVq41XOQ4uBFbYm1XG2wpeaBPZby6ZKkdo477ozWl/W7kjo6Nc8WGuTyEQmUiINUetYNjvXgjf+mxf3R6nMN4tbT0/L46DLDIeLXM5QJSx665prRM88zwix4ISii0JBx+uR1GpO3otNSChpOviDP776sznTJoKuOZqjEo0wtzOfCiwynDTGQ1P/6JLlhShNUHzqBMMxOpXgD0rr20r8YTzZwnzDg+3LWwk8RT35Ywu08by7nleHXJWcLMx6vBYX/aEyYvwnlGRrN7R6cbWEePpSUkRmXaDJlzbdExZ5mL3eg47o8myWRkOzKyBPgkp2j6Zyakxf//6oXZE6YnzHkwu68eiN/7we49927aeOBB5Mm820A4FgauDp5JPOqW6S32yu7W/i54m7mq/9HLx//7l75WxhfOPDe6rG7e8V6ClG4DgQ7adQkgUSJdDm9LfWzX8hqbYfm5YKeRhoh3HrqBTK9oqe3IstJv9rqWWOGZ8WppkeGUCkuQpv3pxqsRVOHpGEvt94F2+CULWpUOdGtPFF54yfzQjSv7fXUYrwbrGHXsSS6X51LtFSL7rTW4+05jaEkZqw0DMsHJUynHHbBs/H01F9T0BXIqzojDeHXXEng8pOazehJLWN0WQA9LzWe1MlGqx1jC6jS4Ykv0JJbAX9QkJ/OKQHkgQPQgc95IdMdmsCXVHROlmllg9XxAxlusTMQiowmz/P+rTQptSZj6aWMtYS78++o4Xk3RpdCJCZPcdci9vCeU2aCZsWzb76u7379QY8Ix72kL7Q+PV4cWCdJIU5xO+Tc40GYIWuZi2fD76vDpYZwbk6TpHxNmpyfxeCA1ZXoSS1jdFkAPe8JcDJQHAkQPMueNRHecJEmrOyZKNbPA6vmwcMJsx7y+emub3G8f3/2Fzd/5p1vPPLfz7PM7wt//PWSr3v4lOxYGrg4lmvQTO7pk+7ubV3aG+UeP//JXz7/0tfPfz/gvPf7rA0sJ6+aXwZOfet+Hjp3TFx3O3f3R9918Ql84DhNwX6EKicWWhNXooF2XABJlViaWm6MijGsvaeGXVDXKXB7HiIMzVBFe8/4bH4hziaeNV5fVZKbj/kiL7M+neYvZF8t3HrrtJ0OZiFIs9EQVhbkAF23xJ4J6IKx5Fg7kwZd5PcmQiwykGM3/KG74J23lq+bp76ul2pP+tJXw++ru3++RZWmelKr58MtDt12r9SVqOB2jtr+vTn4uaP6RM7TwJq79dPhNNX87U2w2GmgryXi1tkemkmtY46/QxKIN04CJaEqe/RowOmVQo65yOdiSaiZtGdaR8QTVNGSumYd1lcvBljFNbDU95iiI+jSx1cySHFr0ShlU0yRVC3q1TkeAjIfn3C7YCCyCBYcTJn/vmp5HbeGXkpSoZTxhThuvHk2YYp6xlLTI7oUkOZQweVcp0+LpliQpXloTJtljzGWtqYzyDC07nDAlSb7/M/JZpIxRY7WsKfFiB9ATq915+DPYn5gGZX/s76s1u7I2/8hZSIm0P5yaYmolrT3xkjQ1SZ8eluTWokPqG9b4KzSxaMOU0ETj6R7QgNEpgxp1lWtJ0mombRnWkfEE1TRkrpmHdZU9YR6ihNmO2X31QMBXw05DvH5679nndiUuXLxi+wzHAMhl/di1Erc+qY3vMk4f+8jEnTlxc9efj+hTqzKOuDp4GkPQA44/NML6brIB1jD6e+vnb3rpq4gf+cYtN730NdEcqv8ifHXPslHr5gvgSOP1CvHRe0/r3ASr8NX29vjI3We0dTlgnd2qpl/rKvi0rOyiTwJtunaeF4At3aqBDtp1CSBRIl0Ob4vs6E99+jdRzynsTwepbsv+25jYLgY4A9dwuiyElEoRYr81tI9Ep2m177/t0yibFFSkxrlaRApQAsr646+v7VwsaUZmUGAlOxMWofbIuyQqmsZDFDd+BvvX9vtqXgN1CAgj0rys/T9jsAbZBJWGsmx64rRdQq21gsaxQzsLs8Vrb/uM/74aLNGvUXWNaCrLch1ZItXAiCaZsKBFD4Ke+oAGzVlCW3ItQcVWrrknaYmgea5MCUamWqdzARlvsTIRi4wmTB2vvq17QJH0ZBZyBYmehLlt2yNodFqXpTSYZrBoiSl6Eqb+vjrZHzMiPZwwi59JxwUp9dllwzg2tUeWVFbTSd7jLwpRoI+YXkGS92QNZd4La0aa4uFuBf82h+dS6ksS5vs/Q28fXc7UxNiiI0sMaaSd2NKuI0tM1ZSV+rQnzF49CHrqAxo0ZwltybUEpZ1cc0/SEkHzXJkSjEy1TufCwgmzHfP66s0tcr/gTeZMf/4e8tWnTu994ucvQ3/+C5t/56ff+ZkbLr994Uq1f6l7AevVuZETt/Y5tEMP66tXC1wdSTHE4RM7umT7Oxtkgyv88e999W8+c/dXzr/0o8f/Z/BXz7+UafjqvmUj6+Z7cebeD40d8tK+OjWruD2uXYm1Tnz1aoB9+9Sts90DBwbcV6g/upKrYHTQrksAiTKWiWHlXf13SLgchKmw1pHKXL0lmlmquppOBlimcjkg08TY9FRGCVVh0DLcDaq0MJVcqSbG60yPMiXSVC/CoGU4pPFOpwyaynimagzK9MJcDrzocMowE1nNj3+iC5YUYfWBMyjTzESqFwAyHp58We0kYBEs6AkTGWAJlgRodRNj01MZWaLCoGW4S4AtTCku1V0atHqU8aRnehEGLcOeMD1hzoPZffUGDHAPw0ufeHbnmWd3IKTlEz93+Wvf2P6HN1x+C9a6ZynLfeixWInBhrlSFwSfdvMJ9loU8DBxMDP6GbZMJ2D5uJ1WEvobx05mPmskX3RC2ml/4g6Qewydw6hgvrjtwy2JnyQ3KHM705UeyLDHw9XBM4lH3TIaYX0R68yZhq9GQIh/lvY+LVFq3Xwvcl8dD6c7dj4P9xaHmV707uzlppRGks1VA7qWnjsEiFcn7gaPSHMjLdLtp24xW
9PAfMXh40XvXBGliihLXkrM6wk7P0JKsHlrk6pTlM2aarsqu7rsYZfOaM1NXrX7547ssr0dkuq8nfrEckZIyRiVeGSGVMB3h+vrrGouFWjQRihKmWZJBD4uKZ6tglPRVeqap7XnpdHTbXbegK0fKAV9Q+SlP34RqL2odeYX8KTdbyhGxSbHqqdhedV8h2tAsILVoGECpeZzyqZXBbRCHb0XIQJlXIVMU1VVNIxgtrOfBiwynDSvEsl3/ChWqI8HzJFZKxKMV4AVtKXY2rWHQOmFNVA6DnJsWmpyqiREUh29EuALYoQ1zKXRj0PKq40jNeRCHb0TlgzgFzNbbCuhq2vn7+wvpZ1sDrXwq+VJ7t8pn+H+sxMKnDIneuAbSFoepjzCAevJ95B0CLeBPXVD2w3uTGudBwC+9jqrKoeh9vwzXhKNgSFM9US2jIlnLJXJd+FUVyygbDvvS6epLrWzDWt2asb9+zqR4iz6Z6SD1XVN3zStyfTk1anpyTVJ1cu3A6XkTLC3aSjnsZZCapuuc214BZU8ntEjalpDym6p5Tl4wt4Umqjn+RiZN1ScY4EEx5UCVuJNypG2xJuVNKyqJqnldnS6mrJ7mGR+M5YHqepOpkhKnYUvIiilgk0PEkHXeEpownqbrnNof16BwwFzPGgWDKgypxI+FOkyDpuVNKyqJqnldnV31dDTu7du7C+hnWwGsvLF9nu0yGovrs2roFL07GFdOMB5TBPeWgaMu4prw9pNyonHSKDWU8oHJzTbhH9TbsuVnDLRxJhjFV2RSNFS0TGtEyAbosuqK6WlfuGYZT0BboMZyietDiUgurP8i97D+glPmBGtsH7Vk0PRmGuKp9J+Qw957wJUetX1DAgqklV7XvYi+Z2hdAOsa/gqnKTusBrWSY41QhjVzVcrAlZRFrGeaouIJqDMxZdJirWg62jDHVMy9zJER9TPUsiou6kZeqkBoTjdV62aYjtsS6Glduxn/09Avff8vH//LuD37VT93x5/7VTV95w4/++R/6rVf97D2vvukDr77pg/Cvv/lDP3DrJ9BN+29Te4Ok5zlgpkwtuaoa3Fo4hL5hxr+CqcpOGdCUcXUPMMxxqpBGrmotSHoWsZZhjoorqMbAnEWHuapzwLyCAma7rbyuhqEGO3Pm7Ma50xfXXwwl8QvL0dkurSEQ4N52+sxaGKmWmNjoGkA9N6q655ozoI8r7wBgcbs5TVf1wLzJBVduVL2FN+tS3RIOx4VqgiJaJjeiXfoV2HIycoAWX1FdnTl2rzGYold8L/YGA8tBCMwD5TkcKPU+toM8zJPdnyTDbOpPwjGmK0dVJ/sLYYzpJVdVPblghzmq+hD3BZZhjqo+ldOAmTLzNs8wZcE+1p7GQdX7GVnXCDMtyzmqesqwESYmqtbCg8arPlgDS5SwlpzVGR9ylp5k9cAyV6c0hynbdFW2xLo6809+4Yt/aZfUz2/4wNf89Pu/4l++4Sv+2Q995Y/f8qo37It1dayuUVrrUogb1MBzwDRVJyMENTO95KqqVwJjH0dVH2KEndjSzlHVpzKjUh/PAbOXB41XfbAGlihhLTmrM+zkLD3J6oFlrk5pDlO26arsGqmrYTibWV2fXcNbmv3qdVbUa+cQ9zU8ycRClXGnDJHKmA7w/Hx1jUXDrRoJxAhTLckgh8TFM9WxS3oqvFJV94xTyyLFNgyBEm8Yq4orxzt1jBMhHaneOH9x/fTF089ffOm5i2e/fPH8mo/uOEX9sm169N03XBdt58F07oFdNgP2re9+On4cpdpHaeo+XGNR+9Ar7E+hCXoMO79j73FteeCm66676aCwnpYjik236AFZrXJ30XmFbEe7gNCimwd3cn+Uo2JexqNaBrdFFLIdLQdhUoWM6xf27rju+r1fsBZcUzWFZLywlgMvNpwyrBTPcvknXCgu6owvuUIyFqUYL2CIeLjydbWTDItgwYGA+X1v+a+v+snbX/WG/airv/Zn7v7a7/vFP/ftP/41P3Xrq95wb1ZXw3/g1k/4ZZel/q50hQVMVQ2AnpsUm56qiBIVhSyux/Z0N6wde74g7V14LJUhLuUuDHoeVVzpGS+ikEwP7b7uut0HfcuALiVgpoprqqYQ6EHu3CHlhfUKCJhHb7nezhnYzXhDydxlKiRjUYrxArZwwGy3S1RXb9/8MfAHJD3QnOIfXRnnjefuNPLMiWO1Fh4xOYnNSs6ULi+qbBA5U/WOIY4NRpmqgUaYUS9jqHo/GySuAbSFoepjzCAevJ95B0CLeBPXVD2w3uTGudBwC+9jqrKoeh9vwzXhKNgSFM9US2jIlnLJXJd+FUVyygbDvvS6uurWT2xrc2PrxWe3jj6++akPb3787s3P/uHWqc9vrZ/BWqzHxYv2BtNcU1UPkWd/0M6fP4B6bedB1+L06T07rrthz9PWskL3p9O4fn7Pjhv2HA0toQDmy1ZVJ9cuHOFutdbequUFW+jBXdftePdTZTu1zw/K/pDLIDNJ1T23uQbMmmoo9mxKSXlM1T2n7m8iT0td/bRrH1V1/ItMnKxLsvJmPahlAtBpbyLRKSVlUTXPq7Ol1NVV/95vuuFrv/ctX/0f7/zan3n/V/3k7V93w3f8g2/5m1/zQ297tfsdePSvv/lDiBtYaqGA6bii6p7NGUvl67+lOGLXNEV0ynhIJUB93rWokxGmYkvJiyhikUDHFUUVesPeY7W5447QlPEkVffc5rAeZUg8uPO6XYeMo1JWFzCVm1SK/gflpbTAjImTdUnGOBBMeVAlbhh/9GZ+UaAt1CRIeu6UkrKomufV2Suork4POqfWYnyJFJNxDSeZnNxjKpOEF1ROImMaWowblcHFq4Ybz43KScpjimnGA8rgnnJQtGVcU94eUm5UTjrFhjIeULm5Jtyjehv23KzhFo4kw5iqbIrGipYJjWiZAF0WXVFdrSuPbP1guCLPvri5/9c23/B3Nn/0azZf99X0//Ttm08e2jq/Zn3kv5Erd7VNOQ6682BykCMzF5Txamv3H1DK/ECN7YP2LJqeDENc1e4kRMX7mj3HYnsxXu37957wJUfVi0jr6u6CQnPB1JKrmlzsTFv3PFW2R60EEDdezXb8K5iq7LQe0EqGOU4V0shV7QvggUWsZZij8oYSxqvdzQWYs+gwV7V2Qxxmqme7Efcx1bMoruVGXqpCakw0Vutlm47YEutqXLme/8KNP/6nv+0//Olv++k//z2/8j/86zf+2W+6/m/8i3/+1T/2O696w75X3fQB5x9Uj2tYWKsBM2cXJK2uDu2IGIFXHjCnsIY4BBt+8Sfs2xOOighQYWDB1JKrqsEt8qFdqKuPlu2mDGXDjH8FU5WdMqAp4+oeYJjjVCHDzNi1+2BsybQWJD2LWMswR8UVVGNgzqJSVx9S9u3KVb2yA+aDeENLHK+G1JhorNbLNh2xV1Bd3Wj+SCibyiRTTO2DCWxanlIjui3TE11NeUDp+v2W50ZV99w5L+GUh1QDTaGMgClLTGx0DaCeG1Xdc80Z0MeVdwCwuN2cpqt6YN7kgis3qt7Cm3Wpbg
mH40I1QREtkxvRLhULbDkZOUCLr6iuztz6wXCVrL28+Qe/tXHLT25++B2bD7xr81dv3Hr9/7x523/cOv0C5movXcreYGA5CIF5oDyHA6V19QFlcWunh/Fqfen6JDzgXDmMdbtr9yfJMJv6k1AHez/ftSfj1er+1IWqk/2FMMJuvFpd26uqnlywQ/wUkiobr1bHrF4OwYTj1TiOcW5fwFGOqj6V04CZMoOqZ5iyYB9bcFYOqt7P3Q2ixlpX63i1urZHVU/Z37zqTExUrYUHzd+gG7gvASCrAwuWnmT1wDJXpzSHKdt0VbbEujrzr/kP7/2qn3jP1/ybN/+Nv/r3/8Lf/da/8ZO/9U1vfv83v/XD//jXH/iHv3b/N771/r//lo/8418/8L/+2gPf9J8OfPOvH9ClEEOogdsCpngf9wbJC/HrvwUccamRTRGXmpmuHPUpxH0dr2bICu1jTC+5qupFYOznOF7NMKUtfYywE1vaOar6VGZU6uNKwDx2y47rdnK0Gqbtgp5XEDDVS46qTg7j1Xjt2utMTFSthQeNV32wBpYoYS05o67mePWDytIuUeWaDZjtdnnqav+G0oPbvSg/DJ0ZeAmKyTiH00hO3F52qqf42HdLVM+ZclJlTFPWucaZMoj0sYaYFoZKUxN3yhCpjOkAz89X11g03KqRQIww1RIOckhcPFMdu6SnwitVdc9Lr6vD5roNQa0fDNfi2Rc3P/exzU/u3/zDOzYffM/mnh/b+tGv2fqV67e+dOLipvX0y3q9/w1S14q9Zs/RWh8bry7aeZDjeLV9HFFR20Zjsijt9hFLBhlsx55jmCXt/onuXQdCyU2OJ1K6Wju1Uo0/p4wnoSWscdkb9mKHw1wqK+Rguw7Fy4EPaQfDPsTlweyjdfVR1G5muw5Ke7gAj7vFUSRLO6RbD/fkKDPUwMkjgt52HeyCQ5/a89VMyMz4bB7m6Vx5Wi+YJqBhLtQtBduNN6LtGtziJwHbfYgtITD2KwTKyjbYDXuPx3ZItj+xXfS4f+pNHzIMYR+a7A9zO7arHvdvf36+eoUKyViUYryALaWuxhUtmgTMV9/0ga/92Xtf/UO/+df+7mu+7lt+4B/83O997y0f/+nffezH3vfJH33vH/30+w/f+FsP3nzv4z/yXz/x2j0P/7tbHvHLpsrnYqK9Zs+xZK7/wxNvOJAttfNAF/o02Oah0hnWXA2YiDJYUNsvfH7Pa1jfdpFq16EkGPKJGJuDq2zP0TRU+tUiwoeQqHO78OjUh0RvFhJdMGQgjXE1BFUfEvlGECskVEJ5FLDz/HJQbAdClAVMUR8ufDhN2qP556v7nrtO4vCxGKkkFmHPd9xyUBfEzlt49GEzjWyx3Ye1Hbc8cyz0AnMugtsGn0yO1rVDOkWf67GraQv25Pje+F52Iih2S/mQyHWyWeZyGNYM7TE8gmXRrSwOH2cok3a8FZ6yB5/pQjGfp+Y8qj1fzT5iWKG0Q57pdpKhO26Aw+/s84W9LrjLOtkcA2byiV5/ywmbq+reznU7OboszaK9AVOX4Vi0tlzkT76vu/7WE9IgLdDKePUzt9yI+8sh9saOHL1ob3bnIdenR91+Xn/LMydutXcMlj6QoBTjBeyarasXMH8M/AFJDzSn+EdXxnnjuTuNPHPiWK2FR0xOa7OSM6XLiyobRM5UvWOIY4NRpmrQEWbUyxiq3s8GiWsAbWGo+hjjjhUb+1lvJOpNXFP1wHKzbOBCeVMfYqqyqHofb8OZslRYUpmUNbmJbCmXzHXpV1Ekp2ww7Euvq6tu/WAb57dOfHbzfTdv/uzf3PzxV2/+2Ku2fuArtr7vT239wjdvPX+Uf89MzN5gmmuyqL5hz9PWwsyvy/bCQVvk+WrJkR4IL5m9dWPa6Uv2tLHlzqXxNTd0a4DjhOG2ZLV68thwcZgblJmf5Ih86Xru2iWd9U+a6YLax9aj/SXZQkrHl9YivzC8bseOGyR31Ha6ZY3Sme3yMvZhBrlj73HtKcM7zBq1p16kzEFvwGpRTncXu6kbr07aRftcU1JsVAOLvAylNXMV5pRk5ESY5XJByRq7Ilw6h9JaMzYcUGGslcfiFtlE4RowO5XUMz4iyPXE0lr2Z+8XLFCHWWT8k6KaSRgboLo/ytpZckpxWRFzRL6UBVFId7P4cn6+mtqbSHRKSVlUzfPqbCl1ddVfzV93f+CrfvL2r/jOm//sv/tPf+cXPvDavQ//f9/5hz/7e5/66fc/dtM9j3/bf3lw172P/fB//cT1bz/0r377o4gbWCoLmD5ISsuxd3+rFMAaMDXuKWsxfIN858jwKKftTbtwJXHu59/9GrmqpKc5+9fGq9N2rkdKbnkpdTVWayHUhVPELi2qGQylMwOU/NUJnRtXK3NlyT3HtCej1rByzTpeHVrUyQhTsrYbdrCcjrO0qNZIyxCXhERdZNdNu60lC5hylb9bn4vh3Fp4RB+JaV0LVfYEIVFebsiCe9zvgOAsfSUOMx7CEbgQ5nZi/3cjYOrcXVgDo1kIoZ51bNkFTPHjqMlxBIpQKfsTGhHq2E1ewjqV94J4KC0SMDd5E5Lgpi1SnFsY7EIrO3YhESZhjc6CfMcN18dQac6+MWCGOIwghpdb8lJXJS3ycvdBm8sld+/cbY9YS6kcH7eGUVlbXn/9jVzEWqBPW09tkaJd//yZmuyQlOhiz+y90b3kCrvOrHJv3Bu7ijEOBFNWPYH1XLdTtqq7dQuLakQZaRGtPF/Npa6/8ZZDWxdx9K+/8fqbP6rD2ii/JeZY5HEsqiZ8ApU5FkQNL23Lt1dQXZ0caHlhLcaXSDEZ13DyyUk/pjJJeEHlJDKmocW4URluvGoA8tyonKQ8pphmPKAM8SkHRVvGNeXtIeVG5aRTbCjjAZWba8I9qrdhz80abuFIO4ypyqZorKgmLoWmCdBl0xXV1bryyF2/M1/avP1nWFG/6X/bvP9dW4/ctfm+XVs/9ZeyujquYUBZZt90sGjn/ZQpoGuPHMerrZ0fEHNNSRljT66BwyyBZYxa+7PKRZ6anAySe7z7KeGunaPZzBG7Uwj9LLeTFjkJn9rzGg7LJCcnM0gtyLWdKQoLYOmPDCmwnvC8s9vwS+iPFsnqQh+9iLQg7y4oTQ1l2Mce9tOeEFTROFD6+F9swZav230gtvgLXPO27T5fLftzSLkLO2RJikIhvRV/QF4JbtwTlrjSIu22rLYE5cyc9bxJ2r/wTDYubSoJkz1MqPnczkPK2kdEmH+KjImazmW7Lks+jjQISaq284bCnZ+fr16KQmpMNFbrZZuO2BLraly5nvkHyd7wga/5iff8j//ypj/7PW/5u2/a/69/52M7737ih2775I+995MorV/zGwd373vsO9/1h//kfz/wHb/90biGRG18uGinSoCS52W0xb6R1CdohHUEW4OnBlsXJEOhq8tq6ANzi7sOSIu0b+jIrrRIuLNwirCDFu5D+NMSF+I3hmEu+h+Tn+poyb1bQqLO3YxfhpJ9e84S7p7ay+PQ+3w118VvFUM7IkDB4ZtHbZdFLJyii/urk
GD5CjIJnrKalDXkZu0IVjfsQczBJrSdNbALkmiXqpg/CMLr2K4hkUtxDRzBlnAqNbkETBbD6IeQxaW0PObIM9qk/RmOdbugaiqBHXUsG2J7wVg/ykdrN9Xx6tCCABhiONn6GFvg9UFyS/dHflietHeKKyjEdmFbD8vY0Edm2zPVZGxFGGJBW9mUXVi0h5ZaYGSZHfrg39P4MOSbUDZkAZM1NgvyLkjK/rBY5uWPdmpvwJRy+tYTrMZ3yTJZH86vjFfrCDPrf12Kfz9cfy7ue5YKgXINqOe7djYbq/WyTUfsFVRXN5o/EsqmMskUU/tgApuWp9SIbsv0RFdzJ31d6fK9V8KNqu65c17lKQ9pF3QSZQRMWWJio2sA9dyo6p5rzoA+rnrzUNeb0wKqHpg3ueDKjaq3+WZdqlvC4bhQS3SoIenpWLRLxQJbTkYO0OIrqqszt36bG1tffpYPVP/4/7R5769sffHprS+f3PrDO7Z+9m+VdTXc3mDg8+l/pkXTzE8OmvXRunrC89XsX5pU2nAtpMN4S/5SPGR4+jKcMFyt/gYytHct8SREbsoB5LCUtscUTdtjYsoWSecK07nhomAfFsC2htDerda8G6P2v4fsDBkel1XfkJEQDlaD/YVM7n++Ov6wUEz6hGASy2NzzfZ0rGYzWxDmBliY7ZlpvmjtcEmfCpN8TjrYLyTNOIajQbXLNUNPtAsaux9sq+k6pU+yP+nAy8D+aJp1SNagNw6tq+fnqz1LT7J6YJmrU5rDlG26KltiXZ25/Pdav/fq7//1v/c3vvGvfPO/+Vs73/cdv/PQ99/68dfuefhfv+tj33vLw9/yG4e++11/+E//84G/9ysf+f/82v26FGIINTBAvqRTQ4wK7Qx9/XGPfSSKMlhpf62iNdiqy5rL8eqeAGXhMdT5Gu40nGqZnTJd+5jq8GduuxjlpI/8OCgav+Vku86FjjxfLRGWo81puxbPzuKPfexLSRvfhlpdjVgkL92CodjWbpGtAAaIs8XGou0HQepaosdujGCxBaEptOPTssgZVytDzdhDjWyl2Si0rifGXgl36jCoO0sYtaRdA2ZgRi0XDK1dg5v1oYQSnbzpf3otJiVuWAOUHcJ3jqFdOH3iBqbfUdLllJX1aItUtjbiHWpaMAyinZW1vw4vc23s4pTt3mIhTYs/LOfbtTY1brE0HfpGwMg7yM5IezD5+fd1N8rwt0QMmyshrvZ8Ncerd3+UzB9yo67GXP0i4KjFnAeTE/pmFtCyhmAy4r2ywWrYNVtX+zfk351/q8r62lheBF6CYjLO4TSSk7iXnerpPvZlPNVzppxUGdOUda5xpgwofazlcQtDpamJO2W4VMZ0gOfnq2ssqrdzMmyQqZZ8kEPi4pnqOCY9VV6pqnteel0dNtdtCGr9cPK9+Nzmb30v6+pf/47NT9yzefj3N3/vTVuvbxmvlqLaPQQ4PF5dtPMgx/Fq+zhcf2uxj9KUH+Ln3/0a+ek4mblKzAulBR+6JDA2IBNPofibxtgi/YqWYgS7/P+ref8NY9Seg2I3Eg19svZQV5PlguKDfEhkweXodKn1QRhV/b3i+H/B6lUyNhuK0QDS5XY6Nm6FNJTvSLK62GLKQ0qzIZqe0ekW1V8eIpMja0un8Zfn1hK3ElssaIf0X36dyJZudJp9UtVH/lyL1tXz89UrUUjGohTjBWwpdTWuYtEkYL7qx2/5yte9689/+8/8hb/1v/3Zv/utX/9jb/+bu9/319/0ob/55g//tV/4g7/6Cx+C/r9+7vf/8ht//y+98UNff/OH/LJVjZVnGKPuRqdr/fO5D+h4tbCqtWQBU4KhVNEaHlVDwAx1dWi3rykRE1JGyEo1rDZtt4DpuEfDeLW2YNOpytq7P1ShykaER23RPz/BbyHJNpc/rgFD0vFqp/JjctouRLNkbjZeLcqQaOPV1sLIU45gS7jTFvSDYuv2DSNCkwVDe1PhG8Pw/SP+IWSlrOPVOoIdWnKNBW03Ci3K9vTZadF0vBoqwU2+x7Sx6Nhfb8CIJ2wx1bpavqNksDK1J25iS7gv6FyWuDYijQao/iCIZWkItkAwVTYqwTe0xKjetUBl2Fi/68RVjhDNMvVpYW1hR2p8rltHpGMlD2aX0FPEWjoulCHC6vme0WbOrT5fTU7Hq/msdWUNFe1GvKUhtiuLUowXsGu2rl7A/DHwByQ90JziH10Z543n7jTyzIljtRYeMTmJzUrOlC4vqmwQOVP1jiGODUaZqoFGmFEvY6h6PxskrgG0haHqY8yAHryf5eZh3sQ1VQ+sN7xxLlRv7f1MVRZV7+NtuCYfBWuykjBVExphS7lkrku/iiI5ZYNhX3pdXXXrh57rZzYffv/mz3/z5g//uc0fe9XmT/yFzR/581s/8me33vKtWy8cv7hhPe0NJrkmb4UcKw4trD/dLxXlQIElI5zwfHU5BH1wp3u+Oh2B0aQwdUlg4ni1uv4VMRudthbsvy2uLaxmkYrpXHdCdgPL2i7rt/Hq8kHBg7tqz1fLSE68KIS71Vq7JV5k5gBSioe5qGy7xwWpWlfbCLa2dFp9gDBon3O8WkZ4QmAJiaCW3KFU5lzNeGLPTMP/8kWWURE+Sch28UO7bWSmcA2YUTW9E9ZQrO1k3Z/wQKD7TWMawIMih5PyGBxLZbaLc3/0ocGYO4ZZ8/9f3WlvItEpJWVRNc+rs6XU1VX/a9/0HX9mx4/+me9+05//kf/yFd+566/8k3/8jT/0ulfd9P5X3bRf/v9qKjj+P1uIG1gqDZimMWBCGQCtPA5xT/swJCLupc9XS/jSueXodH28uqucgx/c1UXXfG4XexG7uu8TZW6ibkFrObSLRa8yotawNjxfHcOdOWNaCI9s5672j1e7by3TrEaYla1GNjSaIobIwHLXAg2NYHWOTmuQ9C2+D8KUBEwbbY5riOPVGk7lG0zrz0q79nx1T5yMDmNVb0FSWxh7ww/FtUWZNyH7MY62WMxEC0tc/QZT58qpxhJcWtT1W04XG8XZ80aNw+Ldt5aMY2E9ZFHZpIxRdywRDy1ShPM34nAYVdaGSrVr4d+VtEoexpYQovuMBa2NPKfPWtMO7W59vlrHmbnD4alsaZe4YX16nq/mM9VbF5+5lbvBOPM06+q9T0vMoXBux6JqwvJ89TxevRRLDrS8sBbjS6SYjGs47eSkH1OZJLygchIZ09Bi3KgMLl413HhuVE5SHlNMMx5QhviUg6It45ryJpFyo3LSKTaU8YDKzTXhHtXbsOdmDbdwJBnGVGVTNFbUJTdeywTosuiK6mpdeWTrB8MVefalrWc+vfmpj2w+dBf9Y3ey0j5yYGvtZf//bJW7Ggtptnx+z85vZc5R9JTbaxh/1vbIcbza2vUD4h/FRdalPWVU/IZdB5DDsU8sj7W//9DDySAJjPwN8KSdZbO1s6XLGq2FQxlI5oSTk9P1lHark+00ZgIX/kK45IvXveamg/x5dtcf+YEUydoH6xXlglJ/aru83B1+QC5/hNZ+x2h/d2fH3mT4JR+v9he4VrMLPF8tNbC2
s8zW57cRfCQLwlsgIyvduUN/aoh5GzomjA8Y8zSgSWcdo7b1SGYJk4H063bcdBCpWOxP5cyC+S6YWGm7jFGHP3JrmxBGzrfzenuKmw26P+75au0cQr3mgseEdRM7dh7UP2MrSdsOKaQxU3pKEe5uLsCcRYe5qrUb4jBTPduNuI+pnkV5xbfxUhVSY6KxWi/bdMSWWFfjyvX8TX//n33ND7ztq3/6rq/+yff8P7/nF//qN/4vf/vbb3zVz7xP6uoPZt43Xi1j1KwnQ4s+zxz+jBkDV5zLPzOBuHe/hUH/rDW19flqbY/fXVqAOmB/fozPV4fymOEueb5aiucYEvWn3VYMo0XiXvgL4Rx/5nWtYTCsrcoS7hqer2aRHNoRAfSLUf05D9sP7d3Bujp5vhr3Bc6FJOPVMkbtQ6X/OY+2IAAibPjxamuXj+SW49gE2/Xn6/KNoS2lz1fLCDZex3YEPVsKwQfRj3PtTfE3ieE7Alzd3AY+2ht27JVAh2W5vI1Xc0xb+6hagOqer3Y/6tE+ujll7WPK4hmzGHjZkARJuUOHgPmFvbtvkBHvEDAh+nx1HK/u2rtCGg3P7A33BVxNaLH4KeEUTfIyzpWg7P7/at0HZVML8rFFgqEV0hYYD95yPcvU8Hy1jFHLFtmAYGi/PBdGoY7Sl7/Blrnyp8huvPmgjHWzQYJAwiEExZpcmLsV2PVZ8ng1BKrj1ayrrR3/uAacPQ8a3Lj3xEW+Mf2BOhcT62jQXkF1daP5I6FsKpNMMY0fmLJpeUqN6LZMTmgz5QGl63ddnhtV3XPnvMpTHlIGkYoyAqYsMbHRNYB6blR1zzVnQB9XvXmo681pAVUPzJtccOVG1dt8sy7VNRHxXKgmK6KaEnkW7VKxwJaTkQO0+Irq6sxxulpXGIrnc2cunn7h4ounzF967uKZL13cvIB5mI/OupS9wcD213TUbnj30/JXakMOd+xdyAtzkxwOyzLHKMw/Lug7SE3rXG7Q3qwDE6/CLGtUl5I7mJbK2o6TkBmn/bIRjV378PPVolISm8kwi7bH3xx6k2wSHWxEZa/ro89gx4vLvx1bSi7Y6rFj0avL6kUtVXE0KYmDY27CIZhwkZ17Wc+byRCKupXZaqjYOZxCCAPR2eZYcqvrsu4Dk8yS7gNmygyqxlL3BuNqYdrHrRN5lf7/MelfCO+MyRkaxfO5MoYj7XKzkMzVbPctmsNpB/HsRqOesr951ZmYqFoLD5q/QTdwXwJAVue1n7P0JKsHlrk6pTlM2aarsiXW1Zl/167//DU/tuerfvTdr/ru3f+P61/36n//y3/xp975qp+9+9Vv+EBWVMO//5aP61KIIdTADH0aJIPZf6Agdxn2qcY9LstzVsarZT1WRSfPV8vgc2f+Jz/sHMwNenfj1YhX0mK1a+ygpXUwCY/aLiEuiXuyWm33fcyVo6YxM4RTfd4kNxbPsoiUtcEQkFnghMicj1dbXY34oy/TNet3juJ+ndFYYIfQl4RxK6qlvR6HNWbi0+J4NTgUumjTPbTgJqW1mRbe6vqmMnMj0vKdaWdWVIfFuV22kBmXxISlSGVACxZ++E2RuWaovXUT+oOg+v7IaDZCUx6H9TltFro2a+deaVGTClyW8mPXMIh05sCyFsOFMQ5rz/gDb5gNIAuYsfDuLB2gTp/Ntr/vrcYIECxjOQQ37kUlbS0ydi17GzgzK3HHn6+W2ENzKKzPcmeGWhqms7q6+pYTEWTxSXbN1tX+DaUHt3uhrK+N5UXgJSgm46wnXP8X5MpOeRmQZdLLVM+ZclJlTFPWucaZMoj0sZbHLQyVpibulKFWGdMBnp+vrrFouFUjgRhhqiUfZE1odG5gquMu6anxSlXd89Lr6rC5bkNQvMRF12Kyqm6Hl6jlAa+ofZSmWtOmjwvGXFBb7AQw9qfQoCIPCwM1sb08IZsUm56q3UXnFbId7QJCi5YBhIrXGY9qGdwWUch2tAzgqUKmqt5cCoVkvLCWN8fyllpRiuf0Zl1Vva49X3KFZCxKMV7AllJX4yoWTQLm4U9/+i/+xG9/5Q+9/X/6N2/8H75j91f/1H991Rv2vYq/AM+L6q+/+UN/9PSX/LLL0kUCZsGiaZD0nATAqaoB0HOTYtNTFVGiopDtaBcAW5QhLuUuDHoeVVzpGS+iENbD/A6RnGvxfHWmKwiYxfPVqpCMF9Y5YF62gNlu8/PVogmrtfCIyUlsVnKmdHlRZYPImap3DHFsMMpUDTTCjHoZQ9X72SBxDaAtDFUfYwbx4P3MOwBaxJu4puqB9SY3zoWGW3gfU5VF1ft4G64JR8GWoHimWkJDtpRL5rr0qyiSUzYY9qXX1X2O/cEpagv0GDpgPdJZ375nUz1Env1BS7ii6p4Hvauro8s33X4Qpub+dKpo+bhgeXJOUnVy7cLpeBEtL9hJOu5lkJmk6p7bXANmTTUUezalpDym6p5TL28ik1Qd/yITJ+uSrLxZD2qZAHTam0h0SklZVM3z6mwpdXWf/9Hnn/nedz7w6h/637/yR9/Nvw1eFNWoqL//lo//0dMvoDPihigXDHw5Auaijtg1TRGdMp6k6mSEqdhS8iKKWCTQ8SQdd4SmjCepuuc2h3m1GyGxDJj84XfyfDXiVZOqe04dISvjoDoEXbYnqo5/kYmTdUnGOBBMeVAlbiTcaRIkPXdKSVlUzfPq7BVUV6cHnVNrMb5Eism4hpNMTu4xlUnCCyonkTENLcaNyuDiVcON50blJOUxxTTjAWVwTzko2jKuKW8PKTcqJ51iQxkPqNxcE+5RvQ17btZwC0eSYUxVNkVjRcuERrRMgC6Lrqiu1pV7xtFwvHL1B7mX/QeUMj9QYXnML7GdB/XjTk+GIa5q3wk5zL0nfMlR6xcUsGBqyVXtu9hLpvYFkI7xr2CqstN6QCsZ5jhVSCNXtS+ABxaxlmGO2nNzAeYsOsxVrd0Qh5nq2W7EfUz1LKrBoYWXqpAaE43VetmmI7bEuhpXbsaIIY5Xrr1B0nNDwISTdW5gUYSaRq5qGQxb2EJcC0dFBKgwsGBqyVXV4NbCIfQNM/4VTFV2yoCmjKt7gGGOU4U0smk6Xl0Lkp5FrGWYo+IKyrj7abe1A+PcYa7qHDCvoIDZbvPz1S26LdMTXU15QOnpd1oTVN1z57yEUx5SDTSFMgKmLDGx0TWAem5Udc81Z0AfV94BwOJ2c5qu6oF5kwuu3Kh6C2/WpbolHI4L1QRFtExuRMv0y3IycoAWX1FdPerJDssb6WN9O2A5CIF5oDzroQCL97Ed5GGe7P4kGWZTfxKOMV05qjrZXwhjTC+5qurJBTvMUdWHuC+wDHNU9amcBsyUGVQ9w5QF+9iCs3JQ9X7ubhB9XLuhRFVP2d+86kxMVK2FB83foBu4LwEgqwMLlp5k9cAyV6c0hynbdFW2xLp61DXcBZ4DZi/TlaOqkxGCmpleclXVK4Gxj6OqDzHCTmxp56jqU5lRqY/ngNnLg8arPlgDS5SwlpzVGXZylp5k9cAyV6c0hynbdFV2zdbV/g2lB7d7UX4YOjPwEhS
TcZbTBS/kxO1lp3qKj323RPWcKSdVxjRlnWucKYNIH2uIaWGoNDVxpwyRypgO8Px8dY1Fw60aCcQIUy3JIIfExTPVsUt6KrxSVfe89Lo6bK7b0BWi5QGvqH2Upu7DNRa1D73C/hSarOUJ2aTY9FTtLjqvkO1oFxBatAwgVLzOeFTL4LaIQrajZQBPFTJV9eZSKCTjhbW8OZa31IpSPKc366pqiPB8yRWSsSjFeAFbSl2Nq1h0DphTVQOg5ybFpqcqokRFIdvRLgC2KENcyl0Y9DyquNIzXkQh29E5YM4BczU2P18tmrBaC4+YnMRmJWdKlxdVNoicqXrHEMcGo0zVQCPMqJcxVL2fDRLXANrCUPUxZhAP3s+8A6BFvIlrqh5Yb3LjXGi4hfcxVVlUvY+34ZpwFGwJimeqJTRkS7lkrku/iiI5ZYNhX3pdPcn1LRjrWzPWt+/ZVA+RZ1M9pJ4rqu55Je5PpyYtT85Jqk6uXTgdL6LlBTtJx70MMpNU3XOba8CsqYZiz6aUlMdU3XPq5U1kkqrjX2TiZF2SlTfrQS0TgE57E4lOKSmLqnlenS2lrp7kGh6N54DpeZKqkxGmYkvJiyhikUDHk3TcEZoynqTqntsc1qNzwFzMGAeCKQ+qxI2EO02CpOdOKSmLqnlenb2C6ur0oHNqLcaXSDEZ13CSyck9pjJJeEHlJDKmocW4URlcvGq48dyonKQ8pphmPKAM7ikHRVvGNeXtIeVG5aRTbCjjAZWba8I9qrdhz80abuFIMoypyqZorGiZ0IiWCdBl0RXV1bpyzzgajleu/iD3sv+AUuYHamwftGfR9GQY4qr2nZDD3HvClxy1fkEBC6aWXNW+i71kal8A6Rj/CqYqO60HtJJhjlOFNHJV+wJ4YBFrGeaoPTcXYM6iw1zV2g1xmKme7Ubcx1TPohocWnipCqkx0Vitl206Ykusq3HlZowY4njl2hskPc8BM2VqyVXV4NbCIfQNM/4VTFV2yoCmjKt7gGGOU4U0clVrQdKziLUMc1RcQTUG5iw6zFWdA+YVFDDbbX6+ukW3ZXqiqykPKD39TmuCqnvunJdwykOqgaZQRsCUJSY2ugZQz42q7rnmDOjjyjsAWNxuTtNVPTBvcsGVG1Vv4c26VLeEw3GhmqCIlsmNaJl+WU5GDtDiK6qrRz3ZYXkjfaxvBywHITAPlGc9FGDxPraDPMyT3Z8kw2zqT8IxpitHVSf7C2GM6SVXVT25YIc5qvoQ9wWWYY6qPpXTgJkyg6pnmLJgH1twVg6q3s/dDaKPazeUqOop+5tXnYmJqrXwoPkbdAP3JQBkdWDB0pOsHljm6pTmMGWbrsqWWFePuoa7wHPA7GW6clR1MkJQM9NLrqp6JTD2cVT1IUbYiS3tHFV9KjMq9fEcMHt50HjVB2tgiRLWkrM6w07O0pOsHljm6pTmMGWbrsqu2brav6H04HYvyg9DZwZegmIyznK64IWcuL3sVE/xse+WqJ4z5aTKmKasc40zZRDpYw0xLQyVpibulCFSGdMBnp+vrrFouFUjgRhhqiUZ5JC4eKY6dklPhVeq6p6XXleHzXUbukK0POAVtY/S1H24xqL2oVfYn0KTtTwhmxSbnqrdRecVsh3tAkKLlgGEitcZj2oZ3BZRyHa0DOCpQqaq3lwKhWS8sJY3x/KWWlGK5/RmXVUNEZ4vuUIyFqUYL2BLqatxFYvOAXOqagD03KTY9FRFlKgoZDvaBcAWZYhLuQuDnkcVV3rGiyhkOzoHzDlgrsbm56tFE1Zr4RGTk9is5Ezp8qLKBpEzVe8Y4thglKkaaIQZ9TKGqvezQeIaQFsYqj7GDOLB+5l3ALSIN3FN1QPrTW6cCw238D6mKouq9/E2XBOOgi1B8Uy1hIZsKZfMdelXUSSnbDDsS6+rJ7m+BWN9a8b69j2b6iHybKqH1HNF1T2vxP3p1KTlyTlJ1cm1C6fjRbS8YCfpuJdBZpKqe25zDZg11VDs2ZSS8piqe069vIlMUnX8i0ycrEuy8mY9qGUC0GlvItEpJWVRNc+rs6XU1ZNcw6PxHDA9T1J1MsJUbCl5EUUsEuh4ko47QlPGk1Tdc5vDenQOmIsZ40Aw5UGVuJFwp0mQ9NwpJWVRNc+rs1dQXZ0edE6txfgSKSbjGk4yObnHVCYJL6icRMY0tBg3KoOLVw03nhuVk5THFNOMB5TBPeWgaMu4prw9pNyonHSKDWU8oHJzTbhH9TbsuVnDLRxJhjFV2RSNFS0TGtEyAbosuqK6WlfuGUfD8crVH+Re9h9QyvxAje2D9iyangxDXNW+E3KYe0/4kqPWLyhgwdSSq9p3sZdM7QsgHeNfwVRlp/WAVjLMcaqQRq5qXwAPLGItwxy15+YCzFl0mKtauyEOM9Wz3Yj7mOpZVINDCy9VITUmGqv1sk1HbIl1Na7cjB/+5OPvufMD77v7ww994nG0QNFy4tTz777tnrs/8MDdH3zgXbfdffrMWuy/Te0Nkp7ngJkyteSqanBr4RD6hhn/CqYqO2VAU8bVPcAwx6lCGrmqtSDpWcRahjkqrqAaA3MWHeaqzgHzCgqY7TY/X92i2zI90dWUB5Sefqc1QdU9d85LOOUh1UBTKCNgyhITG10DqOdGVfdccwb0ceUdACxuN6fpqh6YN7ngyo2qt/BmXapbwuG4UE1QRMvkRtTXqMqWk5EDtPiK6upRT3ZY3kgf69t5Yt9dtx04Ed84IGU9FGDxPraDvHHywL537HuibF/M/UkyzKb+JBxjunJUdbK/EMaYXnJV1ZMLdpijqg9xX2AZ5qjqUzkNmCkzqHqGKQv2sQVn5aDq/dzdIPq4dkOJqp6yv3nVmZioWgsPmr9BN3BfAkBWBxYsPcnqgWWuTmkOU7bpqmyJdXXpKKoVUFq/fOYsimqU1k8fO4miGvEB7QC8VFZX1oAJltAXeHsBc5AnO+JSI5siLo3xswzpR/SltUdVJyMENTO95KqqVwJjH0dVT/nIvXv2HTwW2hF24tx2jroZ1kb27YPMqNTHc8Ds5UHjVR+sgSVKWEvO6gw7OUtPsnpgmatTmsOUbboqu2brav+G0oPbvSg/DJ0ZeAmKyTjL6YIXcuL2slM9xce+W6J6zpSTKmOass41zpRBpI81xLQwVJqauFOGSGVMB3h+vrrGouG2jQRihKmWZJBD4uKZ6tglPRVeqap7XnpdHTbXbUj0xIE77rrnUdfy6H3vuOMhFMquz5A+zrr6ZH+fJ+7Zc9/jvXOp/iBLXX3EPhTVpx+6bY/toX1kR9kixXz4QE3tQ6+wO4WY5+25S/y+J2K7rFMa9x04mvSvnZBNik2n20ouEFMc7dgB20BLtyd33fuotEAOoxuTMHLXcte9h11Ln3YBoUXLAELF64xHtQxuiyhkO1oG8FQhU1VvLoVCMl5Yy5tjeUutKMVzerOuqoYIz5dcIRmLUowXsKXU1biKRbOAyXJaGfDCl1/S8WrW1R98QNtZVx9/NlsKilAZLvZ9B57O544rIsMdD51EQHPtSaj0avc+UwmDCYv2Bswn9ne7ev
CYzD3GuHTvYeEkPJa6qeH0nkfJDOn7j1j7oGLTF449LFsJfMdDz2p7jyJKVBRSVy1luxbG5/1H8p5dAMxUFj+etTPEGXOHH8YOg/E6zlUu1NYWW3Cld3z4/nfc8fApadH2VoWInjrY3XqOhHbXeP+R0DPRhQLmkXi2YJ+LubimNuVuFbcrLffj1Dp0XJiiLfjohbWlVeeAedkCZrvNz1eLJqzWwiMmJ7FZyZnS5UWVDSJnqt4xxLHBKFM10Agz6mUMVe9ng8Q1gLYwVH2MGcSD9zPvAGgRb+KaqgfWm9w4F6q3836mKouq9/E2XBOOgi1B8Uy1hIZsKZfMdelXUSSnbDDsS6+re9zq6q5F6uqT8WXYYf+m/JsN49V2iLTdDhf1COpqVIzWEnK7QtUvWF3dtUjFe8e+21wj+rDlAEeEpjoTKeRqwthzHUg5f+HkQR4EOZ2YEXKH9dSqnJzNatsSZp4qCZx4uCiYqrK2Z8vh+247gP2yzI8tnBsYc+Uth2U3n9iPg6B1NVv0gp2k414GmUmq7rnNNWDWVEOxZ1NKymOq7jn18iYySdXxLzJxsi7Jypv1oJYJQKe9iUSnlJRF1TyvzpZSV/c5CmlU1Hfe/eEP/reP6kv4yVPPv+fOD6DAht925wef/9KLWcA8IdHAvq/kd4X3PdHN9QEToaZgVaurtUXd89KcQVgLeISyEA/JLYrolDLDIMKsto+qOlkCoMZPhqw4V3kR3QgDzsLQzVPYt/1HlEc1Li5c8+Osq08pI0yNaFhbbFFXRiHKutq19zssU9bPWpZr0bufxWxcJzh0IFPGVd2zuaz/ScQrbPbQHXfdfhCblVlRjz98O8tp4cP3swP4sftvv2OfMdq3sJ59t+OO9pi9hMOm6pKMcSCY8qBK3Ei40yRIeu6UkrKomufV2Suork4POqfWYnyJFJNxDSeZnNxjKpOEF1ROImMaWowblcHIq+ZnnhuVk5THFNOMB5TBPeWgaMu4prw9pNyonHSKDWU8oHJzTbhH9TbsuVnDLRxJhjFV2RSNFS0TGlGXDF1OXVFdrSt3PDBeffLAHfsOPIq8Sr9pZnZlfWQMWRvv8ePV3eirDMjIqIW1dOu0FtSx4YA/cY9123fPPq2r2W4fCrPJ++65Q3M7GNZgG7UP12304FG0IF/hO7KTYQOFvbanJwnY8tTQR4dlyHEcu+/kLLn3hDfmth5G3RxboMhI7pUhHXB3QQGNn0W1r0M3TIn233evrIFzkZPdgZdaV2tL0L6LvWRqXwDpGP8Kpio7rQe0kmGOU4U0clX7AnhgEWsZ5qg9NxdgzqLDXNXaDXGYqZ7tRtzHVM+iGhxaeKkKqTHRWK2XbTpiS6yrcRVnjBjy8pmzL3z5JW3R8WrwyVMvPH3s5NPHn2VRbT07jd8/Zu3yzaOFrwNHY8t9T8SwJjHKhda77EkZFzYPHOXapCVbUMId1tktHopk1w21qw+Msqv81hLs2rsgiVL5tgNHEJ1kDdztML4df+zDolHZjVdvZLEakUR6hn1m+W3VpuuJ9o9jW9glRIAsMGINWG1o75hasq28a3Hj1dyNI3FY9Y6HUPB1fbQRERiL23g19yG0Yw1SCetLa0EfrFNbunFpVvLW535dG9pxdevcjq0GBtsB0XbcMm478CyZd4Qn40AxClQuKn2onlHWhhLa2iHHH74N77cWJD2LWMswm8pexRZcTVDe5mwUWtuF2fN+3NGes55P3nvH/fe+lx+r9NT+VZ0D5hUUMNttfr66RbdleqKrKQ8oPXyn1XGjqnvunJdzykOqgaZQRsOUJT42ugZTz42q7rnmDPfjyjsAWNxuTtNVPbDcRBNuVL2FN+tSXVKThAuVZMiX355Fk/Qrqjo6Rx71FdXVhefj1UxopAbWWYGZFyKfkz7M55gjesbbRLFtJfEF/uJR2Q13aFGtGZvljqExPFPtWd1+/fgEMzlZELu370jM/Dg3DGXr+HPsA+BJ4jm4so6N2+mkadx+rtn6+xOyxnR/6kLVyf5CIIdBG9+OzGzfwQNdJicjM9afKqmVDdcwCzmC5FULaSRnmtfeezi5eEuOqj7EfYFlmKOqT+U0YKbMoOoZpizYxxaclYOq93N3g+jj2g0lqnrK/uZVZ2Kiai08aP4G3cB9CQBZHViw9CSrB5a5OqU5TNmmq7Il1tWlayEd/Z4P8m+VhZdPKMTSWp2sEcZK4nh3YFGt4Td+zWffM2ooE+ZXkGCsIQwjIx537WFBXZstaGuWDtw0q1ztfACNbLGwzJ//hN/+IC6F/nE9sZ11NUtljZyhbGbAlA2BNcqxv1aDskULfWjEfqLg5FxZyn6/I8WnMVauZaQw368GQFk2MsexA9MRvvpUPQZGq1Fju45Xh1ncDenJmhnlKxmBVxbJ+In9oVTmD9cDS8RGBUuWFSJcWx9tj2vIWN0Hxriq+HNxaUeNevvBZ62Drt/YfuytjjgjSubQNMerrZ1i65Fae6kBM6xWXlr7qUN37Dt0kL/xpuNNSSjbfOx+/V5AB6ixk7cffPIQ6mq8xGvtI2vomJioWgsPGq/6YA0sUcJaclZn2MlZepLVA8tcndIcpmzTVdk1W1f7N5Qe3O5F+WHozMBLUEzGWU4XvJATt5ed6ik+9t0S1XOmnFQZ05R1rnGmDCJ9rCGmhaHS1MSdMnQqYzrA8/PVNRbV2zwZNshUJjTGktzY3MBUx5oAaUvJK1V1z0uvq8Pmug2JDo9XM2GydvntIp+Ulg6S/7G9/nx17BOXUrY1sw9SK24XCZNkeLqs1rr2oajKUictd7SloJXnq22jMKaAktVp//QUMmXeGR8XlExRv/uPQy5RixPyheOfevyzjz7+x3/8fJxbUexSUMuxXAtUMjkttuWH6DiMekHZmEY3Oq3j1UdMtSA/FgZt9JIc0C4gtGgZQKh4nfGolsFtEYVsR8sAnipkqurNpVBIxgtreXMsb6kVpXhOb9ZV1RDh+ZIrJGNRivECtpS6GlexaBYw+ffAtbRW1b8B7lugcTQ7VfmmkkEmhESErH1H3Fx97toqWG3vAmwIqgyMXDCW6AjUUjazCLcntzX0aZAUQOVs7YgwaLlXSm5wLMvJMpftiIhhV8OPdySohvFqCVzSzgpQR7wjIwCyGpRA2vN8da0nNt2Vvq6PtIdwJ1vH29H+iBJev/jUHz+KyPzk8efRgoZcdeVdC9+Iji3H7bK5+04ThR/DsoVE6ZM/X6075peSoAeWip1sEdtGm6UF7bY24SJgds9X247pXF0DWbYlg9RpH0iiT8qtJ7Y8e0g+1m58O1MfML8UbnMvhHYGqH7l7733HTqetWPf+IUFrqxNbP29/F6AjDe4/8kwvn3q0HuxIJS7Kj0n6RwwL1vAbLfLUVd/9Obrrrv5QXvRav4Y+AOSHmhMT+y98brrbz0RPwxMOu5OI8+cOFZr4RGTk9is5Ezp8qLKBpEzVe8Y4thglKkaaIQZ9TKGqvezQeIaQFsYqj7GDOLB+5l3A96Tmrmm6oHlZtnAhdotvJepyqLqfbwN14SjYElrUqZqu
iNsKZfMdelXUSSnbDDsS6+rezwfr9a6GlmYK7nZLn8/jAkQHxSUgWt9U12Jy5aYjXEAFiuJZTPnYs06KziHU2KyKIfU6mp7KW5pn+aOR+6RziFT1D7pRqUxdEAuaMMy6uF0kkV0aAUtPrPkTkqepz3Lk3Nj49jnkG3QP/WFL2tLr9JZM2v2FlwvCkuGrMXysDhXn69mxubmYpH7nkA7mauVupr99YKdpONeBplJqu65zTVg1lRDsWdTSspjqu459fImMknV8S8ycbIuycqb9aCWCUCnafLguFNKyqJqnldnS6mrB/zlM2fhyqii4RrzdZYFTHmJuCHqWUIoLmRGrvh1njojj4xFV8rjECrZ3rOgFeSyYIyBjHU2cG3uAqZ6CJuls/K0EMp4pYExRmlwDOCR+bSLfrN5jI2xCJeQKMEwbFSei9EwqHPDSy2zWTrqr3gQsmQ9jHtWo1qQTPTUpyUsSylYzoV1IRdxCY3u+Wqbpe02AhyLebTQuz509rH3gj7WIrUu2P2FMHWU0LY29kQQi2sji6or66rYIt3cePVtcbzaOsDY55D0gTk9pbce4SRgyu7Zny6j1PT4H8fb3EvSGOfWfIsFfPe8dKdo12JbWrScBst4tcy9/wgKcjIfz56fr46q5nl1drXX1Q+ygP6ovYj24K7rrrvu+luOyotQY6cHnVNrMW7Wp/dej9XvPDTUp18xGddwksnJPaYySXhB5SQypqHFuFEZaLxq6PHcqJykPKaYZjygDO4pB0VbxjXVW4vnRuWkU2wo4wHlrTflHuUtPOVmldu/MCy2KJuisaKS4pTqE6DLqCuqq3XlnmPepi1SNusQiiVk1jOMPDOpkkRK27vhFJbQTLzIIdlKxqu1UVmUBxyNsfAOW08+IC16paf9uTLbZ3m+OmyUHDM8nAy6FFo4VKKnQTxJJK2Mj/y5zE9aNrHyMI7dc3LWxquxiRozlwrbSi4NHaPunq9GnqRj1+GiAz8hoxM2V/I/7CoOgvz8O+Z80j9q38VeMrUvgHSMfwVTlZ3WA1rJMMepQhq5qn0BPLCItQxz1J6bCzBn0WGuau2GOMxUz3Yj7mOqZ1ENDi28VIXUmGis1ss2HbEl1tW4cjNGDPnIgYfvCX/9W8eoldEeeVBRdjK0umCbzQ2BNJTH5BhUNQjLeLX2CQFTCnJ71toCu4Y7KbDtzqiBXcarYRrcBvgJ1Evykx8rldGeBEwWeNl4NZiVHsOyf77a/hCjBEbrCZa6UQpptodq0/fBzmCu/MDnvicY2MMPebQdGPiLXyjGq7Wnsn0X2bUwwB6Ulel2dSwac7nph5+V/jh0iAyymq4P6lsGZLbb95tkWYp19aaNhGMZ6aMBkBHbjVez4NTxalzd2rNjXRXZDoi25+PVaI59jlsfKtu1qH5S2AKgUyt3u5YyYMbbnIxXy0ybW3AsqtFgcwNzjLp7vlrKaTL2Xwrs5w7uux13NP78W8erdSntX9U5YF5BAbPdLkNdndjEsWt/JJRNZZIppvbBBDYtT6kR3Zbpia6mPKB0/a7Lc6Oqe+6cF37KQ9oFnUQZDVOW+NjoGkw9N6q655ozoI+r3EjM7UY1XdUD8yYXXLlR9XberEt1TVA8F6pJjKglNI5Fk/Qrqjo6Rx71FdXVFWdly+EReamP7SnLrxbtmWrHLJXZh29HWMpy+YO3VhJLZ6aAXbHNY+UyPPrTJ5AjaoppAzLC9vtG8+6xw5huol1zTUAcOdGVy0Z1QbzkX8yOAzXhhMm6SXssznESeo4nW43pylHVyXohyOCM/J7QtSMH4s6TmRvpgIwWyelfCM96cqxG/isXDqRgri3iL96So6oPcV9gGeao6lM5DZgpM6h6hikL9rEFZ+Wg6v3c3SD6uHZDiaqesr951ZmYqFoLD5q/QTdwXwJAVgcWLD3J6oFlrk5pDlO26apsiXV16c9/6cX33PkB/U+q8VJ/+A1AUf2Rg4/EbogVji3Kabt+mSjfNrJd4y3aTx7V7ze1PJYAFb5DjEuFX99ooFa+cPKoBtUuQsLTBUP70SNPAJJVnTwZFkGMCrsq31rCu562crR349WIVOgg4TRlqfRktVaEKyCIca4GRg19VjdycbiW2fH5aguGFjxZBif/LYK6clXVu8DIfeA65WWo82WWBlvsUcLaH/E2MPvHOpahDJHZj1fzOWdhWYm1b2yeOsZyWh7wkbWx6g1rk/BF19guHOt2bBbHqhujZvkaOIxXa8GvRbK6FdWh8KboFlnK0mRVHOLGXOmvfaKq9zPCUccsqm3NsZ2/CcfbFz58/zvey+8IkhFpG6/u/lr4FubOz1d3mLJNV2XXbF3dvaG0rk4POllfG8uLwEtQTMZZThe8kBO3l53qKT723RLVc6acVBnTlHWucaYMIn2sIaaFodLUxJ0ywipjOsDz89U1FtXbPxk2yFRJYpQ1odG5gamOLekRLXmlqu556XV12Fy3oajyu0T7xZobu2YJes8+3svpbnhESnFpvOOhx+VpN2lnQibt8lfEJcFCO9I7adQ8UspaW/a+x/W3jlKcax/+YVhkbLIVU8zlqoSD6lCMMIdT0o3aCWAplHA8hZgRWn9zGYrR9DFt6ZYqT8gmlR3oNgTn6DSTJyZt2EnfR1s2jj37rNsTHFi7AJEGyegH2bRnvLrULiC0aBlAqHid8aiWwW0RhWxHywCeKmSq6s2lUEjGC2t5cyxvqRWleE5v1lXVEOH5kiskY1GK8QK2lLoaV7FoHjBRPD/x6T95+vizKK3REserj3zm8++7+8Onz6xl/YPG8Ai3n3ln7QhfodhmBavLuj9gEYKnhVaW1nFB+cmPFOTlD8gvXHCxaN+9j6LYvmB/qVFabjsgRbL0FGVNGOfqGDWqTay8bbwaAZCVHotwsGwojHiHdR62/5va95SQyJesdckMcewvPdki1WCYS0WUqCikX1Gch7cWv5eEcbv37o/ROI5Rd/1vO3iEP0GX8eruZ977j6AG5nrYP+6wjlp3h/G2Ox7WcpoFs7aEtWFJtHN5hCm+Qe1wP6tqtEDlP3am7z+C7Sbj1dbHDhoZkvw/1ea4U5w6fipunYfRFdWdTg+Ybp22ZpbrKJuxURbJ6NLtTzembT8IF6ZA5+erWxSSsSjFeAG7RutqX0srU9W6GvvBXXxMWn40ft11ux7EMTAWi6vdwkZuvOUZW8PuB4vnqw/FpdAtnEbP3MJfi4tdv/dpNmk71ayFR0xOYrOSM6XLiyobRM5UvWOIY4NRpmqgEWYEzBiq3s8GiWswbWGo+hgz6w3ez3ojUW/imqoH1pvcOBfKW/gQU5VF1ft4G66JSMGS1qRM1XRH2FIumevSr6JITtlg2JdeV09yeQvp89Xabm/fs6keIs+mekg9V1Td87ZdS/S00Z9OTVqenJNUnVy7cDpeRMsLdpKOexlkJqm65zbXgFlTDcWeTSkpj6m659TLm8gkVce/yMTJuiQrb9aDajd9x50ygRPMuVNKyqJqnldnS6mrq/7ymbMoD+6WvwEOOHHqef98Nertjxx4GHEMrC2IG6Ke
r9SAWXPErmmK6JTxJFUnI0zFlsDH5EfgXcs0RSwS6NiplaZFe6fjjtCU8SRV99zmsB6dA+ZixjgQTHlQJW4k3GkSJD13SklZVM3z6uwVUldL2SzvUWrgmw/Ji0M7OWc3C2p+PCduvf76W56JLCU02a1Bez5zC+rq0JOleHjWmkvduPcEmM9gs5xmO5j1tm0Fk3ENJ5mc3GMqk4QXVE4iYxpajBuVwcWrhhvPjcpJymOKacYDyuCeclC0ZVxT3h5SblROOsWGMh5Qubkm3KN6G/bcrOEWjiTDmKpsisaKlgmNaJkAXRZdUV2tK/eMo+HYq4xXx+erl6T+IPey/4BS5gdqbB+0Z1E9GZ4IwyBZu+eq9p2Qw9x7wpcctX5BAQumllzVvou9ZGpfAOkY/wqmKjutB7SSYY5ThTRyVfsCeGARaxnmqD03F2DOosNc1doNcZipnu1G3MdUz6IaHFp4qQqpMdFYrZdtOmJLrKtx5WZ88tTz+v9Uo6hGy/NferHnr38vR3uDpOclBMwWrmoZDFvYQlwLB5UnmTnuzRZEA2sHFkwtuaoa3LSujs9Xd+2OQ+gbZvwrmKrslAFNGVf3AMMcpwpp5KrWgqRnEWsZ5qi4gmoMzFl0mKs6B8wrKGC225UxXq1M6xZhSbzLzRGzI3H0FhbG8pfPtmQNh2SGHKg4Xn1xi920/OaCW1uHduvoNDeyG9U2jixnjeu2TE90NeUBpaffaU1Qdc+d8xJOeUg10BTKCJiyxMRG1wDquVHVPdecAX1ceQcAi9vNabqqB+ZNLrhyo+otvFmX6pZwOC5UExTRMrkR9TWqsuVk5AAtvqK6etTdDtt4dWjnm/KsbwcsByEwD5RnOzjmfWwHeZhbXX95zsfwXOMwm/qTcIzpylHVyf5CGGN6yVVVTy7YYY6qPsR9gWWYo6pP5TRgpsyg6hmmLNjHFpyVg6r3c3eD6OPaDSWqesr+5lVnYqJqLTxo/gbdwP6mn7M6c4icpSdZPbDM1SnNYco2XZUtsa4edQ13gfMg6fmKDZjREZca2RRxqZnpylHVyQhBQ2y/Y+ffmJBZsU/HVVWvBMaC8/Fq9SFG2Ikt7RxVfSozKvXxHDB7edB41QdrYIkS1pKzOsNOztKTrB5Y5uqU5jBlm67KXnF1NUpfvEZVDOZ4tfz8W9o5hWQ/BWc71xDGrqluvPrB3da1M/4dcvS09RQj1RWW0wUv5MTtZad6io99t0T1nCknVcY0ZZ1rnCmDSB9riGlhqDQ1cacMkcqYDvD8fHWNRcOtGgnECFMtySCHxMUz1bFLeiq8UlX3vPS6Omyu29AVouUBr6h9lKbuwzUWtQ+9wv4UmqzlCdmk2PRU7S46r5DtaBcQWrQMIFS8znhUy+C2iEK2o2UATxUyVfXmUigk44W1vDmWt9SKUjynN+uqaojwfMkVkrEoxXgBW0pdjatYdA6YU1UDoOcmxaanKqJERSHb0S4AtihDXMpdGPQ8qrjSM15EIdvROWDOAXM1dtWMV8u4tLXoiLN2U+RPvdmOI9U9Xy0//GaFHT8Ycb5Q1p+aX7eTPzwP7dFaeMTkJDYrOVO6vKiyQeRM1TuGODYYZaoGGmFGvYyh6v1skLgG0BaGqo8xg3jwfuYdAC3iTVxT9cB6kxvnQsMtvI+pyqLqfbwN14SjYEtQPFMtoSFbyiVzXfpVFMkpGwz70uvqSa5vwVjfmrG+fc+meog8m+oh9VxRdc8rcX86NWl5ck5SdXLtwul4ES0v2Ek67mWQmaTqnttcA2ZNNRR7NqWkPKbqnlMvbyKTVB3/IhMn65KsvFkPqt30HXfqBltS7pSSsqia59XZUurqSa7h0XgOmJ4nqToZYSq2lLyIIhYJdDxJxx2hKeNJqu65zWE9OgfMxYxxIJjyoErcSLjTJEh67pSSsqia59XZK6iutgOtP/CWJ59tvFpmZE9NY827Zc1krqFnvNo/R21zC5U16RPdaMFkXMNJJif3mMok4QWVk8iYhhbjRmVw8arhxnOjcpLymGKa8YAyuKccFG0Z15S3h5QblZNOsaGMB1Rurgn3qN6GPTdruIUjyTCmKpuisaJlQiNaJkCXRVdUV+vKPeNoOF65+oPcy/4DSpkfqLF90J5F05NhiKvad0IOc+8JX3LU+gUFLJhaclX7LvaSqX0BpGP8K5iq7LQe0EqGOU4V0shV7QvggUWsZZij9txcgDmLDnNVazfEYaZ6thtxH1M9i2pwaOGlKqTGRGO1XrbpiC2xrsaVmzFiiOOVa2+Q9DwHzJSpJVdVg1sLh9A3zPhXMFXZKQOaMq7uAYY5ThXSyFWtBUnPItYyzFFxBdUYmLPoMFd1DphXUMBstyvi75bpD78vXjyBkjiOUafj1TZLjoQwimkdr5a11Z+vtj9+xrlglNyHPspx7K0Hb0bhLS2owvnHzOzvhPfqtkxPdDXlAaWn32lNUHXPnfMSTnlINdAUygiYssTERtcA6rlR1T3XnAF9XHkHAIvbzWm6qgfmTS64cqPqLbxZl+qWcDguVBMU0TK5ES3TL8vJyAFafEV19agnOyxvpI/17YDlIATmgfKshwIs3sd2kId5svuTZJhN/Uk4xnTlqOpkfyGMMb3kqqonF+wwR1Uf4r7AMsxR1adyGjBTZlD1DFMW7GMLzspB1fu5u0H0ce2GElU9ZX/zqjMxUbUWHjR/g25giRLWkrM6sGDpSVYPLHN1SnOYsk1XZUusq0ddw13gOWD2Ml05qjoZIaiZ6SVXVb0SGPs4qvoQI+zElnaOqj6VGZX6eA6YvTxovOqDNbBECWvJWZ1hJ2fpSVYPLHN1SnOYsk1XZddCXZ3YjbewgM7q6htveZB/31s78O91q+XPV3MoW+36W46yeB4fr5YW+7G32PU7D2H9hx7cq5W5mC2LrmENKcvpghdy4vayUz3Fx75bonrOlJMqY5qyzjXOlEGkjzXEtDBUmpq4U4ZIZUwHeH6+usai4VaNBGKEqZZkkEPi4pnq2CU9FV6pqnteYl195sza+jm8Fd1ct6ErRMsDXlH7KE3dh2ssah96hf0pNFnLE7JJsemp2l10XiHb0S4gtGgZQKh4nfGolsFtEYVsR8sAnipkqurNpVBIxgtreXMsb6kVpXhOb9ZV1RDh+ZIrJGNRivFUw6eAiIcrX1c7ybAIFpwDJiLANlQDoOcmxaanKqJERSHb0S4AtihDXMpdGPQ8qrjSM15EIdvROWDOAXM1ttK6epnmj4E/IOmB5hT/6Mo4bzx3p5FnThyrtfCIyUlsVnKmdHlRZYPImap3DHFsMMpUDTTCjHoZQ9X72SBxDaAtDFUfYwbx4P3MOwBaxJu4puqB9SY3zoWGW3gfU5VF1ft4G64JR8GWoHimWkJDtpRL5rr0qyiSUzYY9mXV1fgs19bWz66dy9Y/7PoWjPWtGevb92yqh8izqR5SzxVV97wS96dTk5Yn5yRVJ9cunI4X0fKCnaTjXgaZSaruuc01YNZUQ7FnU0rKY6ruOfXyJjJJ1fEvMnGyLsnKm/WglglAp72JRKeUlEXVPK/IEII
Q8XBJ6L5NMiyCBeeAGR2xa5oiOmU8SdXJCFOxpeRFFLFIoONJOu4ITRlPUnXPbQ7r0TlgLmaMA8GUB1XiRsKdJkHSc6eUlEXVPK/IEIIWDpjtdqXU1elB59RajC+RYjKu4SSTk3tMZZLwgspJZExDi3GjMrh41XDjuVE5SXlMMc14QBncUw6KtoxryttDyo3KSafYUMYDKjfXhHtUb8OemzXcwpFkGFOVTdFY0TKhES0ToMuiS6mrYTi91s+dO31mDeuMK/eMo+F45eoPci/7DyhlfqDG9kF7Fk1PhiGuat8JOcy9J3zJUesXFLBgaslV7bvYS6b2BZCO8a9gqrLTekArGeY4VUgjV7UvgAcWsZZhjtpzc/E3HWPRYa5q7YY4zFTPdiPuY6pnUUSGRl6qQmpMNFbrZZsOGWIdIh6OvL2eaFhwDpg9XNUyGLawhbgWjooIUGFgwdSSq6rBrYVD6Btm/CuYquyUAU0ZV/cAwxynCmnkqtaCpGcRaxnmqLhaawzMWXSYqzoHzCsrYDbaVTNereYPnLKpTDLFNH5IyqblKTWi2zI90dWUB5Sefqc1QdU9d85LOOUh1UBTKCNgyhITG10DqOdGVfdccwb0ceUdACxuN6fpqh6YN7ngyo2qt/BmXapbwuG4UE1QRMvkRrRMvywnIwdo8SXW1efPnz/LEZj1bBNVT3ZY3kgf69sBy0EIzAPlWQ8FWLyP7SAP82T3J8kwm/qTcIzpylHVyf5CGGN6yVVVTy7YYY6qPsR9gWWYo6pP5TRgpsyg6hmmLNjHFpyVg6r3c3eD6OPaDSWqesr+5lVnYqJqLTxo/gbdwH0JAFkdWLD0JKsHlrk6pTlM2abLt/V1xjpEPHx61jTRsOAcMEfZFHGpmenKUdXJCEHNTC+5quqVwNjHUdWHGGEntrRzVPWpzKjUx3PA7OVBY8gK1sAS4qwlZ3WGt5ylJ1k9sMzVKc1hyjZdvm0/YDba5amr/YFLD273ovwwdGbgJSgm4yynC17IidvLTvUUH/tuieo5U06qjGnKOtc4UwaRPtYQ08JQaWriThkilTEd4Pn56hqLhls1EogRplqSQQ6Ji2eqY5f0VHilqu55WXU1rgV8Tuvr62fPMlPEO40bukK0POAVtY/S1H24xqL2oVfYn0KTtTwhmxSbnqrdRecVsh3tAkKLlgGEitcZj2oZ3BZRyHa0DOCpQqaq3lwKhWS8sJY3x/KWWlGK5/RmXVWND54vuUIyFqUYt5vmiDCcv7rOBQwLYvE5YG5DNQB6blJseqoiSlQUsh3tAmCLMsSl3IVBz6OKKz3jRRSyHZ0D5hwwV2Pz89WiCau18IjJSWxWcqZ0eVFlg8iZqncMcWwwylQNNMKMehlD1fvZIHENoC0MVR9jBvHg/cw7AFrEm7im6oH1JjfOhYZbeB9TlUXV+3gbrglHwZageKZaQkO2lEvmuvSrKJJTNhj2ZdXVMFw1SJeYKa6tnz6zdnbt3Pq589nmMte3YKxvzVjfvmdTPUSeTfWQeq6ouueVuD+dmrQ8OSepOrl24XS8iJYX7CQd9zLITFJ1z22uAbOmGoo9m1JSHlN1z6mXN5FJqo5/kYmTdUlW3qwHtUwAOu1NJDqlpCyq5nlZhk8RYUciG3NExDrdq4UNi88BUx2xa5oiOmU8SdXJCFOxpeRFFLFIoONJOu4ITRlPUnXPbQ7r0TlgLmaMAMGUB1UiRsKdJkHSc6eUlEXVPC/L8Cki7CwxYLbY/Hx1opiMazjJ5OQeU5kkvKByEhnT0GLcqAwuXjXceG5UTlIeU0wzHlAG95SDoi3jmvL2kHKjctIpNpTxgMrNNeEe1duw52YNt3AkGcZUZVM0VrRMaETLBOiy6BLrahiuCxwLxFD5kzxMFrH+2Wefffar3TVBRGRbYo44B8zZZ5/9mvRVBMxRm5+vbtFtGarWaMoDSk+/05qg6p47Z0Gb8pCiIq0pq9mUqa2upbLnRlX3XHNUqg0qRbI5CtTFVD2wFLoJNyqL3gm6VGdBm3KhUvT68rsoxcsxCqo6OkcedYQ/XrTLM1xN+KTPnz8vyeLa2dlmm222q98QzRDTENkQ3xDlLN5t2+aAOdtss117tqKAOWzz89VjrKX1/Hx1I3fKOlYZ0wGen6+usShLXGXYIFNZ4hpLuWtzA1Mdo5qNLSWvVNU9L72uhtl1wfMHx2S22Wab7ao33ofEEN8s0i3J5oA522yzXWO2uoA5YPPz1aIJq7XwiPGTDFZypnR5UWWDyJmqdwxxbDDKVDkHlVmpZgxV72eDxKXobWKo+hijCo2N/cxrCi3iTVxT9cAsaFu4UBa0Q0xVFlXv4204C9oKS6GbspW+ga0YlrldYYyXkdVTNhj2VdTV0XghzTbbbLNdE2ZxbWVmm5ltttlmu/rN4toltPn56kQxGVcrrcuR6prKJOEFlZPImIYW40ZFXZqoFtLZ6HSLcpLymGKa8YCiRs04KNoyrqmUuAk3KiedslROeUBRT2bco6xdU25WqXuF+U1caFE2RWNFpdwtFTXtlaArratnm2222WabbbbZZptt1TY/X92i2zJUrdGUB5S+GSByo6p77pwFbcpDioq0pqxmU6a2upbKnhtV3XPNUak2qBTJ5ihQF1P1wFLoJtyoLHon6FKdBW3KhUrR68vvohTnUHPK3RA0Okce9bmunm222WabbbbZZpvtqrb5+eox1tJ6fr66kTtlHauM6QDPz1fXWJQlrrKORfczlSWusZS7Njcw1TGq2dhS8kpV3fNcV88222yzzTbbbLPNdlXb/Hy1aMJqLTxiqE6jlZwpXV5U2SBypuodQxwbjDIVVWhgVqoZQ9X72SBxKXqbGKo+xqhCY2M/SzFs3sQ1VQ/MgraFC2VBO8RUZVH1Pt6Gs6CtsBS6KVvpG9iKYZnbFcZ4GVk9ZYNhn+vq2WabbbbZZpttttmuapufr04Uk3G10rocqa6pTBJeUDmJjGloMW5U1KWJaiGdjU63KCcpjymmGQ8oatSMg6It45pKiZtwo3LSKUvllAcU9WTGPcraNeVmlbpXOBmp9orGikq5Wypq2itB57p6ttlmm2222Wabbbar2ubnq1t0W4aqNZrygNI3A0RuVHXPnbOgTXlIUZHWlNVsytRW11LZc6Oqe645KtUGlSLZHAXqYqoeWArdhBuVRe8EXaqzoE25UCl6ffldlOIcak65G4JG58ijPtfVs80222yzzTbbbLNd1TY/Xz3GWlrPz1c3cqesY5UxHeD5+eoai7LEVdax6H6mssQ1lnLX5gamOkY1G1tKXqmqe57r6tlmm2222WabbbbZrmqbn68WTVithUcM1Wm0kjOly4sqG0TOVL1jiGODUaaiCg3MSjVjqHo/GyQuRW8TQ9XHGFVobOxnKYbNm7im6oFZ0LZwoSxoh5iqLKrex9twFrQVlkI3ZSt9A1sxLHO7whgvI6unbDDsc10922yzzTbbbLPNNttVbfPz1YliMq5WWpcj1TWVScILKieRMQ0txo2KujRRLaSz0ekW5STlMcU04wFFjZpxULRlXF
MpcRNuVE46Zamc8oCinsy4R1m7ptysUvcKJyPVXtFYUSl3S0VNeyXoXFfPNttss80222yzzXZV26Wqqzc2ttbWtk6f3nrppWvN8abw1i5cYMlbM1St0ZRRe12zR2Mp/vLpzbNnURGiMEZNa+659PMXsAgWzFd1Zfvmyy9jtzliywI4lM09ulRnQZtyoVL0+vK7KMW54ynb+DPZDUeP+lxXzzbbbLPNNttss812VdulqKtRQ158+eVYS1yzvLZuZTOlNlItvHX2lXE0lsGsrrtRa0yhFUa3hTdxhfDm2bXKCDZL3HR0uo+pLHGNpdy1uYGpjlHNxpaSV6rqnue6erbZZpttttlmm222q9pWXldvnT0bi4dr38+cYe0cTAppM2N0yBaZfdA3z5xB8czSWj0wi2pxdMgWuUodb8QVyeZ9vA1nQVthKXRTttI3sBXDMrcrjPEysnrKBsM+19WzzTbbbLPNNttss13Vttq6+pUyUu3ZRq0rz1fPI9WLsY5aX6sj1Z5l1JqVba9K3SucjFR7RWNFpdwtFTXtlaBzXT3bbLPNNttss80221Vtq6yrNzZiwfDK8vRZaxupRmPWbfZmR3WIQloHqDs/fw0eUo7eshgOJbTTpToL2pQLlaLXl99FKc6dTdnGn8luOHrU57p6ttlmm2222Wabbbar2lZYV/NPc4VqYTuDeFcf442Xz1e/Yo/GMph/3Kt4vhqNWbdrgPGmUKn2jk73MZUlrrGUuzY3MNUxqtnYUvJKVd3zXFfPNttss80222yzzXZV2yrr6lfs37vGGxezkWq1+a9/b8dfPm1j1O756pX+9e/1Z17IWi6Nb778spbK6n28DWdBW2EpdFO20jewFcMytyuM8TKyesoGwz7X1bPNNttss80222yzXdW2yt+ByxDc6ADdxlOfW3vnW8r2q5vL56v7etb4wufevf6Rv7PxJ3c09r/mGV4+X60dpq5K+ZNHXvq+d5z5d7995shtZ0/tvnDq585/7M2fRPvao6dPvfHCqZsvHPmFo2tSWi+2/u0w6lIbry5V6l7hZKTaKxorKuVuqahprwRdaV3NK3C22Wab7Zowi2srM9vMbLPNNtvVbxbXLqGtcrw6VAsDvnn86Is/8s/Purr6GnFnNmqddejxjaf3n3vwBhTV6uce+rebz/xh1ueV6XGkOnrWod03X3rp5vee/te/cfY9e86wqN618eQvH/0vf/H9zz507ItvO4+XJ9/28jv/8u8desMj2YKXxl3BnOhSnQVtyoVK0evL76IU51Bzyt0QNDpHHvVV1NW45jY2NtbWz505u4b1zz777LNf7Y5ohpiGyIb4ZpFuSTYHzNlnn/0a89UFzAG7RHV1dYBu87lTL//H177wz/7SuQMfxMuXfvI7tR0+PKB3GfjgL/7ma299briP42SkWjnO6llq88THz3/8h6ycPvTtL+65fv3gt+nLC5964+bzT2X9r1R+4O6v+K4Dn1ls2SG+4EaqlWMf+KRV7fnI6V/83dO/9runf/iXbv+Jt9z6+l+77be+7ffe8erf/aP/+CcvvHv9hXevPfxTT77ja3/3jn/8++Wyl4BRqfaOTvcxlSWusZS7Njcw1TGq2dhS8kpV3TPCn0WNJRmuOgTT02fWzq6dWz93Pm4O793xyrXv8CZcfDSR3cdXfMQ6NzsNhriq/r9Mb+dNrLaRo+KKrTCwYGrJVdVQ0MLU/AcvJeNfwVRlp6hAjBGHBhjmOFVII1cVVdAgi1jLMEflDavCvIdlLDrMVS1ui6NM9SyjqRd7mepZVANCCy9VITUmGqv1sk0Tw0FG3EBkQ3zznbdpWNUcMFOuahkMW9hCXAtHRQSoMLBgaslV1eDWwiH0DTP+FUxVdsqApoyre4BhjlOFNHJVa0HSs4i1DHNUXDU1BuYsOsxVnQPmFRowh+3yjVe/+OWXb/qBL3373zl/8A+0RevqzWdPWoceP/Kmr/jNr+j87gN5B/MDb/bd1G99z5/k3Rodawt1dZOL6SlulnVwvvn80+cfe6OW0OsHvuX0Pf/i+V/5qi+9/duP/8u/+PLt37N+/z9A+9pHXpst1fly3+kE/5MDr002Kp+F1dVF5+064iZdwzRvAAuOV59+/sUf/p0z3/mfz37v3gf++ze+/s/81L+/7vv/1b/4ubfc/o0fvPXv7j/2a1869p9euPX/vf/2f/iBT9/+6WzZumfnxmduvdUf/9rL3pNWHHdH3GjV+7jZTx6446533PHQya7lBFv20O951NIOekhHHFNdKmMJlsx1yVZRJKdsMOzLratxtZ1dW4dnWynd717/W0jepn/7gJSlJ1m8j0O2N8iTveW0UTbdIDQyXTmqOpl5VSvTS66qOlO2Ro6qPsQhjNDbOar6VMYdvpeZt3mGKQv2sfY0Dqrez7gxjTDvXDlHVU8ZNsLERNVaeNB4sQdrYAkO1pKzOhOvnKUnWT2wzNUpzWHKNl2yra+fR4jzO7CwYSVzwBxgU8SlZqYrR1UnIwQ1M73kqqpXAmMfR1UfYoSd2NLOUdWnMqNSH88Bs5cHjRd7sAaW4GAtOasz/uQsPcnqgWWuTmkOU7bpkm2JAXPULtvz1Wd/4+e+/G+/+cIn+SPn2H7hsY+f/c2fV/btnlFX33rb562dVUpXxVX7P/ee7/rNNx0o26ex1k7t/cN3RfFSSeamfObu72FFfd/fP/uh70JFrf7SHf+/Uz+5A6U1XFtGtrisdzqBP4+6WktE157U1dPXWWe4BnfeEsJ3otph6qp+84Mvv+mu02/5vZe+9s27/w//4Yf+zi/v+lu/8Ia//aabfuXfv+8dr/rdD772Y/u+/RBg33ffvzm4no55brwn1tXP3fZdt77pzbdKpW19eJa++Ygwz973/MnwOvX2XNdw80ZKYQw9+tBtrJy1RVMN1SfukfqZdbWmKWjad9c79j1Bfvqh2/bsO/A02zW5uYy6xLoalxryQ80RcRyqm7uMqvmi54omH2LysSp3H32V+06eJq0MtrQoNj1VcSVXFLId7UJEi+YhRRWvMx5VxPmMF1HIdrQchEkVMlV586ooJOOFtRx4CbfOQaV4lgs/4UI1OHi+5ArJWJRiPMmQKW5/EAaLzwFzG6oB0HOTYtNTFVGiopDtaBcAW5QhLuUuDHoeVVzpGS+ikO3oHDDngLkauzzj1ec/+t9e+Cdfd+6+/Xn7wwfQfv6jH8navWtlEl5yyPTuA9SiUVmrzQfCLDgKPz++Ko7S6E0P6Ei4NnIp7aZDkVJXHwmNbls97kxO5aHxal88f/md//hLb/92wJn7fhGzXnzPW3SWvhz24p1248nhneK9v/kI3ot/a3AZRJVGN9Qcu2ERbeEKv+vAAenMZZPjHDypq/PDKIuEo+c5+1grjkiHYOo969DiR4++9NrfOvNvfvPsa/d+4Jvf8db/08/8yMEnHvurP/fT/8cf/M7rf+mXf/vrf+/e775/33cf+C9f9/4TB45my/a6nBuhrpZD9BnVro+9Oxznrmev437JW2+hvW51ddGu/uh9brz65IE79h04CmDac
fLAvtsOoOKupCZUl7iUKY6oT8KUqeroHHnUl1VX4zpDrnT6zFq2/mH3u2pvxLh8m6blYTH1h67vwJp7Xon3nUi9ukFIeJKqk5l7BS95EcX1LtDxJB13H15c/teq6p7bHNajmvN5NqWkPKbqnlOXjC3hSaqOf5GJk3VJxiAQTHlQJWgk3KkbbEm5U0rKomqeV2SIdYh4umMLGBacA2Z0xK5piuiU8SRVJyNMxZaSF1HEIoGOJ+m4IzRlPEnVPbc5rEfngLmYMQgEUx5UCRoJd5oESc+dUlIWVfO8IttmwGy0y/N89elf/g8v/Uf+sLkcrHvp9d+FuWV75FiAsZ21HwuYi599T6xYEn755VhtynpYVKMUJIdf5Ib1BLZq0AYYnztw6xGs/+AvooNtl2ylpvZX94xPLV42xnFWsdRzO79Ti+eX3vvDJ773b2hdHX/4vf7I/Rc+033R0LdFrid5p1a1Cks5zfYD/E7B+pC1KnY17QN364g3D0g4At3R4DpZQ9p2+fLug8q2D7Gutv2RDwLM7yyEn7stFv8P3P3a7wqf1J8cdIWorCdn3h7Sb0xjH3h1ER1w3nzpRYOXX/7Vu1/+7t8888O/8/L/bdeP/6k3/tRN996Fub//8Y/9d6/7nv/L677nZ27a887/+fdu+Zv3Prj74349IxzqanKonDlqHb4msPavwJstD1eF9Sbtb95IKYY4jFej0ZYKaQrZ6mppOf/EPXtQV5OZ4mCWjF2Xqc9KVd3zsupq3All7OVcuYmwA55Xrv6Q9rJ9NBV2H6WeDAmLNpwexlXVwZOpbEMoLRwVV2yFgQVTS66qhoIWpnaho4/xr2CqslMOmCgjDg0wzHGqkEauajnYkrKItQxzVN6wKsx7WMaiw1zVcrBljKmemashJ+pjqmdRRIZGXqpCakw0Vutlmw4ZYggiHo68vZ5oWHAOmD1c1TIYtrCFuBaOighQYWDB1JKrqsGthUPoG2b8K5iq7JQBTRlX9wDDHKcKaeSq1oKkZxFrGeaouFprDMxZdJirOgfMKytgNtrlGa9+8d/9076/AX76F1+PuVmj9/8/e//ifNl13feBqKmamkkq/0AqNa6amqlyZewkqspEki0nlbJlKVY0YzUEWn4ktiyNHWpMS3bsUSyPPCQA2jIoy45j5yGLerEbYkOARNkEGqBsmQC7G5QAiRQAAg2RAkmg0WigSYAA+v0C5vtda+29136ds8/93dsvnFWrv+dz99n3vO4566x19z2/ZoWmQ6D0WIxJjReGmmNJoy/jKG6s9MpZ5ZCjLSp51iEbkm27mJ7iZkWH3FE8X3r+30BRYJ9+4Lv0WeuLR/Zd/Pw/LnpOuN/TYmgUB032KNvyUE5LtRzfSGeLO4bxgBRHRt4YPwtbXVwFwHcO7aww5VuJwx/5pY9/xsZ1861tOuImXcM0bwDz49X3Psoq+od+9vSdP8u/+va5595+/8+egX/gU4/+oYMf/bZf3f/HP/FLgI/+2uf1/9b6wt9/8Rf/00+irn7s7z7hlzPj7tzAR2C7XO1R/GYhtvQcd0fcaNV7rM4fdesvvYM/+FTZJx+v5ltkjBp8jL8Sj7NCOuKY6lIZS7Bkrku2OglZYINp32JdffYswib/7s6I+83r70K2m373ATlLT7J4j0O2N8mLfeS0UTa9QhhkunJUdTLzqlGm19xUdaZsgxxVfYpDGKGPc1T1pYyzs8vM2zzDlAV7rD2Ng6r3GTemGeadq+So6jnDZpiYqdoITxrzqmADLHmYtZSsztysZOlJVg8sc3VKc5izTXdi+NwQ8aD2eqHp29eAOc2miEvDTFeOqk5GCBpmes1NVW8Exh5HVZ9ihJ3YMs5R1Zcyo1KP14DZ5UljyAo2wBLirKVkdYa3kqUnWT2wzNUpzWHONt2J4XPbS8ActOvzfPU3/uwfvfQbn6zb4ef+xT/E3Lo9MqvB+Hy1a5eq8rAUeyzVqvaS+fJgKLNZGsVHZLPi05ajtZMtkx2OfMmWn/p4Dt8VxUslm9thravVUV2//o//r4Mj1QrZ3mGDY8UrHuvquOX+SPK97Kbj8F8+4upnfY5aK/C6XV+67YlHrzhKeKmd+QGhXUt9rJdDuNhaKex7+wjX4M5bQvhOVDv03vLCV07zJ98/c+7v/MzLf/QDz/32U1+7+37+31p/5xNf+Y/v+7n/+ODP/Xf/+pMoqr/5/o+d/J/On7rzyqv//PTPfdO/+uRfePTh/xd/B/7qb75cL7PN6eTJv2jwuy99DvufWmh7i/X23NZw80ZKYQztPl8tqnW1pilUKafpjz6Xj1dfR91KXY2LDGcHFuUW7vK8G0M1X/Tc0PxD9B+rcvrom9w7eYa0Mdgyolj1UsWV3FDIXjSFiBEtQ4oqXhc8q4jzBW+ikL1oGHhJnCtkqfLm1VBIwRuru0Uah1vnpFI8y+WfcaUaIjxfc4UULEox3sAQ8XAK62IXGd6CN64BExFgD6oB0POQYtVLFVGioZC9aAqAI8oQl3MKg55nFVd6wZsoZC+6Bsw1YO7Grs949Zl/8DfPfORHi0b103f+NcwtGr2HUdayXSq3jxyrxgml2lw0Xh0rQOdlh1R4t92ZnMoz49XqWld/7cN/Gqq/Br/0e1OPmhfu9zTf4OjZljeOpL2L9eHgeHV5oNIqisOYVi3r/czhH7RRa2yz1tihZ9sR6RBMvRcdvF99++2f+MTpv/8rZ/7+r7799z526u/+wqm/8TOn7vm1M//o185860995Nv/6U/8qf/5J7/5H37wW+/54P3/7He+/tELX/+Z84//6FMf/Q9/7Z9936/89F/g/7n1yPuP6K/H5z0e6vyY4+MIxzAcqDhWP+m4X/LWW2nXFzxfHZ1pxy32fDUuMiRKmiYucr+ptiPG9W6a1ofF1B+63oE197wT751IXb1CyHiRqpOZewWveRPF9S6QeJHOuw8vLv8bVXXPYw7rqOZ8nk0pOc+puufcJWPLeJGq419k4mLdkjEOBFOeVIkbGSd1gy05J6XkLKrmeXeGiIe4p9u2yPCWNWB6R+xapohOBS9SdTLCVGypeRNFLBJIvEjnHaGp4EWq7nnMYR1dA+ZmxjgQTHlSJW5knDQLkp6TUnIWVfO8O9s4YI7b9Xm++sKv7X/ju//w5aeeKNrRgnbMLdo9x2qw7sNZP/j9xWh2rDaljxV7ZBY5gUNFpMvhW+yJYhRLn+GTydn/X304lojaX90zPrJ42RjHWf13Xfgd1tWoqN/8ue9+82f+wpWXv1D3meB8T602tj5f/LLsXbbl4Uh++cg9Yawex+E/+Ekrd8MRcEfD6sOwXnm58Plq9sHC7/nIJ/FS+/8geG78f+Hz1b/1hbe//3879wM/fe4ffuLMX/xfz/31Xzjz13/e/m+tP/APfuzf/9G/dtsP/Td//Kc+/M1/74Ov/pPzpz505eV/9sbH/4tH7vvOT/3puz/yff/gH9//nb9+77cd+uIvf9EvM+NQIYO5a3LY45lpfcJJlX4WIX3idx/lMgPrTdrfvJFSTPGC56svX3rq2HPCF/n3wB99TnrW
qc9OVd3zjurqenU4Jo53rv6Qdjl+NBW7j1JPhoxFB04P46bq4MlStiGUEY6KK7bBwIqpNTdVQ8EIU1Po6DH+VUxVdsoBE2XEoQmGOc4VMshNrQdbchaxlmmOyhtWg3kPK1h0mptaD7bMMdUzL3MkRD2mehbV4DDCW1VIi4nGal226Yxtsa7GlVswYojjnWs3SHpeA2bO1JqbqsFthEPom2b8q5iq7JQBTRlX9wTDHOcKGeSmtoKkZxFrmeaouIJaDCxZdJqbugbMGyhgjtv1Ga+Gv/XX7zj99/7qleetdISD0YL22NJ0X72UzmrQ6sbosdoMztJaf6mbxkhDCRS7sf7Rbj/4/YfRLeuQhl67LqanuFnRoeU6Xu3/Hvgir/aUdW/Yi48c4wZnW25H8oufOfbx+KNxt188mNoYh1jDuGv7ZWqMx0pqct0Af3j5EYQPMes/4YibdA3TvAFMjVf/+EeP/w8//fJf/okv/djPvPx3fvrlH/iJL+Hlj//My//hT979v/uxv/ad//wnv/9n/5e/8gs//a/+0VMv/eO3Xvof39L/W+uH/sb/8v/45x+B/38/8Isf/YO/dv93/Xqx2MyL48NKuzgruJuf/Ln8nJRvc7onsDjujrjRqvd4yFFRh+eu6YeO8e+BHzp00Foe1QLbPKQjjqkulbEES+a6ZKuTkAU2mPYd1dWz7jevvwvZbvrdB+QsPcniPfaHvcuLfeS0UTa9QhhkunJUdTLzqlGm19xUdaZsgxxVfYpDGKGPc1T1pcxUq8fM2zzDlAV7rD2Ng6r3GTemGeadq+So6jnDZpiYqdoITxqv+mADLFHCWkpWZ7JVsvQkqweWuTqlOczZpruyLdbVs67hLvAaMLtMV46qTkYIGmZ6zU1VbwTGHkdVn2KEndgyzlHVlzKjUo/XgNnlSeNVH2yAJUpYS8nqDDslS0+yemCZq1Oaw5xtuiu7uetqHYLrDcqB3/6xH/jGn/nW03f/8Pmf/ydQMFom+s9z/pfAU/u15/BdUbxUsrkdDnX1n4919XT/9w7DNbjzlhC+E9UOzbd89L4X/vCfffz//hd/88/86Of+0Pc9/mf/zuf/kz//2f/ozz3+3f/TL/4ffvxHDj/3jHZ74fBXfuT/8nN//f/8s7/w337qZ/+jf/naEy//p+H/3Pr5b/pXv/UTn/PLvGast+e2hps3UgpjqrIpGhuqaUqlmtZcd916XR0W7vK8G0P9Ye99KMXH5z9W90H7jz7n3skzpI3BlhHFqpcqruSGQvaiKUSMaBlSVPG64FlFnC94E4XsRetBmFwhS5U3r4ZCCt5Y64GXcOucVIpnufwzrlRDhOdrrpCCRSnGG9hW6mpcxaJrwFyqGgA9DylWvVQRJRoK2YumADiiDHE5pzDoeVZxpRe8iUL2omvAXAPmbuy6jVern/v5f/L2j/7FN777D0N7fyF82JvDp9fJncmpPDReDT/9qz/9jX/x5zYYrL7lHZEOwdR70cH71bff2ve3Pvef/cXf/Msf/N1v/f7f/CPf/5vv//DTePnH/urRv/nAL/uev/xjj/3NP/hzf/sP/cKn/3+/hZef+p3f+nd/+Af+nR/+gR//8IELJ9/wPa+Z437JW2+lW3VNOzxX6hKXOsUR9UmYMlUdnSPP+tbr6kXuN9V2xLjeTdP6sJj6Q9c7sOaed+K9E6mrVwgZL1J1MnOv4DVvorjeBRIv0nn34cXlf6Oq7nnMYR3VnM+zKSXnOVX3nLtkbBkvUnX8i0xcrFsyxoFgypMqcSPjpG6wJeeklJxF1TzvzrZSVy9yDY/Ga8D0vEjVyQhTsaXmTRSxSCDxIp13hKaCF6m65zGHdXQNmJsZ40Aw5UmVuJFx0ixIek5KyVlUzfPu7Cavq8+ciaXCZgN348xHWP+D+D8zb76c7TB2/J3qW6VreDRuQT5zhreH/BvTd05vckjf/vrXm+0FX3z1G7N9dsKnT+tN2t+8kVLMMNXSDnJIUzxTHffSHZ8M7UjVPe+orq5Xh32/8Pbpc7937K2PffSNH/3A1/7Mnzr1nX/kte/4I1+74ztf/1s/9PYv/Itzx567ePq0678n7R3ejKuPJrL7+KqPWOcWp8EUN1UHT5ayDaGMcFRcsQ0GVkytuakaCkaYmkJHj/GvYqqyUw6YKCMOTTDMca6QQW5qPdiSs4i1THNU3rAazHtYwaLT3NR6sGWOqZ55mSMh6jHVs6gGhxHeqkJaTDRW67JNZ2yLdTWu3IIRQxzvXLtB0vMaMHOm1txUDW4jHELfNONfxVRlpwxoyri6JxjmOFfIIDe1FSQ9i1jLNEfFFdRiYMmi09zUNWDeQAFz3HZZV58/H8uG95Zjx8X0FDd7zx6NbfjVc+cQN+kapnkDuIrGotst4FfO4ZrnLVm9x3twSzVKDumIY6pLZSzBkrku2eokZIENpn1HdXXtF1577exvHnnjx//2qe/7rlP/9X/+2nd862vf/s2v/Ylvfu1Pfutr3/Wfn/q+//qND/84qu5L/C9d7S3Fbvrdzw5LdrjEe+wPe5cX+8hpo2x6hTDIdOWo6mTmVaNMr7mp6kzZBjmq+hSHMEIf56jqS5mpVo+Zt3mGKQv2WHsaB1XvM25MM8w7V8lR1XOGzTAxU7URnjRe9cEGWKKEtZSszmSrZOlJVg8sc3VKc5izTXdlW6yrZ13DXeBezF8DprhyVHUyQtAw02tuqnojMPY4qvoUI+zElnGOqr6UGZV6vAbMLk8ar/pgAyxRwlpKVmfYKVl6ktUDy1yd0hzmbNNd2c1dV7975QrqhM0H7m5avsrPzL4TipfKO5ffo0djK4xbjQZ33hLid6KXLhXdbgHGTuntua3h5o2UwpiqbIrGhmqaUqmmNdddt15Xh4W7PO/ylQtvnz77W4+/8WM/8tp/9W2oqE99zx//2p//f379B7/v6z/4Z7/2F/70qdu/49S+b3/97/7Nc194+uK5837ztqj+sPc+lOLju/yV/d9zW7S7PmMftP/oc+6dPNAjd8kShL+yf99t+z72laJPY7BlRLHqpYoruaGQvWgKESNahhRVvC54Vq8evfu22+4+oozsbTOF7EXrQZhcIUuVN6+GQgreWOuBl3DrnFSKZ7n8M65UQ4Tna66QgkUpxhvYVupqXMWiWcCc0+Mfu8OiEuzOIxM9N9dFAfMzH7rttg8dRiSJLcqiGwXMedUA6HlIL1+WWHxEeUgRJRoK2YumADiiDHc5pzDoeVZxpRe8iUL2omvAXAPmbmyXdTX2/D04SBsGq6PJqSy2Dllv5DZYrUE591tsyBq7w5tuuvFnulXXtMNzpS5xqVMcUZ+EKVPV0TnyrG+9rm76ud9/9o0P/vev/alve+1Pfuup2//k63/jr57+xH3nnn/u/O8dO/OvfvWNH/sbb/ydHz7/xWMX3npLN761m6b1YTH1h653YM09j3moisv2jten0OEPMf21FpbrrKtTnyuEjBepOvnq8QP7bvvQEWlhHhZceRPFxS6QeJHOuw8vLv8bVa2rlWdcPsWj9hLWUc35PItyRTQsAS1sDnr
kTp1D23fvCTdX3XNyvutOLCu0vCN78rjyjPo1qvGNmCu3u6vvvHLgfdL6vgMnrCVq2A3a7Qdekhnj9ri8+86j9jIzxoFgyqZHs629++h8fukGW3JOSslZVM3z7mwrdfUi1/BozPB4mCfRYQa00BL5mgZMqauPFI3jjtjVV1y28i2kb0d0KniBahxQRpiSdnrNmyjikkDiRTrvCE0FL1J1z2MO62gnYCJSVTyn6p5z1wDoeZGq419k4mLdkjEOBFOeVIkbGSfNgqTnpJScRdU8785u+roa9o5UPhsM3N2UfPYsPi056etvlYTPvpeOxjYY548GX94e0rekia/imO9tFTcIY0f09pw03LCRUsww1dIOckhTPFMd99IdnwztSNU976iuLlZ37vGfeuNH/stT3/2ffW3ft7/5z/7R+d//0oW33750/vzFc+cunj5z/uQr57/y5Yv4FNx796i9w5tx9dFEdh+ffKyhrtaPW3Ts9CBfufxVFNJ3HVaGluPVOpAywjaEMsGhrrYWKK7YBgMrptbcVA0FIxzCxTTjX8VUZaccMFFGbCfbeHXZDnOsehSfIqo6mRnapznoiXtvZ0V4Lw4uK9g010pcDpiDUbqz7rwTm2NzpWObpa5mT2mA8r13H1XWPsCSRVk2h56+XVkq5zsOHMDy7zjwirXrrVBm3fm49ZQiGQeETMlvnVTPvMxxBPe9D4fiw3zT2OOCUlSzvwYKfgY4hHmfbSukxURjtS7bdMa2WFfjyi0YMcRxT/Ukmu4zpN0g6bkfMA+jrv7gYeHqnpgFwxEuFPu472NfrYPhCFuIy1miuYxX+/aoiAANBlZMrbmpGtxGOIS+aca/iqnKThnQlHF1TzDMca6QQW6qBr0+i1jLNEfFFdRiYMmi09zUTh0xwdQyYCI09JjaCpiDvFWFtJhorNZlm87YrVBXw94r47QTI9We11HrYW+MVAdm+A5+C4xa1yPV6j3eg1uqUbKlJp6pLpWxBEvmumSrk5AFNpj2HdXVpT/+3efu/fff/Lv/yRt/+78998VjF06fuXjp0rkLl19/+/KpNy+d+sbF114/T3jz8tffvvz1ty7pu4rd9LsPeAnFKeyO/S9lh8sfxop7H4FjDrzAirGXUFdnjcFnTxtuqixQW1Bma11NvsJGa28xXTmqOpl5VcFMcG28Ou9Dr7mp6kzZBjmq+hS3Qso8R1X3bHV11e6ZqRZUx6mUYzvzNs8wZUGyFdVHr155UepqGa+2Pmy5/cCLwuIn7mUXlMvyMrULZiwlkWyNtcue6Hi1urZHVSdLXS09YbFd2IpqlNNcfj5ezR15H2fBpEWGtbEN0qLmuTLdQlVe9cEmmJ1vv/cVZFLW8tKB222cXFrYrs5kq2R5F1k9sMzVKc1hzjbdlW2xrp51DX2BY5y38Wrf3guYOW8tYKovGq9GXBpkavwWUlsQo+LcFtOVo6qTEYKgFgeq9pzpNTdVvREYexxVfYoRgmLLOEdVX8qMSj2eD5gtDgEzU/U+S1ibZAt9GUdVzxk2w8RM1UZ40njVBxtgiRLWUrI6w07J0pOsHljm6pTmMGeb7spukbqaduUK60n5m9gbDOXd0IydOn/+Kj8n+x4oU5lkDMXROHeLHo2t8JkzrKhxe5Egq2E9KNoKpuI+zOpaFrKnVV9jPn36CvZ0+plqr+HmjZTCmKpsisaGappSqaY11123XleHhbs8D8fq8B+4/PC/e+GTf/Dco//0wpk3MRdF9am3Lv/Kb53/+UfPf/Tfnv/ZT1M/+ukLePnQ5y7692b6lY/lDzzncw9zLMxMBk/c3Jc/9r025zZmaWxPHwcTrWChUMdGpo/V6mptcR89c75gd+w/3j558GY/Oh0zxbTW79n/Mq6s9C4pjsy0INe5rn3fgRP2zYKwjME0bJ8sGZvqFddtpro9XxWGXDnxsTss70SLFeoyzki74wB2M/Skvsxi02zfvSdkkRYc+N5g+w687EIHwwiH1pPZM9Jh7on96QnSffu/ynadC2XFqHbHgRNS5Y09Xx3Gq+u5kK6+cuTeuw/oKLfV1VWfMPAClSL87iNhmEVVZpaqQ42uxcartUVuWLVCoOiJYl45VxS999oYNQeK03h1rbgtWl0dbpGt26hXjm9z5JlLlndJs6jnTE9iFbf/0snY8u5xHKDb7z0urC3XQiEFi1KMN7Ct1NW4rkWzgAnVL/jcWDRL6Nu+92MvuT7N8erHPoh4cuQXJdyh5E7fP7o+bXUxBBEphhdwCJX+uW79ZlDaw/PVx8NbEC0PxyCJsBO/rxS780hov/KyWyACbNwCsJ/lLf7w52oRt4+H8edpDc9Xu+XnP/DJo9YJtCBKBH3ZxSV9o7Sb+rmMWvncXFMwHFENjJ5TSPQ8q7jSC95EIXtRFzCNBwLmtOKaaimk4I21rinmAqYoxbNc/hlXqiHC8zVXSMGiFOMN7Baqq7dk/kjYwVWVSaGYxo9E2bQ+sWZ0TyanspnyhNLrb7kGVd1zcl7UOU9pCjeZMg7mLJFx0DWMeh5Udc8tZ1ifV94TwOJyA9tE1QPzRhhceVDDzX5Qt+qaiHiu1JIYqiY9nkV9EqZMVUfnyLO+9bq66Zd/4/945d/87y8//ocuvvbrFy+cQcuZ85e/9MrFH/rZs9/9k2e+4yfo3yn+3T959od/kb8gQJ98N62oDg8TSoanvzyUuZIF3vUZYbxR8zxlzT6ZHfIlylosR5NCcWZZ8WXeM7pkYuV4tXtMGm4bIJydQngvcj7fYn8OjZ3ZIsU5C2Cdy+qPuSNe4nRlkidvdycwM8J9d+xDpWot6uRQBrOlvmQ6+lWUi/ylZWjh8vn8ocw9wgXefZdlkCy5NQ3Vy1mKaj7eLC1yKJGJylzZchbT4DhLmB7faOHFlcdaVKNEF756hZvHOl97SlEdekqhm5Yz4/Ipbvx8NUK0rE5GvLsBXAodGYVmi7rn5KFnaLHRYOVpRc/bDzyu+067/d5XOEtcb2cwLr/xfLVXW+OghXL6nVfkuwNSZv6W7W7ij39YOluLLERWWacESRsphHJSSs6iap53Z1upqydcQ8pjwqiWWVS7uRIeG89XI/R9z/fehbjEt9+x7879x1Ok0vAYgmTJpgyt33PHPhcGtT2PjXn007I5zpWXKWBKNI6j2flyxKXU3bfvjhASvRbj1aqIVPIWtoO1GJZvNsHa0le+M7wXgUteamkd462wzpKApj2lFA89oXmwlbCMgEe+KkFVv51kpBrXeUdoKniRqnsec1hH+wGz4jlV95w7AlfBi1Qd/yITF+uWjHEgmPKkStzIOGkWJD0npeQsquZ5d3bL1tXZoXQv/K7aR+JZXgTegmIyz+EEklO5y071pK+/VcqZ6rlQTpqMac4617hQhpIea6AZYag0DXFSBk1lTCfYP3sDZVhPjGnGOneKeWPQlhYXKrObjAVmrHMDFyq32B7rDXgpi4abN1KKGaZq2iEc0hTPVMeSAFlLzTtVdc87qquL1V35jX+HdfXRP3Tp1L9BXY0W1tUnL//Vnzn7p+45+yf+/tlvD46XP/
RzfhgnaSxcrYWj00w6ZS5zQZdf4pAi49z3izoujZ5huEYPNRfFPI/8GSSscYxa13LHx1gGa3/9WJlfxZzPPnokVd+Dglf78JSQfJTjw6kPGDklatTQRzRmitZyVVaKbQDrWLG1izINRX6mLVis9mE+l/qk0z6ketYCbV9K/hILv0tnC9ur8erb7kadb3NZi92FslbmynEJPalXrx7/qmaW5KAMEVzOnVKQSzv36kMoh60P/iXGKjgqzhZtZxGuJbqMOWON7KfhS2rDXT9fbarj1fnz1RlzY+Rn4do+eTvgEahs6PnqFw/wh+lxLFofk44/Cw+aj1c3boXcgPfde6JqN6Z65mpu/6WT5JeksP6s3XxFRD37W/m7+ufOaHiXa0+8VYW0mGis1mWbztgW62pcuQUjhkBZkaKcDvHHzVXlZ1iPV+PKIuNdNr6dgqHv6dXdg+SnPboE346l3cGlhcAoodKeqa6fr5aCX/9S41c+tk+DZwyMtjvC2s4WHxKd8o9TDDxfzSXcHdYS2y3E5Syr989Xy3sPhz6IANoTKt8qWiFdhjvtGZlRaz9ilLXoD3liiwa3EaamYNhj/KuYquyUAU0ZV/cEwxznChnkphZBsmIRa5nmqLiCWgwsWXSam9oImDNM9czLHKGhx9RuwJznrSqkxURjtS7bdMZu2bp6A/PHwB8QZW2IHwNdGWeM53QCeebEsdoIz5ie4mo1F0qXF002iFyoemKIY4NZpqYQgwBHyRiq3meDzDV0jjBUfY4ZvoP3mfEeLeJD3FL1wHqrm+dKw827x1RlUfUe78E1HanY0hTPVEtryJZsyVyXeHUSssAG076jurrwy5/5P53+N//e8Uf/b7/z/E+fOf82Ws5fvHzyG5f2f+b8P3vk/P946MJH/tX5/+Z/Poei+rvuOfu39ttyit1keRwKaR4WSXKQSJFDfimz9ECFlktaKrNqlZ7soMWztGQcXmpn52ld0d3PCJ3tc+MwPG04qiM5ZWyB1s9Xy/JljJpUm2SB0hOqAyP6A2++1HbjUFeX7eI1i5bj1Vbb6+WpC0yXqhSnh5V1R75aX/4K/rfcYlJIWzcZz1HjOLYLNS/zEeXKUFejDyvb8LNwdaurQ4CK7Z4tPFpa7EOlD6F5octlas8Uiu134Gxx7UG5JTY3eM75bUJOUNkaa5c9cc9Xl9uDbtpTQVhvVeypJbRrZ2NrvFpNFm5jzr7dc2ZSVoe/H84fkPOpaTN9ajpZmMXIwPIeKwq3e3kpf/ZMb+guJZBIkrO8i6weWObqlOYwZ5vuyrZYV/ddfv7NcJEaETf83OL56s/wd+DHwfJ9Ioe4QzC0PgxxySQQSbu5fEcZxpNTex4Yw3JCSz4iDWeMsoV0QpqGUw2JGmDlp+Pm2k4txqsRo8Lc/EESGJfAudJHvhCMJrMYmqAWB4SlvwVAfa98a+ktjUj7NfqASa82Riw9NaNuHFV9ihF2Yss4R1VfyoxKPS7zT5iyYI9bAdO8zxLKJtnCXcZR1XOGzTAxU7URnjRe9cEGWKKEtZSszrBTsvQkqweWuTqlOczZpruy91BdrfuoOxo/HsfXSDGZVzmBoHIqz6lMMt5QOYmMaWgxHlSGFa8aaDwPKic5zymmBU9oCuXp+1FVtBXcUt4Mch5UTpJiRQVPqNxuM+6o3pg9D2u4qSOlMKYqm6KxoZqyVBrToOurW6+rw8JTngf9xm9/9xf+9R/4qYf+y7/5G3/7977+5dMXzl24ePnchSun3rr86jcuv/i1S7/95Ut/6X87910fOfun/9HZn3rwvH+v18c+GAdnXuYDhGFQJf+JuKplnPou5n9uOZYRCkseybSVLMMslpX6j1Uysfz56jA6nXo2Tpvw17nz9nK82g+2MANGziftPWXmh1pUW7Bqr6GuLtu9lpebbs/089XCVFZ/KPOFJR+1xwjLUCCPGroRaS6H49U614eRkHfbiLRkqIHRI1Otq19MLVqN6s/CewEw6GbPVzu1urpqf+cd+S7gdmxYI9R3bgpa67iW8eerPVP1D5LZyHPQfLza6ysH8NE0Rqqnbqny2+/c3ifVs78117ds9zS1trzrnrjW0HFNFFKwKMV4A9tKXY2rWDQLmE7lC5AyuGVz6/FqxCUyLqw0Xt1bQq0MrSyJXTvYRs6FtUWiqH4jac/dIJ5YwLTwKD/naQTPwFkAlO8fY0tUG68u26WOLZ611hA6HTwR9GRlMl6NzaCmgCxFNcKd9bRwiigBzlS+RIOFH37no9MjmgLgiDLc5ZwCo+dZxZVe8CYK2YvWOfBAwJxWXFMthRS8sdY1xVTAjErxLJd/xpVqiPB8zRVSsCjFeAN7D9XVg+aPhB1cVZkUimn8SJRN6xNrRvdkciqbKU8ovf6Wa1DVPSfnRZ3zlKZwkynjYM4SGQddw6jnQVX33HKG9XnlPQEsLjetTVQ9MG9+wZUHNd3gh3SrrmmH50pD4gKNaU1kUZ+EKVPV0TnyrG+9rm76c8//0//PQ+/74/ff/u2f+Ms/9Tu/8KXXX3zr/JmzF86fu3D+9Plzx984/bEjb33vPzn9HT9x5i/887O/8lvnW7sJZa6W7ENH3IHi8IiOTrOFhxEZpwzFgJFHuRFptDAjzJ/3ixYGanIPaZ9vbA3RZIMwSODCNuQnlXs0UVukktRnsONYNGeJHrmr+Xy176NOTmVw65LpaDlezeXnz1dLoa6XsNSxh+2iluMiv8pGh1w5yz9QrcuJL5NbeEH/UDC7UlnnomrlWshhjXGudlaecffeEEJbWodc097z1XIC6fh2EdLVPSeXkohltPmC56vfOfFieqAaLfLQMn9BDtfbGYzL7/3/1dI5tIxYMUD9jvuz3tH8LTvcxNMod7yth0XVKUHSRgqhnJSSs6ia593ZVurqST/Obw8/eFieeZEfcru5Eh6LgpntiEv6zSDfZd8eyrkWw2MKkjmbWkksrC7tuHrSeDXdR9EqGIY1gvMnselH7irCrP1gR7nQYrxaNTwIje1h1ILmf5ZiUjUOKDOshfdaQOPfrdD2RjjNFW+UEWnj9LwMHVEdsQKgLYM67whNBS9Sdc9jDutoP2BWPKfqnnNHyCp4karjX2TiYt2SMQ4EU55UiRsZJ82CpOeklJxF1Tzvzm7Zujo7lO6F31X7SDzLi8BbUEzmOZxAcip32ame9K2Ras9Uz4Vy0mRMc9a5xoUylPRYA80IQ6VpiJMyaCpjOsHr89UtFg03b6QUM0zVdEQ4pCmeqY4lAbKWmneq6p53VFcXq/vS157/0cN//4898Oe+9Zf/zJ/8tR/4oX/7oV/94r9+/vWvfvH1F3/9xcd/9NF/se/n//W333Pqv7rn7b/3y2e/9Ar/ny2/2abIeWz4pTGXSZ6UvtrCp6bTI4JuFBotKc/j3JAdak/R7OOTj1UyMQ6GxLlgDqRw3IMsz1Hfdse+j8kPtq0P3hUyP+1jykzR0jVZzgEsB8mczNWBF8kjeVqyxL3tjrsO2+OFenrbeLWwbyeHRNBaoO1LyV9icWxZ26XykoyQfWJm6ebG56v1148xm
rctN1bEvXo+4VWURGrhIz46A1TflClkg5tit+fr6658bUVWMAseGkDzLoaeASnQweBj0dvymV0Q+fV6fDLQcS33zui8/84TfeDC2PmNUJzz+PvzWiv5XO8Oa6G/1N73i8hHazvSAXvvKVZ575yt22HcShna9+7RkEdqvaGy43XSRwDZeCMMO2gJD4uvJNtsXtEgLXsC3gJYG9VOfSEKj8YradY9uldkhELzvrLlUioj9yApUbCfcL4pB5Ne5iY10wr+ArX33mma++sr7PNquC2efVBfNvUdlSfOG5t9Gi9gmqAEafIg69l6gSHQLXcCmAM2SJK30pg9E3iTu98ksIXMOzYJ4F8zZxfr7aWLhixjfCLmKP1isy7Yuuu2SvqFwcCO6y6aQKjTmrXuWgcuwuRaqAzjio3HIW8ZRjZw+AFssp71GZXJ3ctjdMXfjISblROfIrUgOOxn2AEp30AQ3dh1z2aBh+NZPk0l3W8/B59Sh9Hls26iW466W56+VHd+oURQcxOPvCc28tLWls11BJ58/z5ec4r75Bxstpiu3FWdLn1YNHPe8+y8Epp99qaW+iS9jesLu4nW2R2UVl9LlUwexRpTi6kyh9i8roZbadyC4q8S87dTcPirazXmU7AFg4HEgsJEo3KqLfLg6ZV+9KlUf3YcG8y3n13bWCSaokRu9QGfzHz33hmWf/Nn95UH7yydvf+PIzX3zubXO1rBLVqfJdVNJRpnJL65cQtchk8V3cTpSmyndRGX0uEQOeBfOyYB1IIV+l1Y3CFxZFMvpConSjIvrt4vdoXl2edG69xf0REZttpovMLu4t2qbwC8lNdmxTi/skWVwiVW6iT5Kb0reIbeUrZHEvPRFtlffI7qH0SXKzEAeqfIXWuRY+oLrh6NNMXTgGGe6k3InGDtsBjTEMhh4nbzSv1pNHv+l69Xf/hGsdsSWe5K7bvDqsV8c3S2+ou7/R0Y3lxbDmXY4uyL7n9Wo8bWrf8Mz+DQVtnGy9y9HN3jo5KiCL41/jpDywX9BaRwQvCUx6l6MCntzgLeueOehcoLUb173LXoe47mR074hHTkY3qjjM+KEEek51Vwzdtxtx4Lwad27lqCHBd/GS9epukawdZXCZVy+Perlzv6xgvoV59ep6db8wbrmXuBnPRAXoOLRxsvUuVdxmPJW+dce/xkl5IAuaHHf3iiOClwQmvctekYxu8JZ1z8Qd1HNo7cZ17/IsmE9QwZyP8/PVM7wqdKEr5Ctklr/T2kFl9CV5C5e+RhWahqyApVtNnEwV0OiTVEbvJQv6NtkDwC29c9pPZXJ2cinlk1y6/Ckemj7gCN6wHdAEN7bDLx+T0ZPM5I3m1W1W69XFD2wvZOR6OXA7Ccl5orJrTGbr1Upvb9xPMrNcr4777M54kay7M16EW86s1quV9HgjbDmz9S6VxQ277pnKNR8VlnXPVO71smCWzqIaHSE3HbkXZ3micuxLBzHyXoeSqSw9dl59pxZUzPhqxA56wkcDALoS2rjtSVcmt0e1ZQQt3be3igPn1Zupcpd8vWDavPruSsFMe1qpYY68KIzm/KMYzKtj++5EXWp8Wa+O7U7UpWlnyjNVP+Pf8rAd5WjVma13qewUxpFnKtccZSe3zHumcq+zKo38LJhDXw3e9Skm3KqEt9SuZNmp3fakK5Pbo9oygpbu21vF53ZeHV9QeXKXL9o3Qw8mP4DYbLtdLvjCLtyhB+oS3/rdEhm9Ijddx7Z0PepekUVk5CoxMw5a05QvZImUY7vi5+ere25MXTUGExtO+iCDngYu0cngPqAxtn5TKqMfPq9Oh1sOJPq8Wn+rzPjiN368PEq+kh965pk/eSW38xtTfPUV/sl39vhQjPw3kJHVCe/+POmtBDnOy/FVTGmtPb3p/EvLHJjSe7sunlfDC/nK3aV9g7gIl58Gg9dX/LIkfV6tT1AjvvzcW+lR/DDhu5559tXiBvFvzLt8+fm3cDx7VLybzieC36tbT4wvBIPen4wf/crdUBBm2BYQEl9Xvsm2uF1C4Bq2BbwksJfqXBoClV/MtnNsu9QOiehlZ92lSkT0R06gciPhfkEcMq/GXWzsFcwUKGjf/ZPFfZ9BwUzr1fxto0fxqNacU3z5uTdzeygkX3ju7fwDwDVX74T+5Ge7YA58KYD89s569Y9jiUdJXB596/nlERSu3C/AbZ+H8eWUBZP0mbbFs3eX9i5RJToEruFSAGfIElf6UgajbxJ3euWXELiGZ8E8C+Zt4vx8tbFwxYxvhF3EHq1XZNoXXXfJXlG5OBDcZdNJFRpzVr3KQeXYXYpUAZ1xULnlLOIpx84eAC2WU96jMrk6uW1vmLrwkZNyo3LkVySHLx23YU3ppIZB5j7kskfD8KuZJJfusp6Hz6tH6aO0P3klfImpbNqBY8Rnv5teFAdJ/KNxur9Ym8p+4ctfjH/f6Kfr47BerRY/vS2VdP95bN6bvtTUGmljxPSQhqFc/9GX+kmWpRUfEfqXHMw9+4o5Lhu+EFsVjxdYh7wg+Z0aWaLRRo0cR8LRYtPjZ5/Vekv68bQnqbRnePbV3MKbovjGT99ZvnF5FF/q9uG3f/H5d/xRTpu/+I2fpJv0J/iBNLW2G5wz/K9h5q9HOTn/8vNvm6slczvbIrOLyuhzqYLZo0pxdCdR+haV0ctsO5FdVOJfdupuHhRtZ73KdgCwcDiQWEiUblREv10cMq/eSk5ov/CHrHv4UuURafWqLJicPNs+/lu/L/7bH1vR05TYptYsj6xmy+8Q9Txp/Vm0cvrlL4YSp3ZLqzHN56vXC+Z29j9fbZNqPI9a3rIf1UsrapTvaVXry19k3cvtoDXjZVqLfZFKK+h1j44ypbqHR+G2P1N+CVGLTBbfxe1Eaap8F5XR5xIx4FkwLwvWgRTyVVrdKHxhUSSjLyRKNyqi3y5+j+bV5Unn1lvcHxGx2Wa6yOzi3qJtCr+Q3GTHNrW4T5LFJVLlJvokuSl9i9hWvkIW99IT0VZ5j+weSp8kNwtxoMpXaB1t4QOqS44+zdSdY5DhTsqdaOxQM72GNgB6/LzRvFpPHp0DuOLz1ctkWF5+IFBLLnVL2j+2g2//2z/kjDS2x5Pcdf48xeerw+Rcb6i3441eZs5s//E3MMPUiM33XC4M7okRXmjnGNda0oXkrC9ITqSXRRW1v/OmH8UGgraCjafl/hwKfg1DzPQ8atexilvDvtH3RMunnA/b8+De4lQ5eXj0rvnb9vPYVNkeLfj2c/YyvYU3O38gtrRFgBwVkMXxr3FSHtgvaK0jgpcEJr3LUQFPbvCWdc8cdC7Q2o3r3mWvQ1x3Mrp3xCMnoxtVHGb8UAI9p7orhu7bjThwXo27uHLUEKP/Fi+0g92C6RPpTvHMv7Xs/KcovHiGIvn2N/6Q007tUxRP9Gh2o299vrosmHqUJWjFVX5XP1/NabYKbyySPD+YFfs+qZ11z8opDoF2q/b8U530PGXxDOUU+6MC6LsKhzZOtt6lituMp9K37vjXOCkPZEGT4+5ecUTwksCkd9krktEN3rLumbiDeg6t3b
juXZ4F8wkqmPNxfr56hleFLnSFfIXM8ndaO6iMviRv4dLXqELTkBWwdKuJk6kCGn2Syui9ZEHfJnsAuKV3TvupTM5OLqV8kt79z/LQ9MFH8IbtgCa4sR1++ZiMnmQmbzSvbtPn1elL/MD8y0Zbvk4LLHX4Xzz6/tznq3flOlHZ45TY0tuR5TN/xRZtLG2kVXy+mms+ecml+ZE0TFTmvznUEDNcJBygtZGXXKo/ILe1Yj6E1NHl3DO1M4sFFksuJvuU2Pek25hXf9OY2v0b9SXa/Ruzt8FV6PxsHl+5G2/eT+NDITClx275lu/7qLCse6Zyr5cFs3QW1egIuenIvTjLE5VjXzqIkfc6lExl6bHz6ju1oGLGVyN20BM+GgDQldDGbU+6Mrk9qi0jaOm+vVUcOK8eZ54kL41eDLWCbb2A7+YF093KoO3ja9Rwlso2yuLpU1z7ks+f0txqTO/z1WsFs6nDucyyRpks69WxvSmo/Asaljs9yhLHPVhgrdx5fmqL25hXax98mX5ryS/7dc+qIve3fTxb71JZjHDWPVO55ig7uWXeM5V7nVVp5GfBHPpq8K5PMeFWJbyldiXLTu22J12Z3B7VlhG0dN/eKj638+r4gsqTu3zRvhl6MPkBxGbb7XLBF3bhDj1Ql/jW75bI6BW56Tq2petR94osIiNXiZlx0JqmfCFLpBzbFT8/X91zY+q2MZjYcNIHGXQb6PijycngGgyppfWbUhn98Hl1OtxyILFZr85/uwhvFlg6XNmns17dsjrh+efxt0Y/j0+8eSyMNdObm8d8xZsO8kk0KLM/ekwrxvJ9zBNga9EFuTA9mlo4mPwap/TF7WBj3u7nq3NLWJGOq9PLDdghV6ctMOVWiw4U9wF2sS0gJL6ufJNtcbuEwDVsC3hJYC/VuTQEKr+YbefYdqkdEtHLzrpLlYjoj5xA5UbC/YI4ZF6NO9pYF8xEm0JX/8GIH3+Df2hdr1fnlqZUcn+tZs+UynYFeymSy3q1Wrzvmy2YtS8F0Je4Q4smwJxIe8uyXq0WFUOV3Lpg2mde8L3LPl98/h09Wq1OzxBVokPgGi4FcIYscaUvZTD6JnGnV34JgWt4FsyzYN4mzs9XGwtXzPhG2EXs0XpFpn3RdZfsFZWLA8FdNp1UoTFn1ascVI7dpUgV0BkHlVvOIp5y7OwB0GI55T0qk6uT2/aGqQsfOSk3Kkd+RWoI0rgPUKKTPqCh+5DLHg3Dr2aSXLrLeh4+rx5ltV794OO3bPT2VuXpRd39Kie9cp0KH1/qFKndT1f4k3Jv8dPbUkkv16uRy98u2vKOHat9qJd8Kl+BSaNDazfefXbu89VaYOGe7QVcrVeDedlZLUobMGK6m1p4U+QJubfwGzk9pv/Exqk/8T316PL56sR0k/Kz2Zy08wan2575xsfLPD9f3VIZvcy2E9lFJf5lp+7mQdF21qtsBwALhwOJhUTpRkX028Uh8+qt9EmyvlR5VMH0P95RO8ujVqTly1ScJTFMhvNvM70ksnjm9Wex+fOfWDzzU8XcWTDb7H6+2n5ryWN5y/LJcJQmNlq7/UB5vXoh23Pw49P50Vz32IIyxXJ6fr56MxEDngXzsmAdSCFfpdWNwhcWRTL6QqJ0oyL67eL3aF5dnnRuvcX9ERGbbaaLzC7uLdqm8AvJTXZsU4v7JFlcIlVuok+Sm9K3iG3lK2RxLz0RbZX3yO6h9ElysxAHqnyF1tEWPqC65OjTVNdOR+QWuRONHebBTUkNeh47bzSv1pNH19JuHu3Zl/kDgRr5ccxkj3LB5JkvP/vd/CjZLMIs7CzCxJPc9fzzqEU/Dwam3KccWX5y97kvpGEi3ui0Z74YbETID/jZZeAvxNwewgt5JS+5LGwvyLiooufhj6d9lumx9uej05+vDuvVxeer9eiXn3/LHMNWvrCv3OX0+BNbo46fr06DUTrA50n/hfCH9p9D+/LX8DI7RYAcFZDF8a9xUh7YL2itI4KXBCa9y1EBT27wlnXPHHQu0NqN695lr0NcdzK6d8QjJ6MbVRxm/FACPae6K4bu2404cF6Nu7hy1BNjMUnOtCrEOada+Jma9Ilolcrwx0F8huXRcq2b0+w//OI3it9X8vPVuZwWxRM9GiuP5tXLo+sF0wsjS9CK6xnK9eqf2ETai+fDv33ui6PPV6c/EVra+YffsWAas1vdS/8xC/5nJlj39Olr7IMKoD0LhzZOtt6lituMp9K37vjXOCkPZEGT4+5ecUTwksCkd9krktEN3rLumbiDeg6t3bjuXZ4F8wkqmPNxfr56hleFLnSFfIXM8ndaO6iMviRv4dLXqELTkBWwdKuJk6kCGn2Syui9ZEHfJnsAuKV3TvupTM5OLqV8kurOp3loaiASvWE7oAlubIdfPiajJ5nJG82r2+RA8A+/8W/T/y0GIxlMqvWQXkj1v43J7ZoAV2FjOP9enjSbhebQqNHT91Euzqf98nPfwADLgyO2/KhWYDy+ctefv/wP3uaoPxYYf5j0LcVFFS/CwuN32gjSHsI+YXps6evVuhHKF+/hCzJpQq5vLD9fbeQOKWytG+1K+0+XLYEn9Ha/kTm1TmFDWLQr13xUWNY9U7nXy4JZOotqdITcdORenOWJyrEvHcTIex1KprL02Hn1nVpQMeOrETvoCR8NAOhKaOO2J12Z3B7VlhG0dN/eKg6cV7c5qnt4CLWC+wwKpn+++rmygrH08Wmxz8c2tfbQ3+yovVtIctHz4mm/K8yR/+RnvWCWqXKnjPUnB3+faI/afwPc49m7/h+24Oq0/36zDpRiL5KdZ84Fs37U/gzH21nZ5MzWu1QWI5x1z1SuOcpObpn3TOVeZ1Ua+Vkwh74avOtTTLhVCW+pXcmyU7vtSVcmt0e1ZQQt3be3is/tvDq+oPLkLl+0b4YeTH4Asdl2u1zwhV24Qw/UJb71uyUyekVuuo5t6XrUvSKLyMhVYmYctKYpX8gSKcd2xc/PV/fcqO6fjlh1UgMOcw1Q9GhyMrgPboyt35TK6IfPq9PhlgM9IWxPeIf+VjrDm+tu9De94/ES2s32gpwiDr2Xy00XCVzDpSDMsC0gJL6ufJNtcbuEwDVsC3hJYC/VuTQEKr+YbefYdqkdEtHLzrpLlYjoj5xA5UbC/YI4ZF6Nu9h4Fsy9VAGMnslpvf2xz9K+/KfLrGUXUSU6BK7hUgBnyBJX+lIGo28Sd3rllxC4hmfBPAvmbeL8fLWxcMWMb4RdxB6tV2TaF113yV5RuTgQ3GXTSRUac1a9ykHl2F2KVAGdcVC55SziKcfOHgAtllPeozK5Orltb5i68JGTcqNy5FekBhyN+wAlOukDGroPuezRMPxqJsmlu6zn4fPqXamX4K6X5q6XH92pUxTdqVMavUNl9JtkvJym2F6cu6ik926cxS9he8Pu4na2RWYXldHnUgWzR5Xi6E6i9C0qo5fZdiK7qMS/7NTdPCjaznqV7QBg4XAgsZAo3aiIfrs4ZF69K1Ue3c+CGb0g59X865vQ7n+5oxYlHWUqt7R+CVGLTBbfxe1Eaap8F
5XR5xIx4FkwLwvWgRTyVVrdKHxhUSSjLyRKNyqi3y5+j+bV5Unn1lvcHxGx2Wa6yOzi3qJtCr+Q3GTHNrW4T5LFJVLlJvokuSl9i9hWvkIW99IT0VZ5j+weSp8kNwtxoMpXaJ1r4QOqG44+zdSFY5DhTsqdaOywHdAY2wHQY+GN5tV68ug4G8FvzniShx7foNL5hrr7Gx3dWF4Ma97l6IJc9+EF33pm/4aCNk623uXoZm+dHBWQxfGvcVIe2C9orSOClwQmvctRAU9u8JZ1zxx0LtDajeveZa9DXHcyunfEIyejG1UcZvxQAj2nuiuG7tuNOHBejTu3ctSQ4DfnsEhGf4oK5k+ejx/VYfikuiiS0TNRAToObZxsvUsVtxlPpW/d8a9xUh7IgibH3b3iiOAlgUnvslckoxu8Zd0zcQf1HFq7cd27PAvmE1Qw5+P8fPUMrwpd6Ar5Cpnl77R2UBl9Sd7Cpa9RhaYhK2DpVhMnUwU0+iSV0XvJgr5N9gBwS++c9lOZnJ1cSvkk1YVP89D0AUfwhhqgGNvBjbEdfvmYjJ5kJm80r97M4ge2FzJyvRy4nYTkPFHRdSrgliP3k7zuuzNeJOvujBfhljPlmUp6vBG2nNl6l8rihl33TOWajwrLumcq93pZMEtnUY2OkJuO3IuzPFE59qWDGHmvQ8lUlh47r75TCypmfDViBz3howEAXQlt3PakK5Pbo9oygpbu21vFgfPqzVS5S34WzKEz5ZlKOkrQtDNb71LZKYwjz1SuOcpObpn3TOVeZ1Ua+Vkwh74avOtTTLhVCW+pXcmyU7vtSVcmt0e1ZQQt3be3is/tvDq+oPLkLl+0b4YeTH4Asdl2u1zwhV24Qw/UJb71uyUyekVuuo5t6XrUvSKLyMhVYmYctKYpX8gSKcd2xc/PV/fcmLpqDCA2nPRBBj0NXKKTwcOgp+M3pTL64fPqdLjlQE8I2xPeob+VzvDmuhv9Te94vIR2s70gp4hD7+Vy00UC13ApCDNsCwiJryvfZFvcLiFwDdsCXhLYS3UuDYHKL2bbObZdaodE9LKz7lIlIvojJ1C5kXC/IA6ZV+MuNp4Fcy9VAKNPEYfeS1SJDoFruBTAGbLElb6UweibxJ1e+SUEruFZMM+CeZs4P19tLFwx4xthF7FH6xWZ9kXXXbJXVC4OBHfZdFKFxpxVr3JQOXaXIlVAZxxUbjmLeMqxswdAi+WU96hMrk5u2xumLnzkpNyoHPkVqQFH4z5AiU76gIbuQy57NAy/mkly6S7refi8elfqJbjrpbnr5Ud36hRFd+qURu9QGf0mGS+nKbYX5y4q6b0bZ/FL2N6wu7idbZHZRWX0uVTB7FGlOLqTKH2Lyuhltp3ILirxLzt1Nw+KtrNeZTsAWDgcSCwkSjcqot8uDplX70qVR/ezYEbfRSUdZSq3tH4JUYtMFt/F7URpqnwXldHnEjHgWTAvC9aBFPJVWt0ofGFRJKMvJEo3KqLfLp72efX3vv7MM1//z/7FepQnnVtvcX9ExGab6SKzi3uLtin8QnKTHdvU4j5JFpdIlZvok+Sm9C1iW/kKWdxLT0Rb5T2yeyh9ktwsxIEqX6F1roUPqG44+jRTF45Bhjspd6Kxw3ZAY2wHQI+FN5pX68mj42wEvznjSR56fINK5xvq7m90dGN5Max5l6MLct2HF3zrmf0bCto42XqXo5u9dXJUQBbHv8ZJeWC/oLWOCF4SmPQuRwU8ucFb1j1z0LlAazeue5e9DnHdyejeEY+cjG5UcZjxQwn0nOquGLpvN+LAeTXu3MpRQ4LfnMMiGf0smKWTrXep4jbjqfStO/41TsoDWdDkuLtXHBG8JDDpXfaKZHSDt6x7Ju6gnkNrN657l2fBfIIK5nw8KfPqyYhnQu60TUVs/Y1J7mwvqQ1eFbrQFfIVMsvfae2gMvqSvIVLX6MKTUNWwNKtJk6mCmj0SSqj95IFfZvsAeCW3jntpzI5O7mU8kmqC5/moekDjuANNUAxtoMbYzv88jEZPclM3mhevZnFD2wvZOR6OXA7Ccl5oqLrVMAtR+4ned13Z7xI1t0ZL8ItZ8ozlfR4I2w5s/UulcUNu+6ZyjUfFZZ1z1Tu9bJgls6iGh0hNx25F2d5onLsSwcx8l6HkqksPXZefacWVMz4asQOesJHAwC6Etq47UlXJrdHtWUELd23t4oD59WbqXKX/CyYQ2fKM5V0lKBpZ7bepbJTGEeeqVxzlJ3cMu+Zyr3OqjTys2AOfTV416eYcKsS3lK7kmWndtuTrkxuj2rLCFq6b28Vn9t5dXxB5cldvmjfDD2Y/ABis+12ueALu3CHHqhLfOt3S2T0itx0HdvS9ah7RRaRkavEzDhoTVO+kCVSju2Kn5+v7rkxddUYQGw46YMMehq4RCeDh0FPx29KZfTD59XpcMuBnhC2J7xDfyud4c11N/qb3vF4Ce1me0FOEYfey+WmiwSu4VIQZtgWEBJfV77JtrhdQuAatgW8JLCX6lwaApVfzLZzbLvUDonoZWfdpUpE9EdOoHIj4X5BHDKvxl1sPAvmXqoARp8iDr2XqBIdAtdwKYAzZIkrfSmD0TeJO73ySwhcw7NgngXzNvF45tXfe9b/pwOIZYf//PVn/uibP3vrm19Kj3zPH0D87Pk/8lYP7InWf/clic7Qq1995plnvwdnvvl8eh7s/Dz34TWE/GnzVM//1C6vV/NPlVoQP80/zjNfev5NNHj7TNhF7NF6RaZ90XWX7BWViwPBXTadVKExZ9WrHFSO3aVIFdAZB5VbziKecuzsAdBiOeU9KpOrk9v2hqkLHzkpNypHfkVqwNG4D1Cikz6gofuQyx4Nw69mkly6y3oePq/elXoJ7npp7nr50Z06RdGdOqXRO1RGv0nGy2mK7cW5i0p678ZZ/BK2N+wubmdbZHZRGX0uVTB7VCmO7iRK36IyepltJ7KLSvzLTt3NgyJ20PJVWt0ofKENJKi1LyRKNyqi3y4OmVfvSpVH97NgRt9FJR1lKre0fglRi0wW38XtRGmqfBeV0ecSMeBZMC8L1oEU8lVa3Sh8YVEkoy8kSjcqot8uPp/zakyGv/TvMM911/yZrxHzasQfffOn/MIm0pgk9xzfrpPCSa/vz7eQ0/Wvvmr+6tef+fqr3g5/5kvf/Kn5T7+Jp/J96KndvxffAP8pp+vP4wGbnNt0Wm7HwhfE+DdDBW1T+IXkJju2qcV9kiwukSo30SfJTelbxLbyFbK4l56Itsp7ZPdQ+iS5WYgDVb5C61wLH1DdcPRppi4cgwx3Uu5EY4ftgMbYDoAeC280r9aTR3/3/Z+/9g8/eu0fXm/5wYcftT/YlYwneejxDSqdb6i7v9HRjeXFsOZdji7IdR9e8K1n9m8oaONk612ObvbWyVEBWRz/Giflgf2C1joieElg0rscFfDkBm9Z98xB5wKt3bjuXfY6xHUno1vne35cMMSB82rcuZWfBbNkWwxn3EvcjGeiAnQc2jjZepcqbjOeSt+641/jpDyQBU2Ou3vFEcFLApPeZa9IRjd4y7pn4g7q
ObR247p3eRbMJ6hgzsfj/nw1V6e/9M23zDmvXtao81p0fB6cibhGvbidIl+vNgex1blL7fjie1975pmvfY/teCBPyz97E/Y1zKrZTr76NU2neWS0XxW60BXyFTLL32ntoDL6kryFS1+jCk1DVsDSrSZOpgpo9Ekqo/eSBX2b7AHglt457acyOTu5lPJJqguf5qHpA47gDTVAMbaDG+My/EruYzJ6kpm80by6ze///Y/+6z/+05tvv+v5jsvfvPLam++8l3fTi4qulwO3k5CcJ+qTBz94+c+//f136e++8u2/fuEHaLf0fRr3k7zubZZP3mS8SNbdGS/CLWeav/fKnT+/8wZdyfZ4I2w5M/jbr33ruZdf95aKyuKGXfdM5ZqPCsu6Zyr3elkwS2dRjY6Qm47ci7M8UTn2pYMYea9DyVSWHjuvvlMLKmZ8NdRlKybcqoS31K7kgKF225OuTG6PassIWrpvbxUHzqvbvHHB/PRnVk/socML5qfvxidvEnVp0p2oS9POlL/9fStx9qVaWIKmnRn8LJjuZ8Ec+mrwrk8x4VYlvKV2JctO7bYnXZncHtWWEbR0394qPrfzarygzp+Cc16NOa+HT3rNw4R5cr3aWjp/Ch734R+E+3o1p9tVcLaP/XloxB998x18YZcOntou4sIDdYlv/W6JjF6Rm65jW7oeda/IIjJylZgZB61pyheyRMqxXfHz89U9N6auGgOIDSd9kEFPg5joZPAw6On4TamMfvi8Oh1uOZD42j/8CINC+M/e//lb77z3wYcf/fbDexgs/s3d//KDf/wntcT9xdfv/PWfP7fkt1752cefvPEC5V3u48NE7Pkzm/oW37s8g+2TT7I958uv5zcl0t9KZ3pzOa/+DufVavE3vePxEmqYJ8ap5Y3vPHfnlbfyPu0FWZBD1Rc5VK3acWjx9Rf5Q8aWEXlDvWWDTrn46bt3v/3XOIR5aPn2999fWsZcCkIkXqM959Jiz+nv5rvWogJCYo/KN7kUtB++jOf8zg+9peR/549h+Zd330/t9rPpJ7n7HlvQ/A5Gz3iSVFpTC78rt7RsC3hJYC/VuTQEKr+YbefYdqkdEtHt9i+8oUpE9EdOoHIj4X5BHDKvxl1sPKZgsiSmq5pppe9HVgNROkPBRFFDOXq9fYYjCmaeV6vlwoL5Ce7ob3//vaWFpeMFlji1qABGL5lKXNWOQ4tPXsEsKlJof8+e9jWcCmthuRPxdeWbxJ2e3IuzWlq+f/fOnz9359V3cst7r6bSzXzxv7P5LJhbJKLb7V94Q5WI6I+cQOVGwv2C+JzOq/X33s9qBh32Ga5Xa1k7RVqRRsS1a0T4fLU+j/2l59/kQ2n6zXeimGznP/zmDvaH30sU7rNr7F+0b4RdxB6tV2TaF113yV5RuTgQ3GXTSRUac1a9ykHl2F2KVAGdcVC55SziKcfOHgAtllPeozK5Orltb5i68JGTcqNy5FekBh+N+wAlOqmBjrkPuezRMPxqJsmlu6zn4fPqUX7/7zlM/Pkvf/1//Yf/BIcgX/v7H/3H//S3f/PKa2h56f/9z9gtvii8BIwIfQqdT0h0DRPpaUlZ7X56ydfv3PnWtzF9VQvyjRe+fUcjpNSymRvr1W3Gyykzr+GkljdeeO7O3bfM24uz4bJe3T5qieEvZpX5y95NFNhZfsHoDedKKzxpH7ZgPIcv2aIbdpY8hEZgHHoq37ffDphzsIgfuFNkdtGST2vH8icsksNHb+eY787dd5ZGls13XvtLa3THS+ZYk/ujwGJwyRabeLOhKNSbVEYvs+1EdlGJf9mpu3lQtJ34Kr3TD76QgwfT2hcSpRsV0W8Xh8yrR3lZwdSKNKfQGwUzlSO1H1wwN9ar20Qd6zD9FiC15N9vokCx0dvlLfN6dfuo5RNWMFOZgvNYd+6+bZ5//fft1zBVZQtKkyT7Lipjcc6NId940R71asl8yOk9ptnyxLNgXh6sAynkq7S6UfjCokhGX0iUblREv118LufVP+OnmtPnq/M+fI1pXq0XHNaiOUm2fdgeOVyv5vpz/nz1Zzb91no1P1P9NU69i+cpPkdtTR2m58QXxPg3QwVtU/iF5CY7tqnFfZIsLpEqN9EnyU3pW8S28hWyxJeeiLbKe2QnUfokuVmIA1W+QutcCx9Q3XD0aaYuHIMPd1LuRGOHYXATGQZDj5M3mlfryaNr+eXNd97DuDD+AHlZpmoXMfSxNerY7rNc+rL8Uq9X55OMZ3jhzst5co5v+Ra+9GHiJ7amoQFEmJbjab3RljvCvNpWLTjVp8eLYc2d5Xo1LkIMmLRezbXob73yBo6i437nh3EfNd554Q5npHhati8/oZ7BFjRSC+bq2IeHU0v6LjCNlv76Wy/6SWC734DvYtj0nRfvcKzJJj7Dd17kGfbllzwUs/GfFmfSmjPC58n+bATJRY+wXv36i8tSDNeLtP9ScKKT8sB+QTP3HyCVvmWfSJyBsI8XUms0x8T72y9/59vf5TiRj7736re/+50Xy+WXUQFPbvCWdc8cdC7Q2o3r3mWvQ1x3Mrp3xyMnoxtVHGb8UAI9p7orhu7bjThwXo37vfLLCmYoiUu7/uS7etSmvst69WzBtGKoIrBMy+uCGX+HaPtrerxWJKMnFuvVKIZ5vdoK4Itv4EfVcVNBZs+OquKN6cdme1nnn+SCaVSZcid5Kjhdz/vgX+NeDEuy0Mlxd1eu4ixH5PbE/44eiuvVfBAt6IxSYdQ+wFkws5NnwVyLz+28Oi1Wm+d9huvVy1Q8ngk6v0V/sL0sgyf/0vP2h9zebtNvrWN/6Zv8M3LsReqS8hXpPG3+6avfs32+9/Uv2fchfE1bX+wJXegK+QqZ5e+0dlAZfUnewqWvUYWmIath6VYfJ1PFNPokldF7yYK+TfYAcEvvnPZTmZydXEr5JNWFT/PQtIFI4Q3LgU7lxmX4ldzHZPQkM3mjeXWbWn754MOPfvb+z/Fl/oF//stf//bDexAME62dL0oJt/Xqn8ntJKA9T6GnPl/NYeIP3njBx4X43juv/EB/0Yd93njBBnx4Kg4E3cNTvfX9Fzhiyy38E/TlKCnjRbLiy1jTL0Jbr36bbkM69084SNUyC8dhmooX/vb3v5VGfhxEumuUZs+gMSKGpGkftmMcx6NodSV7vLnwDHfu/hDjJK23YBT18us+nkO88R1v/5SLw3L8JBrt0fmNWlHBE4JKHiitV7Pd/9bx5Tf4Z9svY8Lte44KTixHoLLvPk5t2pmpYPJ3GVpySe0oqmy09erPbPnltTfu3sG4kA/ih3zxDYxx/5LLL74/kYqzPFE59qWDGHmvQ8lUlh47r75TCypmfDXUfSsmPHb6tSuhjduedGVye1RbRtDSfXurOHBe3eZlBTOuV6eCacWNU+ipz1evFkyuZr/CP7HRbyTlbcFcah2nvvkoKVGjptx/WtYrljU7kJc4Fsnk9pOEQqriGcsdapcXQza6P6EF09wq0tvmKDugPS1KUt7N20eeqRy4//VQ045EhMKoRnzpvyZg6WbzWTALXw3e9Skm3Kq
Et9SuZNmp3fakK5Pbo9oygpbu21vF52FeXYTmycsfdWNK7GvRDE6A+5+vLv542+Jrac15+Zz2V1/lM6TPV9ft6fPVxZ+UW2glHE/nf+xt8aWvvor9X/3e8/b35Ir0Hzbr/e4nUJf41u+WyOgVuek6tqXrUfeKLCIjV4mZcdCapnwhS6Qc2xU/P1/dc6O6djpi1UkflNB9gGKPJieDawCkltZvSmX0w+fV6XDLgUQts2BEiGFibP/Fr36jDwqO1qtTR86+/Eds92Ec90kDr/XPV6Nd5FgQ+3McZqNG28f5Zphs+wKI3mK02HjuBxwj+sKItRujv/8/fvQ/f/Cj//k/fmrt8XIycmQ2+Hw1h6H+2Wm0p2UZLndwJGftg89Xc7z1fY7Y7LvSxwXhvgiDH4z78Lg+7UQLb6h0EnRzkVxOwWBO/JQjOQy57Pl9+SUz/YFlXHJ5/+6dZSkmsFqvtu/1NzTtrwJCYo/S8yldHq0YCho//ocfJpXBDjXAtZWUpZ0/IcesVk65/PLa+2Jax7ZhYlh+adkW8JLAXqpzaQhUfjHbzrHtUjskotvtX3hDlYjoj5xA5UbC/YI4ZF6Nu9h4TMEMS8dMlcSZ9erM2YK5lNy2YKZ5dZ4YW7txR8Fc+3w1n/k1TbnxKGbC+AFYYq0C2P6Dz1enovrEFkzQJvk2Oc/tfNrcwnIn4uvK3/5fPKU/+F+sV2qpiDs9e7le3TKtV6vl7ffeeMf/4xSpZp4Fc5tEdLv9C2+oEhH9kROo3Ei4XxBP+7z6oOBM+OucLqfQlNuWsqsTzS3+MeW4bqLrv/stt0vK/5da8fLymPGNsIvYo/WKTPui6y7ZKyoXB4K7bDqpQmPOqlc5qBy7S5EqoDMOKrecRTzl2NkDoMVyyntUJreuccIbqjsfOyk3Kkd+RdpwpHUNUwonNdAx9yGXPRqGX80kuXSX9Tx8Xj1KLb8gtcyixEtQO/w/vmTLL2r3l9/9fPUyhfZxG9vTmsnHr2OQF8eUNkC0Pe+8gdEePQ8TLbFDGoY2jTYo1JN/69t32NKsveT8Pz/53xzQ2JgmXk6ZPtZcWmy92j5f7VNutac/F+SajC+h8FvytByuffzHtqGnWjRis5WT9KjyRXyj1kDwqO2Tx5fcX7Tll7c5qvvWK++9/iI9jeewA/fBgCk9J5ea0egrHul70aKbOjPtoMrgu5nzh8SBOkUm8ef/Xzql/5vjuepRp5LuY9alsUq8Flt+0ZcqmFxgSSvYbLHll/f514zYkytOGDviVZ8fF5yItrNepXf6wRcOBxILidKNiui3i0Pm1aO8rGB2P1+dp9ChYKZyFH93aftsFkw+5EXA9rTGsmDak/NT2WgM31jmZsH0n3Zp8d9v0m36itpIt09Ks8TZbyptesxvybNZuvbxH9san9SC6WXKi2RKe1pMVekoU7ldnvmLd/5RpxTXSPuoqDTPv/QcpJ2BqmA6+RDL5lkwLw/WgRTyVVrdKHxhUSSjLyRKNyqi3y5+j+bV5Unn1luwsT/q1n+d21ri/4NaLdPkZ6T5OerUwtVy+9Nxb8Fmm+kis4t7i7Yp/EJykx3b1OI+SRaXSJWb6JPkpvQtYlv5ClncS09EW+U9WndS+CS5WYgDVb5C61wLH1DdcPRppi4cYxd3Uu5EY4dpcFMxDIYeJ280r9aTR//BP/7T39z9L8hv/YeXXvuH13/+y1//4le/ee0ffvTCS3+rBZnRevUBn69mO77L/hszaF+GiZyE+xRdjfpevaE+HsVwjUe0A3H4hf39AogXA/29sPwS2xM1FvSWYjmFB1rWojngwwvk/jYEVLv2wdPaCM/GZHB/Tnj6LrqNh9LyS7qJ+MNr+YXtGnTKsQv3wTPgaeEYHum/vmOLML78om+3PZflF7W//Dpa4gKLntM8D93Y8g7Hpm/IQTw5x690/PN9iuLTrlf3C5q5z6tT6Vv2IXkyedLofNDaf/hdm1SnfYC08IL98399p15+GRXw5AZvWffMQecCrd247l32OsR1J6N7RzxyMrpRxWHGDyXQc6q7Yui+3YgD59W4xyu/rGCGkri07/589bBg2iRcU3T+iXgqnip3S8G0ebX9MY7256MsNWlP962CmeqMtaAwsqzZH/jY56s15bbePK1Xc+UcNcr3Xwqs2q30levVT1zBTGXK270MLtP13I5/jZPNejWLmxx3d+XlevXSnlh9vtoof/u1v9Rnrc+CmZ08C+Za/B7Nq9eDn7UOET9r7bRNRWz9jUkOcq07hH/WWhfWkFeFLnSFfIXM8ndaO6iMviRv4dLXqELTkNWwdKuPk6liGn2Syui9ZEHfJnsAuKV3TvupTM5OLqV8kurCp3loaiASvWE50KncuAy/kvuYjJ5kJm80r+7mm2+/+1//8Z/+/X/8fyC//fCjDz78CJL/ylHLMnIl/KDPV1PyODKNt5ZRJp6K4z8bNvHPGtOiNL+FO4QnX75lyXiRrDpHWjb+45c24vT1Z65F28Cr9Lg/XftzB02wbSaJnyd/XJA704uPC+J1vMdVF/sv+tjE3p1HiTfXsoSicSpvWB/PabzIgR32tEUY/17sw50xonoFe/FO122eXT9tatcPqT2jjwtOLEegsu98QoyDU7udvbu2Hs7R6vKQF09vZFFN7f5xQfukoMaUbLdhIsaLvg+RirM8UTn2pYMYea9DyVSWHjuvvlMLKmZ8NWIHPeEaBqildiW0cduTrkxuj2rLCFq6b28VB86ru3lBwTzo89WUTsG0P7Hxz1fjqXy9ui2Y+l415m9ZEjVqzjn1LYqkyhce5dE5p62cRTXtk0rcsritysZG+pNXMHOZiu0oO3rI6lJ+yNtHnqkcePX5av6cOgqrEogSuny+Go/iZ0NtkfuE/CyYvt0M3vUpJtyqhLfUrmTZqd32pCuT26PaMoKW7ttbxed2Xh1fUHlyly/aN0MPJj+A2Gy7XS74wi7coQfqEt/63RIZvSI3Xce2dD3qXpFFZOQqMTMOWtOUL2SJlGO74ufnq3tuVNdOR6w66YMSug9Q7NHkZHANgNTS+k2pjH74vDodbjlQ5Lvv/wLDxLfeee/Nd96teMl6NQd8HNBoTw2Ywp6kDRPpy0nOyy82Rbc/0sM+r9t/qgfjxTdewWgj/fGejSZ5RPv7QwQdgy0Mx+IF4B4voS7tB/YntxGb2jkMffHl9BfsHIFZe/zP2L78OoaPvqbN5Rdr1H81B89jl70Ny5D60CBHimm37/wAY13sY+NU3wdjJp6EcAPm5Zd0AwIcz2n5JXzvD+174z42/LKnWcjFEP8BmBx3st1+QaD0dWwVEBLfWfkmraBxiJwPZINsW0jx/1t1OKKSA0GMAtvGsPwCT7RhYlh+adkW8JLAXqpzaQhUfjHbzrHtUjskopeddZcqEdEfOYHKjYT7BXHIvBp3tPGYghlXpHP7sl4dCmaeGMc9wdWCaVN03S93Xvc9OwUzz6tZGO1bMLVORXJXwdTvE/OT53a+zJdf8Lue97u1g0uZeiGVOE7J69r1BBbM+Eot8TxoTz+kJ//Ah6UvFsnom8
SdDjbFGdPg9/kDo/pxn6o8vvzGw/fff5v/AQtvyUXyLJhbJKLb7V94Q5WI6I+cQOVGwv2C+NzOqy+IeA7iCSlPNLf4x5Tjuom+XEbRuQmumPGNsIvYo/WKTPui6y7ZKyoXB4K7bDqpQmPOClg5qBy7S5EqpjMOKrecRTzl2NWFKKe8R2Vy6zgnvGHqwkdOyo3KkV+RNlhp3YYvpZMa6Jj7kMseDcOvZpJcust6Hj6v3szv//2Pcr4WHINFPBpfVPFiE3WKojt1SqN3qIx+k4yX0ww5pLO/n2RLewFvUknv3TiLX8L2hq35NodW/BvF3qPb2RaZXVRGn0sVzB5ViqM7idK3qIxeZtuJ7KIS/7JTd/OgaDvrVbYDgIXDgcRConSjIvrt4pB59WbmCok8C6bTpq/589Vo3EclHWUqt7R+CVGLTBaveRbMIZXRy0TJqnwXlfiXnbqbBwXrQAr5Kq1uFL6wKJLRFxKlGxXRbxe/R/Pq8qRz6y3uj4jYbDNdZHZxb9E2hV9IbrJjm1rcJ8niEqlyE32S3JS+RWwrXyGLe+mJaKu8R3YPpU+Sm4U4UOUrtM618AHVDUefZurCMchwJ+VONHbYDmiM7QDosfBG82o9eXScjeA3ZzzJQ49vUOl8Q939jY5uLC+GNe+yuCBtvVpr0aMLVT684FvP7N9Q0MbJ1rtcbnAuy/jHCIv25OSogCyOf42T8sB+QWsdEbwkMOldjgp4coO3rHvmoHOB1m5c9y57HeK6k9G9Ix45Gd2o4jDjhxLoOdVdMXTfbsSB82rcuZWjhgS/OYdFMvoTUzDzsnzdXruXuBnPRAXoOLRxsvUuVdzoZ8Gs3eAt656JO6jn0NqN697lWTCfoII5H0/NerUingm50zYVsfU3JrmzvaQ2eFXoQlfIV8gsf6e1g8roS/IWLn2NKjQNWQFLt5o4mSqg0SepjN5LFvRtsgeAW3rntJ/K5OzkUsonqS58moemDziCN9QAxdgObozt8MvHZPQkM3mjefVmFj+wvZCR6+XA7SQk54mKrlMBtxy5n+R1353xIll3Z7wIzX29umn3lGcq6fFG2HJm610qixu26/6Xk/bxaXwJKtd8VFjWPVO518uCWTqLanSE3HTkXpzlicqxLx3EyHsdSqay9Nh59Z1aUDHjqxE76AkfDQDoSmjjtiddmdwe1ZYRtHTf3ioOnFdvpspd8t/3gun/r4S2XSnPVNJRgqad2XqXyn6RjH4WTHmicuwoRxvOelV7prJ0xIZTCypmfDV416eYcKsS3lK7kmWndtuTrkxuj2rLCFq6b28Vn9t5dXxB5cldvmjfDD2Y/ABis+12ueALu3CHHqhLfOt3S2T0itx0HdvS9ah7RRaRkavEzDhoTVO+kCVSju2Kn5+v7rkxddUYQGw46YMMehq4RCeDh0FPx29KZfTD59XpcMuBnhC2J7xDfyud4c11N/qb3vF4Ce1me0FOEYfey+WmiwSu4VIQZtgWEBJfV77JtrhdQuAatgW8JLCX6lwaApVfzLZzbLvUDonoZWfdpUpE9EdOoHIj4X5BHDKvxl1sPAvmXqoARp8iDr2XqBIdAtdwKYAzZIkrfSmD0TeJO73ySwhcw7NgngXzNnF+vtpYuGLGN8IuYo/WKzLti667ZK+oXBwI7rLppAqNOate5aBy7C5FqoDOOKjcchbxlGNnD4AWyynvUZlcndy2N0xd+MhJuVE58itSA47GfYASnfQBDd2HXPZoGH41k+TSXdbz8Hn1rtRLcNdLc9fLj+7UKYru1CmN3qEy+k0yXk5TbC/OXVTSezfO4pewvWF3cTvbIrOLyuhzqYLZo0pxdCdR+haV0ctsO5FdVOJfdupuHhRtZ73KdgCwcDiQWEiUblREv10cMq/elSqP7mfBjL6LSjrKVG5p/RKiFpksvovbidJU+S4qo88lYsCzYF4WrAMp5Ku0ulH4wqJIRl9IlG5URL9d/B7Nq8uTzq23uD8iYrPNdJHZxb1F2xR+IbnJjm1qcZ8ki0ukyk30SXJT+haxrXyFLO6lJ6Kt8h7ZPZQ+SW4W4kCVr9A618IHVDccfZqpC8cgw52UO9HYYTugMbYDoMfCG82r9eTRcTaC35zxJA89vkGl8w119zc6urG8GNa8y9EFue7DC771zP4NBW2cbL3L0c3eOjkqIIvjX+OkPLBf0FpHBC8JTHqXowKe3OAt65456FygtRvXvcteh7juZHTviEdORjeqOMz4oQR6TnVXDN23G3HgvBp3buWoIcFvzmGRjH4WzNLJ1rtUcZvxVPrWHf8aJ+WBLGhy3N0rjgheEpj0LntFMrrBW9Y9E3dQz6G1G9e9y7NgPkEFcz7Oz1fP8KrQha6Qr5BZ/k5rB5XRl+QtXPoaVWgasgKWbjVxMlVAo09SGb2XLOjbZA8At/TOaT+VydnJpZRPUl34NA9NH3AEb6gBirEd3Bjb4ZePyehJZvJG8+rNLH5geyEj18uB20lIzhMVXacCbjlyP8nrvjvjRbLuzngRbjlTnqmkxxthy5mtd6ksbth1z1Su+aiwrHumcq+XBbN0FtXoCLnpyL04yxOVY186iJH3OpRMZemx8+o7taBixlcjdtATPhoA0JXQxm1PujK5PaotI2jpvr1VHDiv3kyVu+RnwRw6U56ppKMETTuz9S6VncI48kzlmqPs5JZ5z1TudValkZ8Fc+irwbs+xYRblfCW2pUsO7XbnnRlcntUW0bQ0n17q/jczqvjCypP7vJF+2boweQHEJttt8sFX9iFO/RAXeJbv1sio1fkpuvYlq5H3SuyiIxcJWbGQWua8oUskXJsV/z8fHXPjamrxgBiw0kfZNDTwCU6GTwMejp+UyqjHziv/vDDe/cf4KXocMuBnhC2J7xDfyud4c11N/qb3vF4Ce1me0FOEYfey+WmiwSu4VIQZtgWEBJfV77JtrhdQuAatgW8JLCX6lwaApVfzLZzbLvUDonoZWfdpUpE9EdOoHIj4b438C6g4uHO19PuCnwLvvEsmKgAV1AFMPoUcei9RJXoELiGSwGcIUtc6UsZjL5J3OmVX0LgGp4F8yyYt4nz89XGwhUzvhF2EXu0XpFpX3TdJXtF5eJAcJdNJ1VozFn1KgeVY3cpUgV0xkHllrOIpxw7ewC0WE55j8rk6uS2vWHqwkdOyo3KkV+RGnA07gOU6KQPaOg+5LJHw/CrmSSX7rKeR82r8V7eu3f/o3sPqudfT70Ed700d7386E6douhOndLoHSqj3yTj5TTF9uLcRSW9d+MsfgnbG3YXt7MtMruojD6XKpg9qhRHdxKlb1EZvcy2E9lFJf5lp+7mQdF21qtsBwALhwOJhUTpRkX0GwVKECoebgn9bLsC34JvPAtmTtSufUR1qnwXlXSUqdzS+iVELTJZfBe3E6Wp8l1URp9LxIBnwbwsWAdSyFdpdaPwhUWRjL6QKN2oiH6jQAm6uGDOx/n56oLYbDNdZHZxb9E2hV9IbrJjm1rcJ8niEqlyE32S3JS+RWwrXyGLe+mJaKu8R3YPpU+Sm4U4UOUrtM618AHVD
UefZurCMchwJ+VONHbYDmiM7QDosfCQeTUCl9f9Bw9+++E9PGd+8ug4G8FvzniShx7foNL5hrr7Gx3dWF4Ma97l6IJc9+EF33pm/4aCNk623uXoZm+dHBWQxfGvcVIe2C9orSOClwQmvctRAU9u8JZ1zxx0LrHTcTeue5e9DnHdyejeEY+cjG5EZZj0Qwn0nOquGLpv1wK1DhUPZ96/3hn4xrNgDrzLthjOuJe4Gc9EBeg4tHGy9S5V3GY8lb51x7/GSXkgC5ocd/eKI4KXBCa9y16RjG7wlnXPxN3ac2jtxnXv8iyYT1bBnIzz89UzvCp0oSvkK2SWv9PaQWX0JXkLl75GFZqGrIClW02cTBXQ6JNURu8lC/o22QPALb1z2k9lcnZyKeWTVBc+zUPTBxzBG2qAYmwHN8Z2+OVjMnqSmTxwXv3xxx9/xBWY+9Uhuln8wPZCRq6XA7eTkJwnKrpOBdxy5H6S1313xotk3Z3xItxypjxTSY83wpYzW+9SWdyw656pXPNRYVn3TOVeLwtm6Syq0RFy05F7cZYnKse+dBAj73UomcrSY+fVd2pBxYyvRuygJ3w0AKAroY3bnnRlcntUW0bQ0n17fNy/z1qHiod3z5t2Br7xLJib7kRdmnamPFNJRwmadmbrXSo7hXHkmco1R9nJLfOeqdzrrEojPwvm0FeDJSvFhFuJ85balSxvtduedGVye1RbRtDSfXt8XF8wJ+P8fPWW2+WCL+zCHXqgLvGt3y2R0Sty03VsS9ej7hVZREauEjPjoDVN+UKWSDm2K35+vrrnxtRVYwCx4aQPMuhp4BKdDB4GPR2/KZXRj5pX417A+3T//v2PPuJIEa80H+gJYXvCO/S30hneXHejv+kdj5fQbrYX5BRx6L1cbrpI4BouBWGGbQEh8XXlm2yL2yUErmFbwEsCe6nOpSFQ+cVsO8e2S+2QiF521l2qPkR/5AQqNxLu86ExIgLXr57zgsA34tvPgnkFVQCjTxGH3ktUiQ6Ba7gUwBmyxJW+lMHom8SdXvklBK7hWTDPgnmbOD9fbSxcMeMbYRexR+sVmfZF112yV1QuDgR32XRShcacVa9yUDl2lyJVQGccVG45i3jKsbMHQIvllPeoTK5Obtsbpi585KTcqBz5FakBR+M+QIlO+oCG7kMuezQMv5pJcuku63nUvBqBuwbDJY4U793/7Yf3Prr34P6Dj6vDVamX4K6X5q6XH92pUxTdqVMavUNl9JtkvJym2F6cu6ik926cxS9he8Pu4na2RWYXldHnUgWzR5Xi6E6i9C0qo5fZdiK7qMS/7NTdPCjaznqV7QBg4XAgsZAo3aiIflTgXUTZscrGMSJqnX6qiwPffhZMJWrXPqI6Vb6LSjrKVG5p/RKiFpksvovbidJU+S4qo88lYsCzYF4WrAAp5Ku0ilH4wqJIRl9IlG5URD8q8C6i7BxYMGfi/Hx1QWy2mS4yu7i3aJvCLyQ32bFNLe6TZHGJVLmJPkluSt8itpWvkMW99ES0Vd4ju4fSJ8nNQhyo8hVa51r4gOqGo08zdeEYZLiTcicaO2wHNMZ2APRYeOC8GoH7AucCNdT+kzwcLOL5zzzzzDOf9tQAEZXtwDHiWTDPPPPMz2XeomBuxiOaV3/yu4e//ezBLx9+9H8efnhB4hvx7XiSeErkTttEfvzZww+uOKIS3/7Bwwcff8Y/G5gM7IxvueC4OtYDzlU5D/cc/cbL+ODhvmP5IR5+wm9nDIkfQsQhfvPw/gUvJ+cvHn6EZ7jPeatNlZXyAe9/+gm+Bd9YPdVm+rE42eRT9WiTZE9MUHGsj+ePlZ4fk0F+r7H93XPB+5/seP42ecRPUQ8wC+ZRj0tOaEtvaJPeOP1upuLtGgWpxM7ZNxPljzftcYE7Ee/0xx9/bIPFex+dccYZZzz9gWqGmobKhvqGKuf17uo4C+YZZ5zx+YsbFcz1eBTzakyJf/7ZMq+4xn+LeaudmnVi9njNUVrHFB3zWDy3wdeio4uHHBdPwkmtTZ6NhWMr/+Dh/cueH4mZHp4EM1g8EznwY08jng1zWjwtyKMkrz5fjZ/tgievHE+COS2ekByvWmPKevnzc9o8XKmWX/z8rePttmfmpBdPXjvJKa67TXf90eRkcMxmc0vrN6Uy+uHzaoTfL7yucE7OOOOMM576YL9lgfrmle6gOAvmGWec8TmL2xXMlbj5vPrXn93Pc4NDEk8Yz49cDTpxhx9R+euHOC6PYOwEdqi+5eLEU9llYEvTNpF2SX79sX798B6fypNXnWjXIB07VN9yff7q4T1c5DiEZ+PYofqWixNPpckzZ7nM2q88Fp/fJs94Nmbtn/7q04NPIJ4Qc9GrkxPajttEt3Sf+ib3ybA9ukyM8WV2Zeku63mLeXUOuzvPOOOMMz4P4XXtZuGHOeOMM854+sPr2iOM286rD1ypjr6yan34SnX0Dz57gKPgOCLmt5mHH9dWrfnUidzIr1mpjt5ftTbe7jTGVeuKh6xURy9WrUsecqyVVesDV6qjd1atbdIbicYObbrbEnPaJ4E3nVefccYZZ5xxxhlnnHHGreOG8+pPfvcwzwcOz/xZa6dtPv7shkdUdj9rjcZqt0Ny+ax14IOHRx6r+1nrYw/R5vJZ68D7n35S7XZINp+15qT6wGPxs9Y20fU0v//Jx9VuB+a9Yz5rzQlt6Q1t0hun381UnEvNpS9L0Ng5+2ae8+ozzjjjjDPOOOOMM57quOG8+refPciTgWvW6Lr+wcMHOATmtJG//WyZz1z8zOuO4+Jgtmxsq9bmNzouj1WuVMPRGPe50rl6bNNpMq1a/yb8kfllT7vueH4ecZlU0z+4zdWCY+Eomk5nHngOf4NJdPP56puewN98yr8/tyNyAuxOcorrbtNdfzQ5GRyz2dzS+k2pjH7Oq88444wzzjjjjDPOeKrjhvPqa/4j0puJJ9dRtFJtuO0RlTgEJ7hl3Oi4eFrMdZlar7Y89lg8RLlejVnojV5Ozl88/Igz6rxebXnxfzF7PfG0nPEuk2rmgcey588r1TwK8kavRYknx4z0iuSEtuM20S3dp77JfTJsjy4TY3yZXVm6y3qe8+ozzjjjjDPOOOOMM57quOG8Witsu9bidjkOgTltZHz0du4r1eHz1e0+RzkOgLmukRug3ecaR2JOi6cmtWp99CG6zrn0sl5NVjsc6Dad9pVqsd3nGud0Oq1Uy/Xo3ueZd7wEzFe5Fp1WqiPR2KFNd1tiTvsk8JxXn3HGGWecccYZZ5zxVMcN59V5MnCj1Bq10zbVDjdKTnPLqHY4MDHLxaQ3strh+uTT2nQ6s9rhFpmn05nVDgcm5ro2nRY5ta52uDIxm8XTeppXOxyemItenZzQlt7QJr1x+t1MxbnUXPqyBI2ds2/mOa8+44wzzjjjjDPOOOOpjkc0r75sXW7dcQjMaSNXv+uf/+zZP/hXP5x65nXHwbh2HD5ffdnzzDiOwMmuTarleYe8T+l8mf/iX/7Bv3j2r14f7lM45rR4cjKtWud9kBvf/t5ffeFf/ulL6/s0juQR
l0k1feYbL3McRdPpzM3vfekv7BzipY33yR5XquV5h7zPqvuV2bQP3dalOYHHPNad5BTX3aa7/mhyMjhms7ml9ZtSGf2cV59xxhlnnHHGGWec8VTHU7xeraNopdqwfsRi9nJNcoJbRrXDgYm5LlPr1ZbVDnX+8E/zbHAy+eQ2nRYxC612WEvNq6vGieSMOq9XW1Y7HJic8S6Tama1Q518UX/8Z+/RX3/pjzfPJ56Zqam1ebXDVu6+MjEjvSI5oQ3+xgvP/fWf33kjTbk//fgHL/85WjzvvPJm2ifmndfzxBjfmF1Zust6nvPqM84444wzzjjjjDOe6vj9+Xz17lXBkftK9ZP5+er/9m/mV6qVmNPiqckLPl/9/jKvnto/OefSy3o1We1woNt02leqxXafwne+KE6n00q1XI/OfK+5X5nT+3Mmj/kq16LTSnUkGiu++8qdb73yM7pNehe+9f1vpXkyWjC/JTGvvvOGe4evY479wg9Gj17Ic159xhlnnHHGGWecccZTHY9vvZorq8VfLLPlL/4u/QnuH3zhpX8u9m9Sa9RO21Q7lGmzl5cwZSr+xPf/PPy7fwVvf5hxcppbRrVDne2Tc0VUP4avi44Ss1xMeiOrHYpcntZ+gxDO58pyKJ/WptOZ1Q5tLu/RX/xpWK/mGV6OHvZvM0+nM6sdqsxHxCuytywcIiwvdxNzXZtOi5xaVzuUySfXsXgF4hxuXRKYzeJpPc2rHYoM75EfwufV+bgby+NIzEV3pc2r3y0bOaFV4lGuV69OwjUBhtvOeRLOpebSlyVo7Jx9M8959RlnnHHGGWecccYZT3U8ps9Xc6rJ+QP89Zf+WFNorrWm+ZL5xpohDoE5beRoT3Ob9T37V29YO+dpf/F31m7zmeD4AVafZ+fnq3uvFEdPvzX4u3/NieLge329mkcz1JPedn/NA/Ua7dB/8K//W7NP6ZjT4snJuc9X+59G187Tixdl++A0+kuuvjcnj7hMqrc+X825aD6Hf4opNI+bzxte5uo5xFE0nc5s9ylch5PH8xn3CR5XquV5h7xPx3mF69cB+n2ErnY/jZ39g9u6NCfwmMe6kzb1ldt01x+1yXAxVbY9MZtVS54q69Gf4cv09976Lu1pfOOF5+688lZsuYTK6Oe8+owzzjjjjDPOOOOMpzoez3o1JpbFeqOWBG0Ok9YGMTfbWMvVUbRSbdjz+eo0VasOFGa8w+QEt4xqh5i9V2rzqDAVXEnMdZlar7asdqgznsPifA6TT27TaRGz0GqHMgenMb+JlsWr7iVn1Hm92rLaoUi8kOpiWN6+7WNxxrtMqpnVDnWGJ585h3hmpqbW5tUOvfTf4JgXp7T4lcEgMSOdy+Zz0c+9/HpYqdYE29erbaLLL996411+phq+/Mm37fkJp9y+5zIxxpfZlaW7rOc5rz7jjDPOOOOMM84446mOx/L5aptYpj+FZWrqYnOYtDZo09332+9dHIfAnDYyPtp4MXuxz9BqqvZ3/3owrx48z67PVw9eqa/0osXnb+NjLSvV8nafwuPnqyc+a43EnBZPTU59vro4XctHkTn1LV7pcqp7z8O59LJeTVY7NM6JKJ42LYn7u2ntG+fQptO+Ui22+xQeP189cQ45nU4r1XI9Onz+z37Mq8Imz9aeXwv30bx6/L10vATMV7kWnVaqI9FYsfun3SLmtOWfdhfURDq3vH7n+E9Wi+e8+owzzjjjjDPOOOOMpzoey3p1McVdslgbLJaRu6k1aqdtqh3KLA+6rEkWBzp6vXrwSnNuLYdilotJb2S1Q53xCbeeXMmntel0ZrVDmZzfdk7j3LFy5ul0ZrXDIJfz6eu6OO7W6i7mujadFjm1rnaoc+96NafTKc2rHaq0X6mk52cWF8mh69WeNq+e/Xz1QptIhz8Fb/8InEvNpS9L0FrHnsxzXn3GGWecccYZZ5xxxlMdj//z1ZhXvG7T2nJtcJnutut1ShwCc9rI0Z46CmYvaSUwuh9I+2te3Xxv4TgY1473f746vdJ//rO/CFPff/lv8hSrfR4cgZNdm1TL8w55n8JtHrh8vnrrs8FIzGnx5OTc56txivLT0vHq6Dyly6l775/xAruHU/KIy6R66/PVP/xTTTu10qtffGil+l/ZH4Gvfa+9QE2nM9f393m1fOIcxpVqed4h77M4n3y53iyLefUb//cyr26PpbR1aU7gMY91X6bE132++q3vf+vb339Xj/K/Fh4n0q9zXm1/Ip5aLqQy+jmvPuOMM84444wzzjjjqY7H8/lqJqeU+pvhP/7CSzaRsDlMWhsslpG7qaNopdqwuV79x3/2UjrosiRYHOjo9WrL6pW+93cv5R9j6zVirsvUerVltUOd8RwW53OYfHKbTouYhVY7NGm/lbCf3/4r1nnpdWn/F8/+6Uurr4sz6rxebVntEPOlH/7V8sxhLddn9WHPbnLGu0yqmdUOde5dr15WqnkUZLVDzPT3//HlFPPqW6xXl8kJrZIz6vDpa65pv/Xuu+H/X/3CD3wyzO/iNPvl1+FsWSbG+DK7snSX9Tzn1WecccYZZ5xxxhlnPNXxWD5ffYzjEJjTRsZHb+e+Uv1k/v+rdzoSc1o8NTn1+epjnHPpZb2arHY40G067SvVYrvPNc7pdFqpluvRvc8z73gJmK9yLTqtVEeisUOb7rbEnPZJ4E3n1bx/zjjjjDM+F+F17WbhhznjjDPOePrD69ojjMe3Xn116mw5bVPtcKP09ypEtcOBiVkuJr2R1Q7XJ5/WptOZ1Q63yDydzqx2ODAx17XptMipdbXDlYnZLJ7W07za4fDEXPTq5IS29IY26Y3T72YqzqXm0pclaOycfTNvMa/GjYk3+979Bx9+dA/Pf+aZZ575tCeqGWoaKhvqm1e6g+IsmGeeeebnLG9XMFfihvPqXz68ZC1u0vHkOITOVOavPrvX7nms47g4GNeOw+erb3RcHiusVMuPPat4Nsxp8eRkWrX+xS3fOCSen0dcJtX0X97mHOJYOIqm05kHHgvPH1eq5Tc9gTwip76cwGMe605yiutu011/NDkZHLPZ3NL6TamMjvLHknFc4GZBMf3th/c+uvfg/oOP8+Hw2oPfnKPTW3jz1mQPb1/zFuvR6jJY8y516e71+g80VjyTt17ruiVL101ae5fx1l53kgVn3fGvcVIeuJSvWMpaRwQvCUx6l/GPjHpu8JZ1z1QX07h3PdGN697l79yJOSejL91u18noYXgw44cS6DnVXTF03xaBk4y6gcqG+hZ3vjLwVGfBLL3LthjOuJe4Gc9EBeg4tHGy9S5V3GY8lb51x7/GSXkgC5ocd/eKI4KXBCa9y16RjG7wlnXPxF3Tc2jtxnXv8iyYT2jBXI8bzqt/+9mDPCs4PPHkOopOlM7WBw9veEQlDoH3poobHRdPixuAiXtecvSxeIiwUg2ifNzo5eT8zcP7OApTJdUSjdVuhySe1nogK/0uRx7Lnh+dHHtuHQX5m09v8lqUeHJ0w1ekDzVqT8OR4GQYyvgAyx4Ng61mkly6y3oeO6/GLfnRvfvI6ih
txh9v/BKKlxlfPqR025NuOfJ42oe+O3EFTrpTF+2cM+WZSrpf9lPObL1L5XLbbnqmcs1VfJTznqnc61ZhB87CGx0hNx259iwKeMqxq0NZ89TpRM9Ulo7YcGpBxYyvBm/2FBNuxcFbaldyKFG77UlXJrdHtWUELd23B8f9+x+jxMUf4OLAk5wFc8WdqEvTzpRnKukoQdPObL1LZacwjjxTueYoO7ll3jOVe51VaeRnwRz6avBmTzHhVhy8pXYl60/ttiddmdwe1ZYRtHTfHhwHFszNuOG8+pPf8Q9uL16XW/ePP/sUh0hvpPOmR5TguDgajgXapUx+/NlNjvvgIf90wW822wD4AeI+VzoOwUITf3v38LP7Dz+pdjvW77MXSAU08UYHvf/px3jy1JE48QPEfa7x++zZ/ffH1q3SD3z+1u/xiOjC02/WSbkTjR1qmNJQw5rHzgPn1bhNMD7UGBGvvXu4x8h42kdvSvX2xbc1vNHxrS/dl0ouoy7j6FPEofeSN2NL4BouiyozZPEpfSlK0TfJ4ln6JQSuYSzjaYElEtjL0PVEApVfzHbhZelYV0hEtxu/8IYqDtEfOYHKjYT7rsBI8fpFGHz7WTCvoApg9Cni0HuJKtEhcA2XAjhDlrjSlzIYfZO40yu/hMA1PAvmWTBvEzecVyNutGStxWo/uaJtwJsukn/QW6xWHL7Gy5Vku5FaHnUsHoKB56x5o9VjJJ4ZNQtlseXhB+Wx2HnwyRPZJ8APORaeBE9o3V5K+SefHv5alFcvVufUsCN6wzBwaYc4xjgIk5NK7Jx9M4+aV+NOxFjptx/eq55/PeOP6i/EvX2Zzva0OOOpG51Yz+g3SVyH+6gLOPouKum80VK2fgl120bfxe1UIYq+i8roc4kYUGO+6E6i9C0qo5epziX6LirxLzt1Nw8KFoEU8lVa0Sh8IUcXprUvJEo3KqLfKFDrUPH0g10Q+MazYOZE7dpHVKfKd1FJR5nKLa1fQtQik8V3cTtRmirfRWX0uUQMeBbMy4JFIIV8lVY0Cl9YFMnoC4nSjYroN4orC+Zk3HZejfj1Z5xgXLAuN/JfP7yPp01vWIeHH1GC4+IIeH4cxi7rwkHsEPe/xn/98B6e1G8822THFv+uPxbOEgtN+xu75L96yA8hX3OI1vGceHIUUx5lKbKLH3hQHoudBGCUB155rF99eg89kDpF61ZrP/wE2hHRhSO8I1+c9GEHPQ1TopPBR8OdOBi6EZXRj5pX48KytZcH7SHSDxD95hyd3sKbtyZ7ePuat1iPVpfBmnepy3Wvr132lWfypmtdN2Ppuj1r7zLe2utOssisO/41TsoDl5IVS1nriOAlgUnvcly65QZvWffM0MVEL7oeuXHdu2wXW7acjD7sguVk9DAwmPFDCfSc6q4Yum/XAjUEFQ9n3r/eGfjGs2AOvMu2GM64l7gZz0QF6Di0cbL1LlXcZjyVvnXHv8ZJeSALmhx394ojgpcEJr3LXpGMbvCWdc/E3dpzaO3Gde/yLJhPVsGcjJvPqxEHriH7SnU4f3I15Lfhg6NXrcNKNY8yikNWkvNKNYJi7pL9ulXrsFKtlJNLiTn6M89cPfYim7L141aS8VQs96Sy4xcfC9+IJ8ndW8/JQ16L8jef3scTogO+On2oUXsajgQnw1DGB1j2aBhsDQZkyV3W88B59Ycfomzyv7szk/HHG7+E4mXGlw8p3fakW448nvah7854eay7UxfqnDPlmUr66BboObP1LpWd23bkmco1V8FRznumcq9bUR04C290hNx05LFcZyrHrk5kzVNHEz1TWXrsqvpOLaiY8dWI3fGE2yDBW2pX2hCictuTrkxuj2rLCFq6b28SeN9Q8UD/emfo28+Cue5O1KVpZ8ozlXSUoGlntt6lslMYR56pXHOUndwy75nKvc6qNPKzYA59NViyUky4lThvqV3J8la77UlXJrdHtWUELd23Nwm8b9cUzMl4FPNqxCe/e4gpsf5b1hes1OEb8e3dz1SPiCNi9njxESV23I/5mWp7TqddQKBdyjU//uzC4+o1lp+pjuQmO7YAfrAPHt6fP9YvP7uHmd4D/9Wel5jaSz54+Alejv4D1zOHaB3fi2d4wMrPoslnVukcEwfFz3nBQXmszx4sn6nWb1LlA+IHmz+WXkv8THWkdauFg/fx/J9e8lrk+Eb8eOkz1exx7ZnB9Jt1Uu5EY4capjTUsOax85B5NW4NvKl4qvDkYZz3ZDCe9tGbUr198W0Nb3R860v3pZLLqEs3+hRx6L3kDdgSuIZp8WSOLDilL4Uo+iZjWZNfQuAatiW9JLCXoeuJBCq/mOVii1rkqySix645dtaBKhHRHzmByo2E+wWBiodLWE+7K/At+MazYKICXEEVwOhTxKH3ElWiQ+AaLgVwhixxpS9lMPomcadXfgmBa3gWzLNg3iYe0bz6qIhnwk+uaJuK2Oa3RO5sL6wNXhV2KXvIV8hsf8s1SWX0JXlTl77GpdwUZB0s3SrjZKqMRp+kMnovWda3yT4Bbmnd1SVUJme3l1I+yaWDn+KhqWFH9IZh4NIOcYxxECYnldg5+2YeNa/GQEnDxF0Zf1R/Ie7ty3S2p8UZT93oxHpGv0muX04dxstydNGuUEnv3SaLX8J42zY39Ta3sy01u6iMPpcqoT22JddJlL5FZfQy2+5jF5X4l526mwdF7LLbrrxhOyRY2BlCyBcSpRsV0W8XqHioe/rZdgW+5SyYMVG79hHVqfJdVNJRpnJL65cQtchk8V3cTpSmyndRGX0uEQOeBfOyYB1IIV+l1Y3CFxZFMvpConSjIvrt4uKCOR+PZ15dnMrwRXyp/pZEty+SH0Bstj1dQHYpDz1QF337W6XSyegVuek6tqXrUfeKLCUjV6GZcdCapnwhi6Yc2xWPn70BWdYXx7ZwPbrm7BjU0vOK9nDX8YSF69HkFa1bHbk63b1uTB02hhQbTvqwg56GKdHJ4KPhThwM3YjK6DeaV7eHw2sPfnOOTm/hzVuTPbx9zVusR6vLYM27vOwSnbns3TP7t1K8xZKTrXc5c8vLye0ygn+Nk/LA7bImRwQvCUx6l1ul2+At65456FagtRvXvcutbrF1MvpGd0xGj135hB9KoOdUd8XQfbsRB86rcedWjhoS/OYcFsnoZ8EsnWy9SxW3GU+lb93xr3FSHsiCJsfdveKI4CWBSe+yVySjG7xl3TNxB/UcWrtx3bs8C+YTVDBn43e/+/8BX6vd65m6qmIAAAAASUVORK5CYII=) FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. 
Financial PhraseBank by Malo et al. (2014) is used for fine-tuning. For more details, please see the paper FinBERT: Financial Sentiment Analysis with Pre-trained Language Models and our related blog post on Medium. The model will give softmax outputs for three labels: positive, negative, or neutral.* https://huggingface.co/ProsusAI/finbert
###Code
model_name = "ProsusAI/finbert"
###Output
_____no_output_____
###Markdown
The typical flow of information through a sentiment classification model consists of four steps:* Raw text data is tokenized (converted into numerical IDs, each of which maps a word or subword to a vector representation)* Token IDs are fed into the sentiment model* A set of values is output, one per class, where each value represents the probability of that class being the correct sentiment, from zero (definitely not) to one (definitely yes).* The argmax of this output array is taken to give us our winning sentiment classification. We do not always have the three output classes [positive, negative, and neutral]; often we will find models that predict just [positive, negative], or models that are more granular [very positive, somewhat positive, neutral, somewhat negative, very negative]. We can change the number of output classes (step 3) to fit the correct number of classes. An optional end-to-end sketch using the high-level `pipeline` API is shown after the install cell below. Install the transformers library:
###Code
!pip install transformers
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained(model_name)
###Output
_____no_output_____
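###Markdown
As an optional quick check (an added sketch, not part of the original notebook), the same four-step flow described above can be run end to end with the high-level `pipeline` API; the remainder of the notebook reproduces each step explicitly. The example sentence below is illustrative only.
###Code
from transformers import pipeline
# Wraps tokenization, the forward pass, softmax, and argmax in a single call.
finbert_pipeline = pipeline("sentiment-analysis", model=model_name)
print(finbert_pipeline("The company reported record quarterly profits and raised its full-year guidance."))
###Output
_____no_output_____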
###Markdown
Initialize Tokenizer:
###Code
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained(model_name)
text = "Japan’s biggest brokerage will stop offering cash prime-brokerage services in the US and Europe, and has given some clients about six months to find a new provider, according to people familiar with the matter, who asked not to be identified discussing the private information. A spokesman for Nomura declined to comment. The pullback comes after Nomura notched up some of the biggest losses from the implosion of the US. family office built by Bill Hwang."
text_2 = "Stock market regulator Securities and Exchange Board of India (Sebi) has banned Authum promoter director Sanjay Dangi and his associates including Alpana Dangi, a promoter director in Authum, along with promoters of four companies from dealing in the equity markets following allegations of price manipulation more than a decade ago."
print(text)
print()
print(text_2)
tokens_1 = tokenizer.encode_plus(text, max_length=512, truncation=True, padding='max_length', add_special_tokens=True, return_tensors='pt')
tokens_2 = tokenizer.encode_plus(text_2, max_length=512, truncation=True, padding='max_length', add_special_tokens=True, return_tensors='pt')
###Output
_____no_output_____
###Markdown
Bert Special Tokens:* [CLS] = 101, The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.* [SEP] = 102, The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.* [MASK] = 103, The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict.* [UNK] = 100, The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.* [PAD] = 0, The token used for padding, for example when batching sequences of different lengths.
###Code
tokens_1
###Output
_____no_output_____
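###Markdown
As a quick consistency check (an added snippet), the tokenizer can confirm the special-token IDs listed above:
###Code
# Expected for the BERT vocabulary: [101, 102, 103, 100, 0]
print(tokenizer.convert_tokens_to_ids(['[CLS]', '[SEP]', '[MASK]', '[UNK]', '[PAD]']))
###Output
_____no_output_____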
###Markdown
Pass tokens to model as keyword arguments:
###Code
output_1 = model(**tokens_1)
output_2 = model(**tokens_2)
output_1
output_1[0]
###Output
_____no_output_____
###Markdown
Predictions:
###Code
import torch.nn.functional as funct
probs_1 = funct.softmax(output_1[0], dim=-1)
probs_2 = funct.softmax(output_2[0], dim=-1)
probs_1
import torch
preds_1 = torch.argmax(probs_1)
preds_2 = torch.argmax(probs_2)
print(preds_1.item())
print(preds_2.item())
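# Added step: map each winning index back to a human-readable label. The ordering
# comes from the model's config (assumed to be populated for this checkpoint);
# keys are normalized to int since JSON-loaded configs can carry string keys.
id2label = {int(k): v for k, v in model.config.id2label.items()}
print(id2label[preds_1.item()])
print(id2label[preds_2.item()])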
###Output
_____no_output_____ |
vqt_qmhl.ipynb | ###Markdown
Copyright 2020 The TensorFlow Quantum Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
VQT in TFQ Author: Antonio J. Martinez. Contributors: Guillaume Verdon. Created: 2020-Feb-06. Last updated: 2020-Mar-06. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tensorflow/quantum/blob/research/vqt_qmhl/vqt_qmhl.ipynb) In this notebook, you will explore the combination of quantum computing and classical energy-based models with TensorFlow Quantum. The system under study is the [2D Heisenberg model](https://en.wikipedia.org/wiki/Heisenberg_model_(quantum)). You will apply the Variational Quantum Thermalizer (VQT) to produce approximate thermal states of this model. VQT was first proposed in the paper [here](https://arxiv.org/abs/1910.02071). Install and import dependencies
###Code
!pip install --upgrade tensorflow==2.1.0
!pip install tensorflow-quantum
%%capture
import cirq
import itertools
import numpy as np
import random
from scipy import linalg
import seaborn
import sympy
import tensorflow as tf
import tensorflow_probability as tfp
import tensorflow_quantum as tfq
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
###Markdown
2D Heisenberg model Hamiltonian definition This Hamiltonian is supported on a rectangular lattice of qubits:$$\hat{H}_{\text{heis}} = \sum_{\langle ij\rangle_h} J_{h} \hat{S}_i \cdot \hat{S}_j + \sum_{\langle ij\rangle_v} J_{v} \hat{S}_i \cdot \hat{S}_j,$$where $h$ ($v$) denote horizontal (vertical) bonds, while $\langle \cdot \rangle $ represent nearest-neighbor pairings. You can build this Hamiltonian using Cirq `PauliString` and `PauliSum` objects:
###Code
def get_qubit_grid(rows, cols):
"""Rectangle of qubits returned as a nested list."""
qubits = []
for r in range(rows):
qubits.append([])
for c in range(cols):
qubits[-1].append(cirq.GridQubit(r, c))
return qubits
def get_bond(q0, q1):
"""Given two Cirq qubits, return the PauliSum that bonds them."""
return cirq.PauliSum.from_pauli_strings([
cirq.PauliString(cirq.X(q0), cirq.X(q1)),
cirq.PauliString(cirq.Y(q0), cirq.Y(q1)),
cirq.PauliString(cirq.Z(q0), cirq.Z(q1))])
def get_heisenberg_hamiltonian(qubits, jh, jv):
"""Returns the 2D Heisenberg Hamiltonian over the given grid of qubits."""
heisenberg = cirq.PauliSum()
# Apply horizontal bonds
for r in qubits:
for q0, q1 in zip(r, r[1::]):
heisenberg += jh * get_bond(q0, q1)
# Apply vertical bonds
for r0, r1 in zip(qubits, qubits[1::]):
for q0, q1 in zip(r0, r1):
heisenberg += jv * get_bond(q0, q1)
return heisenberg
###Output
_____no_output_____
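###Markdown
To see what a single bond looks like (an illustrative check added here), print the `PauliSum` for one pair of neighboring qubits; it should contain the three terms $XX + YY + ZZ$:
###Code
print(get_bond(cirq.GridQubit(0, 0), cirq.GridQubit(0, 1)))
###Output
_____no_output_____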
###Markdown
For visualization and verification purposes, the following function recovers an explicit matrix from a Cirq `PauliSum` given a linear ordering of the qubits which support it:
###Code
def pauli_sum_to_matrix(qubits, pauli_sum):
"""Unpacks each pauli string in the pauli sum into a matrix and sums them."""
matrix = np.zeros((2**len(qubits), 2**len(qubits)), dtype=np.complex128)
for pauli_string in pauli_sum:
coeff = pauli_string.coefficient
bare_string = pauli_string/coeff
matrix += coeff*bare_string.dense(qubits)._unitary_()
return matrix
###Output
_____no_output_____
###Markdown
Target density matrix Here you define the parameters of the system to be learned. The 2D Heisenberg model is defined by the number of rows and columns in the qubit lattice, the bond strengths in the horizontal and vertical directions, and the inverse temperature $\beta$. Here, we use the same parameters as in the associated paper:
###Code
num_rows = 2
num_cols = 2
jh = 1
jv = 0.6
beta = 2.6
# Get the grid of qubits.
all_qubits = get_qubit_grid(num_rows, num_cols)
all_qubits_flat = [q for r in all_qubits for q in r]
###Output
_____no_output_____
###Markdown
Given a Hamiltonian $\hat{H}$ and an inverse temperature $\beta$, the thermal state $\rho_T$ is given by the normalized matrix exponential$$\rho_T = \frac{e^{-\beta \hat{H}}}{\mathrm{tr}\left[e^{-\beta \hat{H}}\right]}.$$Since our target system is small, you can compute this matrix exponential (and its trace normalization) directly, using the `PauliSum`-to-matrix converter defined above:
###Code
num_H = pauli_sum_to_matrix(
all_qubits_flat, get_heisenberg_hamiltonian(all_qubits, jh, jv))
heisenberg_exp = linalg.expm(-beta*num_H)
exact_thermal_state = np.true_divide(heisenberg_exp, np.trace(heisenberg_exp))
seaborn.heatmap(abs(exact_thermal_state))
###Output
_____no_output_____
###Markdown
Recall that any density matrix $\rho$ [can be written as](https://en.wikipedia.org/wiki/Density_matrix#Definition)$$\rho = \sum_i p_i |\psi_i\rangle\langle\psi_i|,$$where $|\psi_i\rangle$ is a pure state and $p_i$ is the classical probability of encountering that state in the mixture. Since TFQ is a pure state simulator, we will emulate density matrices by outputting pure states according to their probabilities $p_i$, which by the equation above is equivalent to outputting the full density matrix. We define here a function that converts such a list of pure states into the associated density matrix:
###Code
def pure_state_list_to_density_matrix(pure_states):
"""Return the uniform mixture of the given list of pure states."""
dim = len(pure_states[0].numpy())
n_s = pure_states.shape[0]
thermal_state = np.zeros((dim, dim), dtype=np.complex128)
for i in range(n_s):
psi = pure_states[i].numpy()
thermal_state += np.outer(psi, np.conj(psi))  # |psi><psi| requires conjugating the bra side
return np.true_divide(thermal_state, n_s)
###Output
_____no_output_____
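###Markdown
A quick sanity check of this helper (an added example): an equal mixture of the single-qubit states $|0\rangle$ and $|1\rangle$ should give the maximally mixed state $I/2$.
###Code
example_states = tf.constant([[1.0 + 0.0j, 0.0 + 0.0j], [0.0 + 0.0j, 1.0 + 0.0j]], dtype=tf.complex128)
print(pure_state_list_to_density_matrix(example_states))  # expect [[0.5, 0], [0, 0.5]]
###Output
_____no_output_____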
###Markdown
Finally, to track the performance of our models, we need a measure of the distance of our estimated density matrix $\tilde{\rho}$ from the target density matrix $\rho_T$. One common metric is the [fidelity](https://en.wikipedia.org/wiki/Fidelity_of_quantum_states), which is defined as$$F(\tilde{\rho}, \rho_T) = \text{tr}\left[\sqrt{\sqrt{\tilde{\rho}}\rho_T\sqrt{\tilde{\rho}}}\right]^2.$$This is tractable to compute because our model system is small. Below we define a function that computes this quantity:
###Code
def fidelity(dm1, dm2):
"""Calculate the fidelity between the two given density matrices."""
dm1_sqrt = linalg.sqrtm(dm1)
return abs(np.trace(linalg.sqrtm(
np.matmul(dm1_sqrt, np.matmul(dm2, dm1_sqrt))))) ** 2
###Output
_____no_output_____
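###Markdown
As a quick check of the fidelity function (an added snippet), the fidelity of the exact thermal state with itself should equal 1 up to numerical error:
###Code
print(fidelity(exact_thermal_state, exact_thermal_state))  # expect ~1.0
###Output
_____no_output_____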
###Markdown
Energy based models Energy based models are a type of machine learning ansatze inspired by physics and exponential families. The advantage of using energy based models for probabilistic modeling is that fair samples can be drawn from the distributions they define without requiring computation of their partition functions.One specific class of EBM is the Boltzmann machine. The energy of a spin configuration $x \in \{-1, 1\}^n$ in this model is defined as:$$E(x) = -\sum_{i, j}w_{ij} x_i x_j - \sum_i b_i x_i.$$This classical model can be easily converted into a quantum mechanical Ising model by replacing each bit with the Pauli $Z$ operator, and considering the usual mapping of the spin to qubit pictures 1 -> $|0\rangle$ and $-1$ -> $|1\rangle$.In the special case where the connection weights $w_{ij}$ are all zero, the Boltzmann machine is reduced to a product of independent Bernoulli distributions over the set of qubits. This "Bernoulli EBM" has many simplifying properties, and hence you will explore this EBM first in the examples below. Later in the notebook, you will apply the full Boltzmann EBM to VQT. Energy functions Here we define functions which compute the energy of a Boltzmann or Bernoulli EBM given the weight, biases, and bitstrings:
###Code
def bitstring_to_spin_config(bitstring):
"""Implements the mapping from the qubit to the spin picture."""
return [-1 if b == 1 else 1 for b in bitstring]
def spin_config_to_bitstring(spin_config):
"""Implements the mapping from the spin to the qubit picture."""
return [0 if s == 1 else 1 for s in spin_config]
def ebm_energy(spin_config, biases, weights=None):
"""Given a rank-2 tensor representing the weight matrix and a rank-1 tensor
representing the biases, calculate the energy of the spin configuration."""
energy = 0
if weights is not None:
for w_row, xi in zip(weights.numpy(), spin_config):
for wij, xj in zip(w_row, spin_config):
energy -= wij*xi*xj
for bi, xi in zip(biases.numpy(), spin_config):
energy -= bi*xi
return energy
def ebm_energy_avg(spin_config_list, biases, weights=None):
"""Average energy over a set of spin configuration samples."""
energy_avg = 0
for spin_config in spin_config_list:
energy_avg += ebm_energy(spin_config, biases, weights)
energy_avg /= len(spin_config_list)
return energy_avg
###Output
_____no_output_____
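###Markdown
A tiny worked check of the energy function above, with illustrative values chosen here (not part of the original study): for spins $x = (1, -1)$, weights $w_{01} = w_{10} = 0.5$ (zero diagonal) and biases $b = (0.2, -0.1)$, we have $E(x) = -(2 \cdot 0.5 \cdot 1 \cdot (-1)) - (0.2 \cdot 1 + (-0.1) \cdot (-1)) = 1.0 - 0.3 = 0.7$.
###Code
example_weights = tf.constant([[0.0, 0.5], [0.5, 0.0]], dtype=tf.float32)
example_biases = tf.constant([0.2, -0.1], dtype=tf.float32)
print(ebm_energy([1, -1], example_biases, example_weights))  # expect 0.7
###Output
_____no_output_____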
###Markdown
We also define functions which initialize TF Variables for our weights and biases. Initializing all weights and biases near 0 means we begin near the uniform distribution, which can also be thought of as starting with a high temperature prior:
###Code
def get_initialized_ebm_biases(num_units):
return tf.Variable(
tf.random.uniform(minval=-0.1, maxval=0.1, shape=[num_units],
dtype=tf.float32), dtype=tf.float32)
def get_initialized_ebm_weights(num_units):
return tf.Variable(
tf.random.uniform(minval=-0.1, maxval=0.1,
shape=[num_units, num_units],dtype=tf.float32), dtype=tf.float32)
###Output
_____no_output_____
###Markdown
EBM derivatives The derivative of an EBM given a spin configuration is easy to compute. In fact, the derivatives are independent of the weights and biases:$$\nabla_{w_{ij}}E(x) = -x_ix_j\quad \text{and}\quad \nabla_{b_{i}}E(x) = -x_i.$$Information about the weights and biases enters by averaging these derivatives over samples from the EBM.
###Code
def ebm_weights_derivative(spin_config):
w_deriv = np.zeros((len(spin_config), len(spin_config)))
for i, x_i in enumerate(spin_config):
for j, x_j in enumerate(spin_config):
w_deriv[i][j] = -x_i*x_j
return w_deriv
def ebm_biases_derivative(spin_config):
b_deriv = np.zeros(len(spin_config))
for i, x_i in enumerate(spin_config):
b_deriv[i] = -x_i
return b_deriv
def ebm_weights_derivative_avg(spin_config_list):
w_deriv = np.zeros((len(spin_config_list[0]), len(spin_config_list[0])))
for spin_config in spin_config_list:
w_deriv += ebm_weights_derivative(spin_config)
return np.true_divide(w_deriv, len(spin_config_list))
def ebm_biases_derivative_avg(spin_config_list):
b_deriv = np.zeros(len(spin_config_list[0]))
for spin_config in spin_config_list:
b_deriv += ebm_biases_derivative(spin_config)
return np.true_divide(b_deriv, len(spin_config_list))
###Output
_____no_output_____
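###Markdown
A quick check of the derivative formulas above on an illustrative two-spin configuration (added here): for $x = (1, -1)$ we expect $\nabla_{w}E = [[-1, 1], [1, -1]]$ and $\nabla_{b}E = [-1, 1]$.
###Code
print(ebm_weights_derivative([1, -1]))
print(ebm_biases_derivative([1, -1]))
###Output
_____no_output_____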
###Markdown
Classical VQT loss gradients As discussed in the paper, the gradient of the VQT loss function can be calculated without computing entropies or partition functions. For example, the gradient of the VQT free energy loss with respect to the classical model parameters can be written as:$$\partial_{\theta} \mathcal{L}_{\text{fe}} =\mathbb{E}_{x\sim p_{\theta}(x)}[(E_{\theta}(x)-\beta H_{\phi}(x) ) \nabla_{\theta}E_{\theta}(x) ]-(\mathbb{E}_{x\sim p_{\theta}(x)}[E_{\theta}(x)-\beta H_{\phi}(x)]) ( \mathbb{E}_{y\sim p_{\theta}(y)}[\nabla_{\theta}E_{\theta}(y)] ).$$Below these gradients are defined for the general Boltzmann EBM. In the VQT gradients, each entry in `spin_config_list` corresponds to the entry with the same index in `energy_losses`, where for each sampled configuration $x$, we compute the product $\beta\langle x|H|x\rangle$. The list of spin configurations is assumed to be sampled from the EBM.
###Code
def get_vqt_weighted_weights_grad_product(
energy_losses, spin_config_list, biases, weights):
"""Implements the first term in the derivative of the FE loss,
for the weights of a Boltzmann EBM."""
w_deriv = np.zeros((len(spin_config_list[0]), len(spin_config_list[0])))
for e_loss, spin_config in zip(energy_losses, spin_config_list):
w_deriv = w_deriv + (
ebm_energy(spin_config, biases, weights) - e_loss
)*ebm_weights_derivative(spin_config)
return np.true_divide(w_deriv, len(energy_losses))
def get_vqt_weighted_biases_grad_product(
energy_losses, spin_config_list, biases, weights=None):
"""Implements the first term in the derivative of the FE loss,
for the biases of a Boltzmann EBM."""
b_deriv = np.zeros(len(spin_config_list[0]))
for e_loss, spin_config in zip(energy_losses, spin_config_list):
b_deriv = b_deriv + (
ebm_energy(spin_config, biases, weights) - e_loss
)*ebm_biases_derivative(spin_config)
return np.true_divide(b_deriv, len(energy_losses))
def get_vqt_factored_weights_grad_product(
energy_losses, spin_config_list, biases, weights):
"""Implements the second term in the derivative of the FE loss,
for the weights of a Boltzmann EBM."""
energy_losses_avg = tf.reduce_mean(energy_losses)
classical_energy_avg = ebm_energy_avg(spin_config_list, biases, weights)
energy_diff_avg = classical_energy_avg - energy_losses_avg
return energy_diff_avg*ebm_weights_derivative_avg(spin_config_list)
def get_vqt_factored_biases_grad_product(
energy_losses, spin_config_list, biases, weights=None):
"""Implements the second term in the derivative of the FE loss,
for the biases of a Boltzmann EBM."""
energy_losses_avg = tf.reduce_mean(energy_losses)
classical_energy_avg = ebm_energy_avg(spin_config_list, biases, weights)
energy_diff_avg = classical_energy_avg - energy_losses_avg
return energy_diff_avg*ebm_biases_derivative_avg(spin_config_list)
###Output
_____no_output_____
###Markdown
Model components Ansatz unitary The parameterized unitary ansatz you will use consists of alternating layers of general single qubit rotations and nearest-neighbor entangling gates:
###Code
def get_rotation_1q(q, a, b, c):
"""General single qubit rotation."""
return cirq.Circuit(cirq.X(q) ** a, cirq.Y(q) ** b, cirq.Z(q) ** c)
def get_rotation_2q(q0, q1, a):
"""Exponent of entangling CNOT gate."""
return cirq.Circuit(cirq.CNotPowGate(exponent=a)(q0, q1))
def get_layer_1q(qubits, layer_num, name):
"""Apply single qubit rotations to all the given qubits."""
layer_symbols = []
circuit = cirq.Circuit()
for n, q in enumerate(qubits):
a, b, c = sympy.symbols(
"a{2}_{0}_{1} b{2}_{0}_{1} c{2}_{0}_{1}".format(layer_num, n, name))
layer_symbols += [a, b, c]
circuit += get_rotation_1q(q, a, b, c)
return circuit, layer_symbols
def get_layer_2q(qubits, layer_num, name):
"""Apply CNOT gates to all pairs of nearest-neighbor qubits."""
layer_symbols = []
circuit = cirq.Circuit()
for n, (q0, q1) in enumerate(zip(qubits[::2], qubits[1::2])):
a = sympy.symbols("a{2}_{0}_{1}".format(layer_num, n, name))
layer_symbols += [a]
circuit += get_rotation_2q(q0, q1, a)
shifted_qubits = qubits[1::]+[qubits[0]]
for n, (q0, q1) in enumerate(zip(shifted_qubits[::2], shifted_qubits[1::2])):
a = sympy.symbols("a{2}_{0}_{1}".format(layer_num, n+1, name))
layer_symbols += [a]
circuit += get_rotation_2q(q0, q1, a)
return circuit, layer_symbols
def get_one_full_layer(qubits, layer_num, name):
"""Stack the one- and two-qubit parameterized circuits."""
circuit = cirq.Circuit()
all_symbols = []
new_circ, new_symb = get_layer_1q(qubits, layer_num, name)
circuit += new_circ
all_symbols += new_symb
new_circ, new_symb = get_layer_2q(qubits, layer_num + 1, name)
circuit += new_circ
all_symbols += new_symb
return circuit, all_symbols
def get_model_unitary(qubits, num_layers, name=""):
"""Build our full parameterized model unitary."""
circuit = cirq.Circuit()
all_symbols = []
for i in range(num_layers):
new_circ, new_symb = get_one_full_layer(qubits, 2*i, name)
circuit += new_circ
all_symbols += new_symb
return circuit, all_symbols
###Output
_____no_output_____
###Markdown
Bitstring injector You also need a way to feed bitstrings into the quantum model. These bitstrings can be prepared by applying an X gate to every qubit that should be excited. The following function returns a parameterized circuit which prepares any given bitstring:
###Code
def get_bitstring_circuit(qubits):
"""Returns wall of parameterized X gates and the bits used to turn them on."""
circuit = cirq.Circuit()
all_symbols = []
for n, q in enumerate(qubits):
new_bit = sympy.Symbol("bit_{}".format(n))
circuit += cirq.X(q) ** new_bit
all_symbols.append(new_bit)
return circuit, all_symbols
###Output
_____no_output_____
###Markdown
Factorized latent state Bernoulli EBM The Bernoulli EBM can be used to parameterize a factorized latent state. The probability of sampling a 1 from a unit with bias $b$ is:$$p = \frac{e^b}{e^b + e^{-b}}$$Since the units of a Bernoulli EBM are independent, the probability of a given spin configuration is simply the product of the individual unit probabilities:$$p(x) = \prod_i\frac{e^{x_ib_i}}{e^{b_i} + e^{-b_i}}$$This distribution is easy to sample from.
###Code
def bernoulli_spin_p1(b):
return np.exp(b)/(np.exp(b) + np.exp(-b))
def sample_spins_bernoulli(num_samples, biases):
prob_list = []
for bias in biases.numpy():
prob_list.append(bernoulli_spin_p1(bias))
# The `probs` keyword specifies the probability of a 1 event
latent_dist = tfp.distributions.Bernoulli(probs=prob_list, dtype=tf.float32)
bit_samples = latent_dist.sample(num_samples).numpy()
spin_samples = []
for sample in bit_samples:
spin_samples.append([])
for bit in sample:
if bit == 0:
spin_samples[-1].append(-1)
else:
spin_samples[-1].append(1)
return spin_samples
###Output
_____no_output_____
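###Markdown
As an illustrative check of the sampler (the bias values and `demo_*` names are hypothetical), a strongly positive bias should give mostly +1 spins, a zero bias a balanced mix, and a strongly negative bias mostly -1 spins:
###Code
# Average spin per unit over many samples; roughly [0.96, 0.0, -0.96] is expected.
demo_biases = tf.Variable([2.0, 0.0, -2.0], dtype=tf.float32)
demo_samples = sample_spins_bernoulli(1000, demo_biases)
print(np.mean(demo_samples, axis=0))
###Output
_____no_output_____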
###Markdown
The entropy of a single unit with bias $b$ in our Bernoulli EBM is: $S = \log\left[e^{b} + e^{-b}\right] - \frac{be^{b} - be^{-b}}{e^{b} + e^{-b}}$. For a factorized latent distribution, the entropy is simply the sum of the entropies of the individual factors.
###Code
def bernoulli_factor_partition(b):
return np.exp(b) + np.exp(-b)
def bernoulli_partition(biases):
partition = 1
for bias in biases.numpy():
partition *= bernoulli_factor_partition(bias)
return partition
def bernoulli_factor_entropy(b):
Z = bernoulli_factor_partition(b)
return np.log(Z) - (b*np.exp(b) - b*np.exp(-b))/Z
def bernoulli_entropy(biases):
entropy = 0
for bias in biases.numpy():
entropy += bernoulli_factor_entropy(bias)
return entropy
###Output
_____no_output_____
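###Markdown
A quick sanity check on the entropy (illustrative values): at zero bias each unit is an unbiased coin, so the total entropy should equal $n\log 2$.
###Code
# Three unbiased units should give an entropy of 3 * log(2), roughly 2.079.
demo_biases = tf.Variable([0.0, 0.0, 0.0], dtype=tf.float32)
print(bernoulli_entropy(demo_biases), 3 * np.log(2))
###Output
_____no_output_____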
###Markdown
Finally we define a function for converting the classical Bernoulli distribution into an Ising model whose expectation values can be simulated in the TFQ ops:
###Code
def bernoulli_ebm_to_ising(qubits, biases, bare=False):
pauli_s_list = []
for i, bi in enumerate(biases.numpy()):
if bare:
coeff = 1.0
else:
coeff = bi
pauli_s_list.append(cirq.PauliString(coeff, cirq.Z(qubits[i])))
if bare:
return pauli_s_list
return cirq.PauliSum.from_pauli_strings(pauli_s_list)
###Output
_____no_output_____
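###Markdown
For example (hypothetical qubits and bias values), a three-unit Bernoulli EBM maps to a weighted sum of single-qubit Pauli-Z operators:
###Code
# Each bias becomes the coefficient of a Z on the corresponding qubit.
demo_qubits = cirq.GridQubit.rect(1, 3)
demo_biases = tf.Variable([0.5, -1.0, 2.0], dtype=tf.float32)
print(bernoulli_ebm_to_ising(demo_qubits, demo_biases))
###Output
_____no_output_____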
###Markdown
VQT Build and view our unitary model and set up the TFQ Expectation Op inputs:
###Code
# Number of bitstring samples from our classical model to average over
num_samples = 300
# Number of rotations-plus-entanglement layers to stack.
# Note that the depth required to reach a given fidelity increases depending on
# the temperature and Hamiltonian parameters.
num_layers = 4
# Build the model unitary and visible state circuits
U, model_symbols = get_model_unitary(all_qubits_flat, num_layers)
V, bit_symbols = get_bitstring_circuit(all_qubits_flat)
visible_state = tfq.convert_to_tensor([V + U])
# Make a copy of the visible state for each bitstring we will sample
tiled_visible_state = tf.tile(visible_state, [num_samples])
# Upconvert symbols to tensors
vqt_symbol_names = tf.identity(tf.convert_to_tensor(
[str(s) for s in bit_symbols + model_symbols], dtype=tf.dtypes.string))
# Build and tile the Hamiltonian
H = get_heisenberg_hamiltonian(all_qubits, jh, jv)
tiled_H = tf.tile(tfq.convert_to_tensor([[H]]), [num_samples, 1])
# Get the expectation op with a differentiator attached
expectation = tfq.differentiators.ForwardDifference().generate_differentiable_op(
analytic_op=tfq.get_expectation_op())
SVGCircuit(U)
###Output
_____no_output_____
###Markdown
Thanks to TFQ we can use gradient descent on the quantum model parameters, and the energy-based form of the classical model keeps the gradients of its parameters tractable. Because the latent distribution is factorized, its entropy can be computed exactly, which lets us evaluate the free-energy loss efficiently.
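Concretely, the quantity estimated below by sampling bitstrings $x$ from the EBM is the dimensionless free energy$$\mathcal{L}(\theta, \phi) = \beta \, \mathbb{E}_{x \sim p_\theta}\!\left[\langle x | U^\dagger(\phi) H U(\phi) | x \rangle\right] - S(p_\theta),$$where $p_\theta$ is the Bernoulli latent distribution, $U(\phi)$ is the ansatz unitary, and $S(p_\theta)$ is the classical entropy defined above (the symbols $\theta$ and $\phi$ are introduced here only as shorthand for the EBM biases and circuit parameters).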
###Code
optimizer = tf.keras.optimizers.Adam(learning_rate=0.03)
# Initialize our model variables
vqt_model_params = tf.Variable(
tf.random.uniform(minval=-0.1, maxval=0.1,
shape=[len(model_symbols)], dtype=tf.float32),
dtype=tf.float32)
# Keep track of metrics during training
vqt_loss_history = []
vqt_fidelity_history = []
vqt_model_params_history = []
vqt_bias_history = []
vqt_density_matrix_history = []
# Initialize our EBM variables
vqt_biases = get_initialized_ebm_biases(len(all_qubits_flat))
# The innermost training step, where gradients are taken and applied
def vqt_train_step():
# Sample from our EBM
spin_config_list = sample_spins_bernoulli(num_samples, vqt_biases)
bitstring_list = [spin_config_to_bitstring(s) for s in spin_config_list]
bitstring_tensor = tf.convert_to_tensor(bitstring_list, dtype=tf.float32)
# Use the samples to find gradient of the loss w.r.t. model parameters.
with tf.GradientTape() as tape:
tiled_vqt_model_params = tf.tile([vqt_model_params], [num_samples, 1])
sampled_expectations = expectation(
tiled_visible_state,
vqt_symbol_names,
tf.concat([bitstring_tensor, tiled_vqt_model_params], 1),
tiled_H)
energy_losses = beta*sampled_expectations
energy_losses_avg = tf.reduce_mean(energy_losses)
vqt_model_gradients = tape.gradient(energy_losses_avg, [vqt_model_params])
# Build the classical model gradients
weighted_biases_grad = get_vqt_weighted_biases_grad_product(
energy_losses, spin_config_list, vqt_biases)
factored_biases_grad = get_vqt_factored_biases_grad_product(
energy_losses, spin_config_list, vqt_biases)
biases_grad = tf.subtract(weighted_biases_grad, factored_biases_grad)
# Apply the gradients
optimizer.apply_gradients(zip([vqt_model_gradients[0], biases_grad],
[vqt_model_params, vqt_biases]))
# Sample pure states to build the current estimate of the density matrix
many_states = tfq.layers.State()(
tiled_visible_state,
symbol_names=vqt_symbol_names,
symbol_values=tf.concat([bitstring_tensor, tiled_vqt_model_params], 1)
)
vqt_density_matrix_history.append(pure_state_list_to_density_matrix(many_states))
# Record the history
vqt_loss_history.append((energy_losses_avg - bernoulli_entropy(vqt_biases)).numpy())
vqt_fidelity_history.append(
fidelity(vqt_density_matrix_history[-1], exact_thermal_state))
vqt_model_params_history.append(vqt_model_params.numpy())
vqt_bias_history.append(vqt_biases.numpy())
print("Current loss:")
print(vqt_loss_history[-1])
print("Current fidelity to optimal state:")
print(vqt_fidelity_history[-1])
print("Current estimated density matrix:")
plt.figure()
seaborn.heatmap(abs(vqt_density_matrix_history[-1]))
plt.show()
###Output
_____no_output_____
###Markdown
With setup complete, we can now optimize our Heisenberg VQT.
###Code
def vqt_train(epochs):
for epoch in range(epochs):
vqt_train_step()
print ('Epoch {} finished'.format(epoch + 1))
vqt_train(100)
###Output
_____no_output_____
###Markdown
We plot the loss and the fidelity with the exact thermal state to see how training progressed:
###Code
plt.plot(vqt_loss_history)
plt.xlabel('Epoch #')
plt.ylabel('Loss [free energy]')
plt.figure()
plt.plot(vqt_fidelity_history)
plt.xlabel('Epoch #')
plt.ylabel('Fidelity with exact state')
###Output
_____no_output_____
###Markdown
Classically correlated latent state Boltzmann machine EBM The Bernoulli distribution is only able to inject entropy into our density matrix. To encode classical correlations, we need to move beyond a factorized latent state. This can be accomplished by allowing the weights of our Boltzmann machine to be non-zero.Now that there are correlations, sampling from the model becomes non-trivial. The probability of bitstring $x$ is:$P(x) = \frac{\exp(-E(x))}{\sum_{y\in\{-1, 1\}^n} \exp(-E(y))}$In general this function is intractable to compute directly; however, we can still obtain samples from the distribution efficiently. Markov chain Monte Carlo (MCMC) is one family of procedures for this efficient sampling. Here, we use the simplest example of MCMC, the [Metropolis-Hastings](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm) algorithm:
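Because the spin-flip proposal used below is symmetric, a candidate configuration $x$ proposed from the current configuration $y$ is accepted with probability$$A(x \mid y) = \min\!\left(1, \frac{e^{-E(x)}}{e^{-E(y)}}\right),$$which is the acceptance rule implemented in `sample_boltzmann`: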
###Code
def make_proposal(y):
"""Flip spins in y to generate a new sample."""
coin = tfp.distributions.Bernoulli(probs=[0.75]*len(y))
samples = coin.sample(1).numpy()[0]
x = []
for s_i, y_i in zip(samples, y):
if s_i:
x.append(y_i)
else:
if y_i == 1:
x.append(-1)
else:
x.append(1)
return x
def sample_boltzmann(burn_in, num_samples, skip, initial_state, biases, weights):
"""Walk towards and sample from regions of high probability."""
current_state = initial_state
all_samples = []
for i in range(burn_in + skip*num_samples):
proposal = make_proposal(current_state)
proposal_energy = ebm_energy(proposal, biases, weights)
current_energy = ebm_energy(current_state, biases, weights)
acceptance = min(np.exp(-proposal_energy)/np.exp(-current_energy), 1)
threshold = random.random()
if threshold <= acceptance:
current_state = proposal
if i >= burn_in:
if (i - burn_in)%skip == 0:
all_samples.append(current_state)
return all_samples
###Output
_____no_output_____
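###Markdown
An illustrative run of the sampler on a small, randomly initialized Boltzmann machine (this snippet assumes the `get_initialized_ebm_biases`/`get_initialized_ebm_weights` helpers and the `ebm_energy` function defined earlier in the notebook; the `demo_*` names are hypothetical):
###Code
# Draw 20 correlated samples after a short burn-in, keeping every 5th sample.
demo_n = 4
demo_biases = get_initialized_ebm_biases(demo_n)
demo_weights = get_initialized_ebm_weights(demo_n)
demo_samples = sample_boltzmann(50, 20, 5, [1] * demo_n, demo_biases, demo_weights)
print(len(demo_samples))
print(demo_samples[0])
###Output
_____no_output_____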
###Markdown
Since there are now correlations between the bits, the partition function and entropy can no longer be computed in a scalable way. However, for the small system considered here, they can still be evaluated by brute force, which is useful for monitoring training.
###Code
def boltzmann_partition(biases, weights):
partition_value = 0
for spin_config in itertools.product([-1, 1], repeat=biases.shape[0]):
partition_value += np.exp(-ebm_energy(spin_config, biases, weights))
return partition_value
def boltzmann_entropy(biases, weights):
Z = boltzmann_partition(biases, weights)
Z_log = np.log(Z)
unnormalized = 0
for spin_config in itertools.product([-1, 1], repeat=biases.shape[0]):
this_energy = ebm_energy(spin_config, biases, weights)
unnormalized += np.exp(-this_energy)*(-this_energy - Z_log)
return -unnormalized/Z
###Output
_____no_output_____
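###Markdown
As a small consistency check (illustrative values, and assuming `ebm_energy` with all-zero weights reduces to the bias-only Bernoulli energy), the Boltzmann entropy should match the factorized Bernoulli entropy when the weights vanish:
###Code
# With zero weights the units decouple, so the two entropy functions should agree.
demo_biases = tf.Variable([0.3, -0.7, 1.1], dtype=tf.float32)
demo_weights = tf.Variable(tf.zeros([3, 3]), dtype=tf.float32)
print(boltzmann_entropy(demo_biases, demo_weights), bernoulli_entropy(demo_biases))
###Output
_____no_output_____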
###Markdown
Finally we define a function for converting the classical Boltzmann machine into an Ising model whose expectation values can be simulated in the TFQ ops:
###Code
def boltzmann_ebm_to_ising(qubits, biases, weights, bare=False):
pauli_s_list = []
for i, w_row in enumerate(weights.numpy()):
for j, wij in enumerate(w_row):
init_list = [cirq.Z(q) for qi, q in enumerate(qubits) if qi == i or qi == j]
if bare:
coeff = 1.0
else:
coeff = wij
pauli_s_list.append(cirq.PauliString(coeff, init_list))
for i, bi in enumerate(biases.numpy()):
if bare:
coeff = 1.0
else:
coeff = bi
pauli_s_list.append(cirq.PauliString(coeff, cirq.Z(qubits[i])))
if bare:
return pauli_s_list
return cirq.PauliSum.from_pauli_strings(pauli_s_list)
###Output
_____no_output_____
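###Markdown
For example (hypothetical values), a two-unit Boltzmann machine maps to Z terms for the biases plus a ZZ term for the coupling:
###Code
# Symmetric weight matrix with zero diagonal; note that both (i, j) and (j, i)
# entries contribute to the same ZZ term.
demo_qubits = cirq.GridQubit.rect(1, 2)
demo_biases = tf.Variable([0.5, -0.5], dtype=tf.float32)
demo_weights = tf.Variable([[0.0, 0.25], [0.25, 0.0]], dtype=tf.float32)
print(boltzmann_ebm_to_ising(demo_qubits, demo_biases, demo_weights))
###Output
_____no_output_____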
###Markdown
VQT
###Code
# Initialize our model variables
vqt_model_params = tf.Variable(
tf.random.uniform(minval=-0.1, maxval=0.1, shape=[len(model_symbols)],
dtype=tf.float32), dtype=tf.float32)
# Define the learning hyperparameters
burn_in = 100
skip = 7
optimizer = tf.keras.optimizers.Adam(learning_rate=0.025)
# Keep track of metrics during training
vqt_loss_history = []
vqt_fidelity_history = []
vqt_model_params_history = []
vqt_weights_history = []
vqt_bias_history = []
vqt_density_matrix_history = []
# Initialize our EBM variables
vqt_weights = get_initialized_ebm_weights(len(all_qubits_flat))
vqt_biases = get_initialized_ebm_biases(len(all_qubits_flat))
# The innermost training step, where gradients are taken and applied
def vqt_train_step():
# Sample from our EBM
spin_config_list = sample_boltzmann(burn_in, num_samples, skip,
vqt_train_step.initial_state,
vqt_biases, vqt_weights)
vqt_train_step.initial_state = spin_config_list[-1]
bitstring_list = [spin_config_to_bitstring(s) for s in spin_config_list]
bitstring_tensor = tf.convert_to_tensor(bitstring_list, dtype=tf.float32)
# Use the samples to find gradient of the loss w.r.t. model parameters.
with tf.GradientTape() as tape:
tiled_vqt_model_params = tf.tile([vqt_model_params], [num_samples, 1])
sampled_expectations = expectation(
tiled_visible_state,
vqt_symbol_names,
tf.concat([bitstring_tensor, tiled_vqt_model_params], 1),
tiled_H)
energy_losses = beta*sampled_expectations
energy_losses_avg = tf.reduce_mean(energy_losses)
vqt_model_gradients = tape.gradient(energy_losses_avg, [vqt_model_params])
# Build the classical model gradients
weighted_biases_grad = get_vqt_weighted_biases_grad_product(
energy_losses, spin_config_list, vqt_biases, vqt_weights)
factored_biases_grad = get_vqt_factored_biases_grad_product(
energy_losses, spin_config_list, vqt_biases, vqt_weights)
biases_grad = tf.subtract(weighted_biases_grad, factored_biases_grad)
weighted_weights_grad = get_vqt_weighted_weights_grad_product(
energy_losses, spin_config_list, vqt_biases, vqt_weights)
factored_weights_grad = get_vqt_factored_weights_grad_product(
energy_losses, spin_config_list, vqt_biases, vqt_weights)
weights_grad = tf.subtract(weighted_weights_grad, factored_weights_grad)
# Apply the gradients
optimizer.apply_gradients(
zip([vqt_model_gradients[0], weights_grad, biases_grad],
[vqt_model_params, vqt_weights, vqt_biases]))
# Sample pure states to build the current estimate of the density matrix
many_states = tfq.layers.State()(
tiled_visible_state,
symbol_names=vqt_symbol_names,
symbol_values=tf.concat([bitstring_tensor, tiled_vqt_model_params], 1)
)
vqt_density_matrix_history.append(pure_state_list_to_density_matrix(many_states))
# Record the history
vqt_loss_history.append(
(energy_losses_avg - boltzmann_entropy(vqt_biases, vqt_weights)).numpy())
vqt_fidelity_history.append(
fidelity(vqt_density_matrix_history[-1], exact_thermal_state))
vqt_model_params_history.append(vqt_model_params.numpy())
vqt_weights_history.append(vqt_weights.numpy())
vqt_bias_history.append(vqt_biases.numpy())
print("Current loss:")
print(vqt_loss_history[-1])
print("Current fidelity to optimal state:")
print(vqt_fidelity_history[-1])
print("Current estimated density matrix:")
plt.figure()
seaborn.heatmap(abs(vqt_density_matrix_history[-1]))
plt.show()
vqt_train_step.initial_state = [1]*len(bit_symbols)
def vqt_train(epochs):
for epoch in range(epochs):
vqt_train_step()
print ('Epoch {} finished'.format(epoch + 1))
vqt_train(100)
plt.plot(vqt_loss_history)
plt.xlabel('Epoch #')
plt.ylabel('Loss [free energy]')
plt.figure()
plt.plot(vqt_fidelity_history)
plt.xlabel('Epoch #')
plt.ylabel('Fidelity with exact state')
###Output
_____no_output_____ |
interview-cake/temperature-tracker.ipynb | ###Markdown
[You decide to test if your oddly-mathematical heating company is fulfilling its All-Time Max, Min, Mean and Mode Temperature Guarantee™.](https://www.interviewcake.com/question/python/temperature-tracker)
###Code
import operator
class TempTracker:
def __init__(self):
self.min = None
self.max = None
self.mean = None
self.mode = None
self.mode_qnt = 0
self.sum = 0.0
self.qnt = 0
self.dict = {}
def insert(self, temperature):
# set min temperature
if self.min is None or self.min > temperature:
self.min = temperature
# set max temperature
if self.max is None or self.max < temperature:
self.max = temperature
# set mean temperature
self.qnt += 1
self.sum += temperature
self.mean = self.sum / self.qnt
# set mode temperature
if temperature in self.dict.keys():
self.dict[temperature] += 1
else:
self.dict[temperature] = 1
if self.mode_qnt < self.dict[temperature]:
self.mode_qnt = self.dict[temperature]
self.mode = temperature
def get_max(self):
return self.max
def get_min(self):
return self.min
def get_mean(self):
return self.mean
def get_mode(self):
return self.mode
tracker = TempTracker()
tracker.insert(1)
tracker.insert(2)
tracker.insert(1)
tracker.insert(3)
tracker.get_min()
tracker.get_max()
tracker.get_mode()
tracker.get_mean()
###Output
_____no_output_____ |
machine-learning-notebooks/transfer-learning-custom-azureml/4.Convert_to_OpenVINO.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. Licensed under the MIT License. 4. OpenVINO Conversion IMPORTANT: The conversion command in this notebook must be run inside Intel's OpenVINO Toolkit Docker container. In this notebook we will:- Convert the TensorFlow model to the OpenVINO format Prerequisites- A trained TensorFlow model (frozen graph format) downloaded from the experiment run in `3.Train_with_AzureML.ipynb` More information on OpenVINO toolkit installation can be found at [install and set up the OpenVINO Toolkit](https://docs.openvinotoolkit.org/latest/installation_guides.html). Convert the model from frozen graph to an intermediate representation and then to a blob format The following script, which runs inside the Docker container, converts the TensorFlow frozen graph to the OpenVINO IR format and then compiles it to a `.blob`.
###Code
%%writefile experiment_outputs/compile.sh
#!/bin/bash
# OpenVINO compilation script
cd experiment_outputs
source /opt/intel/openvino_2021/bin/setupvars.sh
python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--tensorflow_object_detection_api_pipeline_config ../project_files/ssdlite_mobilenet_retrained.config \
--transformations_config \
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
--reverse_input_channels > openvino_log1.txt
/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64/myriad_compile -m \
frozen_inference_graph.xml \
-o ssdlite_mobilenet_v2.blob \
-VPU_NUMBER_OF_SHAVES 8 \
-VPU_NUMBER_OF_CMX_SLICES 8 -ip U8 -op FP32 > openvino_log2.txt
###Output
_____no_output_____
###Markdown
Here we run the script inside the OpenVINO Docker image, which provides the model optimizer and the Myriad compiler. You may need to replace `$(pwd)` with your current working directory path. The `xLinkUsb` error in the log files is expected and can be ignored.
###Code
! docker run --rm --privileged -v $(pwd):/working -w /working openvino/ubuntu18_dev:2021.1 bash experiment_outputs/compile.sh
###Output
_____no_output_____ |