notebooks/community/managed_notebooks/subscriber_churn_prediction/telecom-subscriber-churn-prediction.ipynb
###Markdown Telecom subscriber churn prediction on Vertex AI Table of contents* [Overview](section-1)* [Dataset](section-2)* [Objective](section-3)* [Costs](section-4)* [Perform EDA](section-5)* [Train a logistic regression model using scikit-learn](section-6)* [Evaluate the trained model](section-7)* [Save the model to a Cloud Storage path](section-8)* [Create a model with Explainable AI support in Vertex AI](section-9)* [Get explanations from the model](section-10)* [Clean up](section-11) Overview This example demonstrates building a subscriber churn prediction model on a [telecom customer churn dataset](https://www.kaggle.com/c/customer-churn-prediction-2020/overview). The generated churn model is further deployed to Vertex AI Endpoints and explanations are generated using the Explainable AI feature of Vertex AI. *Note: This notebook file was designed to run in a [Vertex AI Workbench managed notebooks](https://cloud.google.com/vertex-ai/docs/workbench/managed/create-instance) instance using the `Python (Local)` kernel. Some components of this notebook may not work in other notebook environments.* Dataset The dataset used in this tutorial is publicly available at Kaggle. See [Customer Churn Prediction 2020](https://www.kaggle.com/c/customer-churn-prediction-2020/data). Objective This tutorial shows you how to do exploratory data analysis, preprocess data, and train a churn prediction model on a tabular churn dataset. The steps include the following:- Load data from a Cloud Storage path- Perform exploratory data analysis (EDA)- Preprocess the data- Train a scikit-learn model- Evaluate the scikit-learn model- Save the model to a Cloud Storage path- Create a model and an endpoint in Vertex AI- Deploy the trained model to an endpoint- Generate predictions and explanations on test data from the hosted model- Undeploy the model resource Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud Storage Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Installation ###Code import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") USER_FLAG = "" # Google Cloud Notebook requires dependencies to be installed with '--user' if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" ###Output _____no_output_____ ###Markdown Install the latest version of the Vertex AI client library. Run the following command in your virtual environment to install the Vertex SDK for Python: ###Code ! pip install {USER_FLAG} --upgrade google-cloud-aiplatform ###Output _____no_output_____ ###Markdown Install the Cloud Storage library: ###Code ! pip install {USER_FLAG} --upgrade google-cloud-storage ###Output _____no_output_____ ###Markdown Install the `category_encoders` library: ###Code ! pip install --upgrade category_encoders ###Output _____no_output_____ ###Markdown Install the `seaborn` library for the EDA step. If a Vertex AI Workbench managed notebooks instance is being used, this step is optional as the library is already available in the `Python (Local)` kernel. ###Code ! pip install --upgrade seaborn ###Output _____no_output_____ ###Markdown Before you begin Set up your Google Cloud project **The following steps are required, regardless of your notebook environment.** 1. 
[Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID **If you don't know your project ID**, you may be able to get your project ID using `gcloud`. ###Code PROJECT_ID = "" # Get your Google Cloud project ID from gcloud if not os.getenv("IS_TESTING"): shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) ###Output _____no_output_____ ###Markdown Otherwise, set your project ID here. ###Code if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} ###Output _____no_output_____ ###Markdown Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. ###Code from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") ###Output _____no_output_____ ###Markdown Create a Cloud Storage bucket **The following steps are required, regardless of your notebook environment.** When you create a model in Vertex AI using the Cloud SDK, you give a Cloud Storage path where the trained model is saved. In this tutorial, Vertex AI saves the trained model to a Cloud Storage bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may not use a Multi-Regional Storage bucket for training with Vertex AI. ###Code BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} REGION = "[your-region]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP ###Output _____no_output_____ ###Markdown **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. ###Code ! gsutil mb -l $REGION $BUCKET_NAME ###Output _____no_output_____ ###Markdown Finally, validate access to your Cloud Storage bucket by examining its contents: ###Code ! 
gsutil ls -al $BUCKET_NAME ###Output _____no_output_____ ###Markdown Tutorial Import required libraries ###Code import matplotlib.pyplot as plt import pandas as pd %matplotlib inline # suppress warnings so that they don't clutter the output import warnings import category_encoders as ce import joblib import seaborn as sns from google.cloud import aiplatform, storage from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix, plot_roc_curve from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler warnings.filterwarnings("ignore") ###Output _____no_output_____ ###Markdown Load data from a Cloud Storage path using Pandas ###Code df = pd.read_csv( "gs://cloud-samples-data/vertex-ai/managed_notebooks/telecom_churn_prediction/train.csv" ) print(df.shape) df.head() ###Output _____no_output_____ ###Markdown Perform EDA Check the data types and null counts of the fields. ###Code df.info() ###Output _____no_output_____ ###Markdown The current dataset doesn't have any null or empty fields in it. Check the class imbalance. ###Code df["churn"].value_counts(normalize=True) ###Output _____no_output_____ ###Markdown There are 14% churners in the data, which is not bad for training a churn prediction model. If the class imbalance seems high, oversampling or undersampling techniques can be considered to balance the class distribution. Separate the categorical and numerical columns. ###Code categ_cols = ["state", "area_code", "international_plan", "voice_mail_plan"] target = "churn" num_cols = [i for i in df.columns if i not in categ_cols and i != target] print(len(categ_cols), len(num_cols)) ###Output _____no_output_____ ###Markdown Plot the level distribution for the categorical columns. ###Code for i in categ_cols: df[i].value_counts().plot(kind="bar") plt.title(i) plt.show() print(num_cols) df["number_vmail_messages"].describe() ###Output _____no_output_____ ###Markdown Check the distributions for the numerical columns. ###Code for i in num_cols: # show each field's distribution as a box plot and a histogram _, ax = plt.subplots(1, 2, figsize=(10, 4)) df[i].plot(kind="box", ax=ax[0]) df[i].plot(kind="hist", ax=ax[1]) plt.title(i) plt.show() # check pairplots for selected features selected_features = [ "total_day_calls", "total_eve_calls", "number_customer_service_calls", "number_vmail_messages", "account_length", "total_day_charge", "total_eve_charge", ] sns.pairplot(df[selected_features]) plt.show() ###Output _____no_output_____ ###Markdown Plot a heat map of the correlation matrix for the numerical features. ###Code plt.figure(figsize=(12, 10)) sns.heatmap(df[num_cols].corr(), annot=True) plt.show() ###Output _____no_output_____ ###Markdown Observations from EDA- There are many levels/categories in the categorical field `state`. In further steps, creating one-hot encoding vectors for this field would increase the number of columns drastically, so a binary encoding technique will be used for this field instead.- Only 9% of the customers in the data have international plans.- Only a few customers make frequent calls to customer service.- Only 25% of the customers have 16 or more voicemail messages, so the distribution of the `number_vmail_messages` field is skewed.- Most feature combinations in the pair plot show a circular pattern, which suggests that there is almost no correlation between the corresponding two features.- There is a high correlation between the minutes and charge features. 
Either one of them can be dropped to avoid multicollinearity and redundant features in the data. Preprocess the data Drop the fields corresponding to the highly-correlated features. ###Code drop_cols = [ "total_day_charge", "total_eve_charge", "total_night_charge", "total_intl_charge", ] df.drop(columns=drop_cols, inplace=True) num_cols = list(set(num_cols).difference(set(drop_cols))) df.shape ###Output _____no_output_____ ###Markdown Binary encode the state feature (as there are many levels/categories). ###Code encoder = ce.BinaryEncoder(cols=["state"], return_df=True) data_encoded = encoder.fit_transform(df) data_encoded.head() ###Output _____no_output_____ ###Markdown One-hot encode (drop the first level-column to avoid dummy-variable trap scenarios) the remaining categorical variables. ###Code def encode_cols(data, col): # Create dummy variables for the given column, dropping the first level categ = pd.get_dummies(data[col], prefix=col, drop_first=True) # Add the results to the master dataframe data = pd.concat([data, categ], axis=1) return data for i in categ_cols + [target]: if i != "state": data_encoded = encode_cols(data_encoded, i) data_encoded.drop(columns=[i], inplace=True) data_encoded.shape ###Output _____no_output_____ ###Markdown Check the data. ###Code data_encoded.head() ###Output _____no_output_____ ###Markdown Check the columns. ###Code data_encoded.columns ###Output _____no_output_____ ###Markdown Split the data into train and test sets. ###Code X = data_encoded[[i for i in data_encoded.columns if i not in ["churn_yes"]]].copy() y = data_encoded["churn_yes"].copy() X_train, X_test, y_train, y_test = train_test_split( X, y, train_size=0.7, test_size=0.3, random_state=100 ) print(X_train.shape, X_test.shape) ###Output _____no_output_____ ###Markdown Scale the numerical data using `MinMaxScaler`. ###Code sc = MinMaxScaler() X_train.loc[:, num_cols] = sc.fit_transform(X_train[num_cols]) X_test.loc[:, num_cols] = sc.transform(X_test[num_cols]) ###Output _____no_output_____ ###Markdown Train a logistic regression model using scikit-learn Setting `class_weight="balanced"` adjusts the class weights inversely proportional to the class frequencies in the target, which compensates for the class imbalance. ###Code model = LogisticRegression(class_weight="balanced") model = model.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Evaluate the trained model Plot the ROC and show AUC on train and test sets Plot the ROC for the model on train data. ###Code plot_roc_curve(model, X_train, y_train, drop_intermediate=False) plt.show() # plot the ROC for the model on test data plot_roc_curve(model, X_test, y_test, drop_intermediate=False) plt.show() ###Output _____no_output_____ ###Markdown Determine the optimal threshold for the binary classification In general, the logistic regression model outputs probability scores between 0 and 1, and a threshold needs to be determined to assign a class label. Depending on the sensitivity (true-positive rate) and specificity (true-negative rate) of the model, an optimal threshold can be determined. Create columns with 10 different probability cutoffs. ###Code y_train_pred = model.predict_proba(X_train)[:, 1] numbers = [float(x) / 10 for x in range(10)] y_train_pred_df = pd.DataFrame({"true": y_train, "pred": y_train_pred}) for i in numbers: y_train_pred_df[i] = y_train_pred_df.pred.map(lambda x: 1 if x > i else 0) ###Output _____no_output_____ ###Markdown Now calculate accuracy, sensitivity, and specificity for various probability cutoffs. 
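For reference, with TP, TN, FP, and FN denoting the true-positive, true-negative, false-positive, and false-negative counts from the confusion matrix, the quantities computed in the next cell are $accuracy = (TP + TN) / (TP + TN + FP + FN)$, $sensitivity = TP / (TP + FN)$, and $specificity = TN / (TN + FP)$.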
###Code cutoff_df = pd.DataFrame(columns=["prob", "accuracy", "sensitivity", "specificity"]) # compute the metrics for each threshold considered for i in numbers: cm1 = confusion_matrix(y_train_pred_df.true, y_train_pred_df[i]) total1 = sum(sum(cm1)) accuracy = (cm1[0, 0] + cm1[1, 1]) / total1 speci = cm1[0, 0] / (cm1[0, 0] + cm1[0, 1]) sensi = cm1[1, 1] / (cm1[1, 0] + cm1[1, 1]) cutoff_df.loc[i] = [i, accuracy, sensi, speci] # plot accuracy, sensitivity, and specificity for the various probability cutoffs cutoff_df.plot.line(x="prob", y=["accuracy", "sensitivity", "specificity"]) plt.title("Comparison of performance across various thresholds") plt.show() ###Output _____no_output_____ ###Markdown In general, a model with balanced sensitivity and specificity is preferred. In the current case, the threshold where the sensitivity and specificity curves intersect can be considered an optimal threshold. ###Code threshold = 0.5 # Evaluate train and test sets y_test_pred = model.predict_proba(X_test)[:, 1] # to get the performance stats, let's define a handy function def print_stats(y_true, y_pred): # Confusion matrix confusion = confusion_matrix(y_true=y_true, y_pred=y_pred) print("Confusion Matrix: ") print(confusion) TP = confusion[1, 1] # true positives TN = confusion[0, 0] # true negatives FP = confusion[0, 1] # false positives FN = confusion[1, 0] # false negatives # Let's see the sensitivity or recall of our logistic regression model sensitivity = TP / float(TP + FN) print("sensitivity = ", sensitivity) # Let us calculate specificity specificity = TN / float(TN + FP) print("specificity = ", specificity) # Calculate the false positive rate - predicting churn when the customer didn't churn fpr = FP / float(TN + FP) print("False positive rate = ", fpr) # positive predictive value precision = TP / float(TP + FP) print("precision = ", precision) # accuracy accuracy = (TP + TN) / (TP + TN + FP + FN) print("accuracy = ", accuracy) return y_train_pred_sm = [1 if i > threshold else 0 for i in y_train_pred] y_test_pred_sm = [1 if i > threshold else 0 for i in y_test_pred] # Print the metrics for the model # on train data print("Train Data : ") print_stats(y_train, y_train_pred_sm) print("\n", "*" * 30, "\n") # on test data print("Test Data : ") print_stats(y_test, y_test_pred_sm) ###Output _____no_output_____ ###Markdown While the model's sensitivity and specificity look decent, the precision is low. This may be acceptable to some extent because, from a business standpoint in the telecom industry, it still makes sense to identify churners even if some non-churners are misclassified as churners. Save the model to a Cloud Storage path Save the trained model to a local file `model.joblib`. ###Code FILE_NAME = "model.joblib" joblib.dump(model, FILE_NAME) # Upload the saved model file to Cloud Storage BLOB_PATH = ( "[your-blob-path]" # leave blank if no folders inside the bucket are needed. ) BLOB_NAME = BLOB_PATH + FILE_NAME # the storage client expects the bare bucket name, without the gs:// prefix bucket = storage.Client().bucket(BUCKET_NAME.replace("gs://", "")) blob = bucket.blob(BLOB_NAME) blob.upload_from_filename(FILE_NAME) ###Output _____no_output_____ ###Markdown Create a model with Explainable AI support in Vertex AI Before creating a model, configure the explanations for the model. For further details, see [Configuring explanations in Vertex AI](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations#scikit-learn-and-xgboost-pre-built-containers). 
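The `gcloud` alternative shown later in this section reads the same configuration from an `explanation-metadata.json` file; a minimal sketch of that file, mirroring the `exp_metadata` dictionary defined in the next cell (the input and output names are arbitrary placeholders), might look like this:```{ "inputs": {"Inp_feature": {}}, "outputs": {"Model_output": {}}}```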
###Code MODEL_DISPLAY_NAME = "[your-model-display-name]" # BUCKET_NAME already carries the gs:// prefix ARTIFACT_GCS_PATH = f"{BUCKET_NAME}/{BLOB_PATH}" PROJECT = "[your-project-id]" LOCATION = REGION # Feature-name(Inp_feature) and Output-name(Model_output) can be arbitrary exp_metadata = {"inputs": {"Inp_feature": {}}, "outputs": {"Model_output": {}}} # Create a Vertex AI model resource with support for explanations aiplatform.init(project=PROJECT, location=LOCATION) explanation_parameters = {"sampledShapleyAttribution": {"pathCount": 25}} model = aiplatform.Model.upload( display_name=MODEL_DISPLAY_NAME, artifact_uri=ARTIFACT_GCS_PATH, serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest", explanation_metadata=exp_metadata, explanation_parameters=explanation_parameters, ) model.wait() print(model.display_name) print(model.resource_name) ###Output _____no_output_____ ###Markdown Alternatively, the following `gcloud` command can be used to create the model resource. The `explanation-metadata.json` file consists of the metadata that is used to configure explanations for the model resource.```gcloud beta ai models upload \ --region=$REGION \ --display-name=$MODEL_DISPLAY_NAME \ --container-image-uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest" \ --artifact-uri=$ARTIFACT_GCS_PATH \ --explanation-method=sampled-shapley \ --explanation-path-count=25 \ --explanation-metadata-file=explanation-metadata.json``` Create an endpoint ###Code ENDPOINT_DISPLAY_NAME = "[your-endpoint-display-name]" endpoint = aiplatform.Endpoint.create( display_name=ENDPOINT_DISPLAY_NAME, project=PROJECT, location=LOCATION ) print(endpoint.display_name) print(endpoint.resource_name) ###Output _____no_output_____ ###Markdown Save the endpoint ID after the endpoint is created. ###Code ENDPOINT_ID = "[your-endpoint-id]" ###Output _____no_output_____ ###Markdown Deploy the model to the created endpoint Configure the deployment name, machine type, and other parameters for the deployment. ###Code DEPLOYED_MODEL_NAME = "[deployment-model-name]" MACHINE_TYPE = "n1-standard-4" # deploy the model to the endpoint model.deploy( endpoint=endpoint, deployed_model_display_name=DEPLOYED_MODEL_NAME, machine_type=MACHINE_TYPE, ) model.wait() print(model.display_name) print(model.resource_name) ###Output _____no_output_____ ###Markdown Save the ID of the deployed model. The ID of the deployed model can also be checked using the `endpoint.list_models()` method. ###Code DEPLOYED_MODEL_ID = "[your-deployed-model-id]" ###Output _____no_output_____ ###Markdown Get explanations from the deployed model Get explanations for some test instances from the hosted model. 
###Code # format the top 2 test instances as the request payload # (the SDK's explain method takes a plain list of instances) test_json = [X_test.iloc[0].tolist(), X_test.iloc[1].tolist()] ###Output _____no_output_____ ###Markdown Get explanations and plot the feature attributions ###Code features = X_train.columns.to_list() def plot_attributions(attrs): """ Function to plot the features and their attributions for an instance """ rows = {"feature_name": [], "attribution": []} for i, val in enumerate(features): rows["feature_name"].append(val) rows["attribution"].append(attrs["Inp_feature"][i]) attr_df = pd.DataFrame(rows).set_index("feature_name") attr_df.plot(kind="bar") plt.show() return def explain_tabular_sample( project: str, location: str, endpoint_id: str, instances: list ): """ Function to make an explanation request for the specified payload and generate feature attribution plots """ aiplatform.init(project=project, location=location) endpoint = aiplatform.Endpoint(endpoint_id) response = endpoint.explain(instances=instances) print("#" * 10 + "Explanations" + "#" * 10) for explanation in response.explanations: print(" explanation") # Feature attributions. attributions = explanation.attributions for attribution in attributions: print(" attribution") print(" baseline_output_value:", attribution.baseline_output_value) print(" instance_output_value:", attribution.instance_output_value) print(" output_display_name:", attribution.output_display_name) print(" approximation_error:", attribution.approximation_error) print(" output_name:", attribution.output_name) for output_index in attribution.output_index: print(" output_index:", output_index) plot_attributions(attribution.feature_attributions) print("#" * 10 + "Predictions" + "#" * 10) for prediction in response.predictions: print(prediction) return response test_json = [X_test.iloc[0].tolist(), X_test.iloc[1].tolist()] prediction = explain_tabular_sample(PROJECT, LOCATION, ENDPOINT_ID, test_json) ###Output _____no_output_____ ###Markdown Clean up To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial: ###Code # undeploy the model endpoint.undeploy(deployed_model_id=DEPLOYED_MODEL_ID) # delete the endpoint endpoint.delete() # delete the model model.delete() # remove the contents of the Cloud Storage bucket ! gsutil -m rm -r $BUCKET_NAME ###Output _____no_output_____
notebooks/1.0-vanilla-autoencoder.ipynb
###Markdown Vanilla Autoencoder Build a simple "vanilla" autoencoder that can be used on the fashion-mnist data. "Hands-On Machine Learning", by Aurelien Geron, is the basis for much of the code. https://github.com/ageron/handson-ml2 ###Code import numpy as np import datetime import matplotlib as mpl import matplotlib.pyplot as plt import pandas as pd import tensorflow as tf from tensorflow import keras import tensorboard print('TensorFlow version: ', tf.__version__) print('Keras version: ', keras.__version__) print('Tensorboard version:', tensorboard.__version__) %matplotlib inline ###Output TensorFlow version: 2.0.0 Keras version: 2.2.4-tf Tensorboard version: 2.0.0 ###Markdown Left align tables: ###Code %%html <style> table {float:left} </style> ###Output _____no_output_____ ###Markdown 1.0 Data Exploration Let's look at the fashion-MNIST data set, and make sure we understand it. ###Code # load fashion MNIST fashion_mnist = keras.datasets.fashion_mnist (X_train_all, y_train_all), (X_test, y_test) = fashion_mnist.load_data() # check the shape of the data sets print('X_train_full shape:', X_train_all.shape) print('y_train_full shape:', y_train_all.shape) print('X_test shape:', X_test.shape) print('y_test shape:', y_test.shape) # print off some y labels to check if it's already shuffled y_train_all[0:10] # to access, say, the first sample, you can index into the array as follows # show the shape of the first sample np.shape(X_train_all[0,:,:]) # show the sample sample_to_display = 0 fig, axes = plt.subplots(1, 1) axes.imshow(np.reshape(X_train_all[sample_to_display,:,:],[28,28]), cmap='Greys_r') axes.axis('off') plt.show() ###Output _____no_output_____ ###Markdown Each training and test example is assigned one of the following labels (from https://github.com/zalandoresearch/fashion-mnist):| Label | Description || :--- | :--- || 0 | T-shirt/top || 1 | Trouser || 2 | Pullover || 3 | Dress || 4 | Coat || 5 | Sandal || 6 | Shirt || 7 | Sneaker || 8 | Bag || 9 | Ankle boot | ###Code class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"] # let's visualize some of these # k - number of samples # w - width in pixels # h - height in pixels k, w, h = X_train_all.shape # Plot the first ten samples with their class labels fig, axes = plt.subplots(1, 10,figsize=(15,2.3),dpi=300) # fig.suptitle('Digits for Sample %i' %num, size=15, x=0.2) for i in range(0, 10): axes[i].imshow(np.reshape(X_train_all[i,:,:],[28,28]), cmap='Greys_r') axes[i].axis('off') axes[i].set_title(str(class_names[y_train_all[i]])+', '+str(y_train_all[i])) ###Output _____no_output_____ ###Markdown 2.0 Prepare Data ###Code # need to scale the data between 0 and 1 # find out what the min/max values are print('Max: ',X_train_all.max()) print('Min: ',X_train_all.min()) # split the data between train and validation sets, and scale X_valid, X_train = X_train_all[:5000] / 255.0, X_train_all[5000:] / 255.0 y_valid, y_train = y_train_all[:5000], y_train_all[5000:] # also scale the X_test X_test = X_test / 255.0 print('X_valid shape:', X_valid.shape) print('y_valid shape:', y_valid.shape) print('X_train shape:', X_train.shape) print('y_train shape:', y_train.shape) ###Output X_valid shape: (5000, 28, 28) y_valid shape: (5000,) X_train shape: (55000, 28, 28) y_train shape: (55000,) ###Markdown 3.0 Simple Sequential Model ###Code model = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(300, activation="relu"), keras.layers.Dense(100, activation="relu"), 
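    # 10-unit softmax output layer: one probability per fashion-MNIST class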
keras.layers.Dense(10, activation="softmax") ]) model.summary() model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"]) # create a name for the model so that we can track it in tensorboard log_dir="logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + "_ae_vanilla" # create tensorboard callback tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=0, update_freq='epoch',profile_batch=0) history = model.fit(X_train, y_train, epochs=30, verbose=1, validation_data=(X_valid, y_valid), callbacks=[tensorboard_callback]) # put history of training into a dataframe df_hist = pd.DataFrame(history.history) df_hist.plot(figsize=(8, 5)) # plot plt.grid(True) # apply grid plt.title('Training Parameters') # plot title plt.xlabel('Epoch') # x-axis label plt.show() # evaluate the model model.evaluate(X_test, y_test, verbose=0) ###Output _____no_output_____ ###Markdown 4.0 Vanilla Autoencoder Make a simple stacked autoencoder (3 hidden layers, 1 output layer) ###Code # build model # encoder stacked_encoder = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(100, activation="selu"), keras.layers.Dense(30, activation="selu"), ]) # decoder stacked_decoder = keras.models.Sequential([ keras.layers.Dense(100, activation="selu", input_shape=[30]), keras.layers.Dense(28 * 28, activation="sigmoid"), keras.layers.Reshape([28, 28]) ]) # combine encoder & decoder into one to make autoencoder stacked_ae = keras.models.Sequential([stacked_encoder, stacked_decoder]) # compile, and get summary stacked_ae.compile(loss="binary_crossentropy", optimizer=keras.optimizers.SGD(lr=1.5)) stacked_ae.summary() # fit model history = stacked_ae.fit(X_train, X_train, epochs=10, validation_data=[X_valid, X_valid]) def plot_reconstructions(model, index_list, X_valid): """Plot some original images, and their reconstructions Parameters =========== model : keras model Autoencoder model index_list : list List of indices. 
These indices correspond to the index of the X_valid images that will be shown X_valid : numpy array X_valid set """ reconstructions = model.predict(X_valid) # get the length of index_list to set number of # images to plot n_images = len(index_list) # Plot the originals (top row) and their reconstructions (bottom row) fig, axes = plt.subplots(2, n_images,figsize=(n_images*1.5,3),dpi=150) # fig.suptitle('Digits for Sample %i' %num, size=15, x=0.2) for i in range(0, n_images): axes[0][i].imshow(np.reshape(X_valid[index_list[i],:,:],[28,28]), cmap='Greys_r') axes[0][i].axis('off') axes[0][i].set_title(str(index_list[i])) axes[1][i].imshow(np.reshape(reconstructions[index_list[i],:,:],[28,28]), cmap='Greys_r') axes[1][i].axis('off') plt.show() # plot five randomly chosen items import random index_list = random.sample(range(0,len(X_valid)), 5) plot_reconstructions(stacked_ae, index_list, X_valid) ###Output _____no_output_____ ###Markdown 5.0 Visualize Results of Stacked Autoencoder Using T-SNE ###Code # code from https://github.com/ageron/handson-ml2/blob/master/17_autoencoders_and_gans.ipynb np.random.seed(63) from sklearn.manifold import TSNE X_valid_compressed = stacked_encoder.predict(X_valid) tsne = TSNE() X_valid_2D = tsne.fit_transform(X_valid_compressed) X_valid_2D = (X_valid_2D - X_valid_2D.min()) / (X_valid_2D.max() - X_valid_2D.min()) plt.scatter(X_valid_2D[:, 0], X_valid_2D[:, 1], c=y_valid, s=10, cmap="tab10") plt.axis("off") plt.show() # adapted from https://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html plt.figure(figsize=(10, 8)) cmap = plt.cm.tab10 plt.scatter(X_valid_2D[:, 0], X_valid_2D[:, 1], c=y_valid, s=10, cmap=cmap) image_positions = np.array([[1., 1.]]) for index, position in enumerate(X_valid_2D): dist = np.sum((position - image_positions) ** 2, axis=1) if np.min(dist) > 0.02: # if far enough from other images image_positions = np.r_[image_positions, [position]] imagebox = mpl.offsetbox.AnnotationBbox( mpl.offsetbox.OffsetImage(X_valid[index], cmap="binary"), position, bboxprops={"edgecolor": cmap(y_valid[index]), "lw": 2}) plt.gca().add_artist(imagebox) plt.axis("off") plt.show() ###Output _____no_output_____
notebooks/zoning.ipynb
###Markdown URL https://knoxgis.maps.arcgis.com/home/item.html?id=ca4ac10098dd4de995b16312c83665f4 Description The location and boundaries of the zoning districts established by the Code of Ordinances of Knoxville and Knox County, TN are shown and maintained by the Metropolitan Planning Commission under the direction of its Executive Director. The zoning GIS layer constitutes the City of Knoxville’s Official Zoning Map and is incorporated into, and the same is made a part of, the Code of Ordinances by reference. This data is updated monthly through actions of the Knox County Commission and the City of Knoxville. Check back frequently to download the latest data or consider using the REST service to gain access to the latest features. Fields - OBJECTID (alias: OBJECTID): Stable, unique value for each zoning district in a GUID format - ZONE1 (alias: ZONE1): Base zoning district code - ZONE2 (alias: ZONE2): Overlay district code - AREA_ACRES (alias: AREA_ACRES): Calculated acreage of a zoning district - HIGH_DENSITY (alias: HIGH_DENSITY): Maximum dwelling units per acre allowed in a zoning district - CONDITIONS (alias: CONDITIONS): MPC file number for a zoning district with specific conditions - FORM_DIST (alias: Form District): Name of form district - FORM_CORR (alias: Form Corridor): Name of form corridor - FORM_DESCR (alias: Form Description): Form district description - FORM_CODE_PDF (alias: Form Code PDF): URL to more information about a form district or corridor - ZONE_TYPE (alias: ZONE_TYPE): Type of zoning district (e.g. City of Knoxville, Knox County, Form District) ###Code import json import tempfile import requests import geopandas as gpd # gpd read_file requires a file, not a URL, so this is a hack... ## arcgis provides download links that are dynamic... why? 
So we will save file and use lfs to download # response = requests.get('https://ago-item-storage.s3-external-1.amazonaws.com/ca4ac10098dd4de995b16312c83665f4/Knoxville-Knox_County_Zoning.geojson?X-Amz-Security-Token=FQoDYXdzEO3%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDF3dK5lyT8t%2BDhV8SSK3A9I%2B0lFJLORN8Ds36P4shkRQYIn7iCMb9JiiBVVnzlzrPo8%2FG1K72RE0zCguK22hvZdUoMYlF4jHNad1soJTXxmKBZDdxbHgwkK051CIzI3I9VA3gDs0TyyZcaPz7g%2BWX7LxLZZ575gqipOxOVSrxKK6kxPQeFs2Dimsk6aMcoBVywHDp4ZJReDihXVhA3NlZn0kU6DfMUTLBCHRTRkPUeM5x6rTNDAa4YNFcNliYMTaRxrp%2BqqNaVYhkW6hCfteZOYhDUBGP5sRHoWGD8jC1vmosvEn0uv9JPATGsvbyFd%2FgTOfPdhEku0jIWwNsKjL0u4iFjoq%2FSDYTG8Br5k6cWNecE4pgR3DOSak977cQUAtOE8CuhgyMkjW7MQTSfGsc4HXcnbHFqVb2xTVjZr5G2TZdj37ZNZjEc287kxgz2Z609YVrbI4lGr%2BSMwVBIbRtJFDRPmil%2FvAfEW6Tl%2FMttPNyH0k2gpPAs6FXK9fk0QBhG%2BgO%2FLt5DqeNQc%2B%2B3SSlVXSOzJL0tmnVVGj%2B7sGWytlzoLoxOw9W7k9k2ad%2F31SKsATTXRqX7AAJI1VGey%2BuRs4ofxyqlco5MHY2QU%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20180629T134706Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Credential=ASIAINEFONIE23UY6VOQ%2F20180629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=17cde7bceaa920375f413683ebe7da5c4f9e461b86aaeae3c12d27fffd482232') # with open('../data/zoning/zoning.geojson', 'wb') as f: # f.write(response.content) # zoning = gpd.read_file('../data/zoning/zoning.geojson') response = requests.get('https://gitlab.com/costrouc/knoxville-opendata-notebooks/raw/master/data/zoning/zoning.geojson') with tempfile.NamedTemporaryFile() as f: f.write(response.content) zoning = gpd.read_file(f.name) # knoxville_bnd = gpd.GeoDataFrame.from_file('../data/knoxville_boundary.geojson') response = requests.get('https://gitlab.com/costrouc/knoxville-opendata-notebooks/raw/master/data/knoxville_boundary.geojson') with tempfile.NamedTemporaryFile() as f: f.write(response.content) knoxville_bnd = gpd.read_file(f.name) zoning['simple_zone'] = zoning['ZONE1'].apply(lambda z: z.split('-')[0]) # strip off - to make easier to plot (still too many fields) import matplotlib.pyplot as plt fig, ax = plt.subplots() knoxville_bnd.plot(ax=ax, color='white', edgecolor='black') ax = zoning.plot(ax=ax, column='ZONE1', markersize=5) # , legend=True) fig.set_size_inches((20, 10)) ax.set_aspect('equal') ax.axis('off') fig.savefig('../images/zoning-colors.png', transparent=True) zoning.info() zoning.sample(5) ###Output _____no_output_____ ###Markdown How many acres per zone type? ###Code ECKERT_IV_PROJ_STRING = "+proj=eck4 +lon_0=0 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs" zoning_eckert = zoning.to_crs(ECKERT_IV_PROJ_STRING) zoning_eckert['area_m2'] = zoning_eckert.geometry.area print('square miles', zoning_eckert.groupby('ZONE1').area_m2.sum().sort_values(ascending=False) / 1e6 * 0.6213712**2) print('acres', zoning_eckert.groupby('ZONE1').area_m2.sum().sort_values(ascending=False) * 0.0002471052) ###Output acres ZONE1 A 179577.627251 PR 23710.807273 R-1 20656.353537 RA 18208.090482 F 9214.604380 I 8747.688615 RB 7701.708341 RP-1 4797.685851 F-1 4218.874383 CA 4170.302980 R-2 4017.499416 OS-1 3722.899798 A-1 3506.898689 R-1A 3110.844892 C-3 2854.314165 I-3 2714.565964 C-6 2292.387136 C-4 2097.315973 CB 1977.848202 I-4 1935.049320 EN-1 1709.903358 PC 1636.031012 RAE 1514.202168 O-1 1256.187816 R-1E 1221.889895 OS-2 1206.842531 BP 1137.904824 I-2 1007.947057 O-2 919.258548 OB 804.930164 ... 
E 451.379321 EC 431.204283 PC-2 399.806637 SC-3 379.509035 FD 340.001734 C-2 337.793579 LI 334.098275 SC 256.540864 R-3 245.940034 O-3 221.726694 BP-1 217.425767 C-1 210.642207 EN-2 197.192966 OA 113.781430 HZ 113.395803 SC-1 113.150750 RP-2 100.979158 TC-1 100.763838 C-5 89.782597 SC-2 78.604123 CN 59.657542 I-1 57.143135 CR 42.666132 TND-1 39.700577 T 35.330377 RP-3 33.537112 CH 32.341463 OC 17.560784 R-4 4.544334 H-1 4.523470 Name: area_m2, Length: 62, dtype: float64
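 ###Markdown As a quick sanity check on the conversion factors used above (an illustrative aside, not part of the original analysis): the code converts m² to mi² via 1 km = 0.6213712 mi, and m² to acres via 1 m² = 0.0002471052 acres. ###Code # verify both factors on one square kilometre (1e6 m^2)
sq_m = 1e6
print(sq_m / 1e6 * 0.6213712**2)  # ~0.3861 square miles per km^2
print(sq_m * 0.0002471052)        # ~247.1 acres per km^2 ###Output _____no_output_____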
src/Chapter8.ipynb
###Markdown Examples for Chapter 8 ###Code import warnings # these are innocuous but irritating warnings.filterwarnings("ignore", message="numpy.dtype size changed") warnings.filterwarnings("ignore", message="numpy.ufunc size changed") %matplotlib inline ###Output _____no_output_____ ###Markdown Algorithms for simple cost functions K-means clustering ###Code run scripts/kmeans -p [1,2,3,4] -k 8 imagery/AST_20070501_pca.tif run scripts/dispms -f imagery/AST_20070501_pca_kmeans.tif -c \ #-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_1.eps' ###Output _____no_output_____ ###Markdown K-means on GEE ###Code import ee from ipyleaflet import (Map,DrawControl,TileLayer) ee.Initialize() image = ee.Image('users/mortcanty/supervisedclassification/AST_20070501_pca').select(0,1,2,3) region = image.geometry() training = image.sample(region=region,scale=15,numPixels=100000) clusterer = ee.Clusterer.wekaKMeans(8) trained = clusterer.train(training) clustered = image.cluster(trained) # function for overlaying tiles onto a map def GetTileLayerUrl(ee_image_object): map_id = ee.Image(ee_image_object).getMapId() tile_url_template = "https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}" return tile_url_template.format(**map_id) # display the default base map and overlay the clustered image center = list(reversed(region.centroid().getInfo()['coordinates'])) m = Map(center=center, zoom=11) jet = 'black,blue,cyan,yellow,red' m.add_layer(TileLayer(url=GetTileLayerUrl( clustered.select('cluster').visualize(min=0, max=6, palette= jet, opacity = 1.0) ) )) m ###Output _____no_output_____ ###Markdown K-means with Tensorflow ###Code import os import numpy as np import tensorflow as tf from osgeo import gdal from osgeo.gdalconst import GA_ReadOnly,GDT_Byte tf.logging.set_verbosity('ERROR') # read image data infile = 'imagery/AST_20070501_pca.tif' pos = [1,2,3,4] gdal.AllRegister() inDataset = gdal.Open(infile,GA_ReadOnly) cols = inDataset.RasterXSize rows = inDataset.RasterYSize bands = inDataset.RasterCount if pos is not None: bands = len(pos) else: pos = range(1,bands+1) G = np.zeros((cols*rows,bands)) k = 0 for b in pos: band = inDataset.GetRasterBand(b) band = band.ReadAsArray(0,0,cols,rows) G[:,k] = np.ravel(band) k += 1 inDataset = None # define an input function def input_fn(): return tf.train.limit_epochs( tf.convert_to_tensor(G, dtype=tf.float32), num_epochs=1) num_iterations = 10 num_clusters = 8 # create K-means clusterer kmeans = tf.contrib.factorization.KMeansClustering( num_clusters=num_clusters, use_mini_batch=False) # train it for _ in xrange(num_iterations): kmeans.train(input_fn) print 'score: %f'%kmeans.score(input_fn) # map the input points to their clusters labels = np.array( list(kmeans.predict_cluster_index(input_fn))) # write to disk path = os.path.dirname(infile) basename = os.path.basename(infile) root, ext = os.path.splitext(basename) outfile = path+'/'+root+'_kmeans'+ext driver = gdal.GetDriverByName('GTiff') outDataset = driver.Create(outfile,cols,rows,1,GDT_Byte) outBand = outDataset.GetRasterBand(1) outBand.WriteArray(np.reshape(labels,(rows,cols)),0,0) outBand.FlushCache() outDataset = None print 'result written to: '+outfile run scripts/dispms -f imagery/AST_20070501_pca_kmeans.tif -c ###Output _____no_output_____ ###Markdown Kernel K-means clustering ###Code run scripts/kkmeans -p [1,2,3,4] -n 1 -k 8 imagery/AST_20070501_pca.tif %run scripts/dispms -f imagery/AST_20070501_pca_kkmeans.tif -c \ #-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_2.eps' 
###Output _____no_output_____ ###Markdown Extended K-means clustering ###Code run scripts/ekmeans -b 1 imagery/AST_20070501_pca.tif run scripts/dispms -f imagery/AST_20070501_pca_ekmeans.tif -c \ #-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_3.eps' ###Output _____no_output_____ ###Markdown Agglomerative hierarchical clustering ###Code run scripts/hcl -h run scripts/hcl -p [1,2,3,4] -k 8 -s 2000 imagery/AST_20070501_pca.tif run scripts/dispms -f imagery/may0107pca_hcl.tif -c ###Output _____no_output_____ ###Markdown Gaussian mixture clustering ###Code run scripts/em -h run scripts/em -p [1,2,3,4] -K 8 imagery/AST_20070501_pca.tif run scripts/dispms -f imagery/AST_20070501_pca_em.tif -c -d [0,0,400,400] \ #-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_5.eps' ###Output _____no_output_____ ###Markdown Benchmark ###Code from osgeo.gdalconst import GDT_Float32 image = np.zeros((800,800,3)) b = 2.0 image[99:699 ,299:499 ,:] = b image[299:499 ,99:699 ,:] = b image[299:499 ,299:499 ,:] = 2*b n1 = np.random.randn(800,800) n2 = np.random.randn(800,800) n3 = np.random.randn(800,800) image[:,:,0] += n1 image[:,:,1] += n2+n1 image[:,:,2] += n3+n1/2+n2/2 driver = gdal.GetDriverByName('GTiff') outDataset = driver.Create('imagery/toy.tif', 800,800,3,GDT_Float32) for k in range(3): outBand = outDataset.GetRasterBand(k+1) outBand.WriteArray(image[:,:,k],0,0) outBand.FlushCache() outDataset = None run scripts/dispms -f 'imagery/toy.tif' -e 3 -p [1,2,3] run scripts/ex3_2 imagery/toy.tif run scripts/hcl -k 3 -s 2000 imagery/toy.tif run scripts/em -K 3 -s 1.0 imagery/toy.tif run scripts/dispms -f imagery/toy_em.tif -c -F imagery/toy_hcl.tif -C ###Output _____no_output_____ ###Markdown Kohonen SOM ###Code run scripts/som -c 6 imagery/AST_20070501 run scripts/dispms -f imagery/AST_20070501_som -e 4 -p [1,2,3] -d [0,0,400,400] \ #-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_9.eps' ###Output _____no_output_____ ###Markdown Mean shift segmentation ###Code run scripts/dispms -f imagery/AST_20070501_pca.tif -p [1,2,3] -e 4 -d [300,450,400,400] run scripts/meanshift -p [1,2,3,4] -d [500,450,200,200] -s 15 -r 30 -m 10 imagery/AST_20070501_pca.tif run scripts/dispms -f imagery/AST_20070501_pca_meanshift.tif -p [1,2,3] -e 4 \ -F imagery/AST_20070501_pca.tif -P [1,2,3] -E 4 -D [500,450,200,200] \ #-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_10.eps' run scripts/dispms -f imagery/AST_20070501_pca_meanshift.tif -p [1,2,3] -e 3 \ -F imagery/AST_20070501_pca_meanshift.tif -P [6,6,6] -E 3 -o 0.4 ###Output _____no_output_____ ###Markdown Toy image for Exercise 2 ###Code from osgeo.gdalconst import GDT_Float32 import numpy as np import gdal image = np.zeros((400,400,2)) n = np.random.randn(400,400) n1 = 8*np.random.rand(400,400)-4 image[:,:,0] = n1+8 image[:,:,1] = n1**2+0.3*np.random.randn(400,400)+8 image[:200,:,0] = np.random.randn(200,400)/2+8 image[:200,:,1] = np.random.randn(200,400)+14 driver = gdal.GetDriverByName('GTIFF') outDataset = driver.Create('imagery/toy.tif',400,400,3,GDT_Float32) for k in range(2): outBand = outDataset.GetRasterBand(k+1) outBand.WriteArray(image[:,:,k],0,0) outBand.FlushCache() outDataset = None run scripts/scatterplot -s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_11.eps' imagery/toy.tif imagery/toy.tif 1 2 ###Output _____no_output_____
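 ###Markdown The clustering runs in this chapter all go through wrapper scripts, so the core update rule stays hidden. As a closing illustration, here is a minimal NumPy sketch of a generic Lloyd-style K-means iteration; it is not the book scripts' actual implementation, just the textbook algorithm they are built around. ###Code import numpy as np

def kmeans_lloyd(X, k, n_iter=10):
    # X: (n_samples, n_features) array; returns (labels, centroids)
    idx = np.random.choice(len(X), k, replace=False)
    centroids = X[idx].astype(float)
    for _ in range(n_iter):
        # assignment step: label each sample with its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: move each centroid to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids ###Output _____no_output_____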
vmfiles/IPNB/Examples/b Graphics/40 Cartopy.ipynb
###Markdown Cartopy [Cartopy](https://scitools.org.uk/cartopy/docs/latest/) is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses. We test here a few [map examples](https://scitools.org.uk/cartopy/docs/latest/matplotlib/intro.html) using cartopy. ###Code %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (16, 10) import cartopy.crs as ccrs ###Output _____no_output_____ ###Markdown There is a list of the [available map projections](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html#cartopy-projections) in Cartopy. ###Code # Set the projection to use ax = plt.axes(projection=ccrs.PlateCarree()) # Draw coastlines ax.coastlines(); ax = plt.axes(projection=ccrs.Mollweide()) # Add a land image ax.stock_img(); ###Output _____no_output_____ ###Markdown Examples This has been taken from the [gallery](http://scitools.org.uk/cartopy/docs/latest/gallery/index.html). ###Code fig = plt.figure(figsize=(16, 10)) # Set the projection to use ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson()) # make the map global rather than have it zoom in to # the extents of any plotted data ax.set_global() # Add a land image ax.stock_img() # Draw coastlines ax.coastlines() # Plot a point ax.plot(-0.08, 51.53, 'o', color="r", markersize=8, transform=ccrs.PlateCarree()) # Draw a straight line ax.plot([-0.08, 132], [51.53, 43.17], linewidth=3, transform=ccrs.PlateCarree()) # Draw a geodetic line ax.plot([-0.08, 132], [51.53, 43.17], linewidth=3, transform=ccrs.Geodetic()); # Set the projection to use ax = plt.axes(projection=ccrs.PlateCarree()) ax.stock_img(); ny_lon, ny_lat = -75, 43 delhi_lon, delhi_lat = 77.23, 28.61 # Draw a geodetic line plt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat], color='blue', linewidth=2, marker='o', transform=ccrs.Geodetic()) # Draw a straight line plt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat], color='gray', linestyle='--', transform=ccrs.PlateCarree()) # Write two labels plt.text(ny_lon-3, ny_lat-12, 'New York', horizontalalignment='right', transform=ccrs.Geodetic()) plt.text(delhi_lon+3, delhi_lat-12, 'Delhi', horizontalalignment='left', transform=ccrs.Geodetic()); ###Output _____no_output_____
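 ###Markdown As a final variation (a sketch reusing the points defined above and the same cartopy API), the New York to Delhi geodesic can also be drawn on an Orthographic projection, which makes the great-circle shape easier to see; the chosen centre coordinates are arbitrary. ###Code # view the geodesic from a globe-like perspective
ax = plt.axes(projection=ccrs.Orthographic(central_longitude=40, central_latitude=35))
ax.stock_img()
ax.coastlines()
ax.gridlines()
ax.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],
        color='blue', linewidth=2, marker='o',
        transform=ccrs.Geodetic()); ###Output _____no_output_____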
Note/10- Veri Analizi/Pandas/PANDAS.ipynb
###Markdown Pandas Series ###Code import numpy as np import pandas as pd liste1=["a","b","c","d","e"] liste2=[1,2,3,4,5] pd.Series(data=liste2) pd.Series(data=liste2, index=liste1) npArray= np.array([10,20,30,40,50]) npArray pd.Series(data=npArray,index=["a","b","c","d","e"]) sozluk={"a":30,"b":40,"c":70} pd.Series(sozluk) ser1=pd.Series([1,2,3,4,5],["a","b","c","d","e"]) ser2=pd.Series([7,5,6,9,8],["a","b","c","f","e"]) ser1 ser2 ser1["a"] ser1+ser2 top=ser1+ser2 top top["d"] top["g"] ###Output _____no_output_____ ###Markdown Dataframe ###Code from numpy.random import randn randn(3,3) df=pd.DataFrame(randn(3,3), index=["A","B","C"], columns=["C1","C2","C3"]) df df["C1"] type(df["C1"]) df.loc["A"] type(df.loc["A"]) df[["C1","C2"]] df["C4"] df["C4"]=pd.Series(randn(3),index=["A","B","C"]) df df["C5"]=df["C1"]+df["C2"]+df["C3"]+df["C4"] df df.drop("C5",axis=1) df df.drop("C5",axis=1,inplace=True) df ###Output _____no_output_____ ###Markdown Conditions ###Code df > -1 boolDf=df > -1 boolDf df[boolDf] df[df<-1] df["C1"]<-1 df df[(df["C1"]<-1) & (df["C3"]>0)] df[(df["C1"]<0) | (df["C4"]>-1)] df["C5"]=["new1","new2","new3"] df df.set_index("C5") df df.set_index("C5",inplace=True) df df.index.names outerIndex=["Group1","Group1","Group1","Group2","Group2","Group2","Group3","Group3","Group3"] innerIndex=["Index1","Index2","Index3","Index1","Index2","Index3","Index1","Index2","Index3"] list(zip(outerIndex,innerIndex)) hierarchy=list(zip(outerIndex,innerIndex)) hierarchy hierarchy=pd.MultiIndex.from_tuples(hierarchy) hierarchy df2=pd.DataFrame(randn(9,3),hierarchy,columns=["A","B","C"]) df2 df2["A"] df2.loc["Group1"] df2.loc[["Group1","Group2"]] df2.loc["Group1"].loc["Index1"] df2.index.names df2.index.names=["Groups","Indexes"] df2 df2.loc["Group1"].loc["Index1"]["A"] df2.xs("Group1") df2.xs("Group1").xs("Index1") df2.xs("Group1").xs("Index1").xs("A") ###Output _____no_output_____ ###Markdown Missing Data ###Code arr=np.array([[10,20,np.nan],[5,np.nan,np.nan],[23,np.nan,14]]) arr df=pd.DataFrame(arr,index=["i1","i2","i3"],columns=["c1","c2","c3"]) df df.dropna() df df.dropna(axis=1) df.dropna(thresh=2) df.fillna(value=1) ###Output _____no_output_____ ###Markdown Replacing NaN values with the mean of the values ###Code df.sum() df.sum().sum() df.size df.isnull().sum().sum() def calculateMean(df): totalSum=df.sum().sum() totalNum=df.size-df.isnull().sum().sum() return totalSum/totalNum df.fillna(value=calculateMean(df)) ###Output _____no_output_____ ###Markdown GroupBy Queries ###Code dataset = { "Departman":["Bilişim","İnsan Kaynakları","Üretim","Üretim","Bilişim","İnsan Kaynakları"], "Çalışan": ["Mustafa","Jale","Kadir","Zeynep","Murat","Ahmet"], "Maaş":[3000,3500,2500,4500,4000,2000] } dataset df=pd.DataFrame(dataset) df depGroup=df.groupby("Departman") depGroup depGroup.sum() df.groupby("Departman").count() df.groupby("Departman").min()["Maaş"]["Bilişim"] df.groupby("Departman").mean().loc["Bilişim"]["Maaş"] ###Output _____no_output_____ ###Markdown Merge, Join, and Concat Concat ###Code dataset1 = { "A": ["A1","A2","A3","A4"], "B":["B1","B2","B3","B4"], "C":["C1","C2","C3","C4"], } dataset2 = { "A": ["A5","A6","A7","A8"], "B":["B5","B6","B7","B8"], "C":["C5","C6","C7","C8"], } df1=pd.DataFrame(dataset1,index=[1,2,3,4]) df2=pd.DataFrame(dataset2,index=[5,6,7,8]) df1 df2 pd.concat([df1,df2]) pd.concat([df1,df2],axis=1) ###Output _____no_output_____ ###Markdown Merge ###Code dataset1 = { "A": ["A1","A2","A3"], "B":["B1","B2","B3",], "Anahtar":["C1","C2","C3",], } dataset2 = { "X": 
["X5","X6","X7","X8"], "Y":["Y5","Y6","Y7","Y8"], "Anahtar":["C1","C2","C7","C8"], } df1=pd.DataFrame(dataset1,index=[1,2,3]) df2=pd.DataFrame(dataset2,index=[1,2,3,4]) df1 df2 pd.merge(df1,df2,how="inner",on="Anahtar") ###Output _____no_output_____ ###Markdown Join ###Code dataset1 = { "A": ["A1","A2","A3"], "B":["B1","B2","B3",], } dataset2 = { "X": ["X5","X6","X7","X8"], "Y":["Y5","Y6","Y7","Y8"], } df1=pd.DataFrame(dataset1,index=[1,2,3]) df2=pd.DataFrame(dataset2,index=[1,2,3,4]) df1.join(df2) df2.join(df1) ###Output _____no_output_____
how-to-use-azureml/automated-machine-learning/classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-with-whitelisting/auto-ml-classification-with-whitelisting.png) Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) Introduction In this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the models. This example trains the model exclusively on tensorflow based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. Setup As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code # Note: This notebook will install tensorflow if it is not already installed in the environment. import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace import sys whitelist_models=["LightGBM"] if "3.7" != sys.version[0:3]: try: import tensorflow as tf1 except ImportError: from pip._internal import main main(['install', 'tensorflow>=1.10.0,<=1.12.0']) logging.getLogger().setLevel(logging.ERROR) whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"] from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Data This uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown Train Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=whitelist_models, path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring Runs The widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child Runs You can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best Model Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. 
###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other Metric Show the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific Iteration Show the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted Model We will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____
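 ###Markdown As a small extension (a sketch, not part of the original notebook): since the first 100 rows were excluded from training, the fitted pipeline can also be scored on all of them at once rather than on two random samples. ###Code # assumes `digits` and `fitted_model` from the cells above
from sklearn.metrics import accuracy_score

X_holdout = digits.data[:100, :]
y_holdout = digits.target[:100]
print("holdout accuracy:", accuracy_score(y_holdout, fitted_model.predict(X_holdout))) ###Output _____no_output_____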
import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace import sys whitelist_models=["LightGBM"] if "3.7" != sys.version[0:3]: try: import tensorflow as tf1 except ImportError: from pip._internal import main main(['install', 'tensorflow>=1.10.0,<=1.12.0']) logging.getLogger().setLevel(logging.ERROR) whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"] from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=whitelist_models, path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. 
###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-with-whitelisting/auto-ml-classification-with-whitelisting.png) Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. 
[Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a selected list of models, see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code #Note: This notebook will install tensorflow if not already installed in the enviornment.. import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace import sys whitelist_models=["LightGBM"] if "3.7" != sys.version[0:3]: try: import tensorflow as tf1 except ImportError: from pip._internal import main main(['install', 'tensorflow>=1.10.0,<=1.12.0']) logging.getLogger().setLevel(logging.ERROR) whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"] from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment. experiment_name = 'automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**whitelist_models**|List of models that AutoML should use. 
The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=whitelist_models) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. 
for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown Opt-in diagnostics for better experience, quality, and security of future releases. ###Code from azureml.telemetry import set_diagnostics_collection set_diagnostics_collection(send_diagnostics = True) ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. 
Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, n_cross_validations = 3, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"], path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. 
###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code #Note: This notebook will install tensorflow if not already installed in the enviornment.. import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace import sys whitelist_models=["LightGBM"] if "3.7" != sys.version[0:3]: try: import tensorflow as tf1 except ImportError: from pip._internal import main main(['install', 'tensorflow>=1.10.0,<=1.12.0']) logging.getLogger().setLevel(logging.ERROR) whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"] from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. 
experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=whitelist_models, path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. 
###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. 
###Code import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, n_cross_validations = 3, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"], path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. 
After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. 
Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code #Note: This notebook will install tensorflow if not already installed in the enviornment.. import logging from matplotlib import pyplot as plt import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace try: import tensorflow as tf1 except ImportError: from pip._internal import main main(['install', 'tensorflow>=1.10.0,<=1.12.0']) from azureml.train.automl import AutoMLConfig ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) outputDf = pd.DataFrame(data = output, index = ['']) outputDf.T ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. 
The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, n_cross_validations = 3, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"], path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. ###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. 
for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a a selected list of models,see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ###Code import logging import os import random from matplotlib import pyplot as plt from matplotlib.pyplot import imshow import numpy as np import pandas as pd from sklearn import datasets import azureml.core from azureml.core.experiment import Experiment from azureml.core.workspace import Workspace from azureml.train.automl import AutoMLConfig from azureml.train.automl.run import AutoMLRun ws = Workspace.from_config() # Choose a name for the experiment and specify the project folder. experiment_name = 'automl-local-whitelist' project_folder = './sample_projects/automl-local-whitelist' experiment = Experiment(ws, experiment_name) output = {} output['SDK version'] = azureml.core.VERSION output['Subscription ID'] = ws.subscription_id output['Workspace Name'] = ws.name output['Resource Group'] = ws.resource_group output['Location'] = ws.location output['Project Directory'] = project_folder output['Experiment Name'] = experiment.name pd.set_option('display.max_colwidth', -1) pd.DataFrame(data = output, index = ['']).T ###Output _____no_output_____ ###Markdown Opt-in diagnostics for better experience, quality, and security of future releases. ###Code from azureml.telemetry import set_diagnostics_collection set_diagnostics_collection(send_diagnostics = True) ###Output _____no_output_____ ###Markdown DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method. ###Code from sklearn import datasets digits = datasets.load_digits() # Exclude the first 100 rows from training so that they can be used for test. 
X_train = digits.data[100:,:] y_train = digits.target[100:] ###Output _____no_output_____ ###Markdown TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes]Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).| ###Code automl_config = AutoMLConfig(task = 'classification', debug_log = 'automl_errors.log', primary_metric = 'AUC_weighted', iteration_timeout_minutes = 60, iterations = 10, n_cross_validations = 3, verbosity = logging.INFO, X = X_train, y = y_train, enable_tf=True, whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"], path = project_folder) ###Output _____no_output_____ ###Markdown Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console. ###Code local_run = experiment.submit(automl_config, show_output = True) local_run ###Output _____no_output_____ ###Markdown Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details. ###Code from azureml.widgets import RunDetails RunDetails(local_run).show() ###Output _____no_output_____ ###Markdown Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log. ###Code children = list(local_run.get_children()) metricslist = {} for run in children: properties = run.get_properties() metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)} metricslist[int(properties['iteration'])] = metrics rundata = pd.DataFrame(metricslist).sort_index(1) rundata ###Output _____no_output_____ ###Markdown Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*. 
###Code best_run, fitted_model = local_run.get_output() print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value: ###Code lookup_metric = "log_loss" best_run, fitted_model = local_run.get_output(metric = lookup_metric) print(best_run) print(fitted_model) ###Output _____no_output_____ ###Markdown Model from a Specific IterationShow the run and the model from the third iteration: ###Code iteration = 3 third_run, third_model = local_run.get_output(iteration = iteration) print(third_run) print(third_model) ###Output _____no_output_____ ###Markdown Test Load Test Data ###Code digits = datasets.load_digits() X_test = digits.data[:10, :] y_test = digits.target[:10] images = digits.images[:10] ###Output _____no_output_____ ###Markdown Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works. ###Code # Randomly select digits and test. for index in np.random.choice(len(y_test), 2, replace = False): print(index) predicted = fitted_model.predict(X_test[index:index + 1])[0] label = y_test[index] title = "Label value = %d Predicted value = %d " % (label, predicted) fig = plt.figure(1, figsize = (3,3)) ax1 = fig.add_axes((0,0,.8,.8)) ax1.set_title(title) plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest') plt.show() ###Output _____no_output_____
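###Markdown Scoring on the Full Holdout SetBeyond spot-checking two digits, we can score the fitted model on all 100 held-out rows at once. This is a minimal sketch, assuming the `digits` and `fitted_model` objects defined above; `accuracy_score` comes from scikit-learn rather than the AutoML SDK. ###Code
from sklearn.metrics import accuracy_score

# The first 100 rows were excluded from training above, so they form a holdout set.
X_holdout = digits.data[:100, :]
y_holdout = digits.target[:100]

# Predict with the best fitted pipeline and compare against the true labels.
y_pred = fitted_model.predict(X_holdout)
print('Holdout accuracy: {:.3f}'.format(accuracy_score(y_holdout, y_pred)))
###Output _____no_output_____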
notebooks/nve_neighbor_list.ipynb
###Markdown ###Code #@title Imports & Utils !pip install jax-md import numpy as onp from jax.config import config ; config.update('jax_enable_x64', True) import jax.numpy as np from jax import random from jax import jit from jax import lax import time from jax_md import space from jax_md import smap from jax_md import energy from jax_md import quantity from jax_md import simulate from jax_md import partition import matplotlib import matplotlib.pyplot as plt import seaborn as sns sns.set_style(style='white') def format_plot(x, y): plt.xlabel(x, fontsize=20) plt.ylabel(y, fontsize=20) def finalize_plot(shape=(1, 1)): plt.gcf().set_size_inches( shape[0] * 1.5 * plt.gcf().get_size_inches()[1], shape[1] * 1.5 * plt.gcf().get_size_inches()[1]) plt.tight_layout() ###Output _____no_output_____ ###Markdown Constant Energy Simulation With Neighbor Lists Setup some system parameters.
###Code
Nx = particles_per_side = 80
spacing = np.float32(1.25)
side_length = Nx * spacing

R = onp.stack([onp.array(r) for r in onp.ndindex(Nx, Nx)]) * spacing
R = np.array(R, np.float64)

#@title Draw the initial state
ms = 10
R_plt = onp.array(R)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____
###Markdown
JAX MD supports three different formats for neighbor lists: `Dense`, `Sparse`, and `OrderedSparse`.

`Dense` neighbor lists store neighbor IDs in a matrix of shape `(particle_count, neighbors_per_particle)`. This can be advantageous if the system is homogeneous, since it requires less memory bandwidth. However, `Dense` neighbor lists are more prone to overflows or waste if there are large fluctuations in the number of neighbors, since they must allocate enough capacity for the maximum number of neighbors.

`Sparse` neighbor lists store neighbor IDs in a matrix of shape `(2, total_neighbors)` where the first index specifies senders and receivers for each neighboring pair. Unlike `Dense` neighbor lists, `Sparse` neighbor lists must store two integers for each neighboring pair. However, they benefit because their capacity is bounded by the total number of neighbors, making them more efficient when different particles have different numbers of neighbors.

`OrderedSparse` neighbor lists are like `Sparse` neighbor lists, except they only store pairs of neighbors `(i, j)` where `i < j`. For potentials that can be phrased as $\sum_{i<j}E_{ij}$ this can give a factor of two improvement in speed.
###Code
# format = partition.Dense
# format = partition.Sparse
format = partition.OrderedSparse
###Output
_____no_output_____
###Markdown
Construct two versions of the energy function, with and without neighbor lists.
###Code
displacement, shift = space.periodic(side_length)

neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement,
                                                            side_length,
                                                            format=format)
energy_fn = jit(energy_fn)

exact_energy_fn = jit(energy.lennard_jones_pair(displacement))
###Output
_____no_output_____
###Markdown
To use a neighbor list, we must first allocate it. This step cannot be Just-in-Time (JIT) compiled because it uses the state of the system to infer the capacity of the neighbor list (which involves dynamic shapes).
###Code
nbrs = neighbor_fn.allocate(R)
###Output
_____no_output_____
###Markdown
Now we can compute the energy with and without neighbor lists. We see that both results agree, but the neighbor list version of the code is significantly faster.
###Code
# Run once so that we avoid the jit compilation time.
print('E = {}'.format(energy_fn(R, neighbor=nbrs)))
print('E_ex = {}'.format(exact_energy_fn(R)))
%%timeit
energy_fn(R, neighbor=nbrs).block_until_ready()
%%timeit
exact_energy_fn(R).block_until_ready()
###Output
1000 loops, best of 5: 1.08 ms per loop
###Markdown
Now we can run a simulation. Inside the body of the simulation, we update the neighbor list using `nbrs.update(position)`. This update can be JIT compiled, but it might lead to buffer overflows if the allocated neighbor list cannot accommodate all of the neighbors. Therefore, every so often we check whether the neighbor list overflowed; if it did, we reallocate it using the state from right before the overflow.
###Code displacement, shift = space.periodic(side_length) init_fn, apply_fn = simulate.nve(energy_fn, shift, 1e-3) state = init_fn(random.PRNGKey(0), R, kT=1e-3, neighbor=nbrs) def body_fn(i, state): state, nbrs = state nbrs = nbrs.update(state.position) state = apply_fn(state, neighbor=nbrs) return state, nbrs step = 0 while step < 40: new_state, nbrs = lax.fori_loop(0, 100, body_fn, (state, nbrs)) if nbrs.did_buffer_overflow: print('Neighbor list overflowed, reallocating.') nbrs = neighbor_fn.allocate(state.position) else: state = new_state step += 1 #@title Draw the final state ms = 10 R_plt = onp.array(state.position) plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5) plt.xlim([0, np.max(R[:, 0])]) plt.ylim([0, np.max(R[:, 1])]) plt.axis('off') finalize_plot((2, 2)) ###Output _____no_output_____ ###Markdown ###Code #@title Imports & Utils !pip install jax-md import numpy as onp from jax.config import config ; config.update('jax_enable_x64', True) import jax.numpy as np from jax import random from jax import jit from jax import lax import time from jax_md import space, smap, energy, quantity, simulate, partition import matplotlib import matplotlib.pyplot as plt import seaborn as sns sns.set_style(style='white') def format_plot(x, y): plt.xlabel(x, fontsize=20) plt.ylabel(y, fontsize=20) def finalize_plot(shape=(1, 1)): plt.gcf().set_size_inches( shape[0] * 1.5 * plt.gcf().get_size_inches()[1], shape[1] * 1.5 * plt.gcf().get_size_inches()[1]) plt.tight_layout() ###Output _____no_output_____ ###Markdown Constant Energy Simulation With Neighbor Lists Setup some system parameters. ###Code Nx = particles_per_side = 80 spacing = np.float32(1.25) side_length = Nx * spacing R = onp.stack([onp.array(r) for r in onp.ndindex(Nx, Nx)]) * spacing R = np.array(R, np.float64) #@title Draw the initial state ms = 10 R_plt = onp.array(R) plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5) plt.xlim([0, np.max(R[:, 0])]) plt.ylim([0, np.max(R[:, 1])]) plt.axis('off') finalize_plot((2, 2)) ###Output _____no_output_____ ###Markdown Construct two versions of the energy function with and without neighbor lists. ###Code displacement, shift = space.periodic(side_length) neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement, side_length) energy_fn = jit(energy_fn) exact_energy_fn = jit(energy.lennard_jones_pair(displacement)) nbrs = neighbor_fn(R) # Run once so that we avoid the jit compilation time. print('E = {}'.format(energy_fn(R, neighbor=nbrs))) print('E_ex = {}'.format(exact_energy_fn(R))) %%timeit energy_fn(R, neighbor=nbrs).block_until_ready() %%timeit exact_energy_fn(R).block_until_ready() displacement, shift = space.periodic(side_length) init_fn, apply_fn = simulate.nve(energy_fn, shift, 1e-3) state = init_fn(random.PRNGKey(0), R, neighbor=nbrs) def body_fn(i, state): state, nbrs = state nbrs = neighbor_fn(state.position, nbrs) state = apply_fn(state, neighbor=nbrs) return state, nbrs step = 0 while step < 40: new_state, nbrs = lax.fori_loop(0, 100, body_fn, (state, nbrs)) if nbrs.did_buffer_overflow: nbrs = neighbor_fn(state.position) else: state = new_state step += 1 #@title Draw the final state ms = 10 R_plt = onp.array(state.position) plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5) plt.xlim([0, np.max(R[:, 0])]) plt.ylim([0, np.max(R[:, 1])]) plt.axis('off') finalize_plot((2, 2)) ###Output _____no_output_____
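###Markdown
Since this is an NVE (constant-energy) simulation, a quick sanity check is that the total energy stays flat over the run. The cell below is an added sketch, not part of the original notebook: it assumes the jax-md version used here, where the simulation state exposes `position` and `velocity` and `simulate.nve` defaults to unit mass.
###Code
# Hedged sketch: total energy = kinetic + potential for the final state.
# Assumes `state.velocity` exists and mass = 1 (jax-md's default here).
def total_energy(state, nbrs):
  kinetic = 0.5 * np.sum(state.velocity ** 2)            # sum of (1/2) m v^2 with m = 1
  potential = energy_fn(state.position, neighbor=nbrs)   # neighbor-list energy from above
  return kinetic + potential

print('E_total = {}'.format(total_energy(state, nbrs)))
###Output
_____no_output_____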
sequence_model/Week 2/Word Vector Representation/Operations on word vectors - v2.ipynb
###Markdown Operations on word vectors
Welcome to your first assignment of this week! Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings.

**After this assignment you will be able to:**
- Load pre-trained word vectors, and measure similarity using cosine similarity
- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______.
- Modify word embeddings to reduce their gender bias

Let's get started! Run the following cell to load the packages you will need.
###Code
import numpy as np
from w2v_utils import *
###Output
_____no_output_____
###Markdown
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
###Code
words, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
###Output
_____no_output_____
###Markdown
You've loaded:
- `words`: set of words in the vocabulary.
- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.

You've seen that one-hot vectors do not do a good job capturing which words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Let's now see how you can use GloVe vectors to decide how similar two words are.

1 - Cosine similarity
To measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows:

$$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$

where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value.

**Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are

**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.

**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
###Code
# GRADED FUNCTION: cosine_similarity

def cosine_similarity(u, v):
    """
    Cosine similarity reflects the degree of similarity between u and v
        
    Arguments:
        u -- a word vector of shape (n,)          
        v -- a word vector of shape (n,)

    Returns:
        cosine_similarity -- the cosine similarity between u and v defined by the formula above.
""" distance = 0.0 ### START CODE HERE ### # Compute the dot product between u and v (≈1 line) dot = None # Compute the L2 norm of u (≈1 line) norm_u = None # Compute the L2 norm of v (≈1 line) norm_v = None # Compute the cosine similarity defined by formula (1) (≈1 line) cosine_similarity = None ### END CODE HERE ### return cosine_similarity father = word_to_vec_map["father"] mother = word_to_vec_map["mother"] ball = word_to_vec_map["ball"] crocodile = word_to_vec_map["crocodile"] france = word_to_vec_map["france"] italy = word_to_vec_map["italy"] paris = word_to_vec_map["paris"] rome = word_to_vec_map["rome"] print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother)) print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile)) print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy)) ###Output _____no_output_____ ###Markdown **Expected Output**: **cosine_similarity(father, mother)** = 0.890903844289 **cosine_similarity(ball, crocodile)** = 0.274392462614 **cosine_similarity(france - paris, rome - italy)** = -0.675147930817 After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around the cosine similarity of other inputs will give you a better sense of how word vectors behave. 2 - Word analogy taskIn the word analogy task, we complete the sentence "*a* is to *b* as *c* is to **____**". An example is '*man* is to *woman* as *king* is to *queen*' . In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity. **Exercise**: Complete the code below to be able to perform word analogies! ###Code # GRADED FUNCTION: complete_analogy def complete_analogy(word_a, word_b, word_c, word_to_vec_map): """ Performs the word analogy task as explained above: a is to b as c is to ____. Arguments: word_a -- a word, string word_b -- a word, string word_c -- a word, string word_to_vec_map -- dictionary that maps words to their corresponding vectors. Returns: best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity """ # convert words to lower case word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower() ### START CODE HERE ### # Get the word embeddings e_a, e_b and e_c (≈1-3 lines) e_a, e_b, e_c = None ### END CODE HERE ### words = word_to_vec_map.keys() max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number best_word = None # Initialize best_word with None, it will help keep track of the word to output # loop over the whole word vector set for w in words: # to avoid best_word being one of the input words, pass on them. if w in [word_a, word_b, word_c] : continue ### START CODE HERE ### # Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line) cosine_sim = None # If the cosine_sim is more than the max_cosine_sim seen so far, # then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines) if None > None: max_cosine_sim = None best_word = None ### END CODE HERE ### return best_word ###Output _____no_output_____ ###Markdown Run the cell below to test your code, this may take 1-2 minutes. 
###Code
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
    print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
###Output
_____no_output_____
###Markdown
**Expected Output**:
**italy -> italian** :: spain -> spanish
**india -> delhi** :: japan -> tokyo
**man -> woman ** :: boy -> girl
**small -> smaller ** :: large -> larger

Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: for example, you can try small->smaller as big->?.

Congratulations! You've come to the end of this assignment. Here are the main points you should remember:
- Cosine similarity is a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.)
- For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started.

Even though you have finished the graded portions, we recommend you also take a look at the rest of this notebook. Congratulations on finishing the graded portions of this notebook!

3 - Debiasing word vectors (OPTIONAL/UNGRADED)

In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being an expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded.

Let's first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
###Code
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
###Output
_____no_output_____
###Markdown
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means versus a negative cosine similarity.
###Code
print ('List of names and their similarities with constructed vector:')

# girls' and boys' names
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']

for w in name_list:
    print (w, cosine_similarity(word_to_vec_map[w], g))
###Output
_____no_output_____
###Markdown
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable. But let's try with some other words.
###Code
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
             'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
    print (w, cosine_similarity(word_to_vec_map[w], g))
###Output
_____no_output_____
###Markdown
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch!

We'll see below how to reduce the bias of these vectors, using an algorithm due to [Bolukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two types of words differently when debiasing.

3.1 - Neutralize bias for non-gender specific words

The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: the bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$. Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below.

**Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation.

**Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$:

$$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$
$$e^{debiased} = e - e^{bias\_component}\tag{3}$$

If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.

<!-- **Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:$$u = u_B + u_{\perp}$$where : $u_B = $ and $ u_{\perp} = u - u_B $!-->
###Code
def neutralize(word, g, word_to_vec_map):
    """
    Removes the bias of "word" by projecting it on the space orthogonal to the bias axis. 
    This function ensures that gender neutral words are zero in the gender subspace.
    
    Arguments:
        word -- string indicating the word to debias
        g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
        word_to_vec_map -- dictionary mapping words to their corresponding vectors.
    
    Returns:
        e_debiased -- neutralized word vector representation of the input "word"
    """
    
    ### START CODE HERE ###
    # Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
    e = None
    
    # Compute e_biascomponent using the formula given above. (≈ 1 line)
    e_biascomponent = None
 
    # Neutralize e by subtracting e_biascomponent from it 
    # e_debiased should be equal to its orthogonal projection. (≈ 1 line)
    e_debiased = None
    ### END CODE HERE ###
    
    return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))

e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
###Output
_____no_output_____
###Markdown
**Expected Output**: The second result is essentially 0, up to numerical round-off (on the order of $10^{-17}$).

**cosine similarity between receptionist and g, before neutralizing:** : 0.330779417506
**cosine similarity between receptionist and g, after neutralizing:** : -3.26732746085e-17

3.2 - Equalization algorithm for gender-specific words

Next, let's see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this.

The key idea behind equalization is to make sure that a particular pair of words are equidistant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized words are now the same distance from $e_{receptionist}^{debiased}$, or from any other word that has been neutralized. In pictures, this is how equalization works:

The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are:

$$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$ 

$$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{5}$$ 

$$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$

$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{7}$$ 

$$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{8}$$

$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||} \tag{9}$$

$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||} \tag{10}$$

$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$

$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$

**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
###Code
def equalize(pair, bias_axis, word_to_vec_map):
    """
    Debias gender specific words by following the equalize method described in the figure above.
    
    Arguments:
    pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor") 
    bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
    word_to_vec_map -- dictionary mapping words to their corresponding vectors
    
    Returns
    e_1 -- word vector corresponding to the first word
    e_2 -- word vector corresponding to the second word
    """
    
    ### START CODE HERE ###
    # Step 1: Select word vector representation of "word". Use word_to_vec_map.
(≈ 2 lines) w1, w2 = None e_w1, e_w2 = None # Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line) mu = None # Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines) mu_B = None mu_orth = None # Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines) e_w1B = None e_w2B = None # Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines) corrected_e_w1B = None corrected_e_w2B = None # Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines) e1 = None e2 = None ### END CODE HERE ### return e1, e2 print("cosine similarities before equalizing:") print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g)) print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g)) print() e1, e2 = equalize(("man", "woman"), g, word_to_vec_map) print("cosine similarities after equalizing:") print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g)) print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g)) ###Output _____no_output_____
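###Markdown
For readers working through this notebook outside the graded environment, here is one possible completion of the two core functions, written with plain NumPy. It follows formulas (1), (2) and (3) above; the official grader solution may differ in style, so treat this as a reference sketch only.
###Code
# Hedged reference sketch (not the official solution).
def cosine_similarity_ref(u, v):
    # Formula (1): dot product divided by the product of the L2 norms.
    dot = np.dot(u, v)
    norm_u = np.sqrt(np.sum(u ** 2))
    norm_v = np.sqrt(np.sum(v ** 2))
    return dot / (norm_u * norm_v)

def neutralize_ref(word, g, word_to_vec_map):
    # Formulas (2) and (3): project e onto g, then subtract the projection.
    e = word_to_vec_map[word]
    e_biascomponent = (np.dot(e, g) / np.sum(g ** 2)) * g
    return e - e_biascomponent
###Output
_____no_output_____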
03_math.ipynb
###Markdown Advent of Code Utils> A collection of somewhat handy functions to make your AoC puzzle life solving a bit easier ###Code #exporti from collections.abc import Iterable from collections import namedtuple, deque import contextlib from functools import reduce import hashlib import heapq import logging from math import sqrt, gcd from pathlib import Path import time import pickle import pandas as pd import numpy as np ###Output _____no_output_____ ###Markdown Mathy functions ###Code #export def factors(n): """ return set of divisors of a number """ step = 2 if n%2 else 1 return set(reduce(list.__add__, ([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0))) assert factors(20) == {1, 2, 4, 5, 10, 20} #export def gcd(a,b): largest = max(a,b) smallest = min(a,b) while True: rest = largest % smallest if rest == 0: return prevrest else: prevrest = rest largest = smallest smallest = rest def lcm(a): lcm = a[0] for i in a[1:]: lcm = lcm*i//gcd(lcm, i) return lcm assert gcd(12,8) == 4 assert lcm([4,6,7]) == 84 a = [1,2,3,8,8,8,2,3] a.index(8) len(a) - 1 - a[::-1].index(8) def power(a,b,M=None): # computes a**b. Actually python pow does this with optional third argument res = 1 while(b): if b % 2 == 1: res = (res * a) % M if M else res * a print('res',res) a *= a print('a',a) b //= 2 print('b',b) return res power(3,12) #hide from nbdev.export import notebook2script; notebook2script() !nbdev_build_lib !nbdev_build_docs !nbdev_clean_nbs !git add . !git commit -am "change future upwards" !git push ###Output Converted 00_core.ipynb. Converted 01_context_free_grammar.ipynb. Converted 02_norvig.ipynb. Converted index.ipynb. Converted 00_core.ipynb. Converted 01_context_free_grammar.ipynb. Converted 02_norvig.ipynb. Converted index.ipynb. converting: d:\Documenten\GitHub\adventofcode\aocutils\00_core.ipynb converting: d:\Documenten\GitHub\adventofcode\aocutils\01_context_free_grammar.ipynb converting: d:\Documenten\GitHub\adventofcode\aocutils\02_norvig.ipynb converting: d:\Documenten\GitHub\adventofcode\aocutils\index.ipynb converting d:\Documenten\GitHub\adventofcode\aocutils\index.ipynb to README.md [main 47c0ec4] change future upwards 3 files changed, 14 insertions(+), 7 deletions(-) To https://github.com/jvanelteren/aocutils.git 68e7b8a..47c0ec4 main -> main
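###Markdown
One caveat on the utilities above: the hand-rolled `gcd` raises an `UnboundLocalError` when the first remainder is already zero (e.g. `gcd(8, 4)`), because `prevrest` is never assigned, and `lcm` crashes on such inputs too. A compact Euclidean version avoids the issue, and the same idea gives a quiet `power` without the debug prints. This cell is a suggested sketch rather than part of the original utils.
###Code
def gcd2(a, b):
    # Euclid's algorithm: replace (a, b) with (b, a % b) until b is 0.
    while b:
        a, b = b, a % b
    return a

def power2(a, b, M=None):
    # Fast exponentiation by squaring; reduces mod M when M is given.
    res = 1
    while b:
        if b % 2 == 1:
            res = res * a % M if M else res * a
        a *= a
        b //= 2
    return res

assert gcd2(12, 8) == 4
assert gcd2(8, 4) == 4          # the edge case the original misses
assert power2(3, 12) == 3 ** 12
###Output
_____no_output_____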
pig-hive/pig-hive.ipynb
###Markdown NoSQL (Hive & Pig)

This notebook is an introduction to using Hive and Pig. We will use the Cloudera Quickstart image, and the `happybase` library for Python. We load it below and open the connection.
###Code
!pip install happybase
import happybase

host = 'quickstart.cloudera'
connection = happybase.Connection(host)
connection.tables()
###Output
_____no_output_____
###Markdown
For the initial load, we will create all the tables with a single column family, `rawdata`, where we will put all the _raw_ information, compressed. Later we can reorganize the data to make access more efficient. This is one of the many advantages of not having a schema.
###Code
%%bash
file=../Posts.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Users.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Tags.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Comments.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Votes.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
# Create tables
tables = ['posts', 'votes', 'users', 'tags', 'comments']
for t in tables:
    try:
        connection.create_table(
            t,
            {
                'rawdata': dict(max_versions=1,compression='GZ')
            })
    except:
        print("Table already exists: {0}.".format(t))
        pass
connection.tables()
###Output
_____no_output_____
###Markdown
The import code is always the same, because the first row of the CSV, which contains the column names, is used to generate column names inside the given column family. The `csv_to_hbase()` function takes a CSV file to open, a table name, and a column family to which the CSV columns will be added. In our case it will always be `rawdata`.
###Code
import csv

def csv_to_hbase(file, tablename, cf):
    table = connection.table(tablename)

    with open(file) as f:
        # The csv.reader() call creates an iterator over the CSV file
        reader = csv.reader(f, dialect='excel')

        # Read the header. Its names will be used to create the different columns in the family
        columns = next(reader)
        columns = [cf + ':' + c for c in columns]

        with table.batch(batch_size=500) as b:
            for row in reader:
                # The first column will be used as the Row Key
                b.put(row[0], dict(zip(columns[1:], row[1:])))
for t in tables:
    print("Importing table {0}...".format(t))
    %time csv_to_hbase('../'+t.capitalize() + '.csv', t, 'rawdata')
posts = connection.table('posts')
###Output
_____no_output_____
###Markdown
Get the Post with `Id` 5. The simplest, most immediate HBase operation is to fetch one row, optionally limiting the columns to show:
###Code
posts.row(b'5',columns=[b'rawdata:Body'])
###Output
_____no_output_____
###Markdown
The following code displays the tables extracted from the database in a friendlier way, as a dictionary:
###Code
# http://stackoverflow.com/a/30525061/62365
class DictTable(dict):
    # Overridden dict class which takes a dict in the form {'a': 2, 'b': 3},
    # and renders an HTML Table in IPython Notebook.
    def _repr_html_(self):
        htmltext = ["<table width=100%>"]
        for key, value in self.items():
            htmltext.append("<tr>")
            htmltext.append("<td>{0}</td>".format(key.decode('utf-8')))
            htmltext.append("<td>{0}</td>".format(value.decode('utf-8')))
            htmltext.append("</tr>")
        htmltext.append("</table>")
        return ''.join(htmltext)

# Show what the row for the Post with Id 5 looks like
DictTable(posts.row(b'5'))
###Output
_____no_output_____
###Markdown
In another terminal we can run the following to start a _shell_ inside the container:
```
docker exec --user cloudera -ti pighive_quickstart.cloudera_1 bash
```
The following script loads all the Posts directly from the `Posts.csv` file. It has to be added first through the web interface, in the file management tab.
###Code
register '/usr/lib/pig/piggybank.jar';
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();

A = LOAD '/user/cloudera/Posts.csv' using CSVLoader AS
(Id:chararray,AcceptedAnswerId:chararray,AnswerCount:chararray,Body:chararray,
ClosedDate:chararray,CommentCount:chararray,CommunityOwnedDate:chararray,
CreationDate:chararray,FavoriteCount:chararray,LastActivityDate:chararray,
LastEditDate:chararray,LastEditorDisplayName:chararray,LastEditorUserId:chararray,
OwnerDisplayName:chararray,OwnerUserId:chararray,ParentId:chararray,
PostTypeId:chararray,Score:chararray,Tags:chararray,Title:chararray,ViewCount:chararray);

ILLUSTRATE A;
###Output
_____no_output_____
###Markdown
The following code fetches the same information that we stored in the HBase `posts` table. Only a limited set of columns is taken, and it shows how Pig's map type can be used.
###Code
register '/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-client-1.2.0-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-common-1.2.0-cdh5.7.0.jar';

raw = LOAD 'hbase://posts'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'rawdata:Body rawdata:OwnerUserId rawdata:*', '-loadKey true -limit 5')
AS (Id:chararray, Body:chararray, OwnerUserId:chararray, rawdata:map[]);

DUMP raw;
###Output
_____no_output_____
###Markdown
The following code joins the HBase users table with the Posts obtained from a CSV file. It lists the users with the most entries (questions+answers), ordered by number of posts.
###Code
register '/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-client-1.2.0-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-common-1.2.0-cdh5.7.0.jar';
register '/usr/lib/pig/piggybank.jar';
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();

-- Load Posts from the CSV file
Posts = LOAD '/user/cloudera/Posts.csv' using CSVLoader AS
(Id,AcceptedAnswerId,AnswerCount,Body,
ClosedDate,CommentCount,CommunityOwnedDate,
CreationDate,FavoriteCount,LastActivityDate,
LastEditDate,LastEditorDisplayName,LastEditorUserId,
OwnerDisplayName,OwnerUserId,ParentId,
PostTypeId,Score,Tags,Title,ViewCount);

-- Load Users from HBase
Users = LOAD 'hbase://users'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'rawdata:AboutMe rawdata:AccountId rawdata:Age rawdata:CreationDate rawdata:DisplayName rawdata:DownVotes rawdata:LastAccessDate rawdata:Location rawdata:ProfileImageUrl rawdata:Reputation rawdata:UpVotes rawdata:Views rawdata:WebsiteUrl'
, '-loadKey true')
AS (Id,AboutMe,AccountId,Age:int,
CreationDate,DisplayName,DownVotes,
LastAccessDate,Location,ProfileImageUrl,
Reputation,UpVotes,Views,WebsiteUrl);

ILLUSTRATE Users;

PostByUser = GROUP Posts BY OwnerUserId;

ILLUSTRATE PostByUser;

PostByUser = FOREACH PostByUser GENERATE group as userId, COUNT($1) AS n;

MaxPostByUser = FILTER PostByUser BY n >= 150;

DUMP MaxPostByUser;

Result = JOIN MaxPostByUser by userId, Users by Id;

Result = FOREACH Result GENERATE userId, DisplayName, n;

Result = ORDER Result BY n DESC;

DUMP Result
###Output
_____no_output_____
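###Markdown
Back on the Python side, a similar question can be answered with a `happybase` scan. The cell below is an added, illustrative sketch: `Table.scan()` streams every row (so it can be slow on a large table), optionally restricted to the given columns, and here we simply count posts per `OwnerUserId` in Python.
###Code
# Hedged sketch: count posts per owner by scanning the HBase table.
from collections import Counter

owner_counts = Counter()
for key, data in posts.scan(columns=[b'rawdata:OwnerUserId']):
    owner = data.get(b'rawdata:OwnerUserId')
    if owner:
        owner_counts[owner] += 1

owner_counts.most_common(10)
###Output
_____no_output_____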
Day8/hackathon.ipynb
###Markdown Semi supervised learning aim of this notebook : build a classifer for defaults (that is classify a comment as a review related to a default, issue) first build a classifier in supervised approach using labeled data second build a classifer based on labeled data + unlabeled data to which we propagated labels This time we want to build a classifier that classifies the comment in one or more of this categories:- screen- software_bugs- locking_system- system- apps_update- battery_life_charging- customerservice ###Code import pandas as pd from tqdm import tqdm, tqdm_notebook # progress bars in Jupyter #import newspaper # download newspapers' data easily from time import time # measure the computation time of a python code import pandas as pd # the most basic & powerful data manipulation tool import numpy as np # Here, mostly used for np.nan import langdetect # detect the language of text import stop_words # handles stop words in many languages without having to rebuild them everytime import spacy # NLP library for POS tagging import nltk from nltk.tokenize import word_tokenize from nltk.stem.snowball import SnowballStemmer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report import re import itertools # For spacy use "pip install spacy", then "python -m spacy download en" to download English text mining modules tqdm.pandas() #tqdm_notebook() ###Output _____no_output_____ ###Markdown Read data ###Code df = pd.read_csv('labeled_data.csv', engine='python') # label data only -> used for supervised model dfu = pd.read_csv('data_unlabeled.csv', encoding = 'utf-8') # unlabeled data -> used to together with lable data for semi supervised learning print(df.shape) print(df.head(1)) df[[c for c in df.columns if c not in ['text', 'tokens']]].sum().map(int) ###Output _____no_output_____ ###Markdown **Reminder**: we want only these:- screen- software_bugs- locking_system- system- apps_update- battery_life_charging- customerservice ###Code del df['issue'] del df['water_damage'] del df['sound'] del df['battery_overheat'] del df['connectivity'] del df['memory_storage'] del df['camera'] df[[c for c in df.columns if c not in ['text', 'tokens']]].sum().map(int) ###Output _____no_output_____ ###Markdown Let's see what we have for 'screen' ###Code df.loc[df.screen==1].head() ###Output _____no_output_____ ###Markdown Create features and prepare the data into a NMF matrix before Machine Learning one important thing to have in mind when building a model : to make feature engineering separately on train and test. 
If you don't do that, you will incoporate info from the test set into the train ###Code from gensim.models import Phrases from gensim import corpora import stop_words from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import NMF #nlp = spacy.load('en') ## Function to clean and process the reviews def cleaning_data(df) : STOPWORDS = stop_words.get_stop_words(language='en') #df.drop_duplicates(inplace= True) # Drop duplicated sentences df = df[~df['text'].isnull()] # Remove empty sentences # Remove special characters and punctucation df['clean_review']= [ re.sub('[^A-Za-z]+',' ', e ) for e in df['text'].apply(lambda x : x.lower())] # Remove empty clean_review df = df[~df['clean_review'].isnull()] df = df[~(df['clean_review']==' ')] df.reset_index(inplace=True, drop=True) # Reset index df['tokens'] = df['clean_review'].map(word_tokenize) df['nb_tokens'] = df['tokens'].map(len) ## keep only sentences with at least 3 tokens df = df[df['nb_tokens']>2] # remove stopwords df['tokens'] = df['tokens'].apply(lambda x: [i for i in x if i not in STOPWORDS]) stemmer = SnowballStemmer("english") df['stemmed_text'] = df["tokens"].apply(lambda x: [stemmer.stem(y) for y in x]) df['joined_stemmed_text'] = [' '.join(word for word in word_list) for word_list in df.stemmed_text ] return df ## split between train and test at the beginning # we will use the same test set for supervised and semi supervised learning, so that we can compare the performances of # both approaches df_train, df_test = train_test_split(df, test_size=0.3, random_state=42) # Preparing data df_train = cleaning_data(df_train) df_test = cleaning_data(df_test) dfu = cleaning_data(dfu) ## in order to have the same features on train data sets (for both supervised and semi-sup) and test data sets # build the tf idf with vocab which is the union the 3 above data sets vocab = list(set(itertools.chain(*dfu.stemmed_text.tolist()))|set(itertools.chain(*df_test.stemmed_text.tolist()))|set(itertools.chain(*df_train.stemmed_text.tolist()))) vocab_dict = dict((y, x) for x, y in enumerate(vocab)) print(len(vocab)) # build tf idf matrix separately for train and test and unlabeled data sets tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, ngram_range=(1,3), use_idf=True, vocabulary = vocab_dict) td_train = tfidf_vectorizer.fit_transform(df_train.joined_stemmed_text.tolist()) td_test = tfidf_vectorizer.transform(df_test.joined_stemmed_text.tolist()) td_u = tfidf_vectorizer.transform(dfu.joined_stemmed_text.tolist()) #td_test = tfidf_vectorizer.fit_transform(df_test.joined_stemmed_text.tolist()) #td_u = tfidf_vectorizer.fit_transform(dfu.joined_stemmed_text.tolist()) #td_test ###Output _____no_output_____ ###Markdown Tried without the NMF. Just a tf-idf matrix as X. But it did not work. It seems like we should keep NMF. ###Code #X_train = pd.DataFrame(td_train) #X_test = pd.DataFrame(td_test) #X_u = pd.DataFrame(td_u) ## same with NMF dimensionality reduction ## the NMF decomposes this Term Document matrix into the product of 2 smaller matrices: W and H n_dimensions = 50 # This can also be interpreted as topics in this case. This is the "beauty" of NMF. 
10 is arbitrary nmf_model = NMF(n_components=n_dimensions, random_state=42, alpha=.1, l1_ratio=.5) #X_u = pd.DataFrame(nmf_model.fit_transform(td_u)) X_train = pd.DataFrame(nmf_model.fit_transform(td_train)) X_test = pd.DataFrame(nmf_model.transform(td_test)) X_u = pd.DataFrame(nmf_model.transform(td_u)) #X_test = pd.DataFrame(nmf_model.fit_transform(td_test)) #X_u = pd.DataFrame(nmf_model.fit_transform(td_u)) ###Output _____no_output_____ ###Markdown Here I decided to reduce the number of topics to 10 instead of 50 to see if it improves our performance. ###Code X_train ###Output _____no_output_____ ###Markdown So far I've tried:- Keeping 'fit' to X_train, X_test and X_u gives very low performances (particularly for f1 for our relevant labeling. It basically labels 0 or only one of the testing data as 'relevant'.- Putting 'fit' only for X_train. Gives the best overall results: tf = 0.09 for the Normal classifier for our relevant topic. The unsupervised propagation with nn = 10 does not improve performance (it a actually decrease them if we consider the relevant category: 0.06. However, if we lower the threshold we get to 0.17- Increasing the the number of topics of NMF to 100 (instead of 50): It increases the performances: 0.11 for Normal Classifier. For unsupervised propagation it decreases f1 to 0.04. If we lower the threshold we get 0.09NOTA: So far both last solutions give an overall f1 of 0.96 for Normal, unsupervised, and threshold reduced (against 0.80 for 'fit' eveywhere).- Putting 'fit' only for X_u (because higher number of comments) with topics = 50. Increases the performances: f1 = 0.11 for Normal (still with 0.96 overall). But only 0.02 for unsupervised (still 0.96 overall). However: it increases the f1 of the lowered threshold to 0.19! (still 0.96 overall) -> Next steps: change nn to 20? Try to find a Classifier that puts more weight on the relevant category during the optimization.- With 'fit' only on X_train. Topics = 50. (Normal is the same of course) With nn = 20: f1 = 0.06 for unsupervised. (0.96 overall). However 0.22 for lower threshold! (0.96 overall)- Topics = 20, nn = 50: f1 = 0.11 for Normal (0.96 overall) 0.02 for unsupervised (0.96 overall) and 0.23 for lower threshold! (0.96 overall)- Topics = 20, nn = 5: f1 = 0.10 for unsupervised (0.96 overall) and only 0.10 with lowered threshold. 
(0.96 overall) Machine Learning approach ###Code from sklearn.ensemble import GradientBoostingClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix ###Output _____no_output_____ ###Markdown Let's try with "screen first" ###Code y_train = df_train.screen.map(int) y_test = df_test.screen.map(int) # get the labels for both train and test #for i in df.columns if i not in ['text', 'tokens'] # y_train[i] = df_train.columns[i].map(int) # y_test[i] = df_test.columns[i].map(int) # lets look at the number of positive in the data sets print(len(X_train), '(Number of comments in X_train)') print(sum(y_train), '(Number of relevant labels in X_train)') print(len(X_test), '(Number of comments in X_test)') print(sum(y_test), '(Number of relevant labels in X_test)') # lets estimate a gradient boosting classifier model = GradientBoostingClassifier(n_estimators=100, random_state=42, learning_rate=0.1) model.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Here with 'screen' again ###Code print(confusion_matrix(y_train, model.predict(X_train))) print(confusion_matrix(y_test, model.predict(X_test))) ###Output [[7432 2] [ 140 82]] [[3177 15] [ 89 5]] ###Markdown Here we see that only 5 comments are labeled as "screen" by our prediction model on the testing set. And 89 that should have been detected did not get detected! This is pretty pretty bad. The reason might be that our Gradient Boosting method focuses on optimizing the prediction error, which is not the metric that makes sense in our case. ###Code print(classification_report(y_test, model.predict(X_test))) ###Output precision recall f1-score support 0 0.97 1.00 0.98 3192 1 0.25 0.05 0.09 94 avg / total 0.95 0.97 0.96 3286 ###Markdown semi supervised learning ###Code from sklearn.semi_supervised import LabelPropagation label_prop_model = LabelPropagation(kernel = 'knn', n_neighbors=10, max_iter = 3000) label_prop_model.fit(X_train, y_train) #label_prop_model.fit(pd.concat([X_train, X_test]), pd.concat([y_train, y_test])) ###Output _____no_output_____ ###Markdown What distance is used here? Because we are using a TF-IDF Matrix... Euclidian distance does not make sense.Here we are actually using it on the NMF. So the number of dimension is way lower. ###Code y_semi_proba = label_prop_model.predict_proba(X_u) # first column gives the proba of 0, second column gives the proba of 1 y_semi = pd.Series(label_prop_model.predict(X_u)) print(y_semi.value_counts()) proba_1 = y_semi_proba[:,1] # get the proba of 1 pd.Series(proba_1).describe() # with n neigh = 10 X_train_semi = pd.concat([X_train, X_u]) y_train_semi = pd.concat([y_train, y_semi]) model.fit(X_train_semi, y_train_semi) print(confusion_matrix(y_train_semi, model.predict(X_train_semi))) print(confusion_matrix(y_test, model.predict(X_test))) print(classification_report(y_test, model.predict(X_test))) ###Output precision recall f1-score support 0 0.97 1.00 0.98 3192 1 0.27 0.03 0.06 94 avg / total 0.95 0.97 0.96 3286 ###Markdown We see that here the Label Propagation does not really improve our model... (or a bit only). Here we see that with a 50% threshold it's maybe too strict for this case... Maybe we should lower this. 
###Code # try to spread more labels (use thereshold lower than 0.5 in order to predict more labels) # here we spread the same proportion of 1 in the unlabeled data set as in the labeled train data set y_semi_bis = pd.Series([1 if x > pd.Series(proba_1).quantile(q=1-np.mean(y_train)) else 0 for x in proba_1]) y_train_semi_bis = pd.concat([y_train, y_semi_bis]) model.fit(X_train_semi, y_train_semi_bis) print(confusion_matrix(y_train_semi_bis, model.predict(X_train_semi))) print(confusion_matrix(y_test, model.predict(X_test))) print(classification_report(y_test, model.predict(X_test))) ###Output precision recall f1-score support 0 0.97 0.99 0.98 3192 1 0.26 0.13 0.17 94 avg / total 0.95 0.96 0.96 3286 ###Markdown Lowering the threshold improves the f1 score for the category. Let's try XGBoost ###Code import xgboost as xgb # lets estimate a XG boosting classifier XGmodel = xgb.XGBClassifier(n_estimators=100, random_state=42, learning_rate=0.1) XGmodel.fit(X_train, y_train) print(confusion_matrix(y_train, XGmodel.predict(X_train))) print(confusion_matrix(y_test, XGmodel.predict(X_test))) ###Output [[7430 4] [ 198 24]] [[3188 4] [ 93 1]] ###Markdown Here we see that only one comment was label as "screen" by our prediction model on the testing set. And 431 that should have been detected did not get detected! This is pretty pretty bad. The reason might be that our Gradient Boosting method focuses on optimizing the prediction error, which is not the metric that makes sense in our case. ###Code print(classification_report(y_test, XGmodel.predict(X_test))) ###Output precision recall f1-score support 0 0.97 1.00 0.99 3192 1 0.20 0.01 0.02 94 avg / total 0.95 0.97 0.96 3286 ###Markdown semi supervised learning combined to XGBoost ###Code # with n neigh = 10 XGmodel.fit(X_train_semi, y_train_semi) print(confusion_matrix(y_train_semi, XGmodel.predict(X_train_semi))) print(confusion_matrix(y_test, XGmodel.predict(X_test))) print(classification_report(y_test, XGmodel.predict(X_test))) ###Output precision recall f1-score support 0 0.97 1.00 0.99 3192 1 0.00 0.00 0.00 94 avg / total 0.94 0.97 0.96 3286 ###Markdown This does not work. It does not label any comment as 'screen'... ###Code # try to spread more labels (use thereshold lower than 0.5 in order to predict more labels) # here we spread the same proportion of 1 in the unlabeled data set as in the labeled train data set y_semi_bis = pd.Series([1 if x > pd.Series(proba_1).quantile(q=1-np.mean(y_train)) else 0 for x in proba_1]) y_train_semi_bis = pd.concat([y_train, y_semi_bis]) XGmodel.fit(X_train_semi, y_train_semi_bis) print(confusion_matrix(y_train_semi_bis, XGmodel.predict(X_train_semi))) print(confusion_matrix(y_test, XGmodel.predict(X_test))) print(classification_report(y_test, XGmodel.predict(X_test))) ###Output precision recall f1-score support 0 0.97 0.99 0.98 3192 1 0.27 0.11 0.15 94 avg / total 0.95 0.97 0.96 3286
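###Markdown
One direction suggested above is a classifier that weights the rare class more heavily during optimization. A hedged sketch: scikit-learn's gradient boosting accepts per-sample weights in `fit`, and `compute_sample_weight('balanced', ...)` derives them from the class frequencies. Whether this beats lowering the threshold here is untested.
###Code
# Hedged sketch: reweight the minority class instead of lowering the threshold.
from sklearn.utils.class_weight import compute_sample_weight

weights = compute_sample_weight('balanced', y_train)  # rare class gets larger weights
weighted_model = GradientBoostingClassifier(n_estimators=100, random_state=42,
                                            learning_rate=0.1)
weighted_model.fit(X_train, y_train, sample_weight=weights)
print(classification_report(y_test, weighted_model.predict(X_test)))
###Output
_____no_output_____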
Data-Lake/notebooks/1_procedural_vs_functional_in_python.ipynb
###Markdown Procedural Programming

This notebook contains the code from the previous screencast. The code counts the number of times a song appears in the log_of_songs variable.

You'll notice that the first time you run `count_plays("Despacito")`, you get the correct count. However, when you run the same code again, `count_plays("Despacito")` no longer gives the correct result. This is because the global variable `play_count` stores the results outside of the count_plays function.

Instructions
Run the code cells in this notebook to see the problem with using a global variable to accumulate results.
###Code
log_of_songs = [
        "Despacito",
        "Nice for what",
        "No tears left to cry",
        "Despacito",
        "Havana",
        "In my feelings",
        "Nice for what",
        "Despacito",
        "All the stars"
]

play_count = 0

def count_plays(song_title):
    global play_count
    for song in log_of_songs:
        if song == song_title:
            play_count = play_count + 1
    return play_count

count_plays("Despacito")
count_plays("Despacito")
###Output
_____no_output_____
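###Markdown
For contrast, here is one way to compute the same count without any global state, so repeated calls always return the same answer. This cell is an added illustration in the spirit of the notebook's title.
###Code
def count_plays_functional(song_title, song_log):
    # No global variable: the count lives only inside this call.
    return sum(1 for song in song_log if song == song_title)

# Both calls now return 3.
count_plays_functional("Despacito", log_of_songs), count_plays_functional("Despacito", log_of_songs)
###Output
_____no_output_____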
00_download_and_preprocess/caltech_for_detectron.ipynb
###Markdown Start create ###Code origin_data_dir = '/root/notebooks/final/caltech_conver_data' img_data = glob.glob(origin_data_dir+'/**/*.jpg', recursive=True) # json_data = glob.glob(origin_data_dir+'/**/*.json', recursive=True) img_data[:10] # json_data[:10] # Image read dir street_dir = '/root/notebooks/0858611-2/final_project/caltech_pedestrian_extractor/video_extractor/*' # Image save dir save_dir = '/root/notebooks/final/result_dataset_9' # num_imgs = 10000 num_imgs = 'all' # Check dir folder exit # If not, create one if os.path.exists(save_dir) == False: os.makedirs(save_dir) for s in ['street', 'street_json']: if os.path.exists(os.path.join(save_dir, s)) == False: os.makedirs(os.path.join(save_dir, s)) #street_imgs = glob.glob(street_dir+'/**/*.jpg', recursive=True) street_imgs = img_data #street_imgs = random.shuffle(random.sample(street_imgs, 5000)) if num_imgs not in 'all': street_imgs = random.sample(street_imgs, num_imgs) random.shuffle(street_imgs) street_img_refined = [] # street_json_refined = [] len(street_imgs) pbar = tqdm(total=len(street_imgs)) for i in range(len(street_imgs)): #if (i%500==0): #print("Process (",i,"/",len(street_imgs),") ","{:.2f}".format(100*i/len(street_imgs))," %") pbar.update() img_path = street_imgs[i] json_dir = img_path.replace('images', 'annotations') json_dir = json_dir.replace('jpg', 'json') input_file = open (json_dir) json_array = json.load(input_file) #if json_array != []: if json_array == []: street_img_refined.append(street_imgs[i]) input_file.close() pbar.close() len(street_img_refined) pbar = tqdm(total=len(street_img_refined)) for i in range(len(street_img_refined)): pbar.update() img_path = street_img_refined[i] json_dir = img_path.replace('images', 'annotations') json_dir = json_dir.replace('jpg', 'json') shutil.copyfile(json_dir, save_dir+'/street_json/'+str('{0:06}'.format(i))+'.json') shutil.copyfile(img_path, save_dir+'/street/'+str('{0:06}'.format(i))+'.jpg') pbar.close() ###Output 100%|██████████| 113278/113278 [43:12<00:00, 43.70it/s]
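###Markdown
The image-to-annotation path rewrite appears twice above; a small helper keeps the two loops in sync. This is a suggested refactor, assuming the extractor's layout where `images/.../x.jpg` pairs with `annotations/.../x.json` (note that replacing `'.jpg'` rather than `'jpg'` avoids accidentally touching directory names).
###Code
# Hedged sketch: derive the annotation path from an image path in one place.
def annotation_path(img_path):
    return img_path.replace('images', 'annotations').replace('.jpg', '.json')

annotation_path(street_img_refined[0])
###Output
_____no_output_____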
Exemplo - 01/Questao 01 - bs.ipynb
###Markdown Question 01 - Riyadh Levi
###Code
# Importing the requests package, external to Python's standard library
import requests
from bs4 import BeautifulSoup

# Defining a function to download the page
def download(url, num_retries=2):
    print('Downloading: ', url)
    page = None
    try:
        response = requests.get(url)
        page = response.text
        if response.status_code >= 400:
            print('Download error:', response.text)
            if num_retries and 500 <= response.status_code < 600:
                return download(url, num_retries - 1)
    except requests.exceptions.RequestException as e:
        print('Download error: ', e)
    return page

# Downloading the site and starting the 'soup'
url = 'https://www.rottentomatoes.com/browse/tv-list-1'
html = download(url)
soup = BeautifulSoup(html, 'html.parser')

# Grabbing the table tags inside the site and storing the data in a variable named t
t = soup.find_all('table')

# Printing the variable
len(t)
t_01 = t[0]
t_02 = t[1]

# Printing the variable's contents
t_01.contents

# Getting the number of rows in the table (number of elements)
numFilmes = (len(t[0].contents)-1)
numFilmes

# When taking the text inside the table's contents, uninteresting characters come along too; they need to be 'swept' out
t[1].contents[1].get_text()

# Create an empty list; for each show inside the table's contents, add it to the list, sweeping out the uninteresting characters
lista_filmes = []
for filme in t[1].contents:
    if filme != '\n':
        # The first .replace substitutes the '\n' present in the content with ''
        # The second .replace substitutes the '%' present in the content with '% - '
        # The third .replace substitutes 'No Score Yet' present in the content with 'SA' (no rating yet)
        lista_filmes.append(filme.get_text().replace('\n','').replace('%', '% - ').replace('No Score Yet', 'SA - '))

# Print the shows in the list, now cleaned up
lista_filmes

# Create an empty list (dictionaries will be stored inside it)
list_dict = []

# For each show in the show list, a dictionary is created with Nome: name, Avaliação: rating, and this dictionary is added to the list
for filme in lista_filmes:
    list_dict.append({'Nome' : filme.split('-')[1], 'Avaliação' : filme.split('-')[0]})

# Print the list of dictionaries
list_dict

# Import a package external to Python's standard library
import pandas as pd

# Create a table taking the list of dictionaries as a parameter; the column order becomes 'Nome' and 'Avaliação'
pd.DataFrame(list_dict).filter(items=['Nome','Avaliação'])
###Output
_____no_output_____
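###Markdown
As an aside, when the target is a plain HTML table, pandas can often skip the manual parsing entirely: `pd.read_html()` returns a list of DataFrames, one per `<table>` on the page. A hedged sketch (the page layout may have changed since this notebook was written, so the table index is an assumption):
###Code
# Hedged alternative: let pandas parse the tables directly.
tables = pd.read_html(url)
print(len(tables))
tables[1].head()
###Output
_____no_output_____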
numpy-data-science-essential-training/Ex_Files_NumPy_Data_EssT/Exercise Files/Ch 1/01_01/Starting/Intro.ipynb
###Markdown What is a Jupyter notebook? Application for creating and sharing documents that contain:- live code- equations- visualizations- explanatory textHome page: http://jupyter.org/ Notebook tutorials- [Quick Start Guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/)- [User Documentation](http://jupyter-notebook.readthedocs.io/en/latest/)- [Examples Documentation](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/examples_index.html)- [Cal Tech](http://bebi103.caltech.edu/2015/tutorials/t0b_intro_to_jupyter_notebooks.html) Notebook Users- students, readers, viewers, learners - read a digital book - interact with a "live" book- notebook developers - create notebooks for students, readers, ... Notebooks contain cells- Code cells - execute computer (Python, or many other languages)- Markdown cells - documentation, "narrative" cells - guide a reader through a notebook Following cells are "live" cells ###Code print ("Hello Jupyter World!; You are helping me learn") (5+7)/4 import numpy as np my_first_array = np.arange(11) print (my_first_array) ###Output [ 0 1 2 3 4 5 6 7 8 9 10]
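###Markdown
Code cells can build on each other: a variable defined in one live cell stays available in the next. A small added example using the array created above:
###Code
# The array from the previous cell is still in memory.
my_first_array[2:5], my_first_array * 2
###Output
_____no_output_____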
Semantic_Segmentation.ipynb
###Markdown Semantic Segmentation ###Code import os.path import tensorflow as tf import helper import warnings from distutils.version import LooseVersion import project_tests as tests import sys import cv2 import scipy import numpy as np ###Output /home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters /home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ec2-user/.config/matplotlib/matplotlibrc", line #2 (fname, cnt)) /home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ec2-user/.config/matplotlib/matplotlibrc", line #3 (fname, cnt)) ###Markdown Check for a GPU ###Code if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) ###Output Default GPU Device: /device:GPU:0 ###Markdown Check TensorFlow Version ###Code assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) def load_vgg(sess, vgg_path): """ Load Pretrained VGG Model into TensorFlow. :param sess: TensorFlow Session :param vgg_path: Path to vgg folder, containing "variables/" and "saved_model.pb" :return: Tuple of Tensors from VGG model (image_input, keep_prob, layer3_out, layer4_out, layer7_out) """ # TODO: Implement function # Use tf.saved_model.loader.load to load the model and weights vgg_tag = 'vgg16' vgg_input_tensor_name = 'image_input:0' vgg_keep_prob_tensor_name = 'keep_prob:0' vgg_layer3_out_tensor_name = 'layer3_out:0' vgg_layer4_out_tensor_name = 'layer4_out:0' vgg_layer7_out_tensor_name = 'layer7_out:0' # Refer https://stackoverflow.com/questions/45705070/how-to-load-and-use-a-saved-model-on-tensorflow tf.saved_model.loader.load(sess, [vgg_tag], vgg_path) graph = tf.get_default_graph() image_input = graph.get_tensor_by_name(vgg_input_tensor_name) keep_prob = graph.get_tensor_by_name(vgg_keep_prob_tensor_name) layer3_out = graph.get_tensor_by_name(vgg_layer3_out_tensor_name) layer4_out = graph.get_tensor_by_name(vgg_layer4_out_tensor_name) layer7_out = graph.get_tensor_by_name(vgg_layer7_out_tensor_name) return image_input, keep_prob, layer3_out, layer4_out, layer7_out def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes): """ Create the layers for a fully convolutional network. Build skip-layers using the vgg layers. 
:param vgg_layer3_out: TF Tensor for VGG Layer 3 output :param vgg_layer4_out: TF Tensor for VGG Layer 4 output :param vgg_layer7_out: TF Tensor for VGG Layer 7 output :param num_classes: Number of classes to classify :return: The Tensor for the last layer of output """ std_dev = 0.001 reg = 0.0001 # 1x1 Convolutions conx_1x1_layer3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding='SAME', kernel_initializer = tf.random_normal_initializer(stddev = std_dev), kernel_regularizer = tf.contrib.layers.l2_regularizer(reg), name = "conx_1x1_layer3") conx_1x1_layer4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding='SAME', kernel_initializer = tf.random_normal_initializer(stddev = std_dev), kernel_regularizer = tf.contrib.layers.l2_regularizer(reg), name = "conx_1x1_layer4") conx_1x1_layer7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding='SAME', kernel_initializer = tf.random_normal_initializer(stddev = std_dev), kernel_regularizer = tf.contrib.layers.l2_regularizer(reg), name = "conx_1x1_layer7") upsample_2x_l7 = tf.layers.conv2d_transpose(vgg_layer7_out, num_classes, 4, strides = (2, 2), padding='SAME', kernel_initializer = tf.random_normal_initializer(stddev = std_dev), kernel_regularizer = tf.contrib.layers.l2_regularizer(reg), name = "upsample_2x_l7") fuse1 = tf.add(upsample_2x_l7, conx_1x1_layer4) upsample_2x_f1 = tf.layers.conv2d_transpose(fuse1, num_classes, 4, strides = (2, 2), padding='SAME', kernel_initializer = tf.random_normal_initializer(stddev = std_dev), kernel_regularizer = tf.contrib.layers.l2_regularizer(reg), name = "upsample_2x_f1") fuse2 = tf.add(upsample_2x_f1, conx_1x1_layer3) upsample_2x_f2 = tf.layers.conv2d_transpose(fuse2, num_classes, 16, strides = (8, 8), padding='SAME', kernel_initializer = tf.random_normal_initializer(stddev = std_dev), kernel_regularizer = tf.contrib.layers.l2_regularizer(reg), name = "upsample_2x_f2") return upsample_2x_f2 def optimize(nn_last_layer, correct_label, learning_rate, num_classes): """ Build the TensorFlow loss and optimizer operations. :param nn_last_layer: TF Tensor of the last layer in the neural network :param correct_label: TF Placeholder for the correct label image :param learning_rate: TF Placeholder for the learning rate :param num_classes: Number of classes to classify :return: Tuple of (logits, train_op, cross_entropy_loss) """ logits = tf.reshape(nn_last_layer, (-1, num_classes)) labels = tf.reshape(correct_label, (-1, num_classes)) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES) reg_constant = 0.0001 loss = loss_operation + reg_constant * sum(reg_losses) optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate) training_operation = optimizer.minimize(loss) return logits, training_operation, loss def train_nn(sess, epochs, batch_size, get_batches_fn, train_op, cross_entropy_loss, input_image, correct_label, keep_prob, learning_rate): """ Train neural network and print out the loss during training. :param sess: TF Session :param epochs: Number of epochs :param batch_size: Batch size :param get_batches_fn: Function to get batches of training data. 
Call using get_batches_fn(batch_size) :param train_op: TF Operation to train the neural network :param cross_entropy_loss: TF Tensor for the amount of loss :param input_image: TF Placeholder for input images :param correct_label: TF Placeholder for label images :param keep_prob: TF Placeholder for dropout keep probability :param learning_rate: TF Placeholder for learning rate """ for i in range(epochs): for images, labels in get_batches_fn(batch_size): _, loss = sess.run([train_op, cross_entropy_loss], feed_dict={input_image : images, correct_label : labels, keep_prob: 0.5, learning_rate : 0.0001}) print('Epoch {}/{}; Training Loss:{:.03f}'.format(i+1, epochs, loss)) def gen_test_output_video(sess, logits, keep_prob, image_pl, video_file, image_shape): """ Generate test output for each frame of the input video :param sess: TF session :param logits: TF Tensor for the logits :param keep_prob: TF Placeholder for the dropout keep probability :param image_pl: TF Placeholder for the input image :param video_file: Path to the input video file :param image_shape: Tuple - Shape of image :return: Output for each video frame """ cap = cv2.VideoCapture(video_file) counter=0 while True: ret, frame = cap.read() if frame is None: break image = scipy.misc.imresize(frame, image_shape) im_softmax = sess.run( [tf.nn.softmax(logits)], {keep_prob: 1.0, image_pl: [image]}) im_softmax = im_softmax[0][:, 1].reshape(image_shape[0], image_shape[1]) segmentation = (im_softmax > 0.5).reshape(image_shape[0], image_shape[1], 1) mask = np.dot(segmentation, np.array([[0, 255, 0, 127]])) mask_full = scipy.misc.imresize(mask, frame.shape) mask_full = scipy.misc.toimage(mask_full, mode="RGBA") mask = scipy.misc.toimage(mask, mode="RGBA") street_im = scipy.misc.toimage(image) street_im.paste(mask, box=None, mask=mask) street_im_full = scipy.misc.toimage(frame) street_im_full.paste(mask_full, box=None, mask=mask_full) cv2.imwrite("video_output/video%08d.jpg"%counter,np.array(street_im_full)) counter=counter+1 # When everything is done, release the capture cap.release() cv2.destroyAllWindows() def run(): num_classes = 2 image_shape = (160, 576) data_dir = './data' runs_dir = './runs' tests.test_for_kitti_dataset(data_dir) # Download pretrained vgg model helper.maybe_download_pretrained_vgg(data_dir) # OPTIONAL: Train and Inference on the cityscapes dataset instead of the Kitti dataset. # You'll need a GPU with at least 10 teraFLOPS to train on. 
# https://www.cityscapes-dataset.com/ with tf.Session() as sess: # Path to vgg model vgg_path = os.path.join(data_dir, 'vgg') # Create function to get batches get_batches_fn = helper.gen_batch_function(os.path.join(data_dir, 'data_road/training'), image_shape) # OPTIONAL: Augment Images for better results # https://datascience.stackexchange.com/questions/5224/how-to-prepare-augment-images-for-neural-network correct_label = tf.placeholder(dtype=tf.float32, shape=(None, None, None, num_classes), name='correct_label') learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate') # TODO: Build NN using load_vgg, layers, and optimize function input_image, keep_prob, layer3_out, layer4_out, layer7_out = load_vgg(sess, vgg_path) outputs = layers(layer3_out, layer4_out, layer7_out, num_classes) logits, training_operation, loss_operation = optimize(outputs, correct_label, learning_rate, num_classes) epochs = 50 batch_size = 20 # TODO: Train NN using the train_nn function sess.run(tf.global_variables_initializer()) train_nn(sess, epochs, batch_size, get_batches_fn, training_operation, loss_operation, input_image, correct_label, keep_prob, learning_rate) saver = tf.train.Saver() saver.save(sess, './fcn_ss') print("Model saved") # TODO: Save inference data using helper.save_inference_samples helper.save_inference_samples(runs_dir, data_dir, sess, image_shape, logits, keep_prob, input_image) # OPTIONAL: Apply the trained model to a video video_file='project_video.mp4' gen_test_output_video(sess, logits, keep_prob, input_image, video_file, image_shape) run() ###Output Tests Passed INFO:tensorflow:Restoring parameters from b'./data/vgg/variables/variables' WARNING:tensorflow:From <ipython-input-6-df592e219464>:13: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See tf.nn.softmax_cross_entropy_with_logits_v2. 
Epoch 1/50; Training Loss:0.694 Epoch 1/50; Training Loss:0.700 Epoch 1/50; Training Loss:0.693 Epoch 1/50; Training Loss:0.691 Epoch 1/50; Training Loss:0.687 Epoch 1/50; Training Loss:0.681 Epoch 1/50; Training Loss:0.665 Epoch 1/50; Training Loss:0.632 Epoch 1/50; Training Loss:0.596 Epoch 1/50; Training Loss:0.553 Epoch 1/50; Training Loss:0.492 Epoch 1/50; Training Loss:0.469 Epoch 1/50; Training Loss:0.467 Epoch 1/50; Training Loss:0.477 Epoch 1/50; Training Loss:0.480 Epoch 2/50; Training Loss:0.428 Epoch 2/50; Training Loss:0.415 Epoch 2/50; Training Loss:0.364 Epoch 2/50; Training Loss:0.380 Epoch 2/50; Training Loss:0.407 Epoch 2/50; Training Loss:0.393 Epoch 2/50; Training Loss:0.387 Epoch 2/50; Training Loss:0.353 Epoch 2/50; Training Loss:0.337 Epoch 2/50; Training Loss:0.326 Epoch 2/50; Training Loss:0.330 Epoch 2/50; Training Loss:0.314 Epoch 2/50; Training Loss:0.301 Epoch 2/50; Training Loss:0.245 Epoch 2/50; Training Loss:0.245 Epoch 3/50; Training Loss:0.238 Epoch 3/50; Training Loss:0.268 Epoch 3/50; Training Loss:0.212 Epoch 3/50; Training Loss:0.209 Epoch 3/50; Training Loss:0.218 Epoch 3/50; Training Loss:0.211 Epoch 3/50; Training Loss:0.194 Epoch 3/50; Training Loss:0.179 Epoch 3/50; Training Loss:0.199 Epoch 3/50; Training Loss:0.170 Epoch 3/50; Training Loss:0.190 Epoch 3/50; Training Loss:0.174 Epoch 3/50; Training Loss:0.196 Epoch 3/50; Training Loss:0.202 Epoch 3/50; Training Loss:0.150 Epoch 4/50; Training Loss:0.165 Epoch 4/50; Training Loss:0.138 Epoch 4/50; Training Loss:0.165 Epoch 4/50; Training Loss:0.159 Epoch 4/50; Training Loss:0.157 Epoch 4/50; Training Loss:0.162 Epoch 4/50; Training Loss:0.166 Epoch 4/50; Training Loss:0.154 Epoch 4/50; Training Loss:0.162 Epoch 4/50; Training Loss:0.151 Epoch 4/50; Training Loss:0.155 Epoch 4/50; Training Loss:0.148 Epoch 4/50; Training Loss:0.156 Epoch 4/50; Training Loss:0.150 Epoch 4/50; Training Loss:0.138 Epoch 5/50; Training Loss:0.138 Epoch 5/50; Training Loss:0.151 Epoch 5/50; Training Loss:0.163 Epoch 5/50; Training Loss:0.129 Epoch 5/50; Training Loss:0.115 Epoch 5/50; Training Loss:0.152 Epoch 5/50; Training Loss:0.129 Epoch 5/50; Training Loss:0.127 Epoch 5/50; Training Loss:0.136 Epoch 5/50; Training Loss:0.114 Epoch 5/50; Training Loss:0.117 Epoch 5/50; Training Loss:0.139 Epoch 5/50; Training Loss:0.131 Epoch 5/50; Training Loss:0.110 Epoch 5/50; Training Loss:0.125 Epoch 6/50; Training Loss:0.126 Epoch 6/50; Training Loss:0.130 Epoch 6/50; Training Loss:0.116 Epoch 6/50; Training Loss:0.108 Epoch 6/50; Training Loss:0.125 Epoch 6/50; Training Loss:0.088 Epoch 6/50; Training Loss:0.098 Epoch 6/50; Training Loss:0.121 Epoch 6/50; Training Loss:0.113 Epoch 6/50; Training Loss:0.124 Epoch 6/50; Training Loss:0.120 Epoch 6/50; Training Loss:0.109 Epoch 6/50; Training Loss:0.095 Epoch 6/50; Training Loss:0.102 Epoch 6/50; Training Loss:0.093 Epoch 7/50; Training Loss:0.101 Epoch 7/50; Training Loss:0.110 Epoch 7/50; Training Loss:0.104 Epoch 7/50; Training Loss:0.096 Epoch 7/50; Training Loss:0.133 Epoch 7/50; Training Loss:0.100 Epoch 7/50; Training Loss:0.106 Epoch 7/50; Training Loss:0.098 Epoch 7/50; Training Loss:0.093 Epoch 7/50; Training Loss:0.110 Epoch 7/50; Training Loss:0.104 Epoch 7/50; Training Loss:0.098 Epoch 7/50; Training Loss:0.100 Epoch 7/50; Training Loss:0.101 Epoch 7/50; Training Loss:0.091 Epoch 8/50; Training Loss:0.097 Epoch 8/50; Training Loss:0.084 Epoch 8/50; Training Loss:0.086 Epoch 8/50; Training Loss:0.105 Epoch 8/50; Training Loss:0.100 Epoch 8/50; Training Loss:0.068 
Epoch 8/50; Training Loss:0.096 Epoch 8/50; Training Loss:0.087 Epoch 8/50; Training Loss:0.101 Epoch 8/50; Training Loss:0.095 Epoch 8/50; Training Loss:0.095 Epoch 8/50; Training Loss:0.087 Epoch 8/50; Training Loss:0.085 Epoch 8/50; Training Loss:0.092 Epoch 8/50; Training Loss:0.090 Epoch 9/50; Training Loss:0.080 Epoch 9/50; Training Loss:0.083 Epoch 9/50; Training Loss:0.074 Epoch 9/50; Training Loss:0.086 Epoch 9/50; Training Loss:0.081 Epoch 9/50; Training Loss:0.070 Epoch 9/50; Training Loss:0.086 Epoch 9/50; Training Loss:0.076 Epoch 9/50; Training Loss:0.076 Epoch 9/50; Training Loss:0.092 Epoch 9/50; Training Loss:0.079 Epoch 9/50; Training Loss:0.075 Epoch 9/50; Training Loss:0.087 Epoch 9/50; Training Loss:0.082 Epoch 9/50; Training Loss:0.081 Epoch 10/50; Training Loss:0.075 Epoch 10/50; Training Loss:0.083 Epoch 10/50; Training Loss:0.083 Epoch 10/50; Training Loss:0.076 Epoch 10/50; Training Loss:0.076 Epoch 10/50; Training Loss:0.055 Epoch 10/50; Training Loss:0.071 Epoch 10/50; Training Loss:0.061 Epoch 10/50; Training Loss:0.066 Epoch 10/50; Training Loss:0.097 Epoch 10/50; Training Loss:0.076 Epoch 10/50; Training Loss:0.086 Epoch 10/50; Training Loss:0.076 Epoch 10/50; Training Loss:0.079 Epoch 10/50; Training Loss:0.078 Epoch 11/50; Training Loss:0.072 Epoch 11/50; Training Loss:0.067 Epoch 11/50; Training Loss:0.063 Epoch 11/50; Training Loss:0.100 Epoch 11/50; Training Loss:0.083 Epoch 11/50; Training Loss:0.080 Epoch 11/50; Training Loss:0.067 Epoch 11/50; Training Loss:0.079 Epoch 11/50; Training Loss:0.078 Epoch 11/50; Training Loss:0.069 Epoch 11/50; Training Loss:0.065 Epoch 11/50; Training Loss:0.073 Epoch 11/50; Training Loss:0.070 Epoch 11/50; Training Loss:0.087 Epoch 11/50; Training Loss:0.068 Epoch 12/50; Training Loss:0.077 Epoch 12/50; Training Loss:0.063 Epoch 12/50; Training Loss:0.072 Epoch 12/50; Training Loss:0.061 Epoch 12/50; Training Loss:0.053 Epoch 12/50; Training Loss:0.077 Epoch 12/50; Training Loss:0.054 Epoch 12/50; Training Loss:0.059 Epoch 12/50; Training Loss:0.071 Epoch 12/50; Training Loss:0.054 Epoch 12/50; Training Loss:0.064 Epoch 12/50; Training Loss:0.064 Epoch 12/50; Training Loss:0.069 Epoch 12/50; Training Loss:0.064 Epoch 12/50; Training Loss:0.048 Epoch 13/50; Training Loss:0.065 Epoch 13/50; Training Loss:0.077 Epoch 13/50; Training Loss:0.055 Epoch 13/50; Training Loss:0.051 Epoch 13/50; Training Loss:0.066 Epoch 13/50; Training Loss:0.061 Epoch 13/50; Training Loss:0.067 Epoch 13/50; Training Loss:0.048 Epoch 13/50; Training Loss:0.051 Epoch 13/50; Training Loss:0.053 Epoch 13/50; Training Loss:0.062 Epoch 13/50; Training Loss:0.061 Epoch 13/50; Training Loss:0.052 Epoch 13/50; Training Loss:0.057 Epoch 13/50; Training Loss:0.051 Epoch 14/50; Training Loss:0.056 Epoch 14/50; Training Loss:0.053 Epoch 14/50; Training Loss:0.061 Epoch 14/50; Training Loss:0.059 Epoch 14/50; Training Loss:0.046 Epoch 14/50; Training Loss:0.050 Epoch 14/50; Training Loss:0.058 Epoch 14/50; Training Loss:0.057 Epoch 14/50; Training Loss:0.049 Epoch 14/50; Training Loss:0.048 Epoch 14/50; Training Loss:0.071 Epoch 14/50; Training Loss:0.054 Epoch 14/50; Training Loss:0.056 Epoch 14/50; Training Loss:0.050 Epoch 14/50; Training Loss:0.063 Epoch 15/50; Training Loss:0.062 Epoch 15/50; Training Loss:0.050 Epoch 15/50; Training Loss:0.053 Epoch 15/50; Training Loss:0.062 Epoch 15/50; Training Loss:0.050 Epoch 15/50; Training Loss:0.059 Epoch 15/50; Training Loss:0.056 Epoch 15/50; Training Loss:0.046 Epoch 15/50; Training Loss:0.052 Epoch 15/50; 
Training Loss:0.049 Epoch 15/50; Training Loss:0.039 Epoch 15/50; Training Loss:0.034 Epoch 15/50; Training Loss:0.046 Epoch 15/50; Training Loss:0.056 Epoch 15/50; Training Loss:0.067 Epoch 16/50; Training Loss:0.054 Epoch 16/50; Training Loss:0.057 Epoch 16/50; Training Loss:0.048 Epoch 16/50; Training Loss:0.052 Epoch 16/50; Training Loss:0.045 Epoch 16/50; Training Loss:0.059 Epoch 16/50; Training Loss:0.047 Epoch 16/50; Training Loss:0.052 Epoch 16/50; Training Loss:0.046 Epoch 16/50; Training Loss:0.044 Epoch 16/50; Training Loss:0.046 Epoch 16/50; Training Loss:0.036 Epoch 16/50; Training Loss:0.048 Epoch 16/50; Training Loss:0.051 ###Markdown NEW CNN Model ###Code # Setup import os from keras.preprocessing.image import ImageDataGenerator # data directory os.chdir("C:/Users/Sudhanshu Biyani/Desktop/folder") image_dimensions = 80 # batch size training_batch_size = 64 # larger = better but more computationally costly and memory intensive validate_batch_size = 1 # optimize at runtime for parallel cores, otherwise doesn't matter much # normalization # normalize each chip samplewise_center = True samplewise_std_normalization = True # normalize by larger batches featurewise_center = False featurewise_std_normalization = False # adjacent pixel correllation reduction # never explored zca_whitening = False zca_epsilon = 1e-6 # data augmentation # training only transform = 0.1 zoom_range = 0.1 rotate = 360 flip = True datagen_train = ImageDataGenerator( samplewise_center=samplewise_center, featurewise_center=featurewise_center, featurewise_std_normalization=featurewise_std_normalization, samplewise_std_normalization=samplewise_std_normalization, zca_whitening=zca_whitening, zca_epsilon=zca_epsilon, rotation_range=rotate, width_shift_range=transform, height_shift_range=transform, shear_range=transform, zoom_range=zoom_range, fill_mode='nearest', horizontal_flip=flip, vertical_flip=flip, rescale=1./255, preprocessing_function=None) # data augmentation # evaluation only transform = 0 rotate = 0 flip = False datagen_verify = ImageDataGenerator( samplewise_center=samplewise_center, featurewise_center=featurewise_center, featurewise_std_normalization=featurewise_std_normalization, samplewise_std_normalization=samplewise_std_normalization, zca_whitening=zca_whitening, zca_epsilon=zca_epsilon, rotation_range=rotate, width_shift_range=transform, height_shift_range=transform, shear_range=transform, zoom_range=transform, fill_mode='nearest', horizontal_flip=flip, vertical_flip=flip, rescale=1./255, preprocessing_function=None) generator_train = datagen_train.flow_from_directory( 'train', target_size=(image_dimensions,image_dimensions), color_mode="rgb", batch_size=training_batch_size, class_mode='categorical', shuffle=True) generator_verify = datagen_verify.flow_from_directory( 'verify', target_size=(image_dimensions,image_dimensions), color_mode="rgb", batch_size=validate_batch_size, class_mode='categorical', shuffle=True) print('Done') # define MobileNet architecture from keras.applications import MobileNet model = MobileNet( input_shape=(image_dimensions, image_dimensions,3), alpha=0.25, depth_multiplier=1, dropout=0.5, include_top=True, weights=None, input_tensor=None, pooling=None, classes=8 ) model.compile(loss='categorical_crossentropy', optimizer='adam') #model.summary() print('Done') # Train CNN from PIL import Image from keras.callbacks import ModelCheckpoint nEpochs = 500 checkpointer = ModelCheckpoint( filepath='sat_mobilenet_v0.h5', monitor='val_loss', verbose=1, save_best_only=True, 
mode='auto', save_weights_only=False) nFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/train")]) nBatches = nFiles//training_batch_size nValFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/verify")]) nValbatches = nValFiles//validate_batch_size hist = model.fit_generator( generator_train, steps_per_epoch=nBatches, epochs=nEpochs, verbose=2, validation_data=generator_verify, validation_steps=nValbatches, max_queue_size=10, callbacks=[checkpointer]) print('Done') ###Output Epoch 1/500 ###Markdown OTHER EVAL TOOLS ###Code # LOAD Pretrained MOBILENET MODEL from keras.applications import mobilenet from keras.models import load_model model = load_model('sat_mobilenet_v0.h5', custom_objects={ 'relu6': mobilenet.relu6, 'DepthwiseConv2D': mobilenet.DepthwiseConv2D}) print('Done') # EVALUATE nValFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/verify")]) nValbatches = nValFiles//validate_batch_size evaluation = model.evaluate_generator( generator_verify, steps=nValbatches, max_queue_size=10) print(model.metrics_names) print(evaluation) # PREDICT generator_predict = datagen_verify.flow_from_directory( 'verify', target_size=(image_dimensions,image_dimensions), color_mode="rgb", batch_size=validate_batch_size, class_mode='categorical', shuffle=False) nValFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/verify")]) nValbatches = nValFiles//validate_batch_size predictions = model.predict_generator( generator_predict, steps=nValbatches, max_queue_size=10, verbose=1) #RUN ON IMAGE FROM DRONE import imageio print ("hello") import os print (os.path) # use a raw string so the backslashes in the Windows path are not treated as escape sequences temp = imageio.imread(r'C:\Users\Sudhanshu Biyani\OneDrive - Arizona State University\Semester 2\CSE 591 - Perception in Robotics\Project\100x100 Slices\Camelback\x0y0.png') ###Output _____no_output_____ ###Code temp = imageio.imread('C:/Users/Sudhanshu Biyani/Desktop/x0y0.png') print (temp) import pandas as pd import scipy.misc scipy.misc.imsave('outfile.jpg', temp) ###Output C:\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: DeprecationWarning: `imsave` is deprecated! `imsave` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0. Use ``imageio.imwrite`` instead. """Entry point for launching an IPython kernel. 
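###Markdown The DeprecationWarning above says that `scipy.misc.imsave` is going away; a minimal hedged sketch of the suggested replacement, using `imageio.imwrite` as the warning recommends (same array, same output file name): ###Code
import imageio

# imageio.imwrite takes (path, array), mirroring the old scipy.misc.imsave call
imageio.imwrite('outfile.jpg', temp) ###Output _____no_output_____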
###Markdown **Semantic Segmentation - Samay Gandhi** **Pytorch** *Check the specifications of gpu* ###Code # !pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html # memory footprint support libraries/code !ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi !pip install gputil !pip install psutil !pip install humanize import psutil import humanize import os import GPUtil as GPU GPUs = GPU.getGPUs() # XXX: only one GPU on Colab and isn’t guaranteed gpu = GPUs[0] def printm(): process = psutil.Process(os.getpid()) print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss)) print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal)) printm() %cd '/content/drive/MyDrive/Datasets /Semantic Drone Dataset' ###Output /content/drive/MyDrive/Datasets /Semantic Drone Dataset ###Markdown *Import necessary libraries* ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt from tqdm import tqdm import cv2 from PIL import Image %matplotlib inline import torch import torch.nn as nn from torch.utils.data import DataLoader,Dataset,random_split from torchvision import transforms from torchvision import datasets import torchvision.transforms.functional as TF import torch.nn.functional as F from torch.autograd import Variable DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' LR = 1e-4 ###Output _____no_output_____ ###Markdown *View the Images* ###Code images = 'dataset/semantic_drone_dataset/Original_Images' rgb_masks = 'RGB_color_image_masks' labels = 'dataset/semantic_drone_dataset/Labels' image = Image.open(images + '/original_images/594.jpg') rgb_mask = Image.open(rgb_masks + '/RGB_color_image_masks/594.png') label = Image.open(labels + '/label_images_semantic/594.png') fig = plt.figure(figsize=(32,32)) rows = 1 columns = 3 fig.add_subplot(rows,columns,1) plt.imshow(image) plt.axis('off') plt.title("Image") fig.add_subplot(rows,columns,2) plt.imshow(rgb_mask,alpha=0.9) plt.axis('off') plt.title("Label with RGB mask") fig.add_subplot(rows,columns,3) plt.imshow(label, cmap='gray') plt.axis('off') plt.title("Label with mask") ###Output _____no_output_____ ###Markdown *Dataset class and Dataloaders* ###Code # 0 : others - 0 # 1 : area - 1 # 9 : roof - 2 # 3 : grass - 3 # 5 : water - 4 # 15 : person - 5 # 17 : car - 6 class DroneDataset(Dataset): def __init__(self,images_path,labels_path): self.images = datasets.ImageFolder(images_path, transform=transforms.Compose([ transforms.Resize((256,256)), transforms.ToTensor(), transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)) ])) self.labels = datasets.ImageFolder(labels_path,transform=transforms.Compose([ transforms.Grayscale(), transforms.Resize((256,256)), transforms.ToTensor() ])) def __getitem__(self,index): img_output = self.labels[index][0] img_output = 255*img_output #Manipulate the label Images mask = np.array([[0,0], [1,1], [2,0], [3,3], [4,1], [5,5], [6,0], [7,5], [8,3], [9,2], [10,0], [11,2], [12,2], [13,0], [14,0], [15,5], [16,5], [17,6], [18,6], [19,3], [20,3], [21,0], [22,0], [23,0] ]) for i in range(0,24): img_output[img_output == i] = mask[i][1] img_output = img_output.to(torch.int64) return self.images[index][0],img_output def __len__(self): return len(self.images) dataset = DroneDataset(images,labels) torch.unique(dataset[2][1]) #Split 
the data into train and val dataset n_val = 10 train_dataset,val_dataset = random_split(dataset,[len(dataset)-n_val,n_val],generator=torch.Generator().manual_seed(42)) #Make the data_loader now so that the data is ready for training batch_size = 4 train_loader = DataLoader(train_dataset,batch_size) test_loader = DataLoader(val_dataset,batch_size*2) ###Output _____no_output_____ ###Markdown *Model* ###Code #Define the CNN block now #Defined as per the U-net Structure #Made some modifications too to the original structure class DoubleCNNBlock(nn.Module): def __init__(self,in_channels,out_channels): super().__init__() self.conv1 = nn.Conv2d( in_channels=in_channels, out_channels=out_channels, kernel_size=3, padding=1, stride=1, bias=False ) self.bn1 = nn.BatchNorm2d( out_channels ) self.act1 = nn.ReLU() self.conv2 = nn.Conv2d( in_channels=out_channels, out_channels=out_channels, kernel_size=3, padding=1, stride=1, bias=False ) self.bn2 = nn.BatchNorm2d( out_channels ) self.act2 = nn.ReLU() def forward(self,x): out = self.act1(self.bn1(self.conv1(x))) out = self.act2(self.bn2(self.conv2(out))) return out class UpConv(nn.Module): def __init__(self,in_channels,out_channels): super().__init__() self.tconv = nn.ConvTranspose2d( in_channels=in_channels, out_channels=out_channels, kernel_size=2, stride=2 ) def forward(self,x,skip_connection): out = self.tconv(x) if out.shape != skip_connection.shape: out = TF.resize(out ,size=skip_connection.shape[2:]) out = torch.cat([skip_connection,out],axis = 1) return out class Bottom(nn.Module): def __init__(self,channel=[128,256]): super().__init__() self.channel=channel self.conv1 = nn.Conv2d( in_channels=self.channel[0], out_channels=self.channel[1], kernel_size=3, padding=1, stride=1, bias=False ) self.bn1 = nn.BatchNorm2d( self.channel[1] ) self.act1 = nn.ReLU() self.conv2 = nn.Conv2d( in_channels=self.channel[1], out_channels=self.channel[1], kernel_size=3, padding=1, stride=1, bias=False ) self.bn2 = nn.BatchNorm2d( self.channel[1] ) self.act2 = nn.ReLU() self.bottom = nn.Sequential( self.conv1, self.bn1, self.act1, self.conv2, self.bn2, self.act2 ) def forward(self,x): # out = self.act1(self.bn1(self.conv1(x))) # print("1:{}".format(out.shape)) # out = self.act2(self.bn2(self.conv2(out))) # print("2:{}".format(out.shape)) return self.bottom(x) class Unet(nn.Module): def __init__(self,num_classes,filters=[16,32,64,128],input_channels=3): super().__init__() self.contract = nn.ModuleList() self.expand = nn.ModuleList() #64 - #128 - #256 - #512 - #1024 -#512 self.filters = filters self.input_channels = input_channels self.num_classes = num_classes self.pool = nn.MaxPool2d( kernel_size=2, stride=2 ) for filters in self.filters: self.contract.append( DoubleCNNBlock( in_channels=input_channels, out_channels=filters ) ) input_channels = filters for filters in reversed(self.filters): self.expand.append( UpConv( in_channels=filters*2, out_channels=filters ) ) self.expand.append( DoubleCNNBlock( in_channels=filters*2, out_channels=filters ) ) self.final = nn.Conv2d( in_channels=self.filters[0], out_channels=num_classes, kernel_size=3, padding=1, stride=1 ) def forward(self,x): skip_connections = [] for downs in self.contract: out = downs(x) skip_connections.append(out) out = self.pool(out) x = out bottom = Bottom() bottom.to(DEVICE) y = bottom(x) for idx in range(0,len(self.expand),2): skip_connection = skip_connections[len(skip_connections)-idx//2-1] y = self.expand[idx](y,skip_connection) y = self.expand[idx+1](y) return self.final(y) model = 
Unet(num_classes=8) model.to(DEVICE) def DICEloss(preds,outputs,smooth=1): preds = F.softmax(preds,dim=1) labels_one_hot = F.one_hot(outputs, num_classes = 8).permute(0,3,1,2).contiguous() intersection = torch.sum(preds*labels_one_hot) total = torch.sum(preds*preds) + torch.sum(labels_one_hot*labels_one_hot) # add the smoothing term to the denominator as well, so the ratio stays bounded and the loss stays non-negative return 1-((2*intersection + smooth)/(total + smooth)) model = Unet(num_classes=8) model.load_state_dict(torch.load('Only 7 classes2')) model.to(DEVICE) opt = torch.optim.Adam(model.parameters(),lr = 1e-5) ###Output _____no_output_____ ###Markdown *Training the model* ###Code #Training the model model.train() num_epochs = 15 loss_per_iteration = [] iters = [] for epochs in range(1,num_epochs+1): loss_per_epoch = 0.0 batch_num = 0 for inputs,outputs in tqdm(train_loader): torch.cuda.empty_cache() inputs,outputs = inputs.to(DEVICE),outputs.to(DEVICE) preds = model(inputs) loss = DICEloss(preds,outputs.squeeze(axis=1)) loss.backward() opt.step() opt.zero_grad() # .item() detaches the scalar from the graph, so the running total is a plain float that can be plotted loss_per_epoch += loss.item() batch_num +=1 #print("Batch num: {} | Dice Loss:{}".format(batch_num,loss)) loss_per_iteration.append(loss_per_epoch) iters.append(epochs) print("[{}/{}] Loss : {} ".format(epochs,num_epochs,loss_per_epoch)) #Saving the model after every epoch torch.save(model.state_dict(),'Only 7 classes2') print("Saved the model...") plt.title('Loss with epochs') plt.xlabel('Iterations') plt.ylabel('Loss') plt.plot(iters,loss_per_iteration) plt.imshow(torch.argmax(F.softmax(model(TF.to_tensor(TF.resize(image,size=(256,256))).to(DEVICE).unsqueeze(0)),dim=1),axis=1).cpu()[0],cmap='gray') plt.imshow((TF.to_tensor(TF.resize(label,size=(256,256))))[0],cmap='gray') img = torch.argmax(F.softmax(model(TF.to_tensor(TF.resize(image,size=(256,256))).to(DEVICE).unsqueeze(0)),dim=1),axis=1) color_array = np.array([[0,0,0], [128,64,128], [70,70,70], [0,102,0], [28,42,168], [125,22,96], [9,143,150]]) print(color_array) from skimage.color import label2rgb plt.imshow(label2rgb(img.view(256,256).detach().cpu().numpy())) ###Output _____no_output_____
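###Markdown One caveat in the `Unet` implementation above: the bottleneck `Bottom()` is constructed inside `forward`, so it receives fresh random weights on every pass and its parameters are never registered with the optimizer. A hedged sketch of the usual fix — build the bottleneck once in `__init__` — follows. This is an illustration only, since checkpoints saved from the original class (such as 'Only 7 classes2') would no longer match the new state-dict keys: ###Code
# Sketch: subclass the Unet above so the bottleneck is a registered submodule.
class UnetFixed(Unet):
    def __init__(self, num_classes, filters=[16,32,64,128], input_channels=3):
        super().__init__(num_classes, filters, input_channels)
        # built once, so its weights are trained, checkpointed, and reused
        self.bottom = Bottom(channel=[filters[-1], filters[-1]*2])

    def forward(self, x):
        skip_connections = []
        for downs in self.contract:
            x = downs(x)
            skip_connections.append(x)
            x = self.pool(x)
        y = self.bottom(x)  # no re-instantiation here
        for idx in range(0, len(self.expand), 2):
            skip = skip_connections[len(skip_connections) - idx//2 - 1]
            y = self.expand[idx](y, skip)
            y = self.expand[idx+1](y)
        return self.final(y) ###Output _____no_output_____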
cliffwalking_temporal_difference.ipynb
###Markdown Temporal-Difference Methods Mini Project: OpenAI Gym CliffWalkingEnvThis notebook contains my implementations of many Temporal-Difference (TD) methods. Part 0: Explore CliffWalkingEnvCreate an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment. ###Code import gym env = gym.make('CliffWalking-v0') ###Output _____no_output_____ ###Markdown The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below. ###Code print(env.action_space) print(env.observation_space) ###Output Discrete(4) Discrete(48) ###Markdown In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. ###Code import numpy as np from plot_utils import plot_values # define the optimal state-value function V_opt = np.zeros((4,12)) V_opt[0:13][0] = -np.arange(3, 15)[::-1] V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1 V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2 V_opt[3][0] = -13 plot_values(V_opt) ###Output _____no_output_____ ###Markdown Part 1: TD Prediction: State ValuesImplementation of TD prediction (for estimating the state-value function).We will begin by investigating a policy where the agent moves:- `RIGHT` in states `0` through `10`, inclusive, - `DOWN` in states `11`, `23`, and `35`, and- `UP` in states `12` through `22`, inclusive, states `24` through `34`, inclusive, and state `36`.The policy is specified and printed below. Note that states where the agent does not choose an action have been marked with `-1`. ###Code policy = np.hstack([1*np.ones(11), 2, 0, np.zeros(10), 2, 0, np.zeros(10), 2, 0, -1*np.ones(11)]) print("\nPolicy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy.reshape(4,12)) ###Output Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1): [[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.] [ 0. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]] ###Markdown Run the next cell to visualize the state-value function that corresponds to this policy. Make sure that you take the time to understand why this is the corresponding value function! ###Code V_true = np.zeros((4,12)) for i in range(3): V_true[0:12][i] = -np.arange(3, 15)[::-1] - i V_true[1][11] = -2 V_true[2][11] = -1 V_true[3][0] = -17 plot_values(V_true) ###Output _____no_output_____ ###Markdown The above figure is what you will try to approximate through the TD prediction algorithm.Your algorithm for TD prediction has five arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `policy`: This is a 1D numpy array with `policy.shape` equal to the number of states (`env.nS`). 
`policy[s]` returns the action that the agent chooses when in state `s`.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `V`: This is a dictionary where `V[s]` is the estimated value of state `s`. ###Code from collections import defaultdict, deque import sys def td_prediction(env, num_episodes, policy, alpha, gamma=1.0): # initialize empty dictionaries of floats V = defaultdict(float) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # begin an episode, observe S state = env.reset() while True: # choose action A action = policy[state] # take action A, observe R, S' next_state, reward, done, info = env.step(action) # perform updates V[state] = V[state] + (alpha * (reward + (gamma * V[next_state]) - V[state])) # S <- S' state = next_state # end episode if reached terminal state if done: break return V ###Output _____no_output_____ ###Markdown Run the code cell below to test your implementation and visualize the estimated state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. ###Code import check_test # evaluate the policy and reshape the state-value function V_pred = td_prediction(env, 5000, policy, .01) # please do not change the code below this line V_pred_plot = np.reshape([V_pred[key] if key in V_pred else 0 for key in np.arange(48)], (4,12)) check_test.run_check('td_prediction_check', V_pred_plot) plot_values(V_pred_plot) ###Output Episode 5000/5000 ###Markdown How close is your estimated state-value function to the true state-value function corresponding to the policy? You might notice that some of the state values are not estimated by the agent. This is because under this policy, the agent will not visit all of the states. In the TD prediction algorithm, the agent can only estimate the values corresponding to states that are visited. Part 2: TD Control: SarsaImplementation of the Sarsa control algorithm.The algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. 
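###Markdown For reference, the update applied at every time step below is the standard Sarsa rule: $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big( R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \big)$$ This is exactly the quantity computed by the `update_Q` helper defined in the next cell.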
###Code def update_Q(Qsa, Qsa_next, reward, alpha, gamma): """ updates the action-value function estimate using the most recent time step """ return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa)) def epsilon_greedy_probs(env, Q_s, i_episode, eps=None): """ obtains the action probabilities corresponding to epsilon-greedy policy """ epsilon = 1.0 / i_episode if eps is not None: epsilon = eps policy_s = np.ones(env.nA) * epsilon / env.nA policy_s[np.argmax(Q_s)] = 1 - epsilon + (epsilon / env.nA) return policy_s import matplotlib.pyplot as plt %matplotlib inline def sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) # pick action A action = np.random.choice(np.arange(env.nA), p=policy_s) # limit number of time steps per episode for t_step in np.arange(300): # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward if not done: # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode) # pick next action A' next_action = np.random.choice(np.arange(env.nA), p=policy_s) # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action], reward, alpha, gamma) # S <- S' state = next_state # A <- A' action = next_action if done: # update TD estimate of Q Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma) # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsa = sarsa(env, 5000, .01) # print the estimated optimal policy policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_sarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsa) # plot the estimated optimal state-value function V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)]) plot_values(V_sarsa) ###Output Episode 5000/5000 ###Markdown Part 3: TD Control: Q-learningImplementation of the Q-learning control algorithm.The algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. ###Code def q_learning(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode, observe S state = env.reset() while True: # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode) # pick next action A action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # update Q Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_sarsamax = q_learning(env, 5000, .01) # print the estimated optimal policy policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12)) check_test.run_check('td_control_check', policy_sarsamax) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_sarsamax) # plot the estimated optimal state-value function plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)]) ###Output Episode 5000/5000 ###Markdown Part 4: TD Control: Expected SarsaImplementation of the Expected Sarsa control algorithm.The algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`. ###Code def expected_sarsa(env, num_episodes, alpha, gamma=1.0): # initialize action-value function (empty dictionary of arrays) Q = defaultdict(lambda: np.zeros(env.nA)) # initialize performance monitor plot_every = 100 tmp_scores = deque(maxlen=plot_every) scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() # initialize score score = 0 # begin an episode state = env.reset() # get epsilon-greedy action probabilities policy_s = epsilon_greedy_probs(env, Q[state], i_episode, 0.005) while True: # pick next action action = np.random.choice(np.arange(env.nA), p=policy_s) # take action A, observe R, S' next_state, reward, done, info = env.step(action) # add reward to score score += reward # get epsilon-greedy action probabilities (for S') policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode, 0.005) # update Q Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \ reward, alpha, gamma) # S <- S' state = next_state # until S is terminal if done: # append score tmp_scores.append(score) break if (i_episode % plot_every == 0): scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores)) return Q ###Output _____no_output_____ ###Markdown Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default. 
###Code # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ###Output Episode 10000/10000
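###Markdown As a final sanity check (not part of the original exercise), here is a small hedged sketch that rolls out a single greedy episode under the learned `Q_expsarsa` estimates — if the estimates are good, the episode should follow the optimal path and return -13: ###Code
# follow the greedy policy for at most 100 steps
state = env.reset()
total_reward = 0
for _ in range(100):
    action = np.argmax(Q_expsarsa[state])        # greedy action from the learned Q
    state, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print('Greedy episode return:', total_reward) ###Output _____no_output_____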
ML0101EN-Reg-Polynomial-Regression-Co2.ipynb
###Markdown Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:* Use scikit-learn to implement Polynomial Regression* Create a model, train it, test it and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages ###Code import matplotlib.pyplot as plt import pandas as pd import pylab as pl import numpy as np %matplotlib inline ###Output _____no_output_____ ###Markdown Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage. ###Code !wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv ###Output --2021-06-16 17:03:57-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.45.118.108 Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.45.118.108|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 72629 (71K) [text/csv] Saving to: ‘FuelConsumption.csv’ FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.1s 2021-06-16 17:03:58 (480 KB/s) - ‘FuelConsumption.csv’ saved [72629/72629] ###Markdown **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)* **MODELYEAR** e.g. 2014* **MAKE** e.g. Acura* **MODEL** e.g. ILX* **VEHICLE CLASS** e.g. SUV* **ENGINE SIZE** e.g. 4.7* **CYLINDERS** e.g 6* **TRANSMISSION** e.g. A6* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in ###Code df = pd.read_csv("FuelConsumption.csv") # take a look at the dataset df.head() ###Output _____no_output_____ ###Markdown Let's select some features that we want to use for regression. 
###Code cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']] cdf.head(9) ###Output _____no_output_____ ###Markdown Let's plot Emission values with respect to Engine size: ###Code plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show() ###Output _____no_output_____ ###Markdown Creating train and test dataset Train/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set. ###Code msk = np.random.rand(len(df)) < 0.8 train = cdf[msk] test = cdf[~msk] ###Output _____no_output_____ ###Markdown Polynomial regression Sometimes, the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, and it can go on and on to infinite degrees. In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Let's say you want to have a polynomial regression (let's make a degree 2 polynomial): $$y = b + \theta_1 x + \theta_2 x^2$$ Now, the question is: how can we fit our data on this equation while we have only x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$. The **PolynomialFeatures()** function in the Scikit-learn library derives a new feature set from the original feature set. That is, a matrix will be generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, let's say the original feature set has only one feature, *ENGINESIZE*. Now, if we select the degree of the polynomial to be 2, then it generates 3 features, degree=0, degree=1 and degree=2: ###Code from sklearn.preprocessing import PolynomialFeatures from sklearn import linear_model train_x = np.asanyarray(train[['ENGINESIZE']]) train_y = np.asanyarray(train[['CO2EMISSIONS']]) test_x = np.asanyarray(test[['ENGINESIZE']]) test_y = np.asanyarray(test[['CO2EMISSIONS']]) poly = PolynomialFeatures(degree=2) train_x_poly = poly.fit_transform(train_x) train_x_poly ###Output _____no_output_____ ###Markdown **fit_transform** takes our x values and outputs a list of our data raised to powers 0 through 2 (since we set the degree of our polynomial to 2). The transformation and a sample example are displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & v_1 & v_1^2\\ 1 & v_2 & v_2^2\\ \vdots & \vdots & \vdots\\ 1 & v_n & v_n^2 \end{bmatrix}$$ $$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 2. & 4.\\ 1 & 2.4 & 5.76\\ 1 & 1.5 & 2.25\\ \vdots & \vdots & \vdots \end{bmatrix}$$ It looks like feature sets for multiple linear regression analysis, right? Yes, it does.
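###Markdown As a quick illustration (not part of the original lab), applying the same transform to the small hand-made vector from the example above reproduces that matrix directly: ###Code
# demo values taken from the markdown example above
demo = PolynomialFeatures(degree=2).fit_transform(np.array([[2.], [2.4], [1.5]]))
print(demo) ###Output _____no_output_____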
###Markdown Indeed, polynomial regression is a special case of linear regression, with the main idea being how you select your features. Just consider replacing $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree 2 equation turns into: $$y = b + \theta_1 x_1 + \theta_2 x_2$$ Now, we can deal with it as a 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression. So, you can use the same mechanism as linear regression to solve such problems, and we can use the **LinearRegression()** function to solve it: ###Code clf = linear_model.LinearRegression() train_y_ = clf.fit(train_x_poly, train_y) # The coefficients print ('Coefficients: ', clf.coef_) print ('Intercept: ',clf.intercept_) ###Output Coefficients: [[ 0. 50.24792065 -1.48002782]] Intercept: [107.13432424] ###Markdown As mentioned before, **Coefficient** and **Intercept** are the parameters of the fitted curve. Given that it is a typical multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and coefficients of the hyperplane, sklearn has estimated them from our new set of feature sets. Let's plot it: ###Code plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue') XX = np.arange(0.0, 10.0, 0.1) yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2) plt.plot(XX, yy, '-r' ) plt.xlabel("Engine size") plt.ylabel("Emission") ###Output _____no_output_____ ###Markdown Evaluation ###Code from sklearn.metrics import r2_score test_x_poly = poly.fit_transform(test_x) test_y_ = clf.predict(test_x_poly) print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y))) print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2)) print("R2-score: %.2f" % r2_score(test_y,test_y_ ) ) ###Output Mean absolute error: 23.65 Residual sum of squares (MSE): 974.78 R2-score: 0.76 ###Markdown Practice Try to use a polynomial regression with the dataset, but this time with degree three (cubic). Does it result in better accuracy?
IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)* **MODELYEAR** e.g. 2014* **MAKE** e.g. Acura* **MODEL** e.g. ILX* **VEHICLE CLASS** e.g. SUV* **ENGINE SIZE** e.g. 4.7* **CYLINDERS** e.g 6* **TRANSMISSION** e.g. A6* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in ###Code df = pd.read_csv("FuelConsumption.csv") # take a look at the dataset df.head() ###Output _____no_output_____ ###Markdown Let's select some features that we want to use for regression. ###Code cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']] cdf.head(9) ###Output _____no_output_____ ###Markdown Let's plot Emission values with respect to Engine size: ###Code plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue') plt.xlabel("Engine size") plt.ylabel("Emission") plt.show() ###Output _____no_output_____ ###Markdown Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set. ###Code msk = np.random.rand(len(df)) < 0.8 train = cdf[msk] test = cdf[~msk] ###Output _____no_output_____ ###Markdown Polynomial regression Sometimes, the trend of data is not really linear, and looks curvy. In this case we can use Polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, and it can go on and on to infinite degrees.In essence, we can call all of these, polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Lets say you want to have a polynomial regression (let's make 2 degree polynomial):$$y = b + \theta\_1 x + \theta\_2 x^2$$Now, the question is: how we can fit our data on this equation while we have only x values, such as **Engine Size**?Well, we can create a few additional features: 1, $x$, and $x^2$.**PolynomialFeatures()** function in Scikit-learn library, drives a new feature sets from the original feature set. That is, a matrix will be generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, lets say the original feature set has only one feature, *ENGINESIZE*. 
###Markdown
Practice
Try to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
# write your code here
poly = PolynomialFeatures(degree=3)
train_x_poly = poly.fit_transform(train_x)
lm = linear_model.LinearRegression().fit(train_x_poly, train_y)

test_x_poly = poly.fit_transform(test_x)
test_y_ = lm.predict(test_x_poly)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
###Output
Mean absolute error: 21.40
Residual sum of squares (MSE): 748.45
R2-score: 0.80
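###Markdown
To answer the practice question more systematically, a quick sweep over several degrees shows whether the extra flexibility actually pays off on the test split. This is a minimal sketch, assuming the `train_x`, `train_y`, `test_x`, and `test_y` arrays defined earlier; `make_pipeline` keeps the feature expansion and the regression fitted together, so there is no risk of transforming with the wrong degree:
###Code
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

for degree in (1, 2, 3, 4):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression())
    model.fit(train_x, train_y)
    print("degree %d: R2-score on test set %.2f" % (degree, r2_score(test_y, model.predict(test_x))))
###Output
_____no_output_____
###Markdown
Note that the train/test mask above is drawn with `np.random.rand`, so the exact scores vary from run to run; fixing a seed (or using `sklearn.model_selection.train_test_split` with `random_state`) makes the comparison reproducible.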
go+p1/5H9w/wC0cJ/z8j96Miitf/hHtQ/uRf8AfwUo8O6if4Iv+/go+p1/5H9wv7Rwn/PyP3ox6K2P+Ec1H+5F/wB/BR/wjmo/3Iv+/go+p1/5H9w/7Rwv/PyP3ox6K2f+Eb1H+5F/38FRz6De26q9wbaGMtt8ySdVUE9Bn3xR9Tr/AMj+4FmGFf8Ay8X3oyqK0P7M6/8AEw0vjj/j7WlOlEAn7fpeB6Xa0fU6/wDI/uH9fw3/AD8X3ozqK0BphOP9P0wZ9bpaQ6ZgA/b9LwTji6Wj6nX/AJH9wfX8N/z8X3ooUVof2Yf+f/S//AtaBphLFRf6XkDP/H0tH1Ov/I/uD6/hv+fi+9GfRWj/AGUf+f8A0v8A8C1pBpZIyL/S/wDwKWj6nX/kf3B9fw3/AD8X3oz6KvvphQZN/pn4XS0HTSP+X/TP/ApaPqdf+R/cH1/Df8/F96KFFX/7N5/4/wDTP/ApaDph5/0/S+P+noUfU6/8j+4f1/Df8/F96KFFXl07dnGoaXwcc3aitY+CtaAz5dvjr/rhR9Tr/wAj+4Pr2G/5+L70c3RXSDwVrRAIit8EZ/1woPgvWQVBjt/mOB++HWn9SxH8j+4Pr2H/AOfi+85uiun/AOEG1v8A55W3/f8AFJ/whGt5I8q3yBn/AF4o+pYj+R/cH17D/wDPxfeczRXTf8IRrf8Azyt/+/4pqeCtZcHbHb8Ej/XDtR9SxH8j+4Pr2H/5+L7zm6K6V/BOtIu5orfH/XcU7/hB9b/55W3/AH/FL6liP5H9wfXsP/z8X3nMUVeurBLW5kt7nVNGinjJV43v4wVI6gjPB9qi+z23/QZ0L/wYx/40/qWI/kf3B9ew/wDz8X3lairIt7Y9NZ0P0/5CMf8AjStbW6qWOsaHgcnGoR/40fUsR/I/uD67h/5195Voq19lt/8AoMaH/wCDCP8Axo+zW/8A0GdD/wDBjH/jR9SxH8j+4Pr2H/nX3lWirLQWqLl9c0BR0y2pxD+tMVbFiwGv+HflGT/xNYeBnHrR9RxD+w/uD69h/wDn4vvIaKm22H/QxeG//BtD/jQq2DAEeIPDn46rD/jT+o4n/n2/uYfXsN/z8X3ohoqVlsR/zH/Dp7capD/jSE6eOviHw5n/ALCsP+NH1HE/8+39zD69hv8An4vvI6Kfu07OB4g8On6apCf60o/s8jI8QeHf/BpD/jR9RxP/AD7f3MPr2G/5+L7yOipAdPOf+Kg8O8HH/IUh/wAaCbAAn+3/AA9x/wBRSH/Gj6jif+fb+5h9ew//AD8X3ojoqbZY/wDQweHP/BrD/jS+XY/9DB4c/wDBrD/jS+o4j/n2/uYfXsP/AM/F95BRUxXTwcHxF4az/wBhaH/GlEdgeniHw1/4Nof8aPqOI/59v7mH13D/AM6+9Hmvg0aP/wAJhJ9rW2OqN4gtvsvmRO5MfmPvxtIAOdnJzWrpcWj3Vpqh0u1il1COPN21tZF2DG/QL8spI+4WBI7E16Tr/g/4b6b4ytrS7ufsGs3sgmggjzu3sflKkKduWBI5rSHwl8GhZVAuAsv+sAwA/Oefl5555719YqUtj4DkZwn2GKPXrPybGNSdZ1KG2MltBDjiEQlDxlVZgQRk9eKz/EqWesWkEtrYQzWsmsR2UFzHb4a3iMm6SUfwgMdqKxBz5bda9LtfhV4Ttyvkm7iETlovLbbtJABIwOCemaePhX4SDOytdqXUKxDfeA6A8cgU/ZyHyM8q1S5ivPBGsNb2qSx2ylFRHjTMXILBhH8pZtshQYDEcelc9BoOjwSWFjcQWc9vKjz3V8sm5oLfywVmLg7VYvu2xkZIABGTXukvwp8I3EUa3JupVT7qO25VPsNuBUTfCHwUFVUhl2l9zLgY6dcbetJ0pMTptnilnGms6Npd9LZNb3V7FLp1xLFbRhJVhTzA0a7cCWThCfUEjJOKqWng6xkjsJ0iv381Wlmh2q3k4t/NVX47sMc4ypGMGvev+FR+CwAAk+0cgdgf++aP+FS+DfNkO24+bBJzyx568c1l9Xn3K5O6PDVkbVNM8Lx6hZRxQ6vPcQXMsVpHHkNMuwxnbwVH3fbIp1h4dtYvDqWl1Y6j5l9NZu+2JBJCxknjJBK524VTg9T1Ir24/CPwWcAxzlR0Bxgf+O02L4TeEigaUXXmEEMd3J/Sh4efcOXuj551fw9ZaRHb7IrnUrh/L2gRssNxvDfcdRwQQAACc85xitTxItjcwQy22ixXlrp9rb6feTJO5Nqy4JKkZGDllDkEZB9q90/4VN4QSMLGLkKM4AOAM9ccd6QfCPwaisqRzgMMMBgAj0Py8in9Xn3FyeR5C+s6aPDzTT2lhHBcF7SFDv8ALKqikhnChyMCMccnGSetdd+ysuBrJAxnGcd/u4rsZvhV4UlS3hl+0yW8Kt5cbNlUyRnAI4z7V0Xg3wnoPhGSdtHMiLMuHUjjqDngdeBVU6MoO7KUXe51cQ+eX/eH8hTyBz9Krx3cIklJfgkEfKfSnfa4P7/6Guk0JbUf6HEP9kU6fhFPq4/mKggu4VtolZ8MFwRtNLPeQMihZM4I/hNO4icnkn3qHGLxR6Rn+YpDeW5J/ecf7pqP7VD9p37/AJdhGdp9RTYy4eDimQD5D/vN/OomvIMqRJ0/2TTYruAIdz4JYnofWpA53xJ9qPiqyGnyQx3h0y68l5wWjD+ZDgsAQSM9ga+ffDmpavd6na6FqPiuxVJ/EM5vrWby4xGFl/h3P5pLnhFAwMnJNfRniHTrXVZILmO/vrS8gRo0ktJNhKsVLKcqQfuj8qxJPBWlyXYupNX1V7oEMJmMRcEdDu8vOa8mvgKlStOpG1mla+6aVuzNY1EkkeG6lr+lDwz4vuL7xnrNt4ogvb1LO0TVp0ACyERgIDjGKx/GE2py+KvFk8N4be3/ALNgE0s8vkGbdCAY1dn53tyePnCHoev0b/wiVmFaJdc1nyZNzSLvjwxPcjy+c96hn8C6Pcz+fc6pqk03yHfIY2OUzs6x/wAOTj0yaKWBq05X0+/0/ursDqJnzV4vlv73RfDsVzfXDXr6c628N6zJIqSSu0OXOVztjYDJU4Iq0TezfDa9t9InbUUOr5kisZN4jaWFBGo+XLY+f7gAyD7V9GX3g2x1JGi1PWdXu4g2VW4aOQexwYz6mp7XwrbWduIbPXtbt4lGFjilRFHtgR4q/qtblSSjo77u3/pIuaNzx79nC3nt7/XUm1CeSSCwjjmt2LbUznaVDKMcKehOTXimjav4bs/CWrR3On6gdeNzA9tcw3wQLjfkgbOMfjnI5GOfsLSvAulaXFKmnarqtkJuZhbmOMOcdTtj56muK1X4GfDbTtPuL68fUYre3XzJZGuWwqD7xPy+lb4XDTpVqlSbXvW28l6ClNNJI574OaguveB7cIJhqTXd0097IiyyGYKrl/m+8drrycH5BWjd+HvsXjo+Ip9Ylt9LsNPhR4Le2iZpELyPsCsAQHKgfKpYs5Axiui8I/CfwFqOki58K6xrbaezuM2upSRruwA2Rgc4AB46Yq9cfArwrcalFcXF1
rstzAq+VM+ouzx4JI2seRg8jHSuSeVylUlOM7KV+l93/XoUqqslY8l1fQ/DEHxG8PyXq21xqGqGWbUNNvrcrOZZXDwxmNW2RNghAScDBJFb8Vnoun/FnTb62toraT7Nc3Mksem/Z7eMeSS+ZSwy6HHIVcBvU12sv7PPgme6a4m/teSd23tK96xYt6k4zn3p83wS8NXyo15feIJygZV8zU5G2g8EDPTIHNXLLZySTqdGn8wVRdjO8fJcHwnq/wDZZSZXkhJsYhtMhkkRiAd+VZtoxhTjJIBJOKU1lZv4f1231yO1up5rm6uRajTElnmCIyGSM/KZijOp80KDhWAB61tXXwQ8MSfaJZLzXnkmdJpWfUXJkdOEZj3IycE9KRvgR4S+0RzGfW/OiUpG/wDaD7kBzkA9QDk/maxhk0oxUfadb7en+W43W1vY830nVYbf4H6pr66doUdxLbxq/maWkUckonAWLyymJflG7duPJPAxVn4epOngazvbS1u9T/tWRh/oGl2uzTTJIVYrGw3OFbIyTtA7Ac13c3wF8HzW9tZSyazJaQhjHC1+xSMnrtU8DOe1T2XwL8K2ltJbWd1rsFtJy8UWouiNnjkDg1vLK7xklJayvt07fL/gE+08jw7wV4P0HRr7Wrrxtbo/h7TdQxb66szJ9peOTHlRxDIlVsHdgcYPNdR4HvdPm8U+LNNluodV1OVbvUY10/S7W6hbIVkMDN87OFIHlEYyDnvXoKfs+eCpoPKkGrNFCx8tDettTIBOBjjNPtv2efBFtMstuNWilXlXS9ZSPoQKurgHU5nOer+7+u9rBGfZHHeB7S3l13XW1G3voZWjsJTBqmlRWDMokfa22IgFdyj5eNzAE5wBXOftFh7vSrIiS4/f6vMqZkJhAWJCSqgE5y5B5PKn1wPYT8C/CxZ2N3r5Z9m8/wBpSZbYdy59cHkehqzZ/Bbw5aXdvdQX2vCe3Z3iZ9QZ9jPneQCCMsSSfU1lSy5wrqs57dLeVinO8bHjPwVtNL8G3ttrFvrFnqz3ukk3Fvabw8RNztz86gYwAOcHIPGMGvb1ureW3W7ZPLS7/es8v3CHYiNJB16Lww5U1zGvfCnw/wCDdDur/Qxem7naK2/f3BkG1pM8DtzzXR6nbSGCK2to96pJtPIAVIowgJJOANzNXoTk+ax5tdtTa8jbsb8w5iui4jUhS8hy8RPQOe4PZ+h781rkgctwB1zXHaZDOkSbpVlji4URLvCL/EhlbCbT6c47VsaRJLMt3DGypahQIWR/MMZOcqGxg44PfHTNVFhCT2Zk20UtjZWYtgZdOu5IiVXnyJd4JI/2Gxz6H61JrEztdy6tbqrx6dIoBEnVRkSjb3yDj/gNaGmrNJq2oRNd3DJbSRbFLAggpkg8c5NQ2urbvEz2/mIbadDHEAORInXP1Gcf7tT0ElZWKeqTM2vXE9qEubVbOJ54lOGlj3OSFPtwSO44rUu7q21azgtbNo54bpSXUPs/dDqOnGSQMY9aisdThhuNUF7dZ8q6aONDzhdqnAAHqTWAfEsr3V1awyzBjcs0crEA+WMfIvHXrn2pXS+ZLnGO73NvTLjzdHe3u2RruxSa3kJOei4DZ91xz9abDqNvZeGLAJLEt1LbRRIAfmztHXHpya5jVtav5bv/AEi5ligOBG0Qwi+obvz6mul8D6Rp+tz3rXpMwtyqrErkDJBJJxyT2xSUuZ2RMajm+WJlWGuLZ2t7pcMGEQs0O587Y37e+Dn9Kw66jxlplppOqrBYO2x4TI8TMWMRzgcnnBGeD6Vy9NX2ZnLmT5ZdAHWpV6VGtSjgVRIU6m05aYIXBzUiDvTKlHSpGLS0gpaAFAzUqrTVWsnxV4m0zwvphvNUmKKThI0GXkPoB/WgaTk7I2xTh1r5/wBV+O941yw0rTLaKAH5WnJdj9cECjTfjfq7TD7RY2Eqdwqsh/PJpvQ3WGmz6FXpUg6VyXg3xtpniaHbATBeKMvbyH5seo9RXWKcikmnqjKUXF2YtAooqiRw61IOlRjrUg6UALT8UynA+tA0KKctNFKOtAxVpaRaWgZIvQU4dKjU84qQdKAFpy02lWgY+gUUUAOXrXG/GMkfDzUiCQQV5BwfutXZCm32gWPijSb3SdTRmtp0A3KcMh7MD60NXVi4K7sfOOi2ekL4V1CZJrea5fR4ZrgzX7gwym7VSMBTs+Xb055x0Na8+j6Elndtp6wT20drqk0E7pJcFxFIojfepG4AE8d67TSPhX4RlbU9O03xIkkhURXkMMwZtoYEBwH4+dR+IrUHwh0uKwitYfEGoR28AcxRRzOiruOW4D9zWapy6o3UH2OKh03SLT/hI5JtPt0tIYon8x4WEar9mibjceGMrA4HPzYPFY2q2mmp4ruJp9OCiCwj1G7tgxZZWmmUJGmSojxG6cjPO4HNeqf8KosJDNJLr940tyIzO7tuZzGBsyS38OBj6D0FMk+E2mvFDHLr95LHEdqLK5kCjJb+Jj/ExP1NDhLsPkZ5R4x0oaldaJFaQSie73yPJZwInnyrGWkjGDnzSVXauAo3cD1xp4NN0hrrUGs/Pi0u8t7aaCa5aWG8LoTNGp4+dCp+ZeMc46Z9rvPgzot9Ij3etXkzRjCbpGwg9FG/C/hioG+CWgSNGkmrXJjgA8oF2wnrgb8DoKl0pE8jPH9T0C2+1/2ba3csca3sdvHc+QfNkedTIgcbuERdo45JJI4FZtzpy6FbXdy0xv2RoLdEnjYIpmiEhcfNyVwVA75z7V783wi0t5pJX8R6i0sihHc3DksozgE7+2T+dRj4PaU9rBC+vX5ii2mOMzPtQr93A38Y7elR7CoU4LseL6/p1nqHiLWLKzX+z4rDUUgj8iJyXillWMBhu6pyR3OcU+48NQ6ncWFnp17JbXUVlG0+YWCSfv3jaTIbJcgA4xzjGc17G/wj02O5kvIvEGo/a5dqySid97DPc78nGAac/wAINKdVV/EF+Qrhxmd+GByD9/qCSfqc0vYVB8i6o8L8KJpdt4s05rq6ub6yiK3k0gceXHEoJdZEIO7BGODg5A60WraXp2rj7bY6gWijaUfaLtDFchyCrYOwbNhPCnJOOmDXuF18G9Hu3k+1a3dy+bgyM8jHfjpk7+cdh0of4NaK8dpDJrN08VrkQI7syxAnJCgvgDNP2Excj7HjHxBksI9Kjt7WKOO8iZXkMU6rjegJBQEl8gKd2QoOcA5Jr6Z+FrvJ8PNJMrvIRCVBdiTgcAZPoK4q9+C+h6le3F1f6pPLczOS8jZy3AHZsdOPwr0rw3p9roWgw6bDdLKkCsFY4BIJJ6ZNa0acofEVGLUrmtF/qk/3R/Kmyj5ov+ugoinhEMYMsYO0Z+b2pkk8O6HEicSAnntzW5oW2H8qYxxcOP8AZ/qaQ3MG7/XR4/3hTDcQfaWPmx4K9dwqrgStzkVFaD92x/22/maX7RDjPmpkj+9UdrPCsZBlQfM3f3NJgSz8wt7Y/mKlbqaqTTxbJAJUOSOh9xUzXEH/AD2j/wC+hSA8Q+KGpa1pPw81u50LyIB/a9wJ
7wzGOSAG7AXYAPmySQckYGawvEHiDxdBo/jLWLi+trRtO+z2dtNZo0cbs0yCR1jkLZUKcByBuzx3r1bVPCrXf260j11Rpd9JLPLaz2UE6Zd9zLlhyMnPNYdr8K9LtLO7tba90+K2uwq3Ea6Tb7ZQrblDDuAQDXhPLZ80m4xd5N+qbT/JW7G/tFbQ43xHrF1o+tw2ll45n1uyuNK1Gafc8J8l44gYyDAoYHJ4xzxxXBeG/EXi+NvBM8t9fKl1OY7Xzd0ouGZ1Egk3Asd2T2IUDK45r3ey+HltaXYurXUdNtblQ8ayw6LaxttYYIyAOCOKj0z4b2ui3K3ul6taxX0MC28U39mwM6oowFBOcccccnvVU8vnCDi4xbt9+/ZabidRN7nz1q+reIbnUdTfT9bvHtQ10wuY9RcQx/MXjy24BfkBwMc4wAa1fiX4j8QHxykNlql/awxok6NHI0cQiaMKsigSYbPLZwueuK9nuPhPpFzeJdz3emvOu3BOlQbRtGB8v3ePpWhq/gP+1bBLC912KS0+QbEsIYgAgIUZTBwASAM45rf6pNTjJQVkmvvt5a/gTzrueS/FVpZv2ePC82sXb3kkl7Gz3SgM8ikS4JJYgnGO+K801TUvC2k3+l22gXmotpd9Z2iaulxbx4ZcqzgFSWzxzj8DX0v4m+FsHiTw1a6BqHiR49KtGRoYba0hi2bQQoBA6YJrgrv9nnwVZXUUF/4wurZ5xiETSQqZDnBC5HPUfnXRgMNKhScJvVtvTz+QpyUndHSSeF9FutQkmtNL0OFfOZds9krM8qBCpB3AEDJYpjkZFcP4W+H1lH4SsNE8Tf2NHqesOn2Zl0w/aI4gRNM/ngHP7tgobGxSSMnFetRfCjUY3Bi8f+I0IcyYVYQCxABJG3B4A61i2PwISPT7m2i8beJEgu4Vt5k3od0Sk4TJGQvJ4GBXDTy3Exi4+0XTvfS/l/w+2hbqRfQ8y8CeBvCM1lr9te2+lXdzDdTW0FzJfs0aRglkZ2QgKzBQFKZJw5xgV0Xwo8N+GLzwdqUdvaabfwyahM0Fy0BDHakbAL5ql8JkrnHIJJHNdLovwCtvDXmy6J4v16zacqkgj8sBxnjIxg47VY/4UmyNcsvjXxDuuRMJSfLJbztvm9uN21ckY6VrWy6vU5kqmjae7/L/AIIo1Irocl430S1XxBPDPoOjiM6WkhkFjst1DXS7t4CLICNyovlZfrkZ6VPinofhTSfEHh83OleH0uL+5W0m3m4WFIgiJ5jFSADG6uCMg4I3c5NdzF8HJ4rtJYvHPiRJEihQMrRg7YuIx0521VtfgPDDHbRx+MvEIS3ujfRAshCT95BkfePc96mnllWLi3U2XS6vdf5g6qfQ86+JGlaPD4l8H6Bpvh3QbCa/toTFcXAkdoyJmwrLG7LIHxjncTuxu9KnxV8Pabc+Gb2PQ9N0+yvNFEVxeyDSjavcrI2xTE4ZlKZP3epx14Ir0fUP2frPVtWbVdQ8W6/PqJfd9ocoXBB4IOOMdsdK1dY+DtzqMNsb/wAeeJpxaOs0Ks6AI68q2AOoPfrWsMDWg6bU/h333/Hpp+InNO+h4/L4F0PT/AF7d2toLfXb6zhuH0/V3je4sbZHzLcRgYJ3ADCkBsE4zWzq9n4PbwhpN9ZweGw2opcoLmLw/duJCjbRsUPmMjP8WcnnpXZap+zZpOp3817qPijXLq7mbfJNNsZnPuSK1NE+B7aHYNY6P468T2VoxLGGCVVTJ6nGO9E8DVaT9pre/wCFrbPbT/h9RqS7HP6D4b0G18CxXq6bp00kuk2p82O3R1IALkqrg7myxJLck8cAAV4rrHhFdd+NV/oNnc2NrAdR+zF5jFarwQG2ouB2OFUc8etfR9t8EjbRQRweNfEEaQpBHGqrFhVhYtGB8v8ACST7981Dc/ASxu4pBc+I9RlnkvTfvdPbwGdpT3L7c474qsHg6lCpKc53v6+QTkpJJI9Lm8LaNeeOrbXbqxSbU44PJSWQlgqjkYU8AjJ561v6lcaPpcayanNYWcbnCtcOkYJ9AWxUUP8AyFov90/yrgfjhDbT6x8P472CG4hl1xYWimQMrhkIIIPtXpwjzOxgehaZc6NqsbyaXPp97GhwzW7pIAfQkZq79jtv+feL/vgVQt7HRfDltNNbW1hpkGP3joiQqQOmTwPWuC0v48+AbrQrXULzWo7GWYHNnIrSTRkMRhlQH0yPUEUnboM9JNvZrIsZigDsCVUqMnHXA/Gn/Y7b/n3i/wC+BXiXhG4s/FXiBfEVv4qin0rS9bkm028nkKMFuEAlsZEk2sOcFDyMY9MV69beJNDurprW21nTZrlELtFHdRs4UdSQDkAetSMv/Y7b/n3i/wC+BR9jtv8An3i/74FeKp8WdNuPjDeR3fiS003wvpdhtQPMhj1CaRhlgc5O3HGBngnoa2fAPxb0q7tby28Xavp+n6lBqN1axtNmCOaNJCFIZvlzggEZzxnvTEeo/Y7b/n3i/wC+BR9jtv8An3i/74FfN3xE+Nevadr2uW2l6v4esbSxv4bOJJYGmnlidQTOvzYYLnkAemK6j4ZfEnWdT8P+Nrm81HTNXj0OWM22o3EbadFOjLk7xtJQDHBxk5FAHtP2O2/594v++BR9jtv+feL/AL4FfMeg/tF+JtS1SPTLXw5pWo6hcXr20EVtdSZILfI3CkFADgvnnGcCvqGMsY1LqFcgZAOcH0oAi+x23/PvF/3wKPslt/zwi/74FTMwVSxIAHJJPSvNZfiJqRPiE2GiLqA0nV4bIrBPgtbyRIwl54J3OOOmDntQB6J9ktv+eEX/AHwKPslt/wA8Iv8AvgV594P+J8ep6VrupeJNOfw/Z6VKkEhuWYt5gRfNBXbnCOwXIBB4NdNovjTw9rmmQ3+kapBdW88kkMGzIaaRASyIpAZiACcAUAbf2S2/54Rf98Cj7Jb/APPCL/vgVweg/F/wpri6l9hl1Hdp8Us1wsmnzLsWMZbnbjPbb1yDxXlHhP8AaF1CS/vp9aXTbvTRZSX0NtZIy3CMZlWO3LsdjOFYE4HfueKAPpP7Jb/88Iv++RR9jtv+feL/AL4FeaaB8YLB01RvGNifDAsrsWW6af7Qjy7A7IXRdqsoK5BPU4q1ovxk8Lax4pi0G0/tRbyaf7NG01hJEjvs3kZYAjA7EA9+nNAHoP2S3/54Rf8AfAo+x23/AD7xf98CvKNU+Jni238cWXh218CFp7iOeRRNqcS+YqFcSKwztXBydwycjHQ1u6N491G/+KOp+EJtBWJLGzF017FeCVctjYrAKNhbJ4JzxnpQB3X2O2/594v++BR9jtv+feL/AL4Feb+Gvi1b+INe0zQrHRbxtakWY6nbB1/4lnlMUIkY4zlgAMdQQfak8P8AxP1PWfEd3pCeCNVjlsblLa9kN3blLdnGQSd3zcc4XJ/GgD0n7Hbf8+8X/fAo+x23/PvF/wB8CvILv4227ePrvSNHsJdV0+1gdR9jTfNdTqw3GMkhFiXBBZj8zcLmuctvjBrJ1J9bMdhpmj6rZQS2dp4g1Dy
F3KWDtAY1bKtleW28jpQB9BfY7b/n3i/74FR3EVjbW8k9yltFBEpd5JAqqigZJJPAA9aoeEtbGu+HdP1GRrIPdITi0ufPiJBIOx8DcOPSteQRyq0UgV1ZSCjYII6Hj0oAxdH1jw1rUzxaPqOkX8qLvZLWeOVlXpkhSeK0bvTrKa1mimtLeSJ0KsjxhlYY6EHqK8x0XQNJ0T9oWRNH02z0+JvDPmGO1hWJWY3WCxCgAngc+1erzECFySAAp5/CgDzz4Z6LY+H7C8sNKhMNol1I6oWLYLYJ5Nde3/H03+4P5mvINc+JuleD7XUNs0M968zFVJO1Bgcn1+leYzftB6lPdSGJiM4AdBtwPoBilsNRPrLoy5qlB/qR9T/OvAvDHx+kW6jXWrf7VZEgNLGAJY/fA4Yfka9y0DVLLWNKgvtLnjuLWUEq6HPfofQ+xpphaxZn/wBQ4+n9Kk/j59abcj9w/wBB/SpGXc5zmqAix++i9Sp/pUiDH+femEZuE9cEfyqX37UAJbfdm/3/AOgqQUy36zD0f+gp461nIpDqUUlKKQzA8dGFdAElyxEcVzDKMJuyytkAj0z1riINfspY4jdoEkTcdwt/NbLMWOCxAHJ9DXX/ABI/5FSX/rtH/OvJah7nn4mVp/I7pPEWibg8q3dw46NOm/H0BOB+Aq3/AMJlpfpc/wDfv/69eeJTR1ouYqtJbHZy+J7OGa7lso7nzLnG95G6YGAQB0wKwJtdS9lFk8tx/ogjdQPlxwdpz69ay5HWONnkYKiglmJwAPU1xei+NtEvfFF7FFc7VcRxxSsMJIQDnB+ppNXQLmmmejB7YNIQ848xtzDPBP8AkUMtrIjqfN+d/MJzghvUHtVMnmpo+lTyIzsh1xcWouobeQzk3AY47HaBnP6VdEsAbeFIfruUbT+Y5rBvv+Q3pf8Auzf+grWmOlHIhuKLgmj2ttBy3UnqT7mq9IvSlqkrCFWpaiWpQciqQBT16Uynr0psELUo6VFU1SMBTZpY4InlmdY40G5nY4Cj1JpJpY4IZJZ3WOKNSzuxwFA6kmvmn4p/ESfxLctY6dI8WkxseAcecc8E+3oKaVzSnTc3oem678ZNHtLprPR7ebUJg20y52Rg+x6n8q8f+KPiK/8AEOsxSXimOKOMLHGCSF5yee9ZXg/Qr7W9Vji0yJ5HUgnjAH419DWPwe/tPSkTWp1WcL8qhc7aznNQkmepRwyt7q1PlqrlhNEgZJVyG9a9+h/Z+EV85uLxWtv4do5rlvGHwZudNjkm0u4aTbzskHB9gar28J6GvsJrWx5xZa3dabdxz2k7/IflIOGX8a97+HnxestQWCx8QyC3uWISO7IxG57B/wC6ffofavm+e3mgkdJkZHQ7WBHINRoxU8HH9av2cd0c9SCmrSPvMEEZBBHtS15B+zx4tfV9FuNFvpt93YYaHd1MJ4xnvtPH0Ir1+jY86ceR2Y4dqkHSohUitmggdTuPxptKD06UDQ6lHWkozQMctLSZpaBjl61ItRA4p6tQA+lXrSA5ooGSUUgOaSR1jjeRzhFBYn0AobsA+tjw7xM7cHjj8DXi+teMNQu3KWbrbQlsDZ94j3Ndn8HNRmuH1G0mO7y1WYHOeScH+lcVPHRqVVCKOinTs7s6L4f+AdC8O3Gr3NnBJLd3tw0ks8zZfB+bYCMfKDkge9dfJZWEf30jX6vj+tR6L0uf98fyrykeG9F8SfHnxRb67p8N8sWn2k0YkLfIcBeAD3/pXpwine5vc9cXTbJgGWFSDyCCef1pf7Ms/wDniPzP+NZGtatongDw1DNd5tNJhkitl2AsI97hR1OdoJ59BWF438f+Go9E1OxsfG2iabrEkDpbSm7jYxS4+Ukc8ZxnipGdn/Zln/zwH5n/ABpf7Msx/wAsR+Z/xrivhjaawLrUdb1acImswW1zJYGQN9luETy5GjIJVon2qykGu1v9StbHTri+uJlFvApZ2U7unYY6nsB1JIFIYf2baf8APFfzP+NH9mWf/PBfzP8AjXi+i/FrUT8Jtd8X3U+mz37XM1xp+lysEkjtVlVCjAHLEAMd2O/NemT+NtIk8KanrWkXlnqQsrSW6aGC5RmyiFihxnaeMHjimI3P7MtP+eI/M/40f2Zaf88R+Z/xr5xvPjx4oZowg8F6asulDVo2uruWXep6QDGMTf7J+tdz4n+L17ovwv0TxVHoCTvqNmtwyNeRxrE528bWO9wck/KCRxmgD1X+zbT/AJ4j8z/jR/Ztp/zxH5n/ABry74YfGhPH3ihtGtfDtzbGOEzzXP2uOaOJMDbnb3JOMdR+ePXaAKf9mWn/ADxH5n/Gj+zbT/niPzP+NQeJtU/sPw3quqiBrj7DaS3XkqcGTYhbbntnGM1wmufF2y0S1tLy+0TVP7PubCDUFuok3xiORHYgnoCrBFPvIp6UAehf2baf88R+Z/xo/s20/wCeI/M/41hweONDi0TSb/XNQs9Fl1G2juY7W/uUjkAYA4wTzgnHFXPFPivRPCunG+1/UIrO1DiMuwLYYgkDCgnJAOPWgDQ/s20/54j8z/jR/Ztp/wA8R+Z/xrz/AMb/ABZ0jTfCMGp+GL7TdUu728Wxsg022FpcqXLtxhVU5J7ZHrXPaL8erObwxDqF/o13cXspu3EGkkTosNuRvlZ3K7VwQeR0I9RQB7D/AGbaf88R+Z/xpf7NtP8AniPzP+NcmvxT8Hx6FDqeoa3ZWG+GOZ7S4mX7RFvQOFaMEsGwRxil074l6BrXh++1bwx9s19LLy/OttOty067+R8jbc8Z4GehoA6eeysIImlmSOONeWZnIA/HNZl1qPhm02fatQ0+DzAShkuAu4DqRk814B4++NjagA40W9sRZagYo7bUEG2Qp94yx5B3AMPkPAI5JzXgHxL8a6t4v8RS3eqTsUQCOGNV8tVQdMJk7c+maB2PsjXvi58O9GuZLeS+N1MhwVtY2kH/AH1nH61yN5+0B4eWQraeHLuSPs8k4TP4DNfJlrK8kG77UhIONrgE1p2TvyxWCMeqMwz+FQ2y1FH1NZfH3wpj/iaaHf2wzw0LCYY/MGvVfBmueGfGWk/2j4fnjuoA2x15V42/usp5Br4Uju9nSQy56AY/wra+HfxC1DwF4uh1CC2cW05EVzH/AAzJnofcdQe30zSjK7G4I+8P7NtP+eC/maZLZ6fFjzUiTPTc+M/rS6HqlprWkWmpafKJbS6jEsbjuD/UdDWf4u8IaB4wtIbbxLpcGoQwvvjEuQUPfBBBGf1rQyNBNPsnUMkSMp6EMSP51wfxg8E6Lr2hW8t5A6T2colimibDDuV5z8pIGR7Co/2boVg+EOkImdonuwATkKBcyAAe3FdT8QP+Rdm/z2NAF6xYtboT1IyaLUfuEx6Umnf8esX+7TrT/Ux/SgBl2f3a/wC+v86ZJ3HpT7sfIuf+ei0h5J+tNDRCf9f/ANs/604e3U8fpSf8vP8A2zP86cBj5s0wGRD5WPbef50swzbvnstLAuFYf7ZH60s/+ok/3TQBO5yR9KRaVxzSLWTLHCnCm07oKQEcf/
IXi/3T/KvPvj6pST4f3I6xeKLNfwbcK9Ah/wCQvH9G/lWB8VPCsnjHStMtbPV4tKu7DUodRjuGhE21o92BsJAPJHX0qnUjT1k7GaVy7478E+HPF9vC/iXS4r/7Fvkh3uy7TjkfKRkHA4NeP/BrT7Q/s/2p0zWNH8P6zeNOf7SmjiZ4gZmHcg52jAOeODXtvheK/sNKEHiDXYdYvd7E3K2y24KnouxSRx61jzfDvwDNeNdS+GNAedjuZjaR4J9cYx+lR9YpfzL70VyvseO/EHw14S0D4N6Fa6Fd2mpaND4htX1S/WUSic5YStIyk9jjA6Crfhfwt4P8T/FDQrnwPoiDwlptldfbLpLOSOG6lkGwRM8gBlG0k45GM175a2+l2litlaQ2cFmowsESKsYHoFAxVlZ7dVCrLGABgAMOKPrFL+Zfeg5X2PkbTfhhrOvfEPxto8en6NoVuywSpZPArD7L5x8vZImTGxEfPXdkg12/w48Nzav8Ttesftcd34Q0XULq5eKKP/R7i8uDzCwbIcRpkH0P1r2O28N6TBrOv6oLq4e61qOOG4LXJGxEUqqx4wUHzMeD1OasaFoei+HvD6aNoscdjYKjKqxOQ3zdW3Hksc53HJzR9YpfzL70HK+x8q+INEl8SeJ9X1fTZV0jwXd+JbbSCLaNMq8SeWk6krhED7RgYHzY7V0vw71XxP8ADbSdck8SaZLea/4gu7eKygvbtPNuLk7omLhm3GMEKcgH5W645HuWn+CPDFl4E/4RAQrNohVlaOWXLuWbeWLDB3bucjkECoPCXw/8KeF9Ql1KyhNzqspO6/vrhrm4x6B3JIH0x70fWKX8y+9ByvsfOPi74ffEDwjrtx4r1jWoZBf3NvbS/wBm3cltLcGRlXyU2qNir064wB1r6k8SeGn1nw9BpFvrWraXGhQPcWc+J5EUYKGRgTz3Yc+/WqOu+CtB8Qa/Y6pfy3ks9rPHcxwLfSeQZY/usYs7cj1AFdgKuM4zV4u4mrHnFj4W8ZwxXGi3+v2GpeGzb3EKy3MLm/lV0ZUSR87Ttzy4GWx0r518VeDJtI8PazFqF/LbXWh6lpdtI9oxEcoa2hVmycAbdgYMehHNfaNNKKQwKg7uvHWqEfKvwt1TVz4xi1WW/XxXrdrpd/qF1DZXYnY+YYVjt8gbFbKZwuR+NWPhPZeI7zx3p93r9rHpNlo32/XprZ7VjLAbpyAjlv4iqsVwMhR7ivp2K0t4ZDJFBEkhG0sqAEjOcZHvWe2uWwZ3WGdrZG2vchR5Y5wT1yQDwSARTSb2C586/B3Vrq80nxjDY6xey2kFneXWlaSvlyyTrMHfzJjGMmTdgBDgjd0ritN+Cev/APCQ6LpMl5ptxcRWY1qeynnnjjiTdGgR2XlWbB5AB+Q8nAx9fXN3YaLdWVtHbLG19OU/cxhQGI+82PU4GfUihNT0hriSdWiEzxS75fLwxSFsPk46Ak/mcU+SVr2FzI+bvh9pZ8O/DvTtYu08RJJqLPftNolxJDA0QYndeTSuUHHIIUHHqTVz4W2uo6T4o1LWIPAF7eatrkv23R7yeRvItLWQkESyuTsIAU4ALkMB7V77pfiHRbtFtbIlIFVlAa3aKNQqhiPmUAAKwP40ieMNEa7t7VbpzcXC74YxC5Mq5YBlwOQdpI9QM9Kr2VTblYueL6niWi+DtWh+PPiXUPDF7BcywWlxI97dxSFYrucALA7/APLQJjcAo4XAPNcx8NLHxNpXxMlv5tI1K91i1N5c6vMrzLLeYRlSNlcLDh3Ksm0kgDJr6zRw8auucMMjPFYdtqt7crcXdtZi4svOENuiMA7gNteUljjbnOB1wM96zuaxg5ao8y+HngTxHY+J9T1bxTHFcR+LLOR9WjhcRtYyhv3cSspDEeWxUkH7y9ehrn/DXwvutF8U6pE/gWDUvtGoyy2+sX2rkxwWrYVQFyZGkC7ucA5P3u9d9qmua+/i3ULWxuZxY21wkREMCvtyFbHK5Jxv4J5JHTvvr44s3tJLmGzu5YYtnnMAq+WWJG3kjLAjGBnk49ahVEzpngasUmtb226X2ueQ+JvgrBH8S9MbwtDJawJZT3ivcB3thOjIIIWIxtQZ3Acn5T161PrXw01Pwt4TtvDugW+qazb3k+nzSD92be2uIpUMsnzMHCuFJIAIHtXqM3i67k0rULgWcGnvBI8cX26fb5uATlVAzkfKduO+M1ka5r3iD/hE49WLpYrG8TKYE3mcMvzZzkKoz971FDqJK44YGrKSi7K7tv1+VzU8X/DPw54q1A6lfw3MWrLEsdvfW9y6SW205VoxnarA98c96p2Wl6b8N7bUPEfiLWNZ1i9udkVxf3ETTukYJ2oscS4RASScDr17U/RvEl7ZeE9MvbuVr6SbzpZ2cfMqIrHaMAc5Crnnqetcl8XfjNP4R8LJNZ2Ai1O7Z4rfzWLKChAY4wOme+KammRPCVIXb2V1f0PN/iJ8bYrP4mXWs+ElMg/sdNOiuLq2kXLGUyMwRgDxwOeK8v1Tx14q8VXIfW9ZvpbUklYxKVRfooxXE6xreq6/qlxqWp3c11eTNuklkOSf8PpW34S8P6rrt4kMDHyz1Y0pOyMYK70MzxXfy3F5sG8RKMZJzu981n6ZerbyYlXKHrivf4PgrJc6aY7m43yEZXttNchcfBDWI7p1aVFiHRgM5FZqtB6M1dGe6RzemeWGZ7cuQejI+R+IP9K6vwN481TwVqctzZNBLFJxLbsxCSfUZ4PvXNa54I1XwzG1xbyG4RfvptIP1FYMd/Dd4a5jYFeMg9DTTT1ixNNaSR9sfDv4k6Z47sp44YnstSiUM9tIc7hkDcjfxD17iu/A+9mvgfwLr1z4U8XaZq0LNNBBKC4j4MkROHXHrgmvva0nhvLSG5tnDwToJI3H8SsMg/ka2i7oykrDCP36Y7K38xUoHAprj9+mP7r/ANKmxj3qiSGD78/+/wD0FO70kQG64/3/AOgpazkUh9AOKKB1pDOZ+JJx4Um/67R/zryAvg1678TD/wAUnN/12j/nXjMj881EjzsX8ZZ8zHemiUE1SeXHSiNy3Wkcxn/ESVk8DayUOD5GM/UgV5x8ErOC7g8XtPGrtDpbPGWUHad3UZ6V6X4qtBqXhu8sTJ5ZuV8sNjOOQf6VyPgjwzqPhy51OCyvLV01C18mVpkIGw88HPBpqSSsdmGaUWel6c5fT7VmOSYlJP4Cr8fSs+wUR2NvGG3bEC59cDFX1bAoORopX3/Ib0v/AHZv/QRWmOlZN63/ABPNK91m/wDQVrVFA30Hr0paaDTqCRVqRT2qIdaevUU0A+nr0popwNMELU1QipFORUjPNvj5rj6X4NWygbbLqMvlMR1EajLfnwPzr588NaTNrmu2mnW4y87hc+g7mvVv2lZmN9oUH8Ihlf8AEsB/Ssz9nq1WTxRPcsoJijwD6Z61UpckHI9PCQTSXc+hfh94O0/wxpscVrCnm9WkI5Jruok4rEtbkbeeFHc1p29wjqMMMexry7tu7PdSSVkW3Vcc1mX9pFNG6suQR3rQZMnrxWTq2
s2Gmp/pEjFj0VF3MfwFO19gTtueD/GH4dmRZNT0uIfaEBLqB98e/vXz6+3zGBXb7ehr7Zutcsr6Js2t4IzwfMgIGK+a/jF4PGj6j/amnLnT7k84/gaurD1GnySOTE001zxMr4Q62/h/4g6XMSBDcSfZZs/3HOP0OD+FfYPSvhPTrjyL+1mJ/wBVKj5+jA191RuHUOp4Ybh+PNdcjxcStUx1KDSUVNzlHhvWnjpUNOBoAlDYpwOaYpyKWgaZIDRmo6UGgZJmlpgNKDQBKrU8GoaKBk9Q3qvJZXCR8O0bAcZ5xTg34VIOSKUldWGmeIXP2cSZZWt5M4I/hz/n6V6V8E1H9r6mwOQbZP8A0OuO1vT7S8u7lLe5ihmV2BinbaCc/wALdPwNdd8ENOudP1XVftETKpt1CtwVb5+xHBrxMJB+2i3/AFodEHqeu6L0uf8AfH8q870VTD+0p4iX/nv4ftpfyl216JovS4/3x/KuD8TeEfF4+Js3irwlf6HAJtMj0949RilkJ2yFyQEIx25z68V9CnY6Div2mvAOkyaDceIWk1KXUp7y1gSOS8d4I98gVisZOBkZGBx6Ctzx/wDB/TtTTT9M0a38P+HvDO7zNTmitVS8mCnIRJCMBTjkk5+vQ+pa/oGneJdEbTPEVnBe2sm0yRNkKWHORzkYPQ5zXJW/wZ8BQ3CzSaAl3IpyPttxNcgfhI5H6Uhnl3jjQPB3/C5otK8WXiaZ4atvC8C2qtftbodk5VUDZ+bjdx+NUPD3grTr/wCHPxAv9O0G61LTDqjyeHtOieYK+xQiTKAQzKdwJJzkJX0ff+HtG1C5t7i/0nT7qe3G2GSa2R2jHopI4H0q/cRNJayRRStA7IVWRACUOOCAeOOvpQI+N/APw01DUvhydbntdFsrL+yNQiF8Z2ik8wyEbrgbTnbtYDbxjHGa7vQNN068+GviP4keNtKtjJPFJLpyJAsUsECoYo8OMEmQnPzZByD3r1dPhnpL+BNL8J3lzfTaVZuJJUWUR/bDuLkS7RypZskDHQVp+OPBlj4v8P2+iXcsttpsdxDM8NuFCypGciIgjhOB0x0FAHxxbeD/ABLZ27nyLS01GHSLNTb2thG8k1ncsytKxwf3inarOBuweoxXrd/LquseBPDPw/0vwpbTeL7bSFeaXVIEKabGo8sspcH94xAAwMDIPNeyeNPAlr4lu7W+g1PVNF1S3he2S90uYRSGFyCYzkEFcgEdweRVjwb4E0HwfZTw6JatHPc/8fF5JIZLic+ryHknn6DsKAPAv2edE8V+EfHtr4W8RPcadGbKTVBZ2rwMsgD+XmcqpY55x82eB2r27xnoPjC+1Mah4a8VR2EdvEDDpstmrQzyg/N5z8sVYcfLgjqKm8MeBLfQ/FN94gl1bVNU1C6tksw9+6MYolYttUqq8ZPfNdjQB4x8RNM8Z+IvCEdzq9hYWEljfPO9tZ3zSA2gtpUdnYhQx3MCFA6e9eKXLeLdLvtFt9C8Qec2r+F7TybO6dTFbwzIiSKS7bUAMe8MAcdMV9oSIskbJIqujAqysMgg9QRWTqHhjQdSjhTUdE0u7SFBHEs9pHII1HRVyOAPQUAfMPiK/N38KtZsv7Jutev9X8Q3Ftpl4sYuBAqNCGbzQDgMysBtwDkkcCrupTX+pfCrxlrWqm2juvE2uRJp1taK04vGgYKsYJIzETCfmwPlDHuBX0isWleF9IENnaW1jp0ZZvKgRY41yck7RxyT+tZFyfD2taeuk3+i289lGA6Wd1aKEAHAKowxxnt0z71y1cdQotxqStbfctQb2PD/AB3p2t/ED4XeC9NtJ9N1HU9Xv5LkJaxJBBZAW7sYDgHGzPO75ieOuK81tPAXiaGC8u3ja5t9T1RNCWa21IQJKRcCOWNlCZKO0Y+YAAbckHpX2DYa7o2nR3NjZW8Vlb2GxXjjjEcabumAOOvHHeqb3fhXTNLtYHtbC2sLC6LwIwVUhnUsxKjswLMfxNSsww7aipavyfVX7dtQ5Glc8Q8cpCmnalo1nf6zqeuXFumjW0QsURY7hioaEXbxrLMm3dxyNg+YjitHw1fWFt8Hdds/DVlrukSppTT6trtyrKyXSIAYVZ8M7cFPlACKeDmva5Nd0W4MOoyRwSvbOY4rhkBaNnABVSRkFgRwOteZftE+PLH/AIVffw6XdwyvdSC0k8twxUEEEHHTofyq6eMo1Hyxfls/8hcrR8fm/nurlpjE7R5O1d5OBn+8ep969C8P/CnUfFGni8kT7IWXMZzkmsn4PaKvifxP5c6f6HaRh/L7HnAzX1zpVvHb28cUYCoowAOAKmvWcZcqOyhQU480j5is/gbqoaX7bc+SqH5Sig7qwPFngTUPD6eYztNbL1cDp7kV9nNCjfeXOaxdf0C11G0eJolJYYORxWPt531N/q8LaHxVDdxW8LJcRo6dMqP51DdX0EsTRxsfL7A9q6n4m+Gm8N6lMgRfJJ3JkdBnpXnTuJH+6F+ldkLSV0cM7wdmfRP7PDa/q+o3+jaJ4tn0S5+yq5UwLcxSR7sFkRiAkgz94evIr6Il8W+Evh3psGh6z4kjW6s7YORfTlrifOTvOfvMxz+eK+O/2etbOi/FXR7yRpDB80EiJglgyEADJ9cV9d674xs/39xeaRDc2tvObVFmCmV5BksVzkbRge/I+lXzKO5VPDVK/wDDVxP2eEkX4Q6E8sbRmfz7hVYgnZJPI6nj2YV0XxA/5F6X/PajwVryazazJFpr2SWzeXgAbByflHTDAAEjAxkUvj//AJF6X6/0pp3VzGrTlSk4TWqL1h/x6x0tp/x7p9KTTwfsiGltP+PdKZAy7+6n/XRaAOOaW76R/wDXRacBTQIrn/j4z/sf1p7j5Vox/pA46J/7NUjDimMhgzg/77fzpLkf6PL/ALpp9uPlP+8386W5H+iy/wC6aQD37Ui0SHGKRTk1myx460opKO9ADIuNXi/3WP6V578Wtb1K2u7Dw94blWHXtcuGhhnYZFtCozLNj1UdPc16FF/yFIj6o38q818Slbf486LNeHbBc6RdWtozHgziVWZR/tFP5V4+dJckJNXtd280n/w5dB6sx9e8aHwlqWjaFb6paX0Z0+6t/tN7MGlkvoVTy0kcHAZiwyDzz1rMt/ifrh0vRbvUdPs9Otby8s4ZL2cN5RiaMm4IGeGWRSgHbIz1BrJ+I3ge1/4Su3hu2F4Nbu9VvvJjjw8afYl6DnLB41II7kV5VoHhK5ksrN7aK9k1ISW32e1OnXbrZuZIzJKzMoj5w24cjGMGvNw+CwlWkpvVvrbzf+XoaSnNM+ob/wCIOg6d4hu9G1F7y0ubaCS5eSe1dYjEi7mZX6MPp1PHWpdZ+IHhXRBZ/wBra5aWjXcSTQpLu3lGGVYqBlQffFeH+Jvh74y1fUpZfEz6teXMgt7BLlLpZIWMlzmVlWMDZCIxnawGCee1XfG/wnudLutZurS2S/0qQSyWMMayXF287xeTDE5bOI4yWfcTj17Vxxy/BNxjKpq10ta+mz+ZftKnY9f8ReN9N0i71GzlMwe1
0htWaeNQ6CPdsXAByTnB9Md68Q8CePdT8Ba1JB4p03xJcR39olzMJpROwddzS3CL/wA8yCo4OMAnPFegaV4Tur6/8Z2N7FJtXQbPQoJXUgSkW5Z2UnqN5HI7isXWPCP/AAmnh/wJb3OgTxarcW0EWpanJE0ZtbeEESRkkj5nOQBjoc1eGjhaUZUpq8Xa7/7d5k/zFJzbuj1uLxHay+EI/EcVvfSWElqt2saQFpihGfuDqcc4zU2malZeJtAF5ot+z2l1GwjubfhkPIJGRwynsRwRyK56DwV4f8KNca3oWiXU1/bxs0FrBdSMCcY2Roz7AT9OKz9IstbhEninxlI9pFaq91DoOkoWSJiDlpdozPLg/wC6Dz9PL9jRknKm+ul7XflbXy1ukbc0luavw21fV4fF+o+FfElwl7fafHHdW9+qBDdW75ALqOA4IwccGvWq8h+FGn6nd+IdU8W6/avY3ermOK2spDl7a1jB2B/9tiSxHavXq+xyZRVOajbfW21+VXt8/l2OKtugooor2TE4K21W+f42anpDXMv9nJoEFwkOfkWQzupbHqRj8qdaidxNZQ2U32w2EenyBoyqIVZwXLHgrhtwxnOcdaqWP/Jf9X/7Fy1/9KJa7WLWdOlnWKO6jZmbYp52s3oG6E+wNaQbV7K4pGFdwX15e6hNb+W0Nk0MUSNE3mO8ZWQlTkDBJA6HpVfU/DkiwapHaz3rL9iaKLOG++zF1AAGe3Gc11s17aw3cFrLcRJcz7vKiZwGfaMnA6nAqK61bT7WdYbm8t4pWbbtdwMHbuwfTjnmtI1pq3KiJU4vc4y38PNZzzWsAEkrxSkeZHhdpWJc9+eG655FVLLwpeW2JAby4eOO1mhh84xAspYOGYnhvmY8YwGPFdvJr2mRajcWUtysc1vbi6lLgqiRk4Dbz8v61bsL+1v4zJZzJMg/iXpV/WaqWq3IVGNyS2EwtoxclPOx83l52g+2eaxfD13BZ+HdOhuGMUgYWeACT5oJXHHTkHk1qy6jaIk5EySNCrM8cR3uMdRtHOfasex13QlitL8XaWY1aMXEa3DeX5gAA3EHgHBUe/Fc3JJ6pHVGaUXFlexi0pddjsba2vUvbTJkIZgrL95XkIOHDMTjOTndxwaZfa9oSxXN1eww+baeaYUYAtKq5Uso6YJDAZ9/WtKytdJi1KWK2kLajLG08jq5LsjnGWYcY/ug9McUyC80OdDoME0cmxPKa3TcSFHGCR06HqfWstTq5k3e0na3/B+8qrLoNp4YF7ZW0E2nWhEwSPB8v5gxPPp1x7fSszWda0xdMhhbSs2tzHLdqksixfJnG5RydzbsgYBHU4rotT1PQ2gtH1C7tPJll3QmRxtd1J598H8M4quLvw+dVuIvtFsk9tEYJYmwqBXYZByMH5jjg9Tg0PtcqnK3vSjJ6t9f636lS91HTIPDlpdLo8j2Cjy44ZESLajDAAViM7sgAd818n/tba1HfeMtK0y2thawafZn9yrIQru5J4QkA4Ucda+ttSXQNGtrSWe03wOxjhWKFplZmHoM8kcAn6CvkP8Aa40uGz+JUF7A8arqFjFL9nCFHix8vzD1OM/nVLcyqNOF0nu9Tyvwnp0uta1a2EQJ3sM+w9a+uvBXhmz0OxijgjUMBy2OteG/s/6QWurrU5VARcJGxHX1r6LtJsAV52Lm3LlR04WFo3ZvxgKo+lK8KyD5gDUNu5dAaslGwMEfnWCOowta8OWuoW0isq7iOOK+Vfid4Km0LVZZIEKxSEkgdPevsZlKrzXA/E3QF1jSZWiQGaMZXjvWtKbg9DKrBTifIelymG7+zuQQTgfl1r9DPh8uPA3h4Zz/AKBAc/VBX52alaz6ZqksMyskkbHGRX6KeADnwL4eIAUf2db8Dt+7WvTjrqjynfZmx/y8p/ut/SpSO9RHi4iPqjf0qcjIqiStF/rLj/fH/oIpaE/1tx/v/wDsopAaiRSHilpAelLSQzlPiaf+KSm/67R/zrxiQGvZvid/yKM3/XaL+deMucVMtzzsV8ZXPWnqRHGzscADJpMZNJfoWtSidWIH61JzorWskl5fQ7zhN42qenWtqG3A1N3IXa7MNuOOR/8AWpuhaFdXE0UkPlgRN8wZsHOCauadcSW17FczQib7POdy44PAGPzJrjrX57eR6VGCUTLtDcWjATxSLC3ILKR+IraU5Gab408Ux6nIkZtPKeNzFnfu3Dd9KZbNm1jZv7gJrrTujiqQSdkVL4/8TzSvpN/6CK10PArxK38baz4k+INpa6KLeFUd4YI5j8rjuzH1IHavYNHvft1ikxUI+SrqDkBgcHB9Miq2CpTlFJsvinA5pgpw60GI6lBpKKAJVPPNPqBTzU2eKYDgaevWo1606gaPGP2lbUGHQrvdyDLDtx/utmtD4I6R/Z3hN9SVC11ek7f90HA/xq78fdHm1PwfFeW4BOnSmWQdzGw2kj6HBrY+DjLc+ANIKYyFZD9QxFYYptU7Hs5dZo6q18PrfRqdW1Kc552K+1f0q9F4Vt9OAm029nODkq8hauS1jwzqGuSajHcXNxG7bVtfKl2ImMHLAEE103gfwpceHdPVLu+a6clnkOThgRwuDwMe1cy+Hc9W2ux19lMZ7IuTyBjrWRP9isvMu70pheSzECrGhvutpwOBuOBVn7DDeLEZVy8T7h9fWslK7sataHOReO/Dtw0UaXKgSZ2sVbaQDgnOMYzUHinwzZa7pF1agKsdyp5AyAccMKv2XgDStMub2ewtzG94GEvIIIY5Ix71rQ2UdnZLbxghIxgZOcUVdHdEw1VmfD/jfQG8MeJrzSmkMogI2yEY3AjINfWvw/v5tR8DaFd3JzNLaJvPqQMZ/SvM/iF4JbxR8UGtVLRRy20cskiKC2OVwM8ZOK9V8Px21to1paWXmCC2jECrIMOu3jDD14rup4lTUYvdni47DyjFzS0TNcGnA1CDTg1bnkktFNDcU4c0AKDUitxUVKDTAl+lFRgkUu6gCUHBp1QhqkU0DH04HNMGSeKq6nqdrpcBlvJNuP4Ryx/CpnOMFeTsUi9TkPIrjIfGLXM+baFFhI43g7vxrWj8QRoIvtEeN/dDnHTt+NcazGg21cpRZ5ZrmpRPqd0QTvMrgjHB56j/AAr0H4BSeZrGrrjH+jIf/H686utLke/mZwOZGP5mvTvgha/ZtY1ZgME2qD/x+vPwkouvGxvBNSPX9F6XP++P5VxGq+INZ134mjQfD14mn6PoIju9bvSgYyM3zJarngAryzdQPTHPb6PwLnPaQfyry/wNpR1rw98V9Hcxrqt/rGo28pmz8okjCwk4527CCPxxX0J0kt98W7i0vtQtRpttNLZ+IV0qSOOclxbNGCk5HbLlUz93LDmtTwl8S5Nb16W01HThpNvDo0WpSpcMwnEjFt6KhALBApyQM5x6184/ETwjBpV1qj3c5S+0/UNK0+a6hnMauHsVMgL44AeJWBI4zyKs+B4PFX/CWrqHhfVotd8SPpkj6hqEbNdssfnRAiHzQiGZU425IODzyKYH1L4e+IPhXxFFFJo
+tW06y3P2OMENGzzbd+wBgCTt5rStfE2hXesvpFrrGnTapGCWtI7hGlXHXKg54718weDfh54g1rxVIt7qfiDT9Wks9Q1gSzj7M0N1LKYYXIUcM6ruYA9BgYFT+CPAd14W+Ifh641LTI7HVreR7uaO1fzxDZQWro0jyAYLTSsTjr0oEekfFr4hx3PhC2i8L6hqVvc3eqvZSGxty10Ybdj9qaEeqgA5yOKofBn40afe6fpnh7xTPqa+JPP+wiW5tWIncu2xSy5AcLsDbsc8+tZvh/S7jSPAXwm1+WJzcJrP2m6YKcql8ZAWPsN8Yp+ifCrSNa+KGuPYw61pugaXkeYt1NE0+pOSXniZjn5FIXcODx1FAHtut+KNG0LUdNsdYv4rO41F2jtfOBVZGGMrvxtB5GASM9s1U8dReJW0lbjwddWkeoWz+aba6i3x3agHMRbOUJ7MO/tXHeJdKHhTw9HoVho+t+NrvV5iqR6vctcwxFV+/LI/yxKOvABJ6UzVdc1zwL4fd9avZvEHjTW38qw06zhIt0kA4SNf4Y13ZZ2OWxQB3HgHxNB4x8I6drlrE8C3SHfC5y0TqxV0P0ZSM10Fct8MPDUnhHwHo+i3EglubeItPIOjSuxeQj23Ma6mgAoooNAHk3hHxDe698NdJ1nWpTPNFqc/2iQRgfu47iRQdqjooCn/AIDmrOl3KJbW05LvBYR3E0055VgS2FB78c8dAB60/wCBP2dfhNp73ZRYkub1iznaB/pUvJNd5appl2hNq1vMi8Hy3DAexwa8THZXVxNSVSNrO3V9mn0fd9extCqopI8/tNNu7gQC8EcMtzEru8YMgDLKJRuyAM/MfyqjqGkXF1a2gadiGW6ncFCNzOOR8uOoJ68V6klvZO0ioI2aM7XAbO04zg+nBpqR6e5k2GJjH9/D528A88+hB+hqKWCx1KfPFxT3/Brs3s/wJn7OSs7nmUum3Fw9qLZxmO8ZvPyc7gqqADyQOMk+2O9cJ8RPAuqa74Ak0/S4Zrm+lWCfaAkaeZEzqy9BkkNx645r3y0vtBuLGK8t7qzNpMzKkvmBVcg4OCSM8itFLW0kVXREZSMhgc5/Gt1hsfTXKuW1+t+7fbu/wJiqd76nxd+zZbG1utckuFMbRskTbhggjOQa+j9NlRwMEEHnIOa8v8d6Tp3g7x54mfysaffJHqXkRk/fYEMOPVkJ/Guc0WbXY9ZJt7GWxxELnAcmIRkbhnJ5PqFyR6VvVpuU3I9CjUUYKLPoZikaF5HCqOck1zt74x0pLz7JaGa8uO4hT5V+rdKn0+RtY8MpNMoDumHTqM+n0rzjxV4b1BpLi3tLlLARIGibYT9obuMj7oA79SRiso+87G70Vza8eeFrTxjp5iurWe1nx8rMo/Qjivn3x58KrvwzoJ1FZPNWOYRvgHlT91vb0Ne7eAtF8RWWqKv9rPe6SY/3ouVKsr542e1dN490sX3hXULVhljCWH1HP9KtTlSlvoRKEaq21PlX4K2mz4qeGTdRvJGl2JDGg3FtoLdPwr7ctr/QdXvFtI9ImD3kZPnG3VMrkE4bO7+PPHrmvF/ht4dt9K0K4kgEJ1LyUmkZkDNuODs5H3cHt3r3aWHw8sdpJqpsUuLhFYCV1Bd2KncO+cgDd9BXVTre0b8jP2SpRV73fY2bDSrKwleS0gWJnGG2k4PvjOMnuep71k+P/wDkXpfr/StTTdW07UcrYXsE7IWUqjgsNp2nI69RWX4//wCRel+v9K6L32OCpzc3v7+Ze0//AI9E+lLaZFsn0osP+PSPHpRaf8eyfT+tIgZdHiP/AK6L/OpgvFQ3H/LL/rov86sKMDmqQIgIxcgd/L/rUjD5c1Gxzd/9s/61KQdoFAyO3GYz/vt/Okuxi2l/3TTrb7rD/bb+dNuwfss3+4aAGy8EYoWkk6jucULWbLHg04U0UtNAJF/yFIPdG/lTtYsdIm+ztq1vaS4nQw+egbE2flK56NnoRzSIf+JvF/uH+VN1aI3Or6ZCGwIxLcZ9GChFP4eYafsoVHaauZ3a2Ks19pf2xzHYTXTWxKPPBbGQRH+Jd3XPqFz71oG40z7FHdvPbrayAMsryBVIPTkmuI0nUrOzvPDsV3MtnPpdtc295A7FWWXEfzFerBiGZTznPHNdTo2nvLoUiTQiBppZp4UkQFoN7MVOD0YZzjtnFOpgMPFL3F9y8/L+riVSV7XKcXiLSbrVorGwiFxmXy3lUHYOucHoSPl/A+1b7JYrBJMRF5cYbewOQuOucelc5pdlrVloYgtWCXj30jFpMFfLLH5m3EsRjpj5jx05wJod1Jp2rRXNspe/kJdfOyq5cfcXoABliepP5AlgcK3pBW9EP2ku5qtfWH22KAQKY3VX84kBArKxB5/3cY9xWhFDZzIHiWKRT/EpyPzrFi8ORxWV40VvZJfSOzxHyQ0ceMBBgj0Az7k1NbaI8U0XkstrCsW1nj5mkJJLBm+6Bk54BPpiolg8L0ivuQKc+paa40pI55JXgjjhl8mRpPlAfAOOfqKT7RYfa7WFIS/2ld8UqLlGAGeo9v5j1qudGnYXtv8AaRHZTszbUBZ3LKB8xPYY6Dk9zUN/od5Le209tdQxvGAGmMQDqBnhQOMYPTpxzmmsFhf5V9y/yE6lQ2LL7HO0j2yqTFI0bEKRhh1HvVysex06407Ftp8sa2O7fiXc7qScsBz3OTk9CTxWxTVKnS0ppJeQ+ZvcKKKKYHld8k0vxo8SR2ys0reF7cBVPLf6RLkD3IrduZBdXN0LGSC4t7q3W1t7OOVyYyP42jIxHt49MY7kgVn2TBfj9rLMQFHhy2JJ6AfaJa6uHxJpck7eW7+XlQbjyj5eTwuWx0PYng9jW1NtLRXJkr9SLUjbp4j0RJVL3SLKVcREn7u3lgMAEk9TXKL4JkSSSc2pdzeGWWGNlCNDsAKKW5JOSMk84PSvSJ54reMyTypFGOrOwUfmaba3VvdxmS1nimjBxujcMM+mRVQrzpr3SZU1Lc51FvH1e6mvdIkWF7NYt0UqMAAWbYOQ27kDgYB796k0PQbqztIfN1K9WQoTKnmCT5ixON7gtgA7evauhaWMByXUBPvZI+X6+lOJAHNQ6rtZFcqOcsdJayudRv7yCOS62usT20SjdGTk4Uc7ycbiepAxXPyeGr++i0mW2N5Y+VZQ27jzvKKkEZJUf3QG47lx2FdtNqdnFqEVjJMBcy42x7Sc5DEHPQD5G/KpLe/tLmyN3BPG1qN2ZQflG0kMc+gweaarThqCp3WhieHrC2uoNSeeBG36hIcYxt8ptsYGPQKMfWsy50y0vfFEl/p2s6e+ohiiwTL5hDKBkHDg4XGQAOCSec1a0TUNF1HU5lsNRhnSWQ3awbmR0kHDEdMqeDg+uec1ial4fhXxPHLrGsW3mOAxKI1vMAz4RIiCeMjBxz1/vZrklZpW1PWppxqS5m46bW/C3/DF218KadDJaWl9qcDz2i5CIAkhLnIyWZjt3DIXGNwHpVPR9K8ParttrTUp5lhhEfzKwWZUlLs4J4flhk+vNWrrwRd3Wq
xTy6nK0MCeSn2kmd2X+90AUjAAxnuSc1reF/DkulQXYnNnvuZ2eQQwbQYyMbBzkc89wKFHXYqddKDftW3p+vkZkXh7Ttc0SGOx1H7WluU8qWcNJsG1eCmQAduMYA4Pvz8//tTeC7SDS7LX7PU2up7KddLuUdAu0bMptwMHaBg9fvDvX0jp/g2xs4yIpbqCRZnkieCd12AngYJI+6AvIPFeYfFHwUp8+zuZ3ntbu1l8ppAfkkL7snk5bkc96mcnTXM0RzRrXpxm2ul0edeBxJD4Gt105VSQnapxjHvRqE2r6bGVfxHbxzdRCwYt6845H1NdN4T0prDw5p9qxxKsYL49TWzZ+ErG4ilS5sYJ1mfzJfMH32Hc/nXApJz1LUGo6HD+EfF2s2l0ras8klsXC+Yp3qOeh9K9k1Cd4tMa8iYBQm78MVy2u6Zb2sARLaBEWIRIi5wqjoAM9K6PSys2iRoRmPaFwe9ZzfvaG8E+XU8n1vxP4iGoIUuGhtHO0MyMcd84HOMVs6HLfX+ZoNdivJExviUFQPqDzXcz+FbC/gljvLO2uYpWDlJVJGR0PWrP/CP2kDxzQwJDLGgjVkGPkHRfpWunLpuZ294+cvjVoaXPiXTba3i2zSyIgxxkN/8AXr7G0Wwj03RbGwgwYbaBIFIORhVA/pXkfiDwxBqHi7RNQmwVh3hlx1wOOfx/SvQ/hql1H4Ut/tjlyXcoWOTszxXThqrb5DkxNG0faXN9/wDj4i/3W/pU46ComH+kx4/ut/SpRxge1dpxFdP9bc/73/soqMHNSJkSXP8Avf8AsoqFetRLcpEwpe9NHSjNCGcx8TZ4YPCcjXKkxmeJfl65J4xXh/2y3mupoIJkkkixvUHJXPTPofavSf2jxM3wsu1trg20zXduquCR1c8ZHSvEPC+lS2mlXdtb3d9c2sozEREpKvjkZyCQT3zxXLWmoSvf5HJXpqTuaxvNRTxHDbGyQ6U0eXud2GQ/Tv8ASpNe1/S9Muvst3ckBuDKF+VfQmuXfx9Bo962jtpcuzZt2kEyJKV42g8n5uKwPEPim2cfZ9Y0+9N5LCqzRSJGmMjIIwMg9Pr6VP7xyutjKNK+6PUZNVuILCO4srqN7a6QqHgkyH46/rVS31+VbfyNrKRjDKevqT+lZPg9tLt/COmtYSzXFhNOYZGlUFraVjwjgdAexrqL3SI7JgJYI8sMgqcg0uWMnZvUbqTg2jO3vqd+Jdu1Qcn2row+LdwOgQj9KyoeCAq4HoKsT3C28LHP7xhhV65NbJcqMW+Z3PB/hVk/EjS9pwTK+OcfwtXvXg8EaQ4PUXEw/wDHzWKPD2mpeWLS6faksQ52oFLA+46V09gba2mls7eEW8cbHZGDnHtnvTVVTeh0Yh3iaINOpFHFLTOOw+img06mIUdalU8VDT1OBmgCQU8VDu9OtSA0wKOvQPc6a8CYCSsqSZAPyEgNwfbNUvh5oo8N291owYtHa3T+Ux6mN8Mufzx+FbcoDxlTyDVq9t47U2V5E5kN5FvkOeFZTjb+ArixKauz28uqRcOXqn+Z0NlDGvzEBvqKXUpdlpIw4AFUdNuw+FJqTXmc2W2LBYENj1waxWqPXW4vhmImCYFe5rTtDsmweAa4/wAP/wBuQ3F5IZo57SabfErIEaBf7hwfmHvWrotpf2U7pfahJfB5GlDSoqmMH+AY7DtUONmmXe+h1O/g5rPvWXacVMXwnJrKupOWzSqS0CMbHPQaZct4wn1dbcS2cNqkMkgPzRnezA49OeaulFjlkCKBukZj75P/ANar1lqK2K38ZLF7iHaoDcDPBJHc46VneZuYkjk1thVdp9jy8xrctJ0+rf4EuaUGow1OzXoo8AkDGnK9RZpRTAnDZp1QK1Shs96QDs0UmR60hb0phccKkVsVCDThQFySW6+zBXKBgTjnOB7nFYXjDTJNUtPtkIVpkwzqufnA7j3rcdRLEUYkA+hwajtle2O1j5i54J7j3rxM0U4vmfw/kawatY82tI1EhdkKsOqY61ft5jcSHORt6Zrc8QaYgfz7dcROe38J9KisrWOFBLKoOPuof4j7+1eRfm0Ks0Zeo2xhmUshUugcZ967D4QE/wBsakD08hMf991yeuXGxGnuHJYt1NdN8FrlbjWNT2spC26Hgf7dd2AadeLWx0Rdz1jRulz6+Z/Sohp2kWuu3WqpDbQ6pNbhbiYMFd4lPBfnkDsT06ZqTRT/AMfP/XT+lc34kiB/tPUJlL2sV1bRXIC5P2aMhn4HJUFyxHcA19XTgpuzNW7Gvpl5oIupTZRwwyahL5jS/ZzGt1IBjdvIAc4A79BxWveXdtYWrz3UscMEalmZiAABya4u5u0urXxFB9pN1Jd3iLp0ay78/uYihjx0UNlsjgcmtHx7o7Xmj3E9nbwtfeWUZxEGkZdpAUHaTjJHAxnnkVp7KPNFPS5KlujU0XX7PWJrlbJw0cIGX3DBOWB4/wCAg+4YVqSSxxtGHdVaRtqAn7xwTge+AT+Fczr0WpTC9srOwiayltfLY+WAQzYXCncN+BkkYUcAZOeIdR0i6vILFHiuZpraKUq9xP8Aec4VS+07c/MzYHpilyRbveyBto6PTtQhvhJ5KSKqY5dduc57exBFWZ5o7eGSaZwkcal3ZjwoAyTXOXHhq3VrGGCBHg4WeV8tLgcjBJ4BIOcc/NxU66Xez2F1FNKII5VkCW6SGQAsCPmduSOc7RgCk4w3TBN9UbX2qDeiedFvcAqu8ZYHpgVXTVLOSO6kMpRLXJmMiMmzjOTkelZ6aVczz2l3cm1jng4jTy/MEYxgfNwS3J9ueneqEfh+9nlvLe5nEFjMwc+U7M8jgnDEtn/Zz67QMYoUIdWJyfY6wEEAjoeaWqVi18zH7aluigYzGxYufXoMD25q7WbViwoNFBpAePfBxo1+HXh03Sg24udRKhyAhm+0ybASeAfvYz398V2fh9Lo39mlx897bpJ9rmEgk+VuUjZgBkjg884XPGa534HTWkHwfsX1BoltvtN4G83G05u5eMHrn0rvtJvtPuA0OnlEKDcYvLMRAPfaQDg+uK2UnyWsS46nKXlpbXemeKoNC8s3Et4WkFtKU2sI1DM20gschsr3PB71H4e0CTQdZedbK4vMRJHbOEVcEIquWPYYVQP+BYHNd8zJGMsyoCe5xTuKpYiSi4rZ/wDAJdNN3ODg0xr3Q9Kt4LW9sLtLliJWhVGRGcvKeQSAQcdiSR6Vt3ul6gdIuLcy22oArhbaaLYjrnO0tknOO+etdDxRUus2x8iPLPFnhWws4tIuUhP2liyTO0eN27LcgcLgsQB2HA6Vn6Z4ftrYl0BPoCc16D4gubDVdBvlgvLdzEFIcSDAkOCgz05yAPrXB6TqAlh5bnHQ/wAq8rF3jU5n1PVwdpQt2NPRHVdPuD28xsVrxpDcRqxALDsRXG6dqOrNDPYxaWgk8whJ2b90wJyMkcj8q6SwS5ii33Cxo44KxsWH15rmizplE0SYoVxgL9K5zV5BcF4wcqwINXNSuSI8qeTWXCrHLP1qKs76FQjyq5Hp+
jBbsTiWKKCO2EcjseI0Ucux+grU1HQdNWWG0u/EsUKpZRQ7CgDMgcsvJbGCcEDk8ccVpR+EvtGhMtjfzRpdhJZIZPmjZt2TyBuA9gcdD60ah4KlkuZZoLjzZZ0RXmklaN433ZeRdo5JAAxkcADpXoUqLhHvchYmHN8dvl6d/mafhXwzY6TLFdWUgnQWqwRyHBJG4ljn3OKk8f8A/IvS/X+ldEihFCr0AwK53x//AMi9L9f6V1JJKyPJq1ZVZc03dl/Tv+PSOls/+PdPof50mn/8ekdOtOIE+lMzI7rrF/10X+dWD0qG66xf9dF/nUx5wKaGQAf6UP8Ac/rUx4/lUSj/AEr/ALZD+dTnGDQBXtv4/wDro386dd/8esx/2D/KktR8h/66N/Olujm0m/3DQgIJeCPpTY+tOn+/+FMXrWfUslBzTgaYKdVCFjx/a0J/2G/lUk3/ACMNsf8Ap1l/9DjqKL/kKw/7h/lU14dusae+OGWWLP1Ct/7LVR3M3sYmqXskmrxSRwxoFuPscMwt1lk38ZPzEEKCei5PysTgCtaO8vbnS5HtxbR3sMjRSb1ZkypwxAGCcjkDPfFc7q40uTX9tv5sl1I7x48mF08zA3iNpMfN0yFJGeozmug0y7sbTw+Z7ZZhbW4berKTIrKTvDDruznP+FbzS5VYSvc53w1It2Z/EurXjN5EkkahYcKiDpwCxOA57nryeK3dL1cz6rqNsZUmjWQGB9yrnKKfLA6nHJ3HjnHaq/h/VNC1TSriPT0xp0OHkMqFI/m+bknr7jt0NbT3Vlb27XTSwJEU3mQEYKjvnuKKrvJpr/gCirLc4xL7xCun6hK95At7DMwjgmlTIjUhiNqoMnbnn37Vf1PUfEERtj5dvbwPchGmAAGzBzkO3BJxg4+vXFXNS13S7KWI3dq6S3ZMZzCPMZMfeK/eK849qv6NfW2rae22EhEYxmGUZYAfd3KehIwcH1FU5WXNyaBu7XMuK716S8uo7eXT5otkbQNtLcHIYsQQOqk8Z68VJca1eRR3zK1izWsjKUO4MVCqAeM4JcnrgY+hNaumXtjqCpcWboztErY6OqEnGV6jkH8jVQ6zaw3kkDWk6MzyAuIhtkKKSeR1OF7+oHriL3duULeZnjxBJDq0aXLoyNabzDCQ+Zsj5UPDcggc9cjGO+toOpLd6fam5ljW+dP3kJ+RlcfeXbnIwcj8KEvtJZBfySWsTbUJll2qybhlQxPQkGnpd6dHqMUUfli5vEMqOkRxKAOu8DB49+mKmVmtIjSfc0qKr2d5Deecbdi4hlaFjtIG5eoHrg8ZHfNT1i9CzybVraS7+M2vxQxeeT4esmaHOPNQXchZP+BKCOfWty4ksbjUNZvraSFpryzXT0s9hWcyHOAykAr19xgZ4xUFgP8Ai/usH/qXbX/0olrvxbQfaDcCGPzyNpk2jdj0z1rWFTl0JkrmHrNtZW0dtPPaQXuriIW1qJE3Fm9s9BnknsKyvCUOuQWl3aRnT9kbujXQjYebOSC8gG47gCW64yQAMAV0lteC512+tjbKPskcWJyQSxfcSoGOAAq9+c+1aKIqKFQBVHQAYqlUajytCcbu5wUeiD+yLG2fMmp39rJFdM4+aQyFPMmf6Be/TIArZay+3Jq0VymoWthOrxySXF1xt6Hy0ydq4B5469K6XAznHPrTZI0ljaORVdGGGVhkEehFDrNhynE2+lwad4bv7vT7ZLWS9KRW6qMGGMkRx49Dg7vqa6G70dRpFzZ2Mjwh7YW6LnKIFBAwp4GQcH1qfXreS40qVLdd0qFJUX+8UYMB+OMVHcTNcabe3EUkjwyQkRxJCfMRsEEY6k5xxgYxWVSTm3JnRTbjCKi+v+R5voHgHVbCx1j7RieW6tRBDH5qdMjIZiDjGBjGeB69NTUPCGtXmrRTpdpam3s4oYpEwdrqFJ25yQM7uetcqdN1nSvDY0W4sZo7iWbz/MBZssuwBVK5z8uScjrXoGlXGvC1uoktpii2SCB5EEQWURkEKp+Y5fHXAArmhGLXLY9vE1Kyk6vPF3v0W2i81/TKt74Xu7i4vWW61FUnn8lUNwxBRsmSRhnA6kKBjG0etOfQdQutKnhmlvH+2OUVZZnY20SsSHO45MpGMdBkjjg02NNV06WKPQYp5xHYoPLuWkKtIzjLOGxhsbzwee+OKsRW/iQ+K476aRxYlB/oaTfJkRjI5GPvE89TtrSy7HJzztfnWmqvvp5EUvh/Xr3SLGw1C/SSARMZlQeW28JiNSwPIDYJ9SOeKoa14f1O28NSC8u1uBFOZFUKWYKTjJZjngbeBgcd62I7jxHFf3ksFkJo3vSu2eQqFhC4GwdumSeck4rMsdG1pdF1eDVJLq6u7+NIox5rMkbNuJbJOFC5AOMfdxzUzhzJpIFOdvelG107K3V6+lv8jgpg1tN5YPK9wOK3tK1BGXBPNM8c6Wmk6ukcX+pkiVlBHQ9Dz9efxrH0/IkODxmvKknCVmbwkpq6JfFlyoVS7qBnLEnGF9a6OwuLGLRIpFuYvKKgq24YOenNcprEEGogwyRiXtt6g+xq5Y+F4I7EWs0Sy2e0EW7qCqnPGB2x2oSvqNtLQ7KzuA0WMjIqG7vdp2nntUFm8ESRxJ8uF24PWqdywL596HNpWBJbhcxS3s9uIslg3AHqf6V6RYWy2ljBbr92JAnHsKwfD1nEukfayn75jgH0G4V0vc16OFpckeZ7s83F1ud8i2RX/wCXpP8Adb+lSnr70yQf6TF/ut/Sn/xCuo4yBfv3P+8P/QRVcVZAx9q/3v8A2UVVBqZFok3U7PFRU4N2oQHmf7R11BZ/DCa4urc3EUV9asYt23dhzwT6V458NPiFDq2v22kXEBtLBoykQcgkP1GTXsn7RemXGsfC29s7RQ07XMDICQMkMe5NfMFvZHStJjimtbUalp5Fw480h5A3oR0ZD1HftWFeEZR1Malr+Z7F8TNEsTopbTWgXVrmdY7dwAWkYZbaG7E4wD68d64G3h0nWbiwi8RuI7h4hPY6gg2uJEOHhlB6kEYGfStbUZ9T8a+F9LvbONZpLOQNNbDMT+eB8rHB7dR65zWraanDe+F9Zs9U0JbrX9Ptt/lBA32nP3XC9cgnJx3Brjp+5p/SMn5D/D/h22sNUvnsnaNbyMB4DyrSKSwc+9XJndlVWJIUYAPaqugXV3Lpdlc3sXk3bIHdemD9O30qyzbiSe9ddtbnJKT6iR/LknoOazYt08rzSMQOT9B6VqsmY2UdwRU+hWWnSvFHqEjRwupAbdtw/ofyIpvYukrstXEUbTQ4x+7UL19BW3rXhF7K1bUJJsu7A7VHCZGRz36GseU2Ut1dTWjMLfeQrHknHJ/Cq2p3c37uQ3ckitGqjLk5xkdK5MPo7M7q1uU0rSUywKT94cGp6o6QG+xqzDG4k1ertPOYU4Gm0UED6M8UUUwHJ1qUHioQaUGgCQsAcVFJHjkBfrjn6Uo+8KlddyEVnVgpxsdGGryozTRZsHKS5Herl7ODGxkbCCsWyuf3hVgQQeauapALyweIMwDDHyHB
/OvPjofURaaM+08b6RZvNbzTqpQAkZOR6Z471PpvjzSr7UzaLOhl49iPqDWVpfhnSoVdfslrubG8zruLkdyT1NakvhjSbkRiWxspApBAWFRz9RyfzraSjY2SOxSYOoKnINUrnqafbQR2sCrECFAwBnNUb24C59a5JsV7FWZt0pHpxTQeagDEkk1IrZPSvSoR5YJHymInz1JSJs8U7dx2qIGlB5rdGBOpz1pahDYNPDVRJJmnA80wHNLmgCTdSFqbSZoAkBzUgPFQA1IrUATA08HcMGolOaevWk0pKzGiBPllkhmw0TDBB/nWXdQmCdkJyB0PtUmtakmn3P77mMqCT3T/AOtUUd3DqdnvtnDyxDOAeSvpXyFakoSlFdGzpTTVjOu7NbwmFwCrgrg+/er3wLha08U6/bSqVlit1Vl9CJKueHoEur+NnycHgV2Oh6ZDbeMb69iQK81miSY/iw+Qfy4r0MspNSU/M0prqdTo33bn/rp/SotPmS1h1eeU4jjuZZGPsFB/kKl0bpc/9dP6VFGsDjWoLoqtuXJkLHA2NEuTnt35r6aPVGz6GBp0yaXd3UsOk2Vo6xi6niityreU3JIkzhmHUjaASCATjNafjLV7/SdLlubJLUR+WdssjMWD4JACAYI4zkkfSsCzTTGuLiR7uaW1eVTc3H2d1ZwR8olbdgKRjnYBg9gc11Him50mGxSDXZNltO4VflYgsCCBkA4/GuiSXtI3VyFez6FPTIho1jOs2pxNeXaNPF5xO0ELngMxJAzz7Ci+1q+fwc9/YQMNQWEPsZFO3jJZlLjCnBPXOD0zxWnbyabqtvZ3iNHNDIh8jdkKwIwSFPtx06E+tSTSWGmQCPy4o1cELDFGMvgdAo6//X96z5rvValW7GE+qa5Z3dqklvDcxywuzZcBhINpAART2zx79eKfb6trc2p3EMllDbxQwxvtk4y7bgRv3fdyvDBc+3q2bWvD+m3D20SLHPasHEcCgFmb5cZ9fmwQ2MYPpW5eHTla2u7pYGZmWKGYoGOXOFAOO5xVS0WsdxbvcxrHWNabTVuryxtV8u3eWfY7YDqD8qkjnLD8B3q2dZuUvbS3ktrZlmIUyJc/dwBvJBXsxxjOTx740JzaxabcMUEtrtdnRBv3DksMd+/FUrTUdJl8mKGNV2SfZ0RoduxipbA44BUH2qbqWvKFmupnWniZja24laNriS++zsdhASPcPnbaSBwVGc4ywHrXVqwYkAgkdQD0rNtrTSYI2t7eO0VbhOUUjMin9SKZZ3mkWthdy2kltDa2Zbzyg2iMqMnd+HNTPlesUON+prUUgYMoI6EZo9PrWVyzxf4ONDbeBtCutSUfYVbUEjkf/VxTG8l5Y/w5XgMeByO/PZ+GbL7GPD9irxS3Fqk80jROHCQuW2KWAA5JXoADsOBgVnfASMP8KNORwGVp70EHoQbqXiu8sbC002Fks7eK3jPzMI1xn3NbKr7vL/XX/MlrU4nxhptkkVxBYWUFzMD9rv5bmR3ZYQdxQOckF8YCjHy56cVoamdZ1PTdPt7m3Sxt7m4WO7SNzvaEkjYMfcJHJ54HA5ORt6E1vqejW96LIW63oW6MTgE5OCC2Op6fp6VrYqnWskrbE8rve5xX2UQ3ty6TXhhshcDfagyOjSlAsaLg5Khc4wQMj3qTXNPku9PsZbiXUFuYy8drDJNsZ5XGFaQxkAhRuOPTOa66ONI12xqqrknCjHJ5NNkgieeKZ0VpYs7GI5XPXFJVmmmh8uljmYtItoNW0nTYkzb2cBuXLcmSRcIhY9zlnb64rz7XdNk0PX7m0V2aF/3kbN1IPOfzzXo/iDVbHw/qY1TVruCzsWtWheaZgqowbcufqC2PcY714r4p+LnhvxLq+m2OmmaW6TKNfGPyoHJ/hVWO7BPQnH61w4qHPFvqd9GpyuKW1iCPxhrltqdzZWuj3U5j+YtGoKn0APr3q9aeMPFkl8kE/h1xE7YDSuIyM9+9PhWdrjfBHv3DP3yuPxFdHo2nzcTXSooHIRCTz6knrXAmraHp6Wuak8YMSFuvcDkVWuXEYAUgZp2pTrCmXYYqlo1tJrN/wWWFepx0FZRpyqS5YkTmoq7PNdS1HWvAPxCgK6vdL4f1+NrpI3uGC2shcLL64HOcgdGHcV9RaY0z2iPcTQzO/wAwaEYTB5GOTkY796+V/wBpS+hk8X6Xp0AAGn2OGA7GRsgf98qv51n/AA8+LmreEhbx3hm1LS7eNoltjLtKAkdCQemOAfU17VuTRnkuftI27XPsKub8f/8AIvS/X+lcr4W+NvhDXPLjnu5NLuX42Xq7Vz7OMr+ZFdP45ljn8NNLDIkkb8q6EEEYPIIqjCxpaf8A8esf0p1p/qE+lN07/j1iPtS2v/HvH9KAEueBHx/y0X+dSjtmobg5WL/rov8AOpscjigBg/4/P+2f/s1SP0qMf8fn/bM/+hVN1AoAgtj8jezv/Okuv+PWcf7Bpbb7r/8AXRv5mkuf+PSb/dP8qAIbk4kH0FRocn0p15jzB/uio15qTQmpw6UxSMYpwNMQ6L/kLQ/7jfyqfWEcW8dxGrM9tIs21epA4YD/AICTUEP/ACFovZT/ACrXdgilmICgZJJxgU07O5Fro4l4CkdhFLbX8gtLtrqC4tI/MSVGZmHQ4BO7ByM9cdc1tQ2V3Lp19lVt5b2YyMjAOY0IC4x0LbV+mT3xUQ8QPNC9xZ2sCWCp5gub24Fujpnh1GCdp7MQAe2avaZqv2qVYZ4RDK6GSMpIJI5VBwSjjrjIyCAea2lKTV7E21Od0rQL2z0O+stPD25mkKwvcSEPGp5ZjtJz8x4GRnuaty6FfalcabJfNFBDbIY2iR2dpFO3IYjAOdvPGMEj3qz4i8Rx6Us+BEiW4U3F1cMUhh3HCgkAlmORwB35I4qjo/i37TNbGYpLbTzPbiVLeSHy5FJGG3ZBBxxzn2rS9WS57f1YTjFaDfF/hy7vItM/skgvbOVZpXy3lnt83Bx7jPSp9K0TUbK6RY7j7PY/ZSvlRsGHm4xubIyT/FnPt0AzZ8S+IG0uQRQQlmBj8yUxvIE3ttQBEG52JB4HQDJPTMXh3xG+pyZwJrZtgE6QPDtLjcuVYnIIxyDxkZHOaOaq6fkhcsea/Uo6T4Tl04vBDIIjIJDJewkq53DAXBJJwecn065JrYm0Tz9FhsmitoXiPyGLdtTOQWA6k4JODkZPOaf4u1mw0PQ57rVLmS3gYeWJIlJcMQcbQOc9/wAKxPhnqulz6fLp1lq8+o3kLvJK1wT5hyRyBkjbzjg9c981nKdWfvtfM0jSgoPXXsbN3oJmS9ijuvLt7lFUxeUCOFC/MepGFxgY6mqt9oF5LrNre2l3Bb+XD5LusPz4Iwcc4A6EDoCAecVoXmvabZ6gLK8vIYJ/K84+YwVQucDJPGTzgdTg+lXY7lZrXz7QrOrKWTa3D/Q1HtJrUXKjN0O1vtNhhsDFbtZ24KLcGU+ZIOoJXbjd6nPJye9bVeef8JY0eu6jZa/qLaU1qqHELRFSWxhVUhpHPI+bAB9BXT6JqFx
crA4c3tjMWVLkQmJ1K5zvQgccEbhjnHHOadSnL4mNNbHMWH/Je9Z/7F60/wDSiWvQ6880/wD5L3rP/YvWv/pRLXodYjOY1i6vdE1DUr620q61BZ4YhGtvtP7xdw2tk5GcryAQBknpVhdT1M+LLayewePTpLJpWkJQkS7l4yGzgA4PHUjHFdBSY5zjmtedNaoRxg8Taja6lqsFzZyXsNs4VJLC2LAZ5wx3kZVRkjAPTHJApp8RatbXUllJFbXcixLIlxDFIRJv27AUUErn94Mk9EyetQN4p1O0bTVFpBdi7iluJFiYI0Y38ADPOwZ3E4yR26Ulvr+puj3FvaWNnCsa3V07hiZMlflI42uVyQMt/D9K39n1a/Em5qXV9rVx4iOmW720Fs0YmMsYLS26jH38/KS7blC4yACc8VoadqU87St5TS2aSC2jmUZeVwdryEDgIDx+BPSq/hq9u7qS8+1w26zLHC0jRRlcSlTuRjk5KgL9N2O1P8NTxWvhjSNyyESQAlkjLAHaWJbHTv8AjXPUetjaEVyt2ucvd/EC8S6MdrpZdUuJUYtkEomTwM9cKckZA9K6Lwl4kk13QYNRksJIfOk8tVjPmA/7WccLnjNYun+KNDvr2+k1G1+wi2kdoDcIUMwaMbiQwHzEfw+hFbFjqmj6V4biu7CDy4Wi3CKJNz7gmdr7c/MO+TWEX1voehiKUVBQVJqWmv3/APAIP+EtuI71LS50xUnM0cRWO7V8blJz0HIwMj3HNXF8WafPcW8Np508kzvGFCbCrKBwQ2Dg5AziuMg+IllBBdPJoMUSRvHwvyl5XA3EjZx3Gec4q1Y/FbwFZ2Eqvf2ulJGebcqFZiR/CiZJ/IUoVFLRMWIwrox5502l66Xt8/LQ3dO8awXOjy3Mlldrdwpue2WPcWOcHYejDPfNaUPifTJlkWOXfcxQfaJLdBukVeMjjjIyOM14b4i/aM8Nw6fcadp3h66vbZR5cSySiKN1HTI+8BwOK8M8WfFzXdfe+jtrex0e0ulVJoLGLG9RjALNk9h0xV3Zg/YXd016NPt/wT6Y+IHjbw1rOvWOl6ZqMVzqCwu8ip0QcYUn+97dsVi25AmAyB6Gvk7w/eyW3iTTbhZWR1uY9zg84LAH9Ca+o7qKawv2tbnBKMTHIOjr2rixVJ35zTD1V8CMPxIvik6lHDoz28Nm5y8qEiRB+Of0rSvbPVxYzf2fq+oreBR5CmRWQN3zxnBrVljeXEkRwwHWr9lHqnBMyFcdNgz+dZQqJLVHarWKHhKXxK0MTeJI7QMuAzI3zk+pxxXRSNvk2rzk4FRyKyw4fqTk1heKdRvdI8OajrGnlBJpyxT/ADjIb96qlSPQgtWcY+1mkiKk+SLZ7NaQG20WKL0UZ/EitM8ZrzPwx8XfDniK2jguJ/7Nv3C/urnhC2Rwr9D+OK9MBB5BBB5BHevXVraHjO7d2QSc3UR/2W/pUh6VHIf9JiH+y39KlP3aoRW5zc/7/wD7KKqg+lWl+9df73/soqkpqWWiTPegcmmg0opAcR8aLaG78Dut2XEMd5bzEocEbXz+VeX2nhXSdZkl1S6tEDToABE3zY7MPc16v8WpIY/Brm5cJEbqBSWOASW4B+vpXm4W3tYLu889YrWGJSvzbV3AcD88V5uLb9olqYVdzzzwNb63p+v3A8l7WGEmGUsDtdB93Hqe+e2a7/Tp1s9RF4IUabZ5Zcj5tmckZ/CuZ8S6vqCzG51LTPKs3RGMkNwf3LY+Y5HHXp61PoWoWmtR4iuSbhowxieUK+PVQprWDhKPqck+a9zc1WRJL2Z4vuMcj8qpKalWNlQB2LkfxEcmkCV0JaJGDHocioLuzedP3Emxs5x2z61ZjTtVqJelOw07GBbWOowv8iMRz/FWrbabLK4e8OAP4Aa01G2pA2alRSK521YkQgKAowBxing5qEGnhsVRLZJmlpo5FOFMSHLS0ynDpQMQnmgmo3f5jxTdxJFK40WYzmpSwFcH4n+JGg+HWeAyte3q8GC2wdp9GboP1Neba18Ztbu0kj021tbBGBAfmSRfcE8Z/CqUWzWNGUuh6/qOu2MPiBrGK4DXkcQknRefLBOBn356V1Gl3sdzGBuBP1614F+zuqan461GG9YyNc2EhZnOWY71JOT1Nep6xZ3/AIYui0StLaMcj2rlrYdp80dj28LU5YqDex6FbWtvJhmRST61eit4IuUCg15da+NGjXlHPtmrcfjOS5OEjdf97iuaUWlqdsZnd6neRWsJYsM9qxtLjfVL1PMUmB22YH8WeD+mayrWO41KRZbolYR6969G8NaULVVnkTa23EakfdHqfepp0nUlZCq1VGN2eJ+HvET6N4hvvCOvSj7ZY3DW9vcM3EyZygY/3tpHPf613G414n8fgi/FvWNgHzRwMcf3vLGf5Co/CPxHvtJhFtqiPqFsvCMWxKg9Mn7w+v513v3ZWPDrYe/vQPclaneYfWuY0PxhoushRa3iJMf+WM37twfoeD+FdBuHGeK0UjicHHcto2adVaJsk1MHqkZsmVsVKDmqwNODUxFikDDNM3iqGq3M1rB51vD5xUjcgOCRnnFJuxUVd2NOlVhnGeaz5L+COLzGlUADkd846Y9ax7fxHdXFnJNY2p8wsY/IlADEg+p6Ht6VSi3qWqbZ1qtyKmU1VkuLGXY1hdJKrD5oyw8yIjgq4HQ1Sm1i0tIDNdXEcUY4yx6nOMAdSamT5dxcjvYd4jhDwpKdmR8vze9cibiXTL1JbdoRKpyVZSuf0q/P4hhvNVtw1nfy6eEJc/ZSwJ9SOuPwqjrugQSRC/0S6eWxk5wkpIQ+mD0+navBxlP33Ui/dZtOlKCTaO58KPFLqC3EBAhmG9Vz90/xL+BrudNIbV5sA8QjJ7fe6V4v4H1P+yNThtrwkQyv8rH+F+gP0Ne16YP9PlPH+qH863y9qyXmbU3dGro44uf+ug/lWb4ktpZDcQwqG+2wqqq3Ad423bCenzKSOfStLRelz/vj+VWNUu7WxtDPfMBEpGBt3Fmz8oVRyWJxgDnNe7CXK7o2aucqzQz6pqs9rBevPfWotzayWjqA3PLO3ygDdzjjqeeKseLdCfU9HtrKcs1vCq5kjh82UuOOnYYBzg5OQOmc6R1i7yjf2TMkT/dWWeJJD9FLfoSK0bC9hvrfzYdwAJV0ddrIw6qwPQitnUlFqS6E8vRnOzWmq50aRbQSLbIJJ42kQlpGU8DeCRtOCTkcdM0/T9Au21a6vr+4MZebzoUhfdsOAp5I6FVxj0PqBUOreNbSzTzVktIbYgmOa7mKGYBgC0Uags6gkfNwO4yOa1tC1tNTlubdljW5t9u7y2LIwIyCpIB7+lN+0Ub20FZXOYuPDmoHxfdSLbN/ZU8gmYxzY3E/eBO4EA9SAOemTV2bRNa1Lw3Jp97cxRy+eNoChVWNfuhdg5G4AgEDGAD05dfeM4Uv1trfyYw5xE8wkYz/ADFMoqKeNwIBJGccAjmt3QtVXVLZZDHsZkWRcHKujdHUkA4PuAR3FXOpVSUmthckdUYtlolwlnFDFC
0djDMGWxklIVwFGTu643gsAeDk5HIxq3Okefq63QOxNhL/ADE7n2lBhegwCeep4rB+I+r2tvDDYf8ACQSaPeH/AEhjFG7s0Shi33QcfdJ69j16V1emXtrf2Mc9hdR3UBGBKjhwSOvI7+vvWEpz+Jo19nFRTT17djJtNAkjuNJlne0lNjGYkxARsG0AbTnOQARk/wB48Vlw+HtXjiuY3ltjB9rW88lXZvtBG35GZ8kA7c5yecdACD01vq1jcKWiu4SvmtBktty6ttYDPXBBHFN129ubHTpJrO1NxIoJPogAJLEDk9OAOSSBx1Aqs72J5UT6fNczxs13afZjn5VMocke+OB+Zq16fWuD0zxXHdada3kuvW1rPcAlLS6gQE8gD5UcsASRg57iuw0+6ln8yO5gMFxCwVwDuRs8gq3cfkR3qalNxd2NM4j4Bf8AJK9L/wCvi8/9Kpa9DIyMHpXnnwC/5JXpf/Xe8/8ASqWvQ8Vn1Gcn4b1iS1/srQptM1BbiO3KyymHEcYQYB3dCCRgd+nHXEmm69qV/wCHNRvRYra3dvNOscd0DGpRHIUtzwdo55Az7V1OK5zxzqP9j+Gb24hhtWbHS4UGLnJLMOM8A8dzituZN7CPPvFPxz0bQdLg3sJNYZFlezWFz8pJ+UnjaxGDzxg55yK8B+Inxv8AF3i0zRWVy2i6ac4trJyHZf8Abk6n8MCvPdXuZtQ1G4url2e4uMzuzHksTk5/E1UtVLI24DJGKylK70VikrFae5uJmBuJpZnOCWkcsfzNXIm2kHODUaQKbjDdhkDPWpkiy3OcCosVsfSPw11S71LwfYahFi9lTMNwExvR1JHI75GD+NdFN4qS3mWCVZIpGGQsg25rl/2fLVIvDF5IgOfPMh/LB/kK7Sxm0rUrGK9vkX98u9YmXLEegUc/0rPFYenTSlFPU3oYib0fQZaWV7rtyrSlorQdWxy3so/rXd2y2mh6Y0khWG3hQu3sB1PuazPBOs2euR3MUMckT2shQb8ZkTswx+R/+vXM/H7Vk0bwRcQiRjc33+jQjHIz95vwGfzFehQpU6UdEctWrOo9XofOXi/WpfEXifU9WmyDdzs6L/dTog/BQKxpj+6YHpikifcpBxkcGor0lYJWzwEJrllq7lrRCecFUVv+H/FutaNGYbDUJ47VvvWxbdEf+Anj8q5Gx3ToHYnFX2YKgx1qdRs+qfh98bdG1KGGz8Qr/Zd6ML533rdz9eq/jx716zp8sc9jBLBIkkTrlXRgVYZ6gjrX5/rMwxiu58AfELW/CTp9gui9mWy9nN80T/h/CfcYquYlxPsm46Rn/pov86mByM1zfg/xLbeLPDNhq1ojRrLIEkiJyY3U4Zc9/Y+hFdIvQ1RBGP8Aj7/7Z/8As1TZ5qH/AJev+2f/ALNU+KQEFv8Adf8A66N/M0l5/wAek3+4f5UW3KN/10b+ZpL3/j1m/wBw/wAqYFa9/wBd+AqFTzipL0/vj9BUUfWpNCZTUgOaiBpw6e9MRLB/yFov91v5VsMAykEAgjBB71j2/OqxH/ZP8q2aZB5T40tNSnl8RyCeOMLp8kHleQDmEuPK6e4k+oY9xXX+AbRo/Dtq12C06PIFL8lQCU4z0yF6Vy/jvx3eaJ4hu7KzNkkFrarNM9xCxwxIA5B5HPTA712XhC+kvdIE080cjGV0XZD5WAD025PPfj19q7qyqKhG6sn/AJEJrmK/xFsBqXhC/ti7Rhgr5T72VYMoX3LACsPwBZXUOrajHdCGSIYEiu/7xJEO3JjHyjcRu3dxiuh8U6zdaVPpkdpYm7FxNtkAIyFBGQoJALfNkc44Nc98Ltc1LXLjUptS3Y3F0GIxtQsQgJTqQFIycdKmHtPq8u3+f/DCduZG/wCN7b+0dIXS4x+/vJUVD/c2nczH2AB/MVV+H9mbOzuI44fItsriJoyrI4GGUsQA2MAZAwcZq3481R9I8MXt3b3EcF2kZMJZN5ZhzgLzngH6dab4CvdS1Dw7Dc6xLDJcuT/qipAHHBKkgn16fSs/e9hfpcenMVPiloK+IfB9za+RLNNG6TQrEMvuBxxyOoLA8jgmq/w00OfSbGaS+sUs7uXAfyxtDqOFyu5vmA4z6HHNWPiFr1/okWnjTPK8y5kkRjJA0u0LEzAgAg/eCjvwTVb4V6xreu6D/aWvvbh5QojjiQLjAO5uCeDkYGe2e9V7/wBX6ct/6/IrS5n674T1K+fUFtSjXcupR3ou5yExsjXaq4BIXPy8cjB5ya7Pw8t0mlQR3qXAmjUKWuJFeRz3ZivHWuI8TeOr7Sb7xAIIPMg06W2VjIyqqBiAwyASQQw68jnjpXceHby4vtHt572LyroqBIhxkH3A6ZGDg8jIzSqxqezXMtP+Av8AgCTVzz/xt8O/7Yu9S1czy297LKHBgdmYqiBYlUDGCW5J5x2716JolgNM0q1s/NklMKBS8jZJNcRr3ifxC09pZ6MmnLfNeNDMrEyKkfOwsc/IGIC8jJzxiu/sZHmtY3l8rzSPn8p9y574OBmivKq6cYzenQIpX0OC0/8A5L5rP/YvWn/pRLXoleeaeP8Ai/Wsn/qXrT/0omrsNMmuJdU1dZZd0EUyRxJtA2jylZue+S1cyRZp0VyHiy5vpbPTt019o5OpbXNuFmkkjQORgKG4bapwQTjgipvC+ravePOdVitoYYT5Cx9J5JAobJGdoyrD5R0IPOK09m+XmJudKIIlBAjQA5JAUd+v504opABUYHQY6VyupeJ57LxTZ6a0doIZ7Z7lg8jiaNUQk5UKRnOAFBJIDHtzgXPj7VG0kXdlZaWyFfPW4+0ytC8CsgeRf3YJGXwM4zgkZxVRozlawNpHpKoqDCKFGc4AxzVXSrIafaC2R90SsxjBHKqSSF/DOK47WfEOqyak8GmXNpHFHZRXCtHEJ0keR2UfOWHy4APC888iqWi+Ltf1G+S3gtrV1a5kZndGKpbrJt3F1IAIwcZGTx6E0KhK1x81lY76HTbSKCaIQI6TMzyCQb95brnPXPT6ACuW8a+KfC/w+0meXUWhtTcKdlpbKBJOcY+VB+W44HvTfjB4xbwT4CvdXtwrXRKQW24ZUSOcBj6gDJ/CviHWNUutcvZb7UruW8upTlppXLMfx9PbpWD0LUpPqdJ4v+I+oayLu20uGPStNuHVmiQ75X252l5DzkZ/hwK85ld45snOH7+pq9KhI+Uc1E6CSMq/eoUUip1JTd5O5nTSEzKCeDSNFksRjNPa1dSRkH0NPtomeB2P97FMkoHcjAr1HIPvX2XpTxeLvBOkaluAmltUfeP4Xxhh+YNfHzQEnGK95/Z51e4htrrSGlMtuv7+ONuq5+9j2zg100FGV4SWjM5Nx1R2bX82mHyb6Jtw6MoyGHrV+08SW77VjJaT0ANdrJY6VqNiftkQVQOexU+oNczJoVzpiedZmWe2D/Mi7TKseeW7biBzgCuHFYONKXuvc7qGJcl7xbtjNfzJEsLF2GRHnBPufQe9V/i9bxaX8HfEobDSSQLvbpubeoGPQDoK7XQLKxt7COexcTxTq
HE6ncZR2Oe9ee/tHzt/wrm+tx/y1KEj0UMMfmf5V14bBxpK+7OetiHUdtkfNiylkxkkV6B4D+K/iHwnstUmW+05elrckkKP9huq/wAvavPI12iopHKMpI74rDUR9keCPip4f8WTwReb/Z2obWBt7lgAx4+4/Rvpwa9DfgV+faSMpDJ+den+APjBrfhsw21+7anpa8eTMx3xj/YfqPoeKpMlx7H1ZGebn/e/9lFUFOKo+DvFGl+LNJn1DR5meLftdHG1422jhh/Xoau0SGh4NLn3pgPpThj8aQHBfHaWNfhnqUM0bSG5khtowoyRI7/KR9CM14zbGXUNJtLNJ2eBoCroRvJboWKn+Lr3r3L4tXVtaeD2nvXRIku4MM/RW3HBr508QPNqlqy+Fo1kuZLgs01rcDEZIP3x/CCBn6muPEwdSSjHcwqu0jroLKKTQ00Wc3F7Ysf9FW6x5mwjBQkdQDyM15pqfhXTjrMdr4JuJF1NWKBzcr5Tt3VCedwP/wBat68TxDbaVpugiQX9zfEQRMrASxN/GoP93b/F259qt6D8J7+w8SrHf+VPpwVJC8ErRyLk9V9wR36isIJ07ty/yITOttNMvdOs7SLUGY3HlKJSTkM4HJH41YCVoa1NJrHiOTS7e6ktHtI0YyzQhllLZO0E98Ac02+tfsc/lFgxwCcdq6qVXm0e5yVINO5VVcVJJIkEDyycIilifYDNJwASTgDms7VLi3urGazjuoRcXELCJd4y2RxWsnYzSL2k6hDqdhFdW5yjjoeqn0NXK4TwFcf2ao029WVL25kaRYtv3FHGT6Zwa7kGpg+ZXHJWdiQHGcnimwTx3C74XDoDjI5Ga5TXNUubvUpdKsojlSFY4698/Suk0m2azsIoXILqOSKiNRym4paIbjZXNEdKUU1DxThWxJFeXVvZW73F5PHBAgy0kjBVH4muE1b4t+HrJmSzW6v3HGYk2If+BN/hXGfHK/jvdahtoLpZUs4gHRGyEkJOQe2cY+leXdaaR20sOpR5pHq2pfGa9fP9naRbxejTyNIfyGBXL6z8TfEeqWctq9xDbxSDDG3j2Nj03ZzXKBflIzVeVNrVaSOhUYrZDM5pWGKdGF3fPSMOeKoo7n4Iaiul/EvR55n2QOzQux6YdSBn8cV9kyWUWo25jZEmibseRXw34JcR+JtLJVT/AKTGMN0+8Otfc9pYtb26XNiWjQjc8ZPT1remtCXoef8AiX4dyoZJ7C2aZOpRD84+mOv86ydBtrSGYRxWUrXQ42shJz7CvWr7VgfKtejyqS2Dj5Rx+tYWq6dY2ipeG8awiRhkBiQW7eWOqv6Bc56FTXl4p0VU5Gd1CU+W5p+HdHlXbPdxYbqkfp7n3rpXYQxnPLnoKxPAniSLxHps6uSt/ZyGG4VozGTgna+09MgcjsciukaHbC7qdzkHbnsa7YQjFWic05uTvI+J/jFdG5+JeuytyfNVc/RAOPauPEgzXYfGeNYvidrcSYIjeNMj/rmuf1zXE5xXPVXvGkdiZ5DGA3UA9DXUeHPGeq6XtFtdNNAOsE53r+HcfhXGTF5Y8qp2Z6nvTLaQxvg1HKJpPRn0BoHxL0i5XZqZfT5sc7wXQ/QgfzFdVZ+JNFvBm21ayf285QfyNfM3mhsZFK23g4B/CmpSRyywkJbaH1dFIHQMjBkPQqcg/jUwbivnX4f64NK8SWjzXLwWRO2cbjswQQMj645r36KZZEV0YMrAEFTkEeoNaxlc4q1F0nYuFhWdrN/DZ2jtK6rxxuOKneTiuc8SwzX7i3VQkO0ZkwDk55GD7Vcd9TOCu7FW2ul1G5MaK0d2wMoYIduCcDGe+PT1rV0/TLq1tthuYXlYMyyOoGT6DnH1qhYxx3GsyJcSxwOkIhESMMsOCT9eBwK0fDepxafbXUF0qf6DI89yfKwfLPI29iSMA57/AFrbnudqjY0FgstA09JDCsMxiM11ISGJJyc7u4xggZrhfB2p2eo3Gqa1dgFrUuQj8iAHJB9Mn+tTeJdfj1myuLGDaqTXSKNp6iTBxxwdqDHFcz4cm+yadrroENpJdMszgZIUZVc9vavOr1E5ehrTo2T8z1DwDKJvDVtfTMjvcBnLqOWG4/pnpW9PCkXnXNugaSQAyRjjzQOR/wAC9+/Q1xPh27mu7GKzsoEEOnwg75Y3CvjsnY4+tXtP1+/1i2E+lPHJcWsix3FkpXby2N4cDJFcqkmuVrQ1nB3c1a5r6tp9pc6X9ot7czRsm9GiOG9QRxXW/C3X11m3ljd3a6tYgkhcYY84BI9cfyrP0prcQLHbwyQty5VjkAlucHoRk5445re8IadBb61e3kMUaNNAqPtXBYh85PrVUcPyVI1Kez6djiinCpynZaL/AMvP/XQfyqTWg4024lt4UmuoI3ltw6bsSBTtwP049ai0Xpc/74/lWkxCgliABySa9VaHUeJ63b6iraQ5ktpFTUb2eDy0O8ZJ3KGz0Ds34EelezW1skAcgbnfHmOerkKFyfwArya3+Ik95KsMdto8ULPP9naRGO5U2tnaDxyck8Y2nrivXYiTEpYgkgEkdD9K7cWpxUVNW3/MiDTbsedfEvS2k1nTb+I/JBazxSQopO+IgF92CMKML09TxwK6bwRBcrolu2pxhr6NTC05mEzSAHk7h0Gc/L2rA8a+LrywGsW9jpkzXEEccVvP5QkVpHYAhh2XlcZ6nPtXS+Crhrrw1ZzOoTcpARYxGqgHGAoAwOKVXn9hHm2/p/qJW5zmPHlhHfaxHMC6W1jEq3DKQEV3YmMtx0XknHIDA12egJImjWazoiOsSrtUEAAcAc89MVyfj/xFqOi6rpVnoX2N7i8dvMidQW4XhmORhePTJxwa7mEuYkMmN+BuwMc96zq83s4X26DjbmZ5R8VPCj33i7TNUs7CS9lkgMM0Zhdo/lYYZioODtd8Z9PXGO3tdKez8Ff2fpZksnFuVjG3c0ORyBnkkZOCcnOM5rkPHHxA1bRdeubDTbawmWG5t4SJAxcrIhZjwwGQQOuB83tXfWt1dQeHlu9UCG6S386ZbdeM7ckLk8+nXmrq+0VOHNt0GrXZwGn+HNQ07XNL1OxsHt7SKytrWVYY4jLKFUEkhueSdpJIIC557ei6ok82jXawR5uHgcIhYD5ipwM9OvevPtM+Il7cXehWz2Jka/thLuCqrSbXKuyru6YAPPC8kk4r0HWLuay02a5tbc3UqAERAkbhkZ6Anpz0qayqc0edahFroeXaF8M30DxXp5stTuEs1XBiXlmRBu3MeAMyMAAB2zk1696fWuH0jxNrepeMLy0hsbN9Ft1j3TrLhgWXJIJOGw2VOB1U9DxXcUsTOpOSdR3dhxSWx558Af8Aklel/wDXe8/9Kpa9ErzX4Iym2+D1nMvWN75x+FzMa7aC9uYvDttdz28t3deRG0kdsoDOxA3bQSB1JOM9BXPbqUaTNivH/wBqbUJLL4O6osLBWup4ID6lS+4j/wAdro9b1rUNO8TanNHdtPbW8MOzTUgzuXDNJJv/AISB3PHAGOa8Z/af8SSXem6FoN8k
)
###Code
#@title **1. Setup** (takes a few minutes)

# Clone github
!git clone https://github.com/sugi-san/sber-swap.git
%cd sber-swap

# load arcface
!wget -P ./arcface_model https://github.com/sberbank-ai/sber-swap/releases/download/arcface/backbone.pth
!wget -P ./arcface_model https://github.com/sberbank-ai/sber-swap/releases/download/arcface/iresnet.py

# load landmarks detector
!wget -P ./insightface_func/models/antelope https://github.com/sberbank-ai/sber-swap/releases/download/antelope/glintr100.onnx
!wget -P ./insightface_func/models/antelope https://github.com/sberbank-ai/sber-swap/releases/download/antelope/scrfd_10g_bnkps.onnx

# load model itself
!wget -P ./weights https://github.com/sberbank-ai/sber-swap/releases/download/sber-swap-v2.0/G_unet_2blocks.pth
# load super res model
!wget -P ./weights https://github.com/sberbank-ai/sber-swap/releases/download/super-res/10_net_G.pth

# Install required libraries
!pip install mxnet-cu101mkl
!pip install onnxruntime-gpu==1.8
!pip install insightface==0.2.1
!pip install kornia==0.5.4

# library import
import cv2
import torch
import time
import os

from utils.inference.image_processing import crop_face, get_final_image, show_images
from utils.inference.video_processing import read_video, get_target, get_final_video, add_audio_from_another_video, face_enhancement
from utils.inference.core import model_inference
from network.AEI_Net import AEI_Net
from coordinate_reg.image_infer import Handler
from insightface_func.face_detect_crop_multi import Face_detect_crop
from arcface_model.iresnet import iresnet100
from models.pix2pix_model import Pix2PixModel
from models.config_sr import TestOptions

# --- Initialize models ---
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
app.prepare(ctx_id=0, det_thresh=0.6, det_size=(640, 640))

# main model for generation
G = AEI_Net(backbone='unet', num_blocks=2, c_id=512)
G.eval()
G.load_state_dict(torch.load('weights/G_unet_2blocks.pth', map_location=torch.device('cpu')))
G = G.cuda()
G = G.half()

# arcface model to get face embedding
netArc = iresnet100(fp16=False)
netArc.load_state_dict(torch.load('arcface_model/backbone.pth'))
netArc = netArc.cuda()
netArc.eval()

# model to get face landmarks
handler = Handler('./coordinate_reg/model/2d106det', 0, ctx_id=0, det_size=640)

# model to make superres of face; set use_sr=True if you want to use super resolution, or use_sr=False if you don't
use_sr = True
if use_sr:
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    torch.backends.cudnn.benchmark = True
    opt = TestOptions()
    #opt.which_epoch = '10_7'
    model = Pix2PixModel(opt)
    model.netG.train()

# suppress warnings
import warnings
warnings.simplefilter('ignore')

# import helper functions
from function import *

# make a download folder
import os
os.makedirs('download', exist_ok=True)
###Output _____no_output_____
###Markdown 
*(003.jpg: embedded screenshot, base64 image data omitted)*
###Code
#@title **2. Display the photos**
display_pic('examples/images')

#@title **3. Face swap (photo)**
source_img = '02.jpg' #@param {type:"string"}
target_img = '01.jpg' #@param {type:"string"}

source_path = 'examples/images/' + source_img
target_path = 'examples/images/' + target_img
source_full = cv2.imread(source_path)

crop_size = 224 # don't change this
batch_size = 40

source = crop_face(source_full, app, crop_size)[0]
source = [source[:, :, ::-1]]

target_full = cv2.imread(target_path)
full_frames = [target_full]
target = get_target(full_frames, app, crop_size)

final_frames_list, crop_frames_list, full_frames, tfm_array_list = model_inference(
    full_frames, source, target, netArc, G, app,
    set_target=False, crop_size=crop_size, BS=batch_size)

result = get_final_image(final_frames_list, crop_frames_list, full_frames[0], tfm_array_list, handler)
cv2.imwrite('examples/results/result.png', result)

#@title **4. Display the images**
import matplotlib.pyplot as plt
show_images([source[0][:, :, ::-1], target_full, result],
            ['Source Image', 'Target Image', 'Swapped Image'],
            figsize=(20, 15))

#@title **5. Download the image**
import shutil

source_name = os.path.splitext(source_img)
target_name = os.path.splitext(target_img)
download_name = 'download/' + source_name[0] + '_' + target_name[0] + '.png'
shutil.copy('examples/results/result.png', download_name)

from google.colab import files
files.download(download_name)
###Output _____no_output_____
###Markdown 
*(002.jpg: embedded screenshot, base64 image data omitted)*
a5JVNytxuI+Ydwa9D1S7sNP8A7Oma68HW17faZb3F0+qJcm5neRAzs7R8EMRmnW96Skgpe6mmb73jDwkIl13Vf7Rx/abW39j2v2v7NjYP3P3cD7397B6YrwXWJYL/AF+5lt7qSWG4n3faLmNYiSxGWZUyF5J4HYV6yuo22o2/iTV0vPCdzrNrp8t5FdaSk63UcoKgSbpOMAHH4jtXlMr6p4q8QsyxNeapeyD5IYgu9sY+6oAHTk/UmiguW7FWd7I9H8Nf2vp32fRtN+IehTwuwihskhe9UknoqGM+/SuO+J1s9p4uuIJ9Q0+/uI0RJnsLcQRo44KbRxuHcj19eK7LQ5/D3ga7j0WRri/8RXzra3t/ZTCNbAOwUxxOQcsM/MRj0yOlec+I9Gk0vxTqekQrNO1rdSQodhLuAxwcD1GDVU/jb/Rain8NjIoooroMAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAfDI0M0csZAdGDrkZ5ByOK6yX4keLXUrHrD2ynqLWCKD/0FQa5Cik4xluilJrZnXTfEHXLr+zxqTWmox2UbxrHfQ+espbq8gY/M+AAD2H1NU5fF2oT+LbHxDcJbPdWbxGGFI/LhRY/uoFHRR7VztFJU4rZD55PqdnL8S/FEkruLy1UMxbaLCA4yc45TNMPxI8UH/l+tv/Bfb/8AxFcfRS9lDsg9pPua3irXJ/Emv3erXcUUU9ztLpFnaCFC8Z+lZ9jdT2F5DdWUrQXMLiSOSPgqw6EVDRVJJKyJbbdy5qup3+r3RudUvbi8uCMeZPIXIHoM9B7CtPQvFmp6LZi0tRaTWyytcJFdWyzKkpUL5gB/iAHGemTWBRQ4pqzQ1Jp3OqtvHesw3V3fSvBdavNgJqNzH5k1svdYs/Kn4DjtVTQvFeo6Sb1H8rULO9ybq0vgZYpmP8ZychwedwOawKKXJHsHPLuBAPUA/hWlpWu6rpEFzDpWo3VlFcgCZYJCm/GcZx9TWbRVNJ6MSdthWJYksSSxySTyTWzrnibVdbsrCzv7gGzsYxHBBEgjjXAxu2jgse5rFooaT1C7QVs6j4m1bUtCstIv7rz7KzctB5igunGNu/rtHYf4DGNRQ0nuCbRorrF2vh99FDR/YHuheEbPm8wLtB3emO1R6RquoaNd/atJvbizuNpXzIHKkg9QfUVSopWQXZNe3VxfXUtzezy3FzKd0ksrFmY+pJrrZPiT4i8izgt5bOCG1t47ZEFnFJlUGASXVjk/XHoBXGUUnCMt0NSa2Z2EnxE1+fT9QsruSznt723a2cfZI4yobHIKKpzx3yPaub0vUr3Srv7Vpt1Na3Gxk8yJtrbWGCKp0UKEVokDk3uxQSCCCcjnOa67/hZXjH7OIR4gvAoGNwCbyPdtu4/nXIUU5RjLdXBScdmKSSSTyTyaSiimSFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAf/Z) ###Code #@title **6.写真と動画の表示** # --- 画像表示 --- print('=== images ===') display_pic('examples/images') # --- 動画表示 --- print('=== videos ===') reset_folder('pic') files = sorted(os.listdir('examples/videos')) for file in files: save_frame(file) display_movie('pic', files) #@title **7.顔の入れ替え(動画)** source_img = '05.jpg' #@param {type:"string"} video = '01.mp4' #@param {type:"string"} source_path = 'examples/images/'+source_img path_to_video = 'examples/videos/'+video source_full = cv2.imread(source_path) OUT_VIDEO_NAME = "examples/results/result.mp4" crop_size = 224 # don't change this batch_size = 40 source = crop_face(source_full, app, crop_size)[0] source = [source[:, :, ::-1]] full_frames, fps = read_video(path_to_video) target = get_target(full_frames, app, crop_size) START_TIME = time.time() final_frames_list, crop_frames_list, full_frames, tfm_array_list = model_inference(full_frames, source, target, netArc, G, app, set_target = False, crop_size=crop_size, BS=batch_size) if use_sr: final_frames_list = face_enhancement(final_frames_list, model) get_final_video(final_frames_list, crop_frames_list, full_frames, tfm_array_list, OUT_VIDEO_NAME, fps, handler) add_audio_from_another_video(path_to_video, OUT_VIDEO_NAME, "audio") print(f'Full pipeline took {time.time() - START_TIME}') print(f"Video saved with path {OUT_VIDEO_NAME}") 
#@title **8. Display the video**
display_mp4('examples/results/result.mp4')

#@title **9. Download the video**
import shutil

source_name = os.path.splitext(source_img)
video_name = os.path.splitext(video)
download_name = 'download/' + source_name[0] + '_' + video_name[0] + '.mp4'
shutil.copy('examples/results/result.mp4', download_name)

from google.colab import files
files.download(download_name)
###Output _____no_output_____
###Markdown 
*(001.jpg: embedded screenshot, base64 image data omitted)*
UUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFdVpek+E59Pt5dR8U3dpdumZYE0t5RG3oGDDP1rla7nR/B1tpthBrfjif7DpkiiS3so2Bur4HkBVH3UPdj+nWoqOy3sXBXex2vhnwloVr8OvFGpR+Ibx9JvoFgad9NZGUJICWVC2XGSF/OvKfEVpo1pNANC1afUo2UmVpbQ2+w54ABJznmu5s/Edx4l0fxzLJFHa2cGjxxWlnF/q7eJZhhR7+p7n8K8vPU1nRjK7cn/Vi6jjZJI7a2RD8F9Qk2L5g12JQ2BnHk9M+lHwkRH13VhIisBo94w3AHBCDBrml1q7j8OS6IGiFhJcrdsCvzeYF2j5vTHauy8EWUvh3wr4g8T6nGYIbqwk07T1k+VriSXALKO6qB16dfSnNWi13YRd5J9jzpPuL3OBXooYeH7C30T4g+EiLLaTb31ugiuU3fNlZB8snXoTx+ledgYAHpXU+HtT8U6hpNz4d0cXmo2N0u02oiMyx85yhP3D75A/nV1FciDsVPBF5fWXi7TJNHj826knWFYH5WZXO1o39VIJB/PtWlrdrqmu+Kv7BdtPjm03zbSANcBY44kZmCea33gudq57DFbttFa/DG2mubqaC68ayxtHBbRMHTTQwwXdhwZMdB2/M15m5MjM0hLsxyS3JJ9TUx998yKfurlZ6d4F8BavZeNNCupp9HaKG8ikdY9SidiAecKDkn2qn4h+H+sXXiHU547nRQk13K67tTiU4LkjIzwfaofhPpa2upnxbqUYh0TRQ05mYYE02CEiT1bJHT29a4W5f7TczTyqvmSu0jcDqxJP86lKTm7Pby/4I24qK0L2kwEeIrG2cKzC8jiIHzKT5gB+or0u1gt/+FmfEVEghEMOnah5aBBtQqqgYGMDFcn8KtJOo+MrO5mwmnaYwv7yZvuxxx/Nz9SAPz9K2fA+oNq/iLx5qTKQ13o+oT7fTeQQP1FKq7t+SHTWi9TzZfuj6V23gjS/FMFu+o6L4Yi1W1ul2LJc2S3EfysQdoJ4Ocg/Sn2Xh+38PeEL7V/FNqBd38Bt9KsZsrIWOM3BHVQo6Z659xXFxzzqFjimmAzhVR2HJ9AK0b500iEuRps9ZA8ef9E90f/wSR/41ieI73xrpejodY0c6fbMk1iZpLcAPFM2/ycEkBVIJTj5exp3jD7V4d+Hnh/QbyaaPVbm4k1S5iMh3woV2Rq3OQSOcexrz6SeaRdsk0rr1wzkj9TWdOHNrpb+vMucraakW9e7L+dG9P76/mK7Ky+I/iGzsre1gbTfJgjWJN+nxMdqjAySMk8dam/4Wh4l/vaX/AOC2H/Ctbz7fj/wDO0e/4f8ABOIBB6EH6GvRfhhoRtHi8V6nZTXNtbP/AMS6yjUmTULkZKqg6lVwWLdOPY1y/ibxVqfiVLdNUNqVtyxTyLZIfvYznaOegpDrniDVtR0sR3t9PfWirBYiEkPGBwAgXv0ye/elNSlG2w4uMZX3O40Cfw34y8ax2lx4SvBd3s7yXU82ryExgZaR2G0dPTjsK861v7FJrl8NFjkGnm4YWqMxZim7C898/wBa9V8b+I5dD8O/ZdRhsv8AhO9Rtvs+oXVsMPFbE52yEceaw4OO34V5Z4cuYLLxDpd1dAG2gu4ZZBj+FXBP6CopX1ktvW5VS2kTuotI0TwxqunaFd6ZHr3ie6khS4SeVhbWZkIxGFUgu4B5JOKt6s3g/UfG+o+GrjRLbSohdtZ2mp2DspjcHaDIhO1lLcEjH9a05dBuPDvj7X/Guuqh0m0mlvbCUuGF7LJkwqmDzjdk+mK4DwNoF94o8RR3D7ksophdX983EcKBt7sWPGTg4FQrSXM30/Et3XupdfwMuaCbw14nkgvrW2uZ9PuSkkEy7opCpxgjup612fhrXm8Q6p9gsvB/gxJvKkmzLaSBdqLuPIY84Fcl451aHXPGGs6nb8W9zcu8ZIxlOik/UAH8a6bwFZv4f8Paz4t1FTDC1nJY6cH4NxPKNuVHdVGcn6+laT1hd7/qRD4rLY3/AAtqVlPpNzr/AIk8HeGbPQYEPlslkyy3cxHyxxbmOeepwQPzx5RbwPqOpxwW6QxSXMwSNC+2NCzYA3HoBnGTU2razqWr/Zv7Uvri7FtGIoRK2QigYwB0HTr1Pes89OuPcVUIct2TKfNZHRN4S1BLfxC0piS60NlF1akkybC20uvYqDjPsQaojRLo+GX10tEtmt2LIKSd7SbN3AxggDrzW7b6/wCJPE3it5NIiB1bUbMafMsCZ86PaFZn3ZAyAMtwBinfEC6tLK00vwtpU6XFtpAdrm4j+7PdP/rGHqFxtB+tJSldJ7jajZtGb4d1fQ9Ps5ItY8MxavO0m5ZmvZINq4A24Xrzk5961f8AhJvCI5/4QGD/AMGs9cT6e/Ar0Twp4ah8O28PinxrE0FnCfMsdOkGJr6Ucr8p5VAcEk/y6lRRWrv97CDk9F+SNj4l6rpugtb6dYaXLbalLoUdlNEbxngtI5PmKbCMs4H8RPfOM1iaeNB0j4eaVquoeHLbVry7vriBnluZYtqoAR90471xuu6pda3rF5qd+4a6upDK5HQZ6AewGAPYV3Meg6rrvwk0FNG0+5vni1O7aQQJuKggAE1DgoRim+uupXM5ybSMpvFHhgKT/wAIDp/Az/yEbj/GovilpthpXi1rfSbRbS0a0t5hCrswUugY8sSepqB/h94vKMB4b1Tof+WP/wBetX4yrJbeO1V1KSxWVplWHRhGOCPqKqPLzpRffrfsJ35XzL8Dgt6f31/MUb0/vr+YruT8UfEpJJbS/wDwWw/4Un/C0PEv97S//BbD/hV3n2X3/wDAItDv+H/BOIBB6EH6GvQ/C2n32p/CTxLBptrPdSrqVtIY4ULttC8nA54rkfEeu33iG+S71I25mSMRDyIFiXaCT0XjPJ5rq/C9/d6b8KfEF1p11Na3Meq2m2WFyrD5T3FTUvyrvdfmVC1321Mvwf4mTw5DeLLa/wCnRyLcWc6RgSRzL8rRuTgmJ0LKy/iBmuc1KeK61C6uLa2S1hllZ0gRiyxAnIUE9hXZz+N7HxDp80PjXSEvL9YmFvqdmBDcbwPlEmOGXOOcfhXBjpz1qoLVtqzFJ6JJ6BRRRVmYUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABTmdn272ZtqhRk5wB0A9B7U2igDV0fW7jSrDV7SCOJ49Ttha
yl85Vd27K4PXPrWXSUUWS1Hc0/D2sz6Fqa31rBZzyqrKFuoRKgz3we46g07xF4h1XxHe/a9avZbqYDCbsBYx6Ko4UfSsqilyq97aj5nawHkV1mpfEPxTf2YtH1aW3tQu3yrRFgBGO+wAn865OihxUt0Ck1sxTyST1PJpKKKZJu+I/Fer+IYbWDUrhfstqoWG2hjEUSYGN2xeMn19+MVhUUUkklZDbb1ZrW/iHUrfw5c6FBOI9OuZhNMioA0hA4DN1K8A4Pen+FfEup+FtQlvdFmSG5khaAs8Yf5SQeAe+QKxqKOVNNW3HzPe5c1XU77V76S81S7mu7qT70srbifb2HsOKveFvEd94ZvZbvTEtftLx7FkngWUxHOdyZ6N7+9YtFDimrdBJtO5Z1C9utSvZry/uJbm6mbdJLI2WY+9VqKKewBRRRQIK6XQPGWo+H9Ims9Gis7S5mYl9QSEfathH3A56D6DNc1RSlFS0Y02tUOkd5ZGkkZnkclmZjksT1JPc02iimIkeaV4o4nllaOP7iFyVT6DoPwrd17xlr+u2UdnqOou1kgAFtEixRcdyqgAn65rnqKTinq0NNotaXevp2o215FHBK8DhxHPGJI2x2ZT1FXvEviTVfEt4tzrF207INsUYAWOJfRFHAFY9FHKr36hd2sFFFFMR0Fj4t1TTvDsmj6a0FnDMW8+eCIJPMp/gaTrtHoMVz9FFJRS2G23uanhvXb3w7qi6hpjQrcqrKpliWQDPfB7jqDUOsatf61fve6tdzXd0/BklbJx6DsB7DiqNFHKr36hd2sFaX9t6l/ZNvpi3cqWUEjzRxodmGb7xyOT+NZtFNpPcL2LBvrwgj7Zdf9/3/AMan1nVr3Wr37Zqc5nufLSLeVA+VRhRx7VQoostwuwooooEFatvrlzb+G73RESE2l3cR3LsQd4ZBgYOcY/CsqihpPcadgooooEFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAf/2Q==) ###Code #@title **10.データアップロード** #@markdown ・selectで写真(images)か動画(videos)を選択して下さい)\ #@markdown ・動画はHD以下、20秒以内にして下さい import os import shutil from google.colab import files import cv2 select = 'videos' #@param ["images", "videos"] # ルートへ画像をアップロード uploaded = files.upload() uploaded = list(uploaded.keys()) # ルートから指定フォルダーへ移動 for file in uploaded: shutil.move(file, 'examples/'+select+'/'+file) ###Output _____no_output_____ ###Markdown **Face Swap:**> Credits: https://github.com/neuralchen/SimSwap **Installation** ###Code # copy github repository into session storage !git clone https://github.com/neuralchen/SimSwap # install python packages !pip install insightface==0.2.1 onnxruntime moviepy imageio==2.4.1 # download model checkpoints !wget -P /content/SimSwap/arcface_model https://github.com/neuralchen/SimSwap/releases/download/1.0/arcface_checkpoint.tar !wget https://github.com/neuralchen/SimSwap/releases/download/1.0/checkpoints.zip !unzip ./checkpoints.zip -d /content/SimSwap/checkpoints !wget -P /content/SimSwap/parsing_model/checkpoint https://github.com/neuralchen/SimSwap/releases/download/1.0/79999_iter.pth !wget --no-check-certificate "https://sh23tw.dm.files.1drv.com/y4mmGiIkNVigkSwOKDcV3nwMJulRGhbtHdkheehR5TArc52UjudUYNXAEvKCii2O5LAmzGCGK6IfleocxuDeoKxDZkNzDRSt4ZUlEt8GlSOpCXAFEkBwaZimtWGDRbpIGpb_pz9Nq5jATBQpezBS6G_UtspWTkgrXHHxhviV2nWy8APPx134zOZrUIbkSF6xnsqzs3uZ_SEX_m9Rey0ykpx9w" -O antelope.zip !unzip ./antelope.zip -d /content/SimSwap/insightface_func/models/ # clean content directory ! 
! rm ./antelope.zip ./checkpoints.zip

# import packages
import os
import cv2
import torch
import fractions
import numpy as np
from PIL import Image
import torch.nn.functional as F
from torchvision import transforms

# move to the SimSwap directory
os.chdir("SimSwap")

# import project modules
from models.models import create_model
from options.test_options import TestOptions
from insightface_func.face_detect_crop_multi import Face_detect_crop
from util.videoswap import video_swap
from util.add_watermark import watermark_image
###Output _____no_output_____
###Markdown **Inference**
###Code
# convert image to tensor
transformer = transforms.Compose([
    transforms.ToTensor(),
])

# Instead of softmax loss, we use arcface loss
transformer_Arcface = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

# denormalize image tensor
detransformer = transforms.Compose([
    transforms.Normalize([0, 0, 0], [1/0.229, 1/0.224, 1/0.225]),
    transforms.Normalize([-0.485, -0.456, -0.406], [1, 1, 1])
])

# Get test options as opt object
opt = TestOptions()

# Hardcode a few parameters on the opt object
opt.initialize()
opt.parser.add_argument('-f')
opt = opt.parse()
opt.pic_a_path = './demo_file/input_picture.png' # Place input picture here
opt.video_path = './demo_file/input_video.mp4' # Place input video here
opt.output_path = './output/demo.mp4' # Target destination for the output
opt.temp_path = './tmp'
opt.Arc_path = './arcface_model/arcface_checkpoint.tar'
opt.isTrain = False # Puts the model in evaluation mode
opt.no_simswaplogo = True # Removes the simswap logo
opt.use_mask = True # New feature, up to date
crop_size = opt.crop_size

torch.nn.Module.dump_patches = True
model = create_model(opt)
model.eval()

app = Face_detect_crop(name='antelope', root='./insightface_func/models')
# reduce det_thresh if the face is not being recognized
app.prepare(ctx_id=0, det_thresh=0.3, det_size=(640, 640))

with torch.no_grad():
    pic_a = opt.pic_a_path
    img_a_whole = cv2.imread(pic_a)
    print(img_a_whole.shape)
    img_a_align_crop, _ = app.get(img_a_whole, crop_size)
    img_a_align_crop_pil = Image.fromarray(cv2.cvtColor(img_a_align_crop[0], cv2.COLOR_BGR2RGB))
    img_a = transformer_Arcface(img_a_align_crop_pil)
    img_id = img_a.view(-1, img_a.shape[0], img_a.shape[1], img_a.shape[2])

    # move the tensor to the GPU
    img_id = img_id.cuda()

    # create the latent id
    img_id_downsample = F.interpolate(img_id, size=(112, 112))
    latend_id = model.netArc(img_id_downsample)
    latend_id = latend_id.detach().to('cpu')
    latend_id = latend_id / np.linalg.norm(latend_id, axis=1, keepdims=True)
    latend_id = latend_id.to('cuda')

    # swap the faces of the input video with the input image
    video_swap(opt.video_path, latend_id, model, app, opt.output_path,
               temp_results_dir=opt.temp_path,
               no_simswaplogo=opt.no_simswaplogo,
               use_mask=opt.use_mask)
###Output _____no_output_____
###Markdown **Display Output Video**
###Code
from IPython.display import HTML
from base64 import b64encode

# path for the input video
input_path = "/content/SimSwap/output/demo.mp4"
# path for the compressed output video
output_path = "/content/SimSwap/output/cmp_demo.mp4"

os.system(f"ffmpeg -i {input_path} -vcodec libx264 {output_path}")

# Show video
mp4 = open(output_path, 'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=1024 controls>
  <source src="%s" type="video/mp4">
</video>
""" % data_url)

! rm /content/SimSwap/output/cmp_demo.mp4
! rm /content/SimSwap/output/demo.mp4
###Output _____no_output_____
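###Markdown **Troubleshooting note:** as the comment in the inference cell says, the detector can miss small or strongly angled faces. The cell below is a minimal sketch of the first thing to try; it reuses the same `prepare()` call from above, and the lower `det_thresh` value is illustrative rather than from the original notebook (0.3 is what the notebook uses; lower values trade precision for recall).
###Code
# Re-initialise the detector with a more permissive threshold (0.15 is a
# hypothetical value, not from the original notebook), then re-run the
# inference cell above.
app.prepare(ctx_id=0, det_thresh=0.15, det_size=(640, 640))
###Output _____no_output_____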
_notebooks/2021-01-19-First-Post.ipynb
###Markdown "First Post"> "Awesome summary" - toc: true- branch: master- badges: true- comments: true- author: Reut Farkash- categories: [testingthings, jupyter] My First Fastpages Notebook Blog Post> Trying to blog to better keep track of the best resources I come across. To create a similar blog:[fastpages github](https://github.com/fastai/fastpages)[1littlecoder tutorial vid](https://www.youtube.com/watch?v=L0boq3zqazI&ab_channel=1littlecoder) Top Deep learning playlists:[CS231n Winter 2016](https://www.youtube.com/watch?v=NfnWJUyUJYU&list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC) - Stanford - Deep learning for computer vision Git / GitHub Python Data Structures and Algorithms Ethics Biology Statistics Causal Inference ###Code ###Output _____no_output_____
notebooks/Voronoi Reflection Trick.ipynb
###Markdown Voronoi Tessellation
Tracking data gives an unparalleled level of detail about the positioning of players and their control of space on the pitch. However, this data can also be difficult to work with and hard to interpret. One solution to these issues is to transform the data in ways that make it easier to analyse further. One such transformation is the Voronoi tessellation, wherein the pitch is broken down into the regions closest to each player. This pitch breakdown gives a rough estimate of the space a player or team has, or how this available space changes over time. Here, we demonstrate how to build a Voronoi tessellation using the existing Scipy implementation combined with one small trick.

1. Data and setup

Plotting a pitch
To help with visualisation we first define a basic pitch plotter using a slightly modified version of the code from [FCPython](https://fcpython.com/visualisation/drawing-pitchmap-adding-lines-circles-matplotlib).
###Code
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt

#Dimensions of the plotted pitch
max_h, max_w = 90, 130

#Creates the pitch plot and returns the axes.
def createPitch():
    #Create figure
    fig = plt.figure(figsize=(13,9))
    ax = plt.subplot(111)

    #Pitch Outline & Centre Line
    plt.plot([0,0],[0,90], color="black")
    plt.plot([0,130],[90,90], color="black")
    plt.plot([130,130],[90,0], color="black")
    plt.plot([130,0],[0,0], color="black")
    plt.plot([65,65],[0,90], color="black")

    #Left Penalty Area
    plt.plot([16.5,16.5],[65,25], color="black")
    plt.plot([0,16.5],[65,65], color="black")
    plt.plot([16.5,0],[25,25], color="black")

    #Right Penalty Area
    plt.plot([130,113.5],[65,65], color="black")
    plt.plot([113.5,113.5],[65,25], color="black")
    plt.plot([113.5,130],[25,25], color="black")

    #Left 6-yard Box
    plt.plot([0,5.5],[54,54], color="black")
    plt.plot([5.5,5.5],[54,36], color="black")
    plt.plot([5.5,0.5],[36,36], color="black")

    #Right 6-yard Box
    plt.plot([130,124.5],[54,54], color="black")
    plt.plot([124.5,124.5],[54,36], color="black")
    plt.plot([124.5,130],[36,36], color="black")

    #Prepare Circles
    centreCircle = plt.Circle((65,45),9.15, color="black", fill=False)
    centreSpot = plt.Circle((65,45),0.8, color="black")
    leftPenSpot = plt.Circle((11,45),0.8, color="black")
    rightPenSpot = plt.Circle((119,45),0.8, color="black")

    #Draw Circles
    ax.add_patch(centreCircle)
    ax.add_patch(centreSpot)
    ax.add_patch(leftPenSpot)
    ax.add_patch(rightPenSpot)

    #Prepare Arcs
    leftArc = mpl.patches.Arc((11,45), height=18.3, width=18.3, angle=0, theta1=310, theta2=50, color="black")
    rightArc = mpl.patches.Arc((119,45), height=18.3, width=18.3, angle=0, theta1=130, theta2=230, color="black")

    #Draw Arcs
    ax.add_patch(leftArc)
    ax.add_patch(rightArc)

    #Tidy Axes
    plt.axis('off')

    #Display Pitch
    return ax

#An example pitch
ax = createPitch()
###Output _____no_output_____
###Markdown Data and Transformations
We begin by grabbing a single frame of x,y positions for 22 players in Tracab format. We then transform these positions to the dimensions of the FCPython pitch.
###Code
#Five frames of tracking data in Tracab format
df = pd.read_csv('../data/tracab-like-frames.csv')

#The dimensions of the tracab pitch
data_w, data_h = 10500, 6800

#Pull the x/y coordinates for the home/away team
h_xs = df[[c for c in df.columns if 'H' in c and '_x' in c]].iloc[0].values
h_ys = df[[c for c in df.columns if 'H' in c and '_y' in c]].iloc[0].values
a_xs = df[[c for c in df.columns if 'A' in c and '_x' in c]].iloc[0].values
a_ys = df[[c for c in df.columns if 'A' in c and '_y' in c]].iloc[0].values

#This transforms the data to the plotting coords we use.
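# (worked example) Tracab coordinates are centred on the pitch midpoint, so the
# centre spot (0, 0) maps to (0 + 10500/2) * (130/10500) = 65 and
# (0 + 6800/2) * (90/6800) = 45, i.e. (65, 45) on the 130 x 90 plotting pitch.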
def transform_data(xs, ys, data_w, data_h, max_w, max_h):
    x_fix = lambda x : (x+data_w/2.)*(max_w / data_w)
    y_fix = lambda y : (y+data_h/2.)*(max_h / data_h)
    p_xs = list(map(x_fix, xs))
    p_ys = list(map(y_fix, ys))
    return p_xs, p_ys

#Home team xs and ys
h_xs, h_ys = transform_data(h_xs, h_ys, data_w, data_h, max_w, max_h)
#Away team xs and ys
a_xs, a_ys = transform_data(a_xs, a_ys, data_w, data_h, max_w, max_h)
###Output _____no_output_____ ###Markdown Plotting the players ###Code
ax = createPitch()
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=90.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=90.)
###Output _____no_output_____ ###Markdown 2. Voronoi Tessellation - First attempt The hard work of performing a Voronoi tessellation is fortunately already implemented as part of the scipy package, which means all we need to do is provide data in the correct form. There is also a plotting function to help visualise the Voronoi tessellation. ###Code
from scipy.spatial import Voronoi

#Combine all of the players into a length 22 list of points
xs = h_xs+a_xs
ys = h_ys+a_ys
ps = [(x,y) for x,y in zip(xs, ys)]

#Perform the voronoi calculation, returns a scipy.spatial Voronoi object
vor = Voronoi(ps)
###Output _____no_output_____ ###Markdown Scipy.spatial provides a method that can plot a Voronoi tessellation onto provided axes. We can combine this with the plotting above to show the Voronoi tessellation of the players. ###Code
from scipy.spatial import voronoi_plot_2d

ax = createPitch()
voronoi_plot_2d(vor, ax, show_vertices=False, show_points=False)
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=90.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=90.)
plt.xlim(-15,145)
plt.ylim(-10,100)
###Output _____no_output_____ ###Markdown 3. Problem - Dealing with pitch boundaries The Voronoi tessellation algorithm doesn't know that we're looking at a bounded box (the pitch) when building the tessellation. As a result, the algorithm identifies polygons for some players with a vertex outside of the pitch. This is not ideal if we want to look at pitch control etc. Note also the dotted lines. These indicate points equidistant from two players and go to infinity - also not ideal for modelling football. Rather than go back and try to build a Voronoi algorithm for ourselves that accounts for the bounded pitch, we can use properties of the Voronoi algorithm to _trick_ it into putting the boundaries where we need them. **The Trick:** By adding the reflection of all players about each of the four touchlines, each touchline necessarily becomes the edge of a polygon found by the Voronoi algorithm. By running the Voronoi algorithm on this extended set of points, and then throwing away all information about points that aren't actually players on the pitch, we end up with a Voronoi tessellation with polygons truncated by the touchlines. This is exactly what we need! ###Code
#Step 1 - Create a bigger set of points by reflecting the player points about all of the axes.
extended_ps = (ps +
               [(-p[0], p[1]) for p in ps] +        #Reflection in left touchline
               [(p[0], -p[1]) for p in ps] +        #Reflection in bottom touchline
               [(2*max_w-p[0], p[1]) for p in ps] + #Reflection in right touchline
               [(p[0], 2*max_h-p[1]) for p in ps]   #Reflection in top touchline
              )

#Step 2 - Create a Voronoi tessellation for this extended point set
vor = Voronoi(extended_ps)

#Step 3 (Optional) - Check that the Voronoi tessellation works correctly and finds the pitch boundaries
# ax = createPitch()
fig=plt.figure(figsize=(13,9))
ax=plt.subplot(111)
e_xs, e_ys = zip(*extended_ps)
voronoi_plot_2d(vor, ax, show_vertices=False, show_points=False, line_colors='k', zorder=0)
ax.scatter(e_xs, e_ys, c='grey', s=20.)
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=20.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=20.)
plt.xlim(-0.5*max_w,1.5*max_w)
plt.ylim(-0.5*max_h,1.5*max_h);

#Step 4 - Throw away the reflected points and their Voronoi polygons, then plot
ax = createPitch()

#Plot the Voronoi regions that contain the player points
for pix, p in enumerate(vor.points):                #Each point in the tessellation has a corresponding region
    region = vor.regions[vor.point_region[pix]]     #Look up that point's region
    if not -1 in region:                            #-1 is a point at infinity, we don't need those polygons
        polygon = [vor.vertices[i] for i in region] #The region polygon as a list of points
        if p[0] in xs and p[1] in ys:               #Only keep regions belonging to actual players
            if p[0] in a_xs and p[1] in a_ys:
                plt.fill(*zip(*polygon), alpha=0.2, c='xkcd:pale red')
            else:
                plt.fill(*zip(*polygon), alpha=0.2, c='xkcd:denim blue')

#Add in the player points
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=90.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=90.)
plt.xlim(0,max_w)
plt.ylim(0,max_h);
###Output _____no_output_____
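###Markdown As a quick extension of the motivation above - estimating how much space each team controls - the truncated polygon areas can be summed with the shoelace formula. This is a sketch rather than part of the original walkthrough; it assumes the `vor`, `ps` and `a_xs`/`a_ys` objects defined above and reports areas in the pitch plot's units. ###Code
import numpy as np

def polygon_area(polygon):
    #Shoelace formula for the area of a simple polygon given as a list of (x, y) vertices
    pxs, pys = map(np.array, zip(*polygon))
    return 0.5 * abs(np.dot(pxs, np.roll(pys, 1)) - np.dot(pys, np.roll(pxs, 1)))

home_area, away_area = 0.0, 0.0
for pix, p in enumerate(vor.points[:len(ps)]): #Only the real (unreflected) players
    region = vor.regions[vor.point_region[pix]]
    if -1 in region:
        continue
    polygon = [vor.vertices[i] for i in region]
    if p[0] in a_xs and p[1] in a_ys:
        away_area += polygon_area(polygon)
    else:
        home_area += polygon_area(polygon)

print(f"Home control: {home_area:.0f} | Away control: {away_area:.0f} | Total pitch: {max_w*max_h}")
###Output _____no_output_____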
7 QUORA INSINCERE QUESTIONN/introducing-bert-with-tensorflow.ipynb
###Markdown BERT BERT, or Bidirectional Encoder Representations from Transformers, is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. The academic paper which describes BERT in detail and provides full results on a number of tasks can be found here: https://arxiv.org/abs/1810.04805. The GitHub repository for the paper can be found here: https://github.com/google-research/bert BERT is a method of pre-training language representations, meaning training of a general-purpose "language understanding" model on a large text corpus (like Wikipedia), and then using that model for downstream NLP tasks (like question answering). BERT outperforms previous methods because it is the first *unsupervised, deeply bidirectional* system for pre-training NLP. ![](https://www.lyrn.ai/wp-content/uploads/2018/11/transformer.png) Downloading all necessary dependencies You will have to turn on internet access for that. This code is a slightly modified version of this colab notebook https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb ###Code
import pandas as pd
import os
import numpy as np
import zipfile
from matplotlib import pyplot as plt
%matplotlib inline
import sys
import datetime

#downloading weights and configuration file for the model
!wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
repo = 'model_repo'
with zipfile.ZipFile("uncased_L-12_H-768_A-12.zip","r") as zip_ref:
    zip_ref.extractall(repo)
!ls 'model_repo/uncased_L-12_H-768_A-12'
!wget https://raw.githubusercontent.com/google-research/bert/master/modeling.py
!wget https://raw.githubusercontent.com/google-research/bert/master/optimization.py
!wget https://raw.githubusercontent.com/google-research/bert/master/run_classifier.py
!wget https://raw.githubusercontent.com/google-research/bert/master/tokenization.py
###Output _____no_output_____ ###Markdown The example below is built on preprocessing code similar to **CoLA**: The Corpus of Linguistic Acceptability is a binary single-sentence classification task, where the goal is to predict whether an English sentence is linguistically "acceptable" or not. You can use a pretrained BERT model for a wide variety of tasks, including classification. The task of CoLA is close to the task of the Quora competition, so I thought it would be interesting to use that example. Obviously, outside sources aren't allowed in the Quora competition, so you won't be able to use BERT to submit a prediction.
###Code
# Available pretrained model checkpoints:
#   uncased_L-12_H-768_A-12: uncased BERT base model
#   uncased_L-24_H-1024_A-16: uncased BERT large model
#   cased_L-12_H-768_A-12: cased BERT base model
# We will use the most basic of all of them
BERT_MODEL = 'uncased_L-12_H-768_A-12'
BERT_PRETRAINED_DIR = f'{repo}/uncased_L-12_H-768_A-12'
OUTPUT_DIR = f'{repo}/outputs'
print(f'***** Model output directory: {OUTPUT_DIR} *****')
print(f'***** BERT pretrained directory: {BERT_PRETRAINED_DIR} *****')

from sklearn.model_selection import train_test_split

train_df = pd.read_csv('../input/train.csv')
train_df = train_df.sample(2000)
train, test = train_test_split(train_df, test_size = 0.1, random_state=42)
train_lines, train_labels = train.question_text.values, train.target.values
test_lines, test_labels = test.question_text.values, test.target.values

import modeling
import optimization
import run_classifier
import tokenization
import tensorflow as tf

def create_examples(lines, set_type, labels=None):
    #Generate data for the BERT model
    guid = f'{set_type}'
    examples = []
    if guid == 'train':
        for line, label in zip(lines, labels):
            text_a = line
            label = str(label)
            examples.append(
                run_classifier.InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
    else:
        for line in lines:
            text_a = line
            label = '0'
            examples.append(
                run_classifier.InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
    return examples

# Model hyperparameters
TRAIN_BATCH_SIZE = 32
EVAL_BATCH_SIZE = 8
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
WARMUP_PROPORTION = 0.1
MAX_SEQ_LENGTH = 128
# Model configs
SAVE_CHECKPOINTS_STEPS = 1000 #if you wish to finetune a model on a larger dataset, use a larger interval
# each checkpoint weighs about 1.5 GB
ITERATIONS_PER_LOOP = 1000
NUM_TPU_CORES = 8
VOCAB_FILE = os.path.join(BERT_PRETRAINED_DIR, 'vocab.txt')
CONFIG_FILE = os.path.join(BERT_PRETRAINED_DIR, 'bert_config.json')
INIT_CHECKPOINT = os.path.join(BERT_PRETRAINED_DIR, 'bert_model.ckpt')
DO_LOWER_CASE = BERT_MODEL.startswith('uncased')

label_list = ['0', '1']
tokenizer = tokenization.FullTokenizer(vocab_file=VOCAB_FILE, do_lower_case=DO_LOWER_CASE)
train_examples = create_examples(train_lines, 'train', labels=train_labels)

tpu_cluster_resolver = None #Since training will happen on GPU, we won't need a cluster resolver
#TPUEstimator also supports training on CPU and GPU. You don't need to define a separate tf.estimator.Estimator.
run_config = tf.contrib.tpu.RunConfig(
    cluster=tpu_cluster_resolver,
    model_dir=OUTPUT_DIR,
    save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS,
    tpu_config=tf.contrib.tpu.TPUConfig(
        iterations_per_loop=ITERATIONS_PER_LOOP,
        num_shards=NUM_TPU_CORES,
        per_host_input_for_training=tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2))

num_train_steps = int(
    len(train_examples) / TRAIN_BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)

model_fn = run_classifier.model_fn_builder(
    bert_config=modeling.BertConfig.from_json_file(CONFIG_FILE),
    num_labels=len(label_list),
    init_checkpoint=INIT_CHECKPOINT,
    learning_rate=LEARNING_RATE,
    num_train_steps=num_train_steps,
    num_warmup_steps=num_warmup_steps,
    use_tpu=False, #If False, training falls back to CPU or GPU, depending on what is available
    use_one_hot_embeddings=True)

estimator = tf.contrib.tpu.TPUEstimator(
    use_tpu=False, #If False, training falls back to CPU or GPU, depending on what is available
    model_fn=model_fn,
    config=run_config,
    train_batch_size=TRAIN_BATCH_SIZE,
    eval_batch_size=EVAL_BATCH_SIZE)

"""
Note: You might see a message 'Running train on CPU'. This really just means that it's running on something other than a Cloud TPU, which includes a GPU.
"""

# Train the model.
print('Please wait...')
train_features = run_classifier.convert_examples_to_features(
    train_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
print('***** Started training at {} *****'.format(datetime.datetime.now()))
print('  Num examples = {}'.format(len(train_examples)))
print('  Batch size = {}'.format(TRAIN_BATCH_SIZE))
tf.logging.info("  Num steps = %d", num_train_steps)
train_input_fn = run_classifier.input_fn_builder(
    features=train_features,
    seq_length=MAX_SEQ_LENGTH,
    is_training=True,
    drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print('***** Finished training at {} *****'.format(datetime.datetime.now()))

"""
There is a weird bug in the original code. When predicting, the estimator returns an empty dict {}, without batch_size. I redefine input_fn_builder and hardcode batch_size, ignoring 'params' for now.
""" def input_fn_builder(features, seq_length, is_training, drop_remainder): """Creates an `input_fn` closure to be passed to TPUEstimator.""" all_input_ids = [] all_input_mask = [] all_segment_ids = [] all_label_ids = [] for feature in features: all_input_ids.append(feature.input_ids) all_input_mask.append(feature.input_mask) all_segment_ids.append(feature.segment_ids) all_label_ids.append(feature.label_id) def input_fn(params): """The actual input function.""" print(params) batch_size = 32 num_examples = len(features) d = tf.data.Dataset.from_tensor_slices({ "input_ids": tf.constant( all_input_ids, shape=[num_examples, seq_length], dtype=tf.int32), "input_mask": tf.constant( all_input_mask, shape=[num_examples, seq_length], dtype=tf.int32), "segment_ids": tf.constant( all_segment_ids, shape=[num_examples, seq_length], dtype=tf.int32), "label_ids": tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32), }) if is_training: d = d.repeat() d = d.shuffle(buffer_size=100) d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder) return d return input_fn predict_examples = create_examples(test_lines, 'test') predict_features = run_classifier.convert_examples_to_features( predict_examples, label_list, MAX_SEQ_LENGTH, tokenizer) predict_input_fn = input_fn_builder( features=predict_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False) result = estimator.predict(input_fn=predict_input_fn) from tqdm import tqdm preds = [] for prediction in tqdm(result): for class_probability in prediction: preds.append(float(class_probability)) results = [] for i in tqdm(range(0,len(preds),2)): if preds[i] < 0.9: results.append(1) else: results.append(0) from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score print(accuracy_score(np.array(results), test_labels)) print(f1_score(np.array(results), test_labels)) ###Output _____no_output_____
notebooks/plotter.ipynb
###Markdown This notebook plots the KN (kilonova) lightcurves ###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
###Output _____no_output_____ ###Markdown Read the data and split it by filter (band) ###Code
data = pd.read_csv('input/gw170817.data', delim_whitespace=True)
for col in data.columns:
    print(col)
#print(data['MJD'])
i_band_mag = data.loc[data['Band'] == 'i']['Mag']
i_band_time = data.loc[data['Band'] == 'i']['MJD']
z_band_mag = data.loc[data['Band'] == 'z']['Mag']
z_band_time = data.loc[data['Band'] == 'z']['MJD']
Y_band_mag = data.loc[data['Band'] == 'Y']['Mag']
Y_band_time = data.loc[data['Band'] == 'Y']['MJD']
r_band_mag = data.loc[data['Band'] == 'r']['Mag']
r_band_time = data.loc[data['Band'] == 'r']['MJD']
g_band_mag = data.loc[data['Band'] == 'g']['Mag']
g_band_time = data.loc[data['Band'] == 'g']['MJD']
u_band_mag = data.loc[data['Band'] == 'u']['Mag']
u_band_time = data.loc[data['Band'] == 'u']['MJD']
#print(i_band_mag)
###Output MJD Band Mag e_mag ###Markdown Make the plots ###Code
scatter = plt.plot(u_band_time, u_band_mag, '.g-')
plt.title('u-band')
plt.xlabel('MJD')
plt.ylabel('u-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(g_band_time, g_band_mag, '.b-')
plt.title('g-band')
plt.xlabel('MJD')
plt.ylabel('g-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(r_band_time, r_band_mag, '.g-')
plt.title('r-band')
plt.xlabel('MJD')
plt.ylabel('r-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(i_band_time, i_band_mag, '.r-')
plt.title('i-band')
plt.xlabel('MJD')
plt.ylabel('i-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(z_band_time, z_band_mag, '.y-')
plt.title('z-band')
plt.xlabel('MJD')
plt.ylabel('z-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(Y_band_time, Y_band_mag, '.b-')
plt.title('Y-band')
plt.xlabel('MJD')
plt.ylabel('Y-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(u_band_time, u_band_mag, label = "u")
scatter = plt.plot(g_band_time, g_band_mag, label = "g")
scatter = plt.plot(r_band_time, r_band_mag, label = "r")
scatter = plt.plot(i_band_time, i_band_mag, label = "i")
scatter = plt.plot(z_band_time, z_band_mag, label = "z")
scatter = plt.plot(Y_band_time, Y_band_mag, label = "Y")
plt.title('All Bands')
plt.xlabel('MJD')
plt.ylabel('Band Magnitude')
plt.xlim(57983,57996)
plt.gca().invert_yaxis()
plt.legend()
plt.show()
###Output _____no_output_____
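###Markdown The six per-band cells above repeat the same plotting pattern; as a sketch over the same `data` frame, a loop over the bands could produce the individual figures more compactly: ###Code
for band, style in [('u','.g-'), ('g','.b-'), ('r','.g-'), ('i','.r-'), ('z','.y-'), ('Y','.b-')]:
    sel = data.loc[data['Band'] == band]
    plt.plot(sel['MJD'], sel['Mag'], style)
    plt.title(f'{band}-band')
    plt.xlabel('MJD')
    plt.ylabel(f'{band}-band magnitude')
    plt.xlim(57980,57996)
    plt.gca().invert_yaxis()
    plt.show()
###Output _____no_output_____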
nbs/effect_prediction.ipynb
###Markdown Variant effect prediction The variant effect prediction parts integrated in `concise` are designed to extract importance scores for a single nucleotide variant in a given sequence. For a multi-task model, predictions are made for each output individually. In this short tutorial we will be using a small model to explain the basic functionality and outputs. At the moment there are three different effect scores to be chosen from. All of them require as input: * The input sequence with the variant with its reference genotype * The input sequence with the variant with its alternative genotype * Both aforementioned sequences in reverse-complement * Information on where (which basepair, 0-based) the mutation is placed in the forward sequences The following variant scores are available: * In-silico mutagenesis (ISM): - Predict the outputs of the sequences containing the reference and alternative genotype of the variant and use the differential output as an effect score. * Gradient-based score * Dropout-based score Calculating effect scores Firstly we will need a trained model and a set of input sequences containing the variants we want to look at. For this tutorial we will be using a small model: ###Code
from effect_demo_setup import *
from concise.models import single_layer_pos_effect as concise_model
import numpy as np

# Generate training data for the model, use a 1000bp sequence
param, X_feat, X_seq, y, id_vec = load_example_data(trim_seq_len = 1000)

# Generate the model
dc = concise_model(pooling_layer="sum",
                   init_motifs=["TGCGAT", "TATTTAT"],
                   n_splines=10,
                   n_covariates=0,
                   seq_length=X_seq.shape[1],
                   **param)

# Train the model
dc.fit([X_seq], y, epochs=1, validation_data=([X_seq], y))

# In order to select the right output of a potential multitask model we have to generate a list of output labels, which will be used alongside the model itself.
model_output_annotation = np.array(["output_1"])
###Output Using TensorFlow backend. ###Markdown As with any prediction that you want to make with a model, the input sequences must fit the input dimensions of your model; in this case the reference and alternative sequences, in their forward and reverse-complement state, have to have the shape [?, 1000, 4]. We will be storing the dataset in a dictionary for convenience: ###Code
import h5py

dataset_path = "%s/data/sample_hqtl_res.hdf5"%concise_demo_data_path
dataset = {}
with h5py.File(dataset_path, "r") as ifh:
    ref = ifh["test_in_ref"].value
    alt = ifh["test_in_alt"].value
    dirs = ifh["test_out"]["seq_direction"].value

    # This dataset is stored with forward and reverse-complement sequences in an interlaced manner
    assert(dirs[0] == b"fwd")
    dataset["ref"] = ref[::2,...]
    dataset["alt"] = alt[::2,...]
    dataset["ref_rc"] = ref[1::2,...]
    dataset["alt_rc"] = alt[1::2,...]
    dataset["y"] = ifh["test_out"]["type"].value[::2]

    # The sequences are centered around the mutation, with the mutation occurring at position 500 in the forward sequences
    dataset["mutation_position"] = np.array([500]*dataset["ref"].shape[0])
###Output _____no_output_____
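###Markdown A side note, as a sketch rather than part of the original tutorial: if only the forward sequences were stored, the reverse complements could be derived directly from the one-hot arrays. Assuming the channel order is A, C, G, T, complementing maps channel i to 3-i, so flipping both the base axis and the channel axis suffices: ###Code
# reverse complement of a batch of one-hot sequences shaped [n, seq_len, 4],
# assuming A, C, G, T channel order
ref_rc_derived = dataset["ref"][:, ::-1, ::-1]
assert ref_rc_derived.shape == dataset["ref_rc"].shape
###Output _____no_output_____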
The following input arguments are available for all functions:

* model: Keras model
* ref: Input sequence with the reference genotype in the mutation position
* ref_rc: Reverse complement of the 'ref' argument
* alt: Input sequence with the alternative genotype in the mutation position
* alt_rc: Reverse complement of the 'alt' argument
* mutation_positions: Position on which the mutation was placed in the forward sequences
* out_annotation_all_outputs: Output labels of the model.
* out_annotation: Select for which of the outputs (in case of a multi-task model) the predictions should be calculated.

The `out_annotation` argument is not required. We will now run the available predictions individually. ###Code
from concise.effects.ism import ism
from concise.effects.gradient import gradient_pred
from concise.effects.dropout import dropout_pred

ism_result = ism(model = dc,
                 ref = dataset["ref"],
                 ref_rc = dataset["ref_rc"],
                 alt = dataset["alt"],
                 alt_rc = dataset["alt_rc"],
                 mutation_positions = dataset["mutation_position"],
                 out_annotation_all_outputs = model_output_annotation,
                 diff_type = "diff")

gradient_result = gradient_pred(model = dc,
                                ref = dataset["ref"],
                                ref_rc = dataset["ref_rc"],
                                alt = dataset["alt"],
                                alt_rc = dataset["alt_rc"],
                                mutation_positions = dataset["mutation_position"],
                                out_annotation_all_outputs = model_output_annotation)

dropout_result = dropout_pred(model = dc,
                              ref = dataset["ref"],
                              ref_rc = dataset["ref_rc"],
                              alt = dataset["alt"],
                              alt_rc = dataset["alt_rc"],
                              mutation_positions = dataset["mutation_position"],
                              out_annotation_all_outputs = model_output_annotation)

gradient_result
###Output _____no_output_____ ###Markdown The output of all functions is a dictionary; please refer to the individual chapters further on for an explanation of the individual values. Every dictionary contains pandas dataframes as values. Every column of a dataframe is named according to the values given in the `out_annotation_all_outputs` labels and contains the respective predicted scores. Convenience function For convenience there is also a function available which enables the execution of all functions in one call. Additional arguments of the `effect_from_model` function are:

* methods: A list of prediction functions to be executed. Using the same function more often than once (even with different parameters) will overwrite the results of the previous calculation of that function.
* extra_args: None or a list of the same length as 'methods'. The elements of the list are dictionaries with additional arguments that should be passed on to the respective functions in 'methods'. Arguments defined here will overwrite arguments that are passed to all methods.
* **argv: Additional arguments to be passed on to all methods, e.g.: out_annotation.
###Code
from concise.effects.snp_effects import effect_from_model

# Define the parameters:
params = {"methods": [gradient_pred, dropout_pred, ism],
          "model": dc,
          "ref": dataset["ref"],
          "ref_rc": dataset["ref_rc"],
          "alt": dataset["alt"],
          "alt_rc": dataset["alt_rc"],
          "mutation_positions": dataset["mutation_position"],
          "extra_args": [None, {"dropout_iterations": 60},
                         {"rc_handling" : "maximum", "diff_type":"diff"}],
          "out_annotation_all_outputs": model_output_annotation,
          }

results = effect_from_model(**params)
###Output _____no_output_____ ###Markdown Again the returned value is a dictionary containing the results of the individual calculations; the keys are the names of the executed functions: ###Code
print(results.keys())
###Output _____no_output_____
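###Markdown To make that structure concrete, here is a sketch (assuming the function-name keys and the nested dictionary-of-dataframes layout described above) that walks the combined results: ###Code
# outer keys are the executed function names; each value is a dictionary of
# pandas dataframes whose columns follow out_annotation_all_outputs
for method_name, method_result in results.items():
    for score_name, df in method_result.items():
        print(method_name, score_name, df["output_1"].values[:5])
###Output _____no_output_____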
08_TorchText/pytorch-seq2seq-modern/2_Learning_Phrase_Representations_using_RNN_Encoder_Decoder_for_Statistical_Machine_Translation.ipynb
###Markdown 2 - Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation In this second notebook on sequence-to-sequence models using PyTorch and TorchText, we'll be implementing the model from [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078). This model will achieve improved test perplexity whilst only using a single-layer RNN in both the encoder and the decoder. Introduction Let's remind ourselves of the general encoder-decoder model. ![](assets/seq2seq1.png) We use our encoder (green) over the embedded source sequence (yellow) to create a context vector (red). We then use that context vector with the decoder (blue) and a linear layer (purple) to generate the target sentence. In the previous model, we used a multi-layered LSTM as the encoder and decoder. ![](assets/seq2seq4.png) One downside of the previous model is that the decoder is trying to cram lots of information into the hidden states. Whilst decoding, the hidden state will need to contain information about the whole of the source sequence, as well as all of the tokens that have been decoded so far. By alleviating some of this information compression, we can create a better model! We'll also be using a GRU (Gated Recurrent Unit) instead of an LSTM (Long Short-Term Memory). Why? Mainly because that's what they did in the paper (this paper also introduced GRUs) and also because we used LSTMs last time. To understand how GRUs (and LSTMs) differ from standard RNNs, check out [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) link. Is a GRU better than an LSTM? [Research](https://arxiv.org/abs/1412.3555) has shown they're pretty much the same, and both are better than standard RNNs. Preparing Data All of the data preparation will be (almost) the same as last time, so we'll very briefly detail what each code block does. See the previous notebook for a recap. We'll import PyTorch, TorchText, spaCy and a few standard modules. ###Code
! pip install spacy==3.0.6 --quiet
###Output  |████████████████████████████████| 12.8MB 226kB/s  |████████████████████████████████| 51kB 9.1MB/s  |████████████████████████████████| 9.1MB 22.1MB/s  |████████████████████████████████| 624kB 40.1MB/s  |████████████████████████████████| 460kB 51.8MB/s ###Markdown You might need to restart the Runtime after installing the spaCy models ###Code
! python -m spacy download en_core_web_sm --quiet
! python -m spacy download de_core_news_sm --quiet
import torch
import torch.nn as nn
import torch.optim as optim

# legacy torchtext imports from older versions of this tutorial, no longer needed
# from torchtext.legacy.datasets import Multi30k
# from torchtext.legacy.data import Field, BucketIterator

import spacy
import numpy as np

import random
import math
import time
from typing import *
###Output _____no_output_____ ###Markdown Then set a random seed for deterministic results/reproducibility. ###Code
SEED = 1234

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output _____no_output_____ ###Markdown Previously we reversed the source (German) sentence, however in the paper we are implementing they don't do this, so neither will we. Load our data.
###Code
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torchtext.datasets import Multi30k

SRC_LANGUAGE = 'de'
TGT_LANGUAGE = 'en'

# Place-holders
token_transform = {}
vocab_transform = {}

# Create source and target language tokenizers. Make sure to install the dependencies.
# the 'language' should be a fully qualified name, since shortcuts like `de` and `en` are deprecated in spaCy 3.0+
token_transform[SRC_LANGUAGE] = get_tokenizer('spacy', language='de_core_news_sm')
token_transform[TGT_LANGUAGE] = get_tokenizer('spacy', language='en_core_web_sm')

# Training, Validation and Test data Iterators
train_iter, val_iter, test_iter = Multi30k(split=('train', 'valid', 'test'),
                                           language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))
train_list, val_list, test_list = list(train_iter), list(val_iter), list(test_iter)
train_list[0]
print(f"Number of training examples: {len(train_iter)}")
print(f"Number of validation examples: {len(val_iter)}")
print(f"Number of testing examples: {len(test_iter)}")
###Output Number of training examples: 29000 Number of validation examples: 1014 Number of testing examples: 1000 ###Markdown Then create our vocabulary. With `min_freq=1` every token seen in training is kept; any token not seen in training is mapped to the `<unk>` token at lookup time. ###Code
# helper function to yield list of tokens
def yield_tokens(data_iter: Iterable, language: str) -> List[str]:
    language_index = {SRC_LANGUAGE: 0, TGT_LANGUAGE: 1}

    for data_sample in data_iter:
        yield token_transform[language](data_sample[language_index[language]])

# Define special symbols and indices
UNK_IDX, PAD_IDX, BOS_IDX, EOS_IDX = 0, 1, 2, 3
# Make sure the tokens are in order of their indices to properly insert them in vocab
special_symbols = ['<unk>', '<pad>', '<bos>', '<eos>']

for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
    # Create torchtext's Vocab object
    vocab_transform[ln] = build_vocab_from_iterator(yield_tokens(train_list, ln),
                                                    min_freq=1,
                                                    specials=special_symbols,
                                                    special_first=True)

# Set UNK_IDX as the default index. This index is returned when the token is not found.
# If not set, it throws RuntimeError when the queried token is not found in the Vocabulary.
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
    vocab_transform[ln].set_default_index(UNK_IDX)
###Output _____no_output_____ ###Markdown Finally, define the `device` and create our data loaders.
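###Markdown A quick sanity check of the vocabulary behaviour (a sketch): the Vocab object maps a list of tokens to a list of indices, and thanks to `set_default_index` anything unseen falls back to `UNK_IDX`. ###Code
vocab = vocab_transform[TGT_LANGUAGE]
print(vocab(['a', 'man', 'xyzzy-not-a-real-token'])) # the last entry maps to 0 (UNK_IDX)
###Output _____no_output_____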
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

from torch.nn.utils.rnn import pad_sequence

# helper function to club together sequential operations
def sequential_transforms(*transforms):
    def func(txt_input):
        for transform in transforms:
            txt_input = transform(txt_input)
        return txt_input
    return func

# function to add BOS/EOS and create tensor for input sequence indices
def tensor_transform(token_ids: List[int]):
    return torch.cat((torch.tensor([BOS_IDX]),
                      torch.tensor(token_ids),
                      torch.tensor([EOS_IDX])))

# src and tgt language text transforms to convert raw strings into tensors of indices
text_transform = {}
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
    text_transform[ln] = sequential_transforms(token_transform[ln], #Tokenization
                                               vocab_transform[ln], #Numericalization
                                               tensor_transform) # Add BOS/EOS and create tensor

# function to collate data samples into batch tensors
def collate_fn(batch):
    src_batch, tgt_batch = [], []
    for src_sample, tgt_sample in batch:
        src_batch.append(text_transform[SRC_LANGUAGE](src_sample.rstrip("\n")))
        tgt_batch.append(text_transform[TGT_LANGUAGE](tgt_sample.rstrip("\n")))

    src_batch = pad_sequence(src_batch, padding_value=PAD_IDX)
    tgt_batch = pad_sequence(tgt_batch, padding_value=PAD_IDX)
    return src_batch, tgt_batch

from torch.utils.data import DataLoader

BATCH_SIZE = 128

train_dataloader = DataLoader(train_list, batch_size=BATCH_SIZE, collate_fn=collate_fn)
val_dataloader = DataLoader(val_list, batch_size=BATCH_SIZE, collate_fn=collate_fn)
test_dataloader = DataLoader(test_list, batch_size=BATCH_SIZE, collate_fn=collate_fn)
###Output _____no_output_____ ###Markdown Building the Seq2Seq Model Encoder The encoder is similar to the previous one, with the multi-layer LSTM swapped for a single-layer GRU. We also don't pass the dropout as an argument to the GRU, as that dropout is used between each layer of a multi-layered RNN. As we only have a single layer, PyTorch will display a warning if we try to pass a dropout value to it. Another thing to note about the GRU is that it only requires and returns a hidden state; there is no cell state like in the LSTM. $$\begin{align*}h_t &= \text{GRU}(e(x_t), h_{t-1})\\(h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})\\h_t &= \text{RNN}(e(x_t), h_{t-1})\end{align*}$$ From the equations above, it looks like the RNN and the GRU are identical. Inside the GRU, however, is a number of *gating mechanisms* that control the information flow in to and out of the hidden state (similar to an LSTM). Again, for more info, check out [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) excellent post. The rest of the encoder should be very familiar from the last tutorial: it takes in a sequence, $X = \{x_1, x_2, ... , x_T\}$, passes it through the embedding layer, recurrently calculates hidden states, $H = \{h_1, h_2, ..., h_T\}$, and returns a context vector (the final hidden state), $z=h_T$. $$h_t = \text{EncoderGRU}(e(x_t), h_{t-1})$$ This is identical to the encoder of the general seq2seq model, with all the "magic" happening inside the GRU (green). ![](assets/seq2seq5.png) ###Code
class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, dropout):
        super().__init__()

        self.hid_dim = hid_dim

        self.embedding = nn.Embedding(input_dim, emb_dim) #no dropout as only one layer!
        self.rnn = nn.GRU(emb_dim, hid_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, src):

        #src = [src len, batch size]

        embedded = self.dropout(self.embedding(src))

        #embedded = [src len, batch size, emb dim]

        outputs, hidden = self.rnn(embedded) #no cell state!

        #outputs = [src len, batch size, hid dim * n directions]
        #hidden = [n layers * n directions, batch size, hid dim]

        #outputs are always from the top hidden layer

        return hidden
###Output _____no_output_____ ###Markdown Decoder The decoder is where the implementation differs significantly from the previous model, and we alleviate some of the information compression. Instead of the GRU in the decoder taking just the embedded target token, $d(y_t)$, and the previous hidden state, $s_{t-1}$, as inputs, it also takes the context vector $z$. $$s_t = \text{DecoderGRU}(d(y_t), s_{t-1}, z)$$ Note how this context vector, $z$, does not have a $t$ subscript, meaning we re-use the same context vector returned by the encoder for every time-step in the decoder. Before, we predicted the next token, $\hat{y}_{t+1}$, with the linear layer, $f$, only using the top-layer decoder hidden state at that time-step, $s_t$, as $\hat{y}_{t+1}=f(s_t^L)$. Now, we also pass the embedding of the current token, $d(y_t)$, and the context vector, $z$, to the linear layer. $$\hat{y}_{t+1} = f(d(y_t), s_t, z)$$ Thus, our decoder now looks something like this: ![](assets/seq2seq6.png) Note, the initial hidden state, $s_0$, is still the context vector, $z$, so when generating the first token we are actually inputting two identical context vectors into the GRU. How do these two changes reduce the information compression? Well, hypothetically the decoder hidden states, $s_t$, no longer need to contain information about the source sequence, as it is always available as an input. Thus, they only need to contain information about what tokens have been generated so far. The addition of $d(y_t)$ to the linear layer also means this layer can directly see what the token is, without having to get this information from the hidden state. However, this hypothesis is just a hypothesis, it is impossible to determine how the model actually uses the information provided to it (don't listen to anyone that says differently). Nevertheless, it is a solid intuition and the results seem to indicate that this modification is a good idea! Within the implementation, we will pass $d(y_t)$ and $z$ to the GRU by concatenating them together, so the input dimensions to the GRU are now `emb_dim + hid_dim` (as the context vector will be of size `hid_dim`). The linear layer will take $d(y_t)$, $s_t$ and $z$, also by concatenating them together, hence its input dimensions are now `emb_dim + hid_dim*2`. We also don't pass a value of dropout to the GRU as it only uses a single layer. `forward` now takes a `context` argument. Inside of `forward`, we concatenate $d(y_t)$ and $z$ as `emb_con` before feeding to the GRU, and we concatenate $d(y_t)$, $s_t$ and $z$ together as `output` before feeding it through the linear layer to receive our predictions, $\hat{y}_{t+1}$.
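Before looking at the code, a toy-size sanity check (a sketch) of the two concatenations just described: ###Code
emb_dim, hid_dim, batch = 256, 512, 4
d_y = torch.zeros(1, batch, emb_dim) # embedded token, d(y_t)
z = torch.zeros(1, batch, hid_dim)   # context vector
print(torch.cat((d_y, z), dim = 2).shape) # GRU input: [1, 4, emb_dim + hid_dim]
s_t = torch.zeros(batch, hid_dim)         # decoder hidden state for the linear layer
print(torch.cat((d_y.squeeze(0), s_t, z.squeeze(0)), dim = 1).shape) # linear input: [4, emb_dim + hid_dim*2]
###Output _____no_output_____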
###Code
class Decoder(nn.Module):
    def __init__(self, output_dim, emb_dim, hid_dim, dropout):
        super().__init__()

        self.hid_dim = hid_dim
        self.output_dim = output_dim

        self.embedding = nn.Embedding(output_dim, emb_dim)

        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)

        self.fc_out = nn.Linear(emb_dim + hid_dim * 2, output_dim)

        self.dropout = nn.Dropout(dropout)

    def forward(self, input, hidden, context):

        #input = [batch size]
        #hidden = [n layers * n directions, batch size, hid dim]
        #context = [n layers * n directions, batch size, hid dim]

        #n layers and n directions in the decoder will both always be 1, therefore:
        #hidden = [1, batch size, hid dim]
        #context = [1, batch size, hid dim]

        input = input.unsqueeze(0)

        #input = [1, batch size]

        embedded = self.dropout(self.embedding(input))

        #embedded = [1, batch size, emb dim]

        emb_con = torch.cat((embedded, context), dim = 2)

        #emb_con = [1, batch size, emb dim + hid dim]

        output, hidden = self.rnn(emb_con, hidden)

        #output = [seq len, batch size, hid dim * n directions]
        #hidden = [n layers * n directions, batch size, hid dim]

        #seq len, n layers and n directions will always be 1 in the decoder, therefore:
        #output = [1, batch size, hid dim]
        #hidden = [1, batch size, hid dim]

        output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)), dim = 1)

        #output = [batch size, emb dim + hid dim * 2]

        prediction = self.fc_out(output)

        #prediction = [batch size, output dim]

        return prediction, hidden
###Output _____no_output_____ ###Markdown Seq2Seq Model Putting the encoder and decoder together, we get: ![](assets/seq2seq7.png) Again, in this implementation we need to ensure the hidden dimensions in both the encoder and the decoder are the same. Briefly going over all of the steps:

- the `outputs` tensor is created to hold all predictions, $\hat{Y}$
- the source sequence, $X$, is fed into the encoder to receive a `context` vector
- the initial decoder hidden state is set to be the `context` vector, $s_0 = z = h_T$
- we use a batch of `<sos>` tokens as the first `input`, $y_1$
- we then decode within a loop:
  - inserting the input token $y_t$, previous hidden state, $s_{t-1}$, and the context vector, $z$, into the decoder
  - receiving a prediction, $\hat{y}_{t+1}$, and a new hidden state, $s_t$
  - we then decide if we are going to teacher force or not, setting the next input as appropriate (either the ground truth next token in the target sequence or the highest predicted next token)
###Code
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super().__init__()

        self.encoder = encoder
        self.decoder = decoder
        self.device = device

        assert encoder.hid_dim == decoder.hid_dim, \
            "Hidden dimensions of encoder and decoder must be equal!"

    def forward(self, src, trg, teacher_forcing_ratio = 0.5):

        #src = [src len, batch size]
        #trg = [trg len, batch size]
        #teacher_forcing_ratio is probability to use teacher forcing
        #e.g.
if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time

        batch_size = trg.shape[1]
        trg_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim

        #tensor to store decoder outputs
        outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)

        #last hidden state of the encoder is the context
        context = self.encoder(src)

        #context also used as the initial hidden state of the decoder
        hidden = context

        #first input to the decoder is the <sos> tokens
        input = trg[0,:]

        for t in range(1, trg_len):

            #insert input token embedding, previous hidden state and the context state
            #receive output tensor (predictions) and new hidden state
            output, hidden = self.decoder(input, hidden, context)

            #place predictions in a tensor holding predictions for each token
            outputs[t] = output

            #decide if we are going to use teacher forcing or not
            teacher_force = random.random() < teacher_forcing_ratio

            #get the highest predicted token from our predictions
            top1 = output.argmax(1)

            #if teacher forcing, use actual next token as next input
            #if not, use predicted token
            input = trg[t] if teacher_force else top1

        return outputs
###Output _____no_output_____ ###Markdown Training the Seq2Seq Model The rest of this tutorial is very similar to the previous one. We initialise our encoder, decoder and seq2seq model (placing it on the GPU if we have one). As before, the embedding dimensions and the amount of dropout used can be different between the encoder and the decoder, but the hidden dimensions must remain the same. ###Code
INPUT_DIM = len(vocab_transform[SRC_LANGUAGE])
OUTPUT_DIM = len(vocab_transform[TGT_LANGUAGE])
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5

enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Seq2Seq(enc, dec, device).to(device)
###Output _____no_output_____ ###Markdown Next, we initialize our parameters. The paper states the parameters are initialized from a normal distribution with a mean of 0 and a standard deviation of 0.01, i.e. $\mathcal{N}(0, 0.01)$. It also states we should initialize the recurrent parameters to a special initialization, however to keep things simple we'll also initialize them to $\mathcal{N}(0, 0.01)$. ###Code
def init_weights(m):
    for name, param in m.named_parameters():
        nn.init.normal_(param.data, mean=0, std=0.01)

model.apply(init_weights)
###Output _____no_output_____ ###Markdown We print out the number of parameters. Even though we only have a single layer RNN for our encoder and decoder we actually have **more** parameters than the last model. This is due to the increased size of the inputs to the GRU and the linear layer. However, it is not a significant amount of parameters and causes a minimal amount of increase in training time (~3 seconds per epoch extra). ###Code
def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
###Output The model has 24,728,918 trainable parameters ###Markdown We initialize our optimizer. ###Code
optimizer = optim.Adam(model.parameters())
###Output _____no_output_____ ###Markdown We also initialize the loss function, making sure to ignore the loss on `<pad>` tokens. ###Code
criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX)
###Output _____no_output_____ ###Markdown We then create the training loop...
###Code
def train(model, iterator, optimizer, criterion, clip):

    model.train()

    epoch_loss = 0

    for i, batch in enumerate(iterator):

        src, trg = batch
        src, trg = src.to(device), trg.to(device)

        optimizer.zero_grad()

        output = model(src, trg)

        #trg = [trg len, batch size]
        #output = [trg len, batch size, output dim]

        output_dim = output.shape[-1]

        output = output[1:].view(-1, output_dim)
        trg = trg[1:].view(-1)

        #trg = [(trg len - 1) * batch size]
        #output = [(trg len - 1) * batch size, output dim]

        loss = criterion(output, trg)

        loss.backward()

        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)

        optimizer.step()

        epoch_loss += loss.item()

    return epoch_loss / len(iterator)
###Output _____no_output_____ ###Markdown ...and the evaluation loop, remembering to set the model to `eval` mode and turn off teacher forcing. ###Code
def evaluate(model, iterator, criterion):

    model.eval()

    epoch_loss = 0

    with torch.no_grad():

        for i, batch in enumerate(iterator):

            src, trg = batch
            src, trg = src.to(device), trg.to(device)

            output = model(src, trg, 0) #turn off teacher forcing

            #trg = [trg len, batch size]
            #output = [trg len, batch size, output dim]

            output_dim = output.shape[-1]

            output = output[1:].view(-1, output_dim)
            trg = trg[1:].view(-1)

            #trg = [(trg len - 1) * batch size]
            #output = [(trg len - 1) * batch size, output dim]

            loss = criterion(output, trg)

            epoch_loss += loss.item()

    return epoch_loss / len(iterator)
###Output _____no_output_____ ###Markdown We'll also define the function that calculates how long an epoch takes. ###Code
def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs
###Output _____no_output_____ ###Markdown Then, we train our model, saving the parameters that give us the best validation loss. ###Code
N_EPOCHS = 10
CLIP = 1

best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):

    start_time = time.time()

    train_loss = train(model, train_dataloader, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, val_dataloader, criterion)

    end_time = time.time()

    epoch_mins, epoch_secs = epoch_time(start_time, end_time)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'tut2-model.pt')

    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. PPL: {math.exp(valid_loss):7.3f}')
###Output Epoch: 01 | Time: 1m 2s
	Train Loss: 4.385 | Train PPL:  80.200
	 Val. Loss: 5.010 |  Val. PPL: 149.942
Epoch: 02 | Time: 1m 2s
	Train Loss: 4.083 | Train PPL:  59.338
	 Val. Loss: 4.814 |  Val. PPL: 123.220
Epoch: 03 | Time: 1m 2s
	Train Loss: 3.759 | Train PPL:  42.918
	 Val. Loss: 4.510 |  Val. PPL:  90.930
Epoch: 04 | Time: 1m 2s
	Train Loss: 3.412 | Train PPL:  30.329
	 Val. Loss: 4.313 |  Val. PPL:  74.698
Epoch: 05 | Time: 1m 2s
	Train Loss: 3.065 | Train PPL:  21.426
	 Val. Loss: 4.271 |  Val. PPL:  71.570
Epoch: 06 | Time: 1m 2s
	Train Loss: 2.789 | Train PPL:  16.265
	 Val. Loss: 4.204 |  Val. PPL:  66.965
Epoch: 07 | Time: 1m 2s
	Train Loss: 2.525 | Train PPL:  12.494
	 Val. Loss: 4.161 |  Val. PPL:  64.145
Epoch: 08 | Time: 1m 2s
	Train Loss: 2.309 | Train PPL:  10.064
	 Val. Loss: 4.163 |  Val. PPL:  64.271
Epoch: 09 | Time: 1m 2s
	Train Loss: 2.117 | Train PPL:   8.305
	 Val. Loss: 4.168 |  Val. PPL:  64.570
Epoch: 10 | Time: 1m 2s
	Train Loss: 1.988 | Train PPL:   7.299
	 Val. Loss: 4.139 |  Val.
PPL: 62.737 ###Markdown Finally, we test the model on the test set using these "best" parameters. ###Code model.load_state_dict(torch.load('tut2-model.pt')) test_loss = evaluate(model, test_dataloader, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |') ###Output | Test Loss: 4.094 | Test PPL: 59.971 |
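###Markdown As a final sketch - not part of the original tutorial - the trained model can translate a single sentence greedily, reusing the transforms defined earlier and decoding until the `<eos>` token: ###Code
def translate(model, sentence, max_len = 50):
    model.eval()
    src = text_transform[SRC_LANGUAGE](sentence).unsqueeze(1).to(device) # [src len, 1]
    with torch.no_grad():
        context = model.encoder(src)
    hidden = context
    input = torch.tensor([BOS_IDX]).to(device)
    tokens = []
    for _ in range(max_len):
        with torch.no_grad():
            output, hidden = model.decoder(input, hidden, context)
        top1 = output.argmax(1)
        if top1.item() == EOS_IDX:
            break
        tokens.append(top1.item())
        input = top1
    itos = vocab_transform[TGT_LANGUAGE].get_itos()
    return ' '.join(itos[t] for t in tokens)

print(translate(model, "Ein Mann geht die Straße entlang."))
###Output _____no_output_____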
pymks/fmks/tests/non_periodic.ipynb
###Markdown Implement Masking and Test Issue 517 Testing for weighted masks and a fix for [517](https://github.com/materialsinnovation/pymks/issues/517). ###Code
import dask.array as da
import numpy as np
from pymks.fmks import correlations
from pymks import plot_microstructures

A = da.from_array(np.array([
    [
        [1, 0, 0],
        [0, 1, 1],
        [1, 1, 0]
    ],
    [
        [0, 0, 1],
        [1, 0, 0],
        [0, 0, 1]
    ]
]))

mask = np.ones((2,3,3))
mask[:,2,1:] = 0
mask = da.from_array(mask)

plot_microstructures(A[0], A[1], titles=['Structure[0]', 'Structure[1]'], cmap='gray', figsize_weight=2.5)
plot_microstructures(mask[0], mask[1], titles=['Mask[0]', 'Mask[1]'], cmap='viridis', figsize_weight=2.5)
###Output _____no_output_____ ###Markdown Check that periodic still works The normalization occurs in the two_point_stats function and the auto-correlation/cross-correlation occur in the cross_correlation function. Checking that the normalization is properly calculated. First is the auto-correlation. Second is the cross-correlation. ###Code
correct = (correlations.cross_correlation(A, A).compute() / 9).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A).compute().round(3).astype(np.float64)
assert (correct == tested).all()
correct = (correlations.cross_correlation(A, 1-A).compute() / 9).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output _____no_output_____ ###Markdown Check that masked periodic works Two-point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In masked periodic, we assume that vectors going across the boundary of the structure come back on the other side. However, a vector landing in the masked area is discarded (i.e. not included in the correlation sum). Below are the hand-computed correlation and normalization. The correct 2-point stats are the correlation divided by the normalization. First is the auto-correlation and second is the cross-correlation. ###Code
correct_periodic_mask_auto = np.array([
    [
        [2,1,2],
        [1,4,1],
        [2,1,2]
    ],
    [
        [1,0,0],
        [0,2,0],
        [0,0,1]
    ]
])
correct_periodic_mask_cross = np.array([
    [
        [1,3,1],
        [2,0,2],
        [1,1,1]
    ],
    [
        [0,1,2],
        [2,0,2],
        [1,2,0]
    ]
])
norm_periodic_mask = np.array([
    [5,5,5],
    [6,7,6],
    [5,5,5]
])

# Auto-Correlation
correct = (correct_periodic_mask_auto / norm_periodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64)
assert (correct == tested).all()

# Cross-Correlation
correct = (correct_periodic_mask_cross / norm_periodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output _____no_output_____ ###Markdown Test that non-periodic works Two-point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In non-periodic, we assume that a vector used to count up 2-point states can only connect two states in the structure. A vector going outside of the bounds of the structure is not counted. Below are the hand-computed correlation and normalization. The correct 2-point stats are the correlation divided by the normalization. First is the auto-correlation and second is the cross-correlation.
###Code
correct_nonperiodic_auto = np.array([
    [
        [1,1,2],
        [2,5,2],
        [2,1,1]
    ],
    [
        [0,0,0],
        [0,3,0],
        [0,0,0]
    ]
])
correct_nonperiodic_cross = np.array([
    [
        [2,3,1],
        [1,0,2],
        [0,2,1]
    ],
    [
        [1,2,1],
        [2,0,1],
        [1,2,1]
    ]
])
norm_nonperiodic = np.array([
    [4,6,4],
    [6,9,6],
    [4,6,4]
])

# Auto-Correlation
correct = (correct_nonperiodic_auto / norm_nonperiodic).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()

# Cross-Correlation
correct = (correct_nonperiodic_cross / norm_nonperiodic).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output _____no_output_____ ###Markdown Check that non-periodic masking works In non-periodic masking, vectors that go across the boundary or land in a mask are not included in the sum. ###Code
correct_nonperiodic_mask_auto = np.array([
    [
        [1,0,1],
        [1,4,1],
        [1,0,1]
    ],
    [
        [0,0,0],
        [0,2,0],
        [0,0,0]
    ]
])
correct_nonperiodic_mask_cross = np.array([
    [
        [1,3,1],
        [1,0,1],
        [0,1,0]
    ],
    [
        [0,1,1],
        [1,0,1],
        [1,2,0]
    ]
])
norm_nonperiodic_mask = np.array([
    [2,4,3],
    [4,7,4],
    [3,4,2]
])

# Auto-Correlation
correct = (correct_nonperiodic_mask_auto / norm_nonperiodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()

# Cross-Correlation
correct = (correct_nonperiodic_mask_cross / norm_nonperiodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output _____no_output_____ ###Markdown Check that different sized dask arrays are valid masks. We want to be able to specify the same mask for each sample. We also want to be able to specify a different mask for each sample. This validates that both are possible. ###Code
A = da.random.random([1000,3,3])
mask_same4all = da.random.randint(0,2,[3,3])
mask_same4some = da.random.randint(0,2,[100,3,3])
mask_diff4all = da.random.randint(0,2,[1000,3,3])

correlations.two_point_stats(A, A, mask=mask_same4all)

# The following check fails. Therefore, the current implementation
# only works with one mask for all samples or a different mask for every sample,
# which is feature-rich enough for me.
# correlations.two_point_stats(A, A, mask=mask_same4some)

correlations.two_point_stats(A, A, mask=mask_diff4all)
###Output _____no_output_____ ###Markdown Check that booleans and integers are valid masks A mask could be true and false, specifying where there is a microstructure. However, it could also be any value in the range $[0,1]$, which specifies the probability that a value is correctly assigned. The mask right now only implements confidence in a single phase, although ideally it should represent the confidence in all phases. However, for the use cases where there are 2 phases, a mask with a probability for one phase also completely describes the confidence in the other phase. Therefore, this implementation is complete for 2 phases.
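As a small illustration of that 2-phase argument (a sketch, treating the mask values as probabilities for phase 1): ###Code
probs_phase1 = da.random.random([1000,3,3]) # P(voxel belongs to phase 1)
probs_phase0 = 1 - probs_phase1             # the implied confidence in phase 0
correlations.two_point_stats(A, A, mask=probs_phase1)
###Output _____no_output_____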
###Code mask_int = da.random.randint(0,2,[1000,3,3]) mask_bool = mask_int.copy().astype(bool) print(mask_int.dtype, mask_bool.dtype) correlations.two_point_stats(A, A, mask=mask_int) correlations.two_point_stats(A, A, mask=mask_bool) ###Output int64 bool ###Markdown Implement Masking and Test Issue 517Testing for weighted masks and fix [517](https://github.com/materialsinnovation/pymks/issues/517). ###Code import dask.array as da import numpy as np from pymks.fmks import correlations from pymks import plot_microstructures A = da.from_array(np.array([ [ [1, 0, 0], [0, 1, 1], [1, 1, 0] ], [ [0, 0, 1], [1, 0, 0], [0, 0, 1] ] ])) mask = np.ones((2,3,3)) mask[:,2,1:] = 0 mask = da.from_array(mask) plot_microstructures(A[0], A[1], titles=['Structure[0]', 'Structure[1]'], cmap='gray', figsize_weight=2.5) plot_microstructures(mask[0], mask[1], titles=['Mask[0]', 'Mask[1]'], cmap='viridis', figsize_weight=2.5) ###Output _____no_output_____ ###Markdown Check that periodic still worksThe normalization occurs in the two_point_stats function and the auto-correlation/cross-correlation occur in the cross_correlation function. Checking that the normalization is properly calculated.First is the auto-correlation. Second is the cross-correlation. ###Code correct = (correlations.cross_correlation(A, A).compute() / 9).round(3).astype(np.float64) tested = correlations.two_point_stats(A, A).compute().round(3).astype(np.float64) assert (correct == tested).all() correct = (correlations.cross_correlation(A, 1-A).compute() / 9).round(3).astype(np.float64) tested = correlations.two_point_stats(A, 1-A).compute().round(3).astype(np.float64) assert (correct == tested).all() ###Output _____no_output_____ ###Markdown Check that masked periodic worksTwo point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In masked periodic, we assume that vectors going across the boundary of the structure come back on the other side. However, a vector landing in the masked area is discarded (ie not included in the correlation sum).Below, are the hand computed correlation and normalization. The correct 2point stats are the correlation divided by the normalization. First, is the auto-correlation and second is the cross-correlation. ###Code correct_periodic_mask_auto = np.array([ [ [2,1,2], [1,4,1], [2,1,2] ], [ [1,0,0], [0,2,0], [0,0,1] ] ]) correct_periodic_mask_cross = np.array([ [ [1,3,1], [2,0,2], [1,1,1] ], [ [0,1,2], [2,0,2], [1,2,0] ] ]) norm_periodic_mask = np.array([ [5,5,5], [6,7,6], [5,5,5] ]) # Auto-Correlation correct = (correct_periodic_mask_auto / norm_periodic_mask).round(3).astype(np.float64) tested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64) assert (correct == tested).all() # Cross-Correlation correct = (correct_periodic_mask_cross / norm_periodic_mask).round(3).astype(np.float64) tested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64) assert (correct == tested).all() ###Output _____no_output_____ ###Markdown Test that non-periodic worksTwo point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In non-periodic, we assume that a vector used to count up 2 point states can only connect two states in the structure. A vector going outside of the bounds of the structure is not counted.Below, are the hand computed correlation and normalization. 
The correct 2point stats are the correlation divided by the normalization. First, is the auto-correlation and second is the cross-correlation. ###Code correct_nonperiodic_auto = np.array([ [ [1,1,2], [2,5,2], [2,1,1] ], [ [0,0,0], [0,3,0], [0,0,0] ] ]) correct_nonperiodic_cross = np.array([ [ [2,3,1], [1,0,2], [0,2,1] ], [ [1,2,1], [2,0,1], [1,2,1] ] ]) norm_nonperiodic = np.array([ [4,6,4], [6,9,6], [4,6,4] ]) # Auto-Correlation correct = (correct_nonperiodic_auto / norm_nonperiodic).round(3).astype(np.float64) tested = correlations.two_point_stats(A, A, periodic_boundary=False).compute().round(3).astype(np.float64) assert (correct == tested).all() # Cross-Correlation correct = (correct_nonperiodic_cross / norm_nonperiodic).round(3).astype(np.float64) tested = correlations.two_point_stats(A, 1-A, periodic_boundary=False).compute().round(3).astype(np.float64) assert (correct == tested).all() ###Output _____no_output_____ ###Markdown Check that non-periodic masking worksIn non-periodic masking, vectors that go across the boundary or land in a mask are not included in the sum. ###Code correct_nonperiodic_mask_auto = np.array([ [ [1,0,1], [1,4,1], [1,0,1] ], [ [0,0,0], [0,2,0], [0,0,0] ] ]) correct_nonperiodic_mask_cross = np.array([ [ [1,3,1], [1,0,1], [0,1,0] ], [ [0,1,1], [1,0,1], [1,2,0] ] ]) norm_nonperiodic_mask = np.array([ [2,4,3], [4,7,4], [3,4,2] ]) # Auto-Correlation correct = (correct_nonperiodic_mask_auto / norm_nonperiodic_mask).round(3).astype(np.float64) tested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64) assert (correct == tested).all() # Cross-Correlation correct = (correct_nonperiodic_mask_cross / norm_nonperiodic_mask).round(3).astype(np.float64) tested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64) assert (correct == tested).all() ###Output _____no_output_____ ###Markdown Check that different sized dask arrays are valid masks.We want to be able to specify the same mask for each sample. We also want to be able to specify a different mask for each sample. This validates that both are possible. ###Code A = da.random.random([1000,3,3]) mask_same4all = da.random.randint(0,2,[3,3]) mask_same4some = da.random.randint(0,2,[100,3,3]) mask_diff4all = da.random.randint(0,2,[1000,3,3]) correlations.two_point_stats(A, A, mask=mask_same4all) # The following check fails. Therefore, the current implementation # only works for one mask for all or different mask for all, which # is feature rich enough for me. # correlations.two_point_stats(A, A, mask=mask_same4some) correlations.two_point_stats(A, A, mask=mask_diff4all); ###Output _____no_output_____ ###Markdown Some check that boolean and integers are valid masksA mask could be true and false specifying where there is a microstructure. However, it could also be any value in the range $[0,1]$ which specifies the probability a value is correctly assigned. The mask right now only implements confidence in a single phase, although idealy it should represent the confidence in all phases. However, for the use cases where there are 2 phases, a mask with a probability for one phase also completely describes the confidence in the other phase. Therefore, this implementation is complete for 2 phases. 
###Code mask_int = da.random.randint(0,2,[1000,3,3]) mask_bool = mask_int.copy().astype(bool) print(mask_int.dtype, mask_bool.dtype) correlations.two_point_stats(A, A, mask=mask_int) correlations.two_point_stats(A, A, mask=mask_bool); ###Output int64 bool
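###Markdown Finally, a quick sketch of a weighted (probabilistic) mask, following the discussion above that float values in $[0,1]$ express the confidence in a single phase. This is an extra illustration, not one of the original hand-checked tests:
###Code
# A probabilistic mask: each value in [0, 1] is the confidence that the
# voxel belongs to the unmasked phase. It is passed to two_point_stats
# exactly like the boolean and integer masks above.
mask_prob = da.random.random([1000, 3, 3])
correlations.two_point_stats(A, A, mask=mask_prob);
###Output _____no_output_____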
F20/deep_learning_cross_validation.ipynb
###Markdown Deep Learning Pipeline for Random Cross-Validation Before you run the next block, please make sure you download the Waveform Data folder from Rice Box. Note that you only need to download the patient folders which have labelled events associated with them (to save space), which are currently patients: 1, 2, 3, 4, 7, 8, 13, 14, 15, 16, 17, 18, 19, 20, 22. However, make sure to keep each patient within their own folder, and keep all patients together in a Waveform Data folder. Also, make sure to download the Labelled_Events.xlsx file from the GitHub repo. Save both of these to a place where the local version of this notebook has access, and make sure you know the local paths. Please also download ECG_feature_extraction.py, ECG_preprocessing.py, PPG_preprocessing.py, data_generator.py and CNN_models.py. Detailed information about these .py files can be found in the README file. Make sure you have all required packages installed and up to date using the requirements.txt file (pip install -r requirements.txt). ###Code import h5py import pywt import numpy as np import os import random from glob import glob # from sklearn.model_selection import train_test_split from ECG_feature_extraction import * from ECG_preprocessing import * from PPG_preprocessing import * from os import listdir import pandas as pd ###Output _____no_output_____ ###Markdown Loading CWT images for training the deep learning modelThree parameters need to be provided for this section:patient_folder_path: the local path of the folder containing all Waveform Data (the folder containing a folder for each patient)excel_file_path: the local path of the Labelled_Events.xlsx filesave_path: the path of the folder where you want to save the "CWT images" (these will be used for modelling). ###Code def load_event_cwt_images(save_path,patient_folder_path,excel_file_path,excel_sheet_name='PJ',fs=240): ''' load cwt features input: save_path: it is the folder path to save these np.array files patient_folder_path: it is the folder containing different patients' data excel_file_path: the path for the labelled event excel excel_sheet_name: it is the labelled event that you plan to work with.
Basically save the same events into a folder call the same name as the excel_sheet_name fs: sampling frequncy output: no return value but you can check the saved file based on your save_path ''' labelevent = pd.read_excel(excel_file_path,sheet_name=excel_sheet_name) count = 1 # save_path = save_path+excel_sheet_name+'/' for _,record in labelevent.iterrows(): label_record = record.tolist() patient_id,event_start_time,event_end_time = label_record patient_file_path = patient_folder_path+'/'+str(int(patient_id)) for block_file in listdir(patient_file_path): # trying to find the ecg signal and ppg signal during the label event time block_path = patient_file_path+'/'+block_file all_signals = h5py.File(block_path, 'r') signals_keys = set(all_signals.keys()) block_start_time,block_end_time = all_signals['time'][0],all_signals['time'][-1] if block_start_time <= event_start_time <= event_end_time <= block_end_time: start_index = int((event_start_time-block_start_time)*fs) end_index = int((event_end_time-block_start_time)*fs) #event_time = all_signals['time'][start_index:end_index +1] ecg, ppg = None, None if 'GE_WAVE_ECG_2_ID' in signals_keys: ecg = all_signals['GE_WAVE_ECG_2_ID'][start_index:end_index +1] if 'GE_WAVE_SPO2_WAVE_ID' in signals_keys: ppg = all_signals['GE_WAVE_SPO2_WAVE_ID'][start_index:end_index +1] # print("loaded ppg: ", ppg) if ppg is None or ecg is None: continue # ECG signal preprocessing for denoising and R-peak detection R_peak_index,ecg_denoise = ecg_preprocessing_final(ecg) # the location of R_peak during the label event ppg_denoise = PPG_denoising(ppg) ## extract cwt features for ecg signal and ppg signal ecg_cwt = compute_cwt_features(ecg_denoise,R_peak_index,scales = np.arange(1,129),windowL=-240,windowR=240,wavelet = 'morl') ppg_cwt = compute_cwt_features(ppg_denoise,R_peak_index,scales = np.arange(1,129),windowL=-240,windowR=240,wavelet = 'coif') if len(ecg_cwt)!=len(ppg_cwt): raise Exception("The beat length is not correct!!! Please check!") if not ecg_cwt or not ppg_cwt: continue for i in range(len(ecg_cwt)): combined = np.stack((ecg_cwt[i],ppg_cwt[i]),axis=-1) np.save(save_path+str(count)+'_'+excel_sheet_name,combined) # temp = ecg_cwt[i] # temp = np.reshape(temp,(128,480,1)) # np.save(save_path+str(count)+'_'+excel_sheet_name,temp) count+=1 return def load_cwt_files(patient_folder_path,excel_file_path,save_path,label_type= ['PJ','PJRP','PO','PP','PS','PVC']): ''' Implements function load_event_cwt_images to generate cwt features and then save into a specific folder Arguments: patient_folder_path: the path of the folder which save the patients' waveforms excel_file_path: the path of the excel file which contains the label events save_path: the folder path to save cwt features label_types: a default list containing labels Returns: no return ''' for label in label_type: load_event_cwt_images(save_path,patient_folder_path,excel_file_path,excel_sheet_name=label) ############# you should modify this line to change these respective paths based on the instructions ############################################## load_cwt_files(patient_folder_path='I:/COMP549/data',excel_file_path='I:/COMP549/events/Labelled_Events.xlsx',save_path='I:/COMP549/cwt_features_images_ecg/') ###Output After detrend before wavelet: [146.67326 167.97446 173.65193 ... 
172.30809 172.17563 170.23831] ###Markdown set up the library for deep learning if any error generated at this step, please update the required libraries ###Code import time import os #from data_generator import get_train_valid_generator #from losses import make_loss, dice_coef_clipped, binary_crossentropy, dice_coef, ceneterline_loss import tensorflow as tf import time #import matplotlib.pyplot as plt # -------------------------- set gpu using tf --------------------------- # import tensorflow as tf # import time # config = tf.ConfigProto() # config.gpu_options.allow_growth = True # session = tf.Session(config=config) # ------------------- start importing keras module --------------------- from keras.callbacks import (ModelCheckpoint, CSVLogger, TensorBoard, EarlyStopping) # import tensorflow.keras.backend.tensorflow_backend as K from keras.optimizers import Adam from CNN_models import * import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Train the deep learning model Please provide the following information:EPOCHS: epoch number for training BATCH_SIZE: batch size for trainingDATA_DIR: the path of folder containing "cwt images"LOG_DIR: the path of folder you would like to save the training logVAL_SIZE: the percentage of test dataset ###Code ##############################Please modify this part ############################################### EPOCHS = 20 BATCH_SIZE = 16#8 DATA_DIR = 'I:/COMP549/cwt_features_images_ecg' #I:/COMP549/cwt_features_images' LOG_DIR = "./log" VAL_SIZE = 0.15 ######################################################################################################### def summarize_diagnostics(history): # you could use this function to plot the result fig, ax = plt.subplots(1,2, figsize=(20, 10)) # plot loss ax[0].set_title('Loss Curves', fontsize=20) ax[0].plot(history.history['loss'], label='train') ax[0].plot(history.history['val_loss'], label='test') ax[0].set_xlabel('Epochs', fontsize=15) ax[0].set_ylabel('Loss', fontsize=15) ax[0].legend(fontsize=15) # plot accuracy ax[1].set_title('Classification Accuracy', fontsize=20) ax[1].plot(history.history['accuracy'], label='train') ax[1].plot(history.history['val_accuracy'], label='test') ax[1].set_xlabel('Epochs', fontsize=15) ax[1].set_ylabel('Accuracy', fontsize=15) ax[1].legend(fontsize=15) def train(): model = twoLayerCNN(input_size=(32,120,2)) #model = VGG(input_shape=(128,480,2)) model.summary() # model.load_weights(pre_model_path) # model.compile(optimizer=Adam(lr=3e-4), loss=make_loss('bce_dice'), # metrics=[dice_coef, binary_crossentropy, ceneterline_loss, dice_coef_clipped]) model.compile(loss=tf.keras.losses.categorical_crossentropy, optimizer= Adam(lr=3e-5), metrics=['accuracy']) print("got twolayerCNN") model_name = 'twolayerCNN_ecg-{}'.format(int(time.time())) if not os.path.exists("./results/"): os.mkdir('./results') if not os.path.exists("./weights/"): os.mkdir('./weights') save_model_weights = "./weights/ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.hdf5" print('Fitting model...') start_time = time.time() tensorboard = TensorBoard(log_dir = LOG_DIR, write_images=True) earlystop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',patience=3, verbose=1, mode='min') checkpoint = tf.keras.callbacks.ModelCheckpoint(save_model_weights, monitor="val_loss", mode = "min", verbose=1, save_best_only=True, save_weights_only=True) csv_logger = CSVLogger('./results/{}_train.log'.format(model_name)) train_gen, valid_gen, num_train, num_valid = 
get_train_valid_generator(data_dir=DATA_DIR,batch_size=BATCH_SIZE,val_size = VAL_SIZE) history = model.fit(x = train_gen, validation_data=valid_gen, epochs=EPOCHS, steps_per_epoch=(num_train+BATCH_SIZE-1)//BATCH_SIZE, validation_steps=(num_valid+BATCH_SIZE-1)//BATCH_SIZE, callbacks=[earlystop, checkpoint, tensorboard, csv_logger]) end_time = time.time() print("Training time(h):", (end_time - start_time) / 3600) summarize_diagnostics(history) if __name__ == "__main__": train() ###Output conv1 shape : (None, 32, 120, 32) conv2 shape: (None, 16, 60, 64) Model: "functional_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 32, 120, 2)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 32, 120, 32) 608 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 16, 60, 32) 0 _________________________________________________________________ dropout (Dropout) (None, 16, 60, 32) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 16, 60, 64) 18496 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 30, 64) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 8, 30, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 15360) 0 _________________________________________________________________ dense (Dense) (None, 2) 30722 ================================================================= Total params: 49,826 Trainable params: 49,826 Non-trainable params: 0 _________________________________________________________________ got twolayerCNN Fitting model... Epoch 1/20 1/5947 [..............................] - ETA: 0s - loss: 0.6836 - accuracy: 0.5000WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01. Instructions for updating: use `tf.profiler.experimental.stop` instead. 2/5947 [..............................] - ETA: 1:03:53 - loss: 0.6557 - accuracy: 0.5938WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.5071s vs `on_train_batch_end` time: 0.7835s). Check your callbacks. 15/5947 [..............................] - ETA: 52:08 - loss: 0.5923 - accuracy: 0.7542
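###Markdown The pipeline above relies on `get_train_valid_generator` from data_generator.py, which is not shown in this notebook. Purely as a hedged sketch (an assumption — the real implementation may differ, e.g. it appears to resize the (128, 480) CWT images down to the model's (32, 120) input), a helper with the same signature could look like this, given the `'<count>_<label>.npy'` naming convention used by `load_event_cwt_images`:
###Code
# Hedged sketch of a generator/split helper with the same signature as
# get_train_valid_generator from data_generator.py (not the actual code).
import numpy as np
from glob import glob
from sklearn.model_selection import train_test_split
from keras.utils import Sequence, to_categorical

class CwtSequence(Sequence):
    def __init__(self, files, batch_size, classes):
        self.files, self.batch_size, self.classes = files, batch_size, classes
    def __len__(self):
        return (len(self.files) + self.batch_size - 1) // self.batch_size
    def __getitem__(self, idx):
        batch = self.files[idx * self.batch_size:(idx + 1) * self.batch_size]
        x = np.stack([np.load(f) for f in batch])
        # The label is encoded in the file name: '<count>_<label>.npy'
        y = [self.classes.index(f.split('_')[-1].replace('.npy', '')) for f in batch]
        return x, to_categorical(y, num_classes=len(self.classes))

def get_train_valid_generator_sketch(data_dir, batch_size, val_size):
    files = sorted(glob(data_dir + '/*.npy'))
    # Derive the class list from the file names rather than hard-coding it
    classes = sorted({f.split('_')[-1].replace('.npy', '') for f in files})
    train_files, valid_files = train_test_split(files, test_size=val_size)
    return (CwtSequence(train_files, batch_size, classes),
            CwtSequence(valid_files, batch_size, classes),
            len(train_files), len(valid_files))
###Output _____no_output_____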
notebooks/tpore_survival_analysis_same_membrane.ipynb
###Markdown Load data ###Code df = pd.read_csv(f"{processed_data_dir}data.csv").drop("Unnamed: 0", axis=1) df.Replica = df.membrane df.Replica = df.Replica.astype("category") df["Replica_enc"] = df.Replica.cat.codes category_dic = {i: cat for i, cat in enumerate(np.unique(df["Replica"]))} category_dic n_categories = len(category_dic) dummies = pd.get_dummies(df.Replica, prefix="Replica") for col in dummies.columns: df[col] = dummies[col] df.tpore = df.tpore * 10 df.tpore = df.tpore.astype(int) df.head() ###Output _____no_output_____ ###Markdown Visualize Data ###Code df["tpore"].groupby(df["Replica"]).describe() _ = df["tpore"].hist(by=df["Replica"], sharex=True, density=True, bins=10) _ = df["tpore"].hist(bins=50) ###Output _____no_output_____ ###Markdown Visualize Priors These are the shapes of the priors used. ###Code beta = 1 alpha = 5 d = st.gamma(scale=1 / beta, a=alpha) x = np.linspace(0, 10, 100) tau_0_pdf = d.pdf(x) plt.plot(x, tau_0_pdf, "k-", lw=2) plt.xlabel("lambda0(t)") ###Output _____no_output_____ ###Markdown Prepare data ###Code n_sims = df.shape[0] sims = np.arange(n_sims) interval_length = 15 # 1.5 ns interval_bounds = np.arange(0, df.tpore.max() + interval_length + 1, interval_length) n_intervals = interval_bounds.size - 1 intervals = np.arange(n_intervals) last_period = np.floor((df.tpore - 0.01) / interval_length).astype(int) pore = np.zeros((n_sims, n_intervals)) pore[sims, last_period] = np.ones(n_sims) exposure = ( np.greater_equal.outer(df.tpore.values, interval_bounds[:-1]) * interval_length ) exposure[sims, last_period] = df.tpore - interval_bounds[last_period] ###Output _____no_output_____ ###Markdown Run Model ###Code with pm.Model() as model: lambda0 = pm.Gamma("lambda0", 5, 1, shape=n_intervals) beta = pm.Normal("beta", 0, sigma=100, shape=(n_categories)) lambda_ = pm.Deterministic( "lambda_", T.outer(T.exp(T.dot(beta, dummies.T)), lambda0) ) mu = pm.Deterministic("mu", exposure * lambda_) exp_beta = pm.Deterministic("exp_beta", np.exp(beta)) obs = pm.Poisson( "obs", mu, observed=pore, ) pm.model_to_graphviz(model) %%time if infer: with model: trace = pm.sample(1000, tune=1000, random_seed=RANDOM_SEED, return_inferencedata=True, cores=8) else: trace=load_trace(model_path, url_data) if infer: trace.posterior = trace.posterior.reset_index( ["beta_dim_0", "exp_beta_dim_0", "lambda0_dim_0"], drop=True ) trace = trace.rename( { "lambda0_dim_0": "t", "beta_dim_0": "Membrane", "exp_beta_dim_0": "Membrane", } ) trace = trace.assign_coords( t=interval_bounds[:-1] / 10, Membrane=list(category_dic.values()), ) trace ###Output _____no_output_____ ###Markdown Convergences ###Code with az.rc_context(rc={"plot.max_subplots": None}): az.plot_trace(trace, var_names=["beta", "lambda0"]) with az.rc_context(rc={"plot.max_subplots": None}): az.plot_autocorr(trace, combined=True, var_names=["lambda0", "beta"]) def get_survival_function(trace): l = [] for interval in range(n_intervals - 1): l.append( np.trapz( trace.values[:, :, :, 0 : interval + 1], axis=3, dx=interval_length, ) ) l = np.exp(-np.array(l)) return l def get_ecdf(data): x = np.sort(data) n = x.size y = np.arange(1, n + 1) / n return x, y def get_hdi(x, axis, alpha=0.06): x_mean = np.nanmedian(x, axis=axis) percentiles = 100 * np.array([alpha / 2.0, 1.0 - alpha / 2.0]) hdi = np.nanpercentile(x, percentiles, axis=axis) return x_mean, hdi fig, ax = plt.subplots(1, 1, figsize=(6, 4)) survival_function = get_survival_function(trace.posterior.lambda_.astype(np.float16)) # Empyrical CDF data ax.plot(*get_ecdf(df.tpore / 
10), label="obs.") # Empyrical CDF data-binned binned_data = np.where(pore[:, :] == 1)[1] * interval_length / 10 ax.plot(*get_ecdf(binned_data), label="obs. binned") # Plot Posterior Predictive hdi = get_hdi(survival_function[:, :, :, :], axis=(1, 2, 3)) x = np.arange(n_intervals - 1) * interval_length / 10.0 ax.plot(x, 1 - hdi[0], label="Posterior Predictive Check") ax.fill_between(x, 1 - hdi[1][0, :], 1 - hdi[1][1, :], alpha=0.1, color="g") ax.set_xlabel("t-pore (ns)") ax.set_ylabel("CDF(t-pore)") ax.set_title("Posterior Predictive Check") ax.legend() n_categories = len(category_dic) n_rows = ceil(n_categories / 4) fig, ax = plt.subplots(n_rows, 4, figsize=(6 * 4, 4 * n_rows)) ax.flatten() for i in range(n_categories): # Mask by replica type mask = df.Replica == category_dic[i] survival_function = get_survival_function(trace.posterior.lambda_[:, :, mask, :].astype(np.float16)) # Empyrical CDF data ax[i].plot(*get_ecdf(df[mask].tpore / 10), label="obs.") # Empyrical CDF data-binned binned_data = np.where(pore[mask, :] == 1)[1] * interval_length / 10 ax[i].plot(*get_ecdf(binned_data), label="obs. binned") # Plot Posterior Predictive hdi = get_hdi(survival_function[:, :, :, :], axis=(1, 2, 3)) x = np.arange(n_intervals - 1) * interval_length / 10.0 ax[i].plot(x, 1 - hdi[0], label="Posterior Predictive Check") ax[i].fill_between(x, 1 - hdi[1][0, :], 1 - hdi[1][1, :], alpha=0.1, color="g") ax[i].set_xlabel("t-pore (ns)") ax[i].set_ylabel("CDF(t-pore)") ax[i].set_title(f"Posterior Predictive Check {category_dic[i]}") ax[i].legend() ###Output _____no_output_____ ###Markdown Analyze Plot posterior ###Code variable = "lambda0" ax = az.plot_forest(trace, var_names=variable, combined=True) ax[0].set_xlabel("lambda0[t]") variable = "beta" ax = az.plot_forest(trace, var_names=variable, combined=True) ax[0].set_xlabel("beta") variable = "exp_beta" ax = az.plot_forest(trace, var_names=variable, combined=True) ax[0].set_xlabel("exp(beta)") hdi = az.hdi(trace.posterior, var_names=["exp_beta"]) for i in range(n_categories): print(f"{category_dic[i]} {hdi.exp_beta[i,:].values.mean()}") fig, ax = plt.subplots(1, 2, figsize=(20, 7)) lambda0 = trace.posterior.lambda0.values beta = trace.posterior.beta.values y, hdi = get_hdi(lambda0, (0, 1)) x = interval_bounds[:-1] / 10 ax[0].fill_between(x, hdi[0], hdi[1], alpha=0.25, step="pre", color="grey") ax[0].step(x, y, label="baseline", color="grey") for i in range(n_categories): lam = np.exp(beta[:, :, [i]]) * lambda0 y, hdi = get_hdi(lam, (0, 1)) ax[1].fill_between(x, hdi[0], hdi[1], alpha=0.25, step="pre") ax[1].step(x, y, label=f"{category_dic[i]}") ax[0].legend(loc="best") ax[0].set_ylabel("lambda0") ax[0].set_xlabel("t (ns)") ax[1].legend(loc="best") ax[1].set_ylabel("lambda_i") ax[1].set_xlabel("t (ns)") ###Output _____no_output_____ ###Markdown Save Model? ###Code print(model_path) if save_data: remove(model_path) trace.to_netcdf(model_path) ###Output Didn't remove anything
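###Markdown As a compact closing sketch (an addition, assuming the renamed posterior coordinates, `interval_length`, `interval_bounds` and `category_dic` from above): with a piecewise-constant hazard, the posterior-mean survival curve per membrane has the closed form S(t_k) = exp(-sum_j lambda_j * Delta), which avoids the per-sample trapezoid integration used earlier.
###Code
# Posterior-mean survival curves per membrane from the piecewise-constant
# hazard lambda_i(t) = exp(beta_i) * lambda0(t).
lam0_mean = trace.posterior.lambda0.mean(dim=("chain", "draw")).values
beta_mean = trace.posterior.beta.mean(dim=("chain", "draw")).values
plt.figure(figsize=(8, 5))
for i, name in category_dic.items():
    hazard = np.exp(beta_mean[i]) * lam0_mean
    # Cumulative hazard over the intervals, then S(t) = exp(-H(t))
    surv = np.exp(-np.cumsum(hazard * interval_length))
    plt.step(interval_bounds[1:] / 10, surv, label=name)
plt.xlabel("t (ns)")
plt.ylabel("S(t)")
plt.legend()
plt.show()
###Output _____no_output_____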
notebook/04_transfer_learning_CNN.ipynb
###Markdown 6.0 Transfer Learning with Pre-trained Model (CNN) ###Code import pretrainedmodels # https://github.com/Cadene/pretrained-models.pytorch import torch import torch.nn as nn from torch.utils.data import Dataset # Start with a lightweight model for the experiment # We choose ResNet34 for our initial transfer learning model_name = 'resnet34' backbone = pretrainedmodels.__dict__[model_name](pretrained='imagenet') backbone # With torch.nn, we can access each of the layers in the pre-trained model backbone.layer4 ###Output _____no_output_____ ###Markdown 1.0 We need to convert the input to a single grayscale channel (1 channel), instead of the original ImageNet color input (3 channels); a sketch at the end of this notebook shows how the pretrained RGB filters could be reused here ###Code # original backbone.conv1 # changed backbone.conv1 = nn.Conv2d(1,64,7,2,3, bias = False) ###Output _____no_output_____ ###Markdown 2.0 Convert the final layer's out_features to cover the 3 different outputs of this competition (i.e., 186 classes in total) ###Code in_features = backbone.last_linear.in_features in_features backbone.last_linear = nn.Linear(in_features, 186) # check it has changed to 186 output features backbone.last_linear ###Output _____no_output_____ ###Markdown 3.0 Test out the customized pre-trained model ###Code batches = torch.rand(6,1,137,236) batches.shape outputs = backbone(batches) outputs.shape # logits outputs outputs.max() outputs.min() ###Output _____no_output_____
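###Markdown One optional refinement of step 1.0, offered only as a hedged sketch (a common trick, not part of the experiment above): instead of leaving the new single-channel `conv1` randomly initialized, the pretrained RGB filters can be averaged across the channel axis so the grayscale input still benefits from the ImageNet features.
###Code
# Average the pretrained 3-channel conv1 filters (64, 3, 7, 7) into one
# channel (64, 1, 7, 7). A fresh copy of the model is loaded because
# backbone.conv1 was already replaced above.
pretrained = pretrainedmodels.__dict__[model_name](pretrained='imagenet')
with torch.no_grad():
    backbone.conv1.weight.copy_(pretrained.conv1.weight.mean(dim=1, keepdim=True))
###Output _____no_output_____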
003_synthetic_features_and_outliers.ipynb
###Markdown [View in Colaboratory](https://colab.research.google.com/github/AmoDinho/Machine-Learning-Crash-with-TF/blob/master/synthetic_features_and_outliers.ipynb) Copyright 2017 Google LLC. ###Code # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Synthetic Features and Outliers **Learning Objectives:** * Create a synthetic feature that is the ratio of two other features * Use this new feature as an input to a linear regression model * Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data Let's revisit our model from the previous First Steps with TensorFlow exercise. First, we'll import the California housing data into a *pandas* `DataFrame`: Setup ###Code import math from IPython import display from matplotlib import cm from matplotlib import gridspec import matplotlib.pyplot as plt import numpy as np import pandas as pd import sklearn.metrics as metrics import tensorflow as tf from tensorflow.python.data import Dataset tf.logging.set_verbosity(tf.logging.ERROR) pd.options.display.max_rows = 10 pd.options.display.float_format = '{:.1f}'.format california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",") california_housing_dataframe = california_housing_dataframe.reindex( np.random.permutation(california_housing_dataframe.index)) california_housing_dataframe["median_house_value"] /= 1000.0 california_housing_dataframe ###Output _____no_output_____ ###Markdown Next, we'll set up our input function, and define the function for model training: ###Code def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): """Trains a linear regression model of one feature. Args: features: pandas DataFrame of features targets: pandas DataFrame of targets batch_size: Size of batches to be passed to the model shuffle: True or False. Whether to shuffle the data. num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely Returns: Tuple of (features, labels) for next data batch """ # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(buffer_size=10000) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels def train_model(learning_rate, steps, batch_size, input_feature): """Trains a linear regression model. Args: learning_rate: A `float`, the learning rate. steps: A non-zero `int`, the total number of training steps. A training step consists of a forward and backward pass using a single batch. batch_size: A non-zero `int`, the batch size. input_feature: A `string` specifying a column from `california_housing_dataframe` to use as input feature. 
Returns: A Pandas `DataFrame` containing targets and the corresponding predictions done after training the model. """ periods = 10 steps_per_period = steps / periods my_feature = input_feature my_feature_data = california_housing_dataframe[[my_feature]].astype('float32') my_label = "median_house_value" targets = california_housing_dataframe[my_label].astype('float32') # Create input functions. training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size) predict_training_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False) # Create feature columns. feature_columns = [tf.feature_column.numeric_column(my_feature)] # Create a linear regressor object. my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) linear_regressor = tf.estimator.LinearRegressor( feature_columns=feature_columns, optimizer=my_optimizer ) # Set up to plot the state of our model's line each period. plt.figure(figsize=(15, 6)) plt.subplot(1, 2, 1) plt.title("Learned Line by Period") plt.ylabel(my_label) plt.xlabel(my_feature) sample = california_housing_dataframe.sample(n=300) plt.scatter(sample[my_feature], sample[my_label]) colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)] # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print "Training model..." print "RMSE (on training data):" root_mean_squared_errors = [] for period in range (0, periods): # Train the model, starting from the prior state. linear_regressor.train( input_fn=training_input_fn, steps=steps_per_period, ) # Take a break and compute predictions. predictions = linear_regressor.predict(input_fn=predict_training_input_fn) predictions = np.array([item['predictions'][0] for item in predictions]) # Compute loss. root_mean_squared_error = math.sqrt( metrics.mean_squared_error(predictions, targets)) # Occasionally print the current loss. print " period %02d : %0.2f" % (period, root_mean_squared_error) # Add the loss metrics from this period to our list. root_mean_squared_errors.append(root_mean_squared_error) # Finally, track the weights and biases over time. # Apply some math to ensure that the data and line are plotted neatly. y_extents = np.array([0, sample[my_label].max()]) weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0] bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights') x_extents = (y_extents - bias) / weight x_extents = np.maximum(np.minimum(x_extents, sample[my_feature].max()), sample[my_feature].min()) y_extents = weight * x_extents + bias plt.plot(x_extents, y_extents, color=colors[period]) print "Model training finished." # Output a graph of loss metrics over periods. plt.subplot(1, 2, 2) plt.ylabel('RMSE') plt.xlabel('Periods') plt.title("Root Mean Squared Error vs. Periods") plt.tight_layout() plt.plot(root_mean_squared_errors) # Create a table with calibration data. calibration_data = pd.DataFrame() calibration_data["predictions"] = pd.Series(predictions) calibration_data["targets"] = pd.Series(targets) display.display(calibration_data.describe()) print "Final RMSE (on training data): %0.2f" % root_mean_squared_error return calibration_data ###Output _____no_output_____ ###Markdown Task 1: Try a Synthetic FeatureBoth the `total_rooms` and `population` features count totals for a given city block.But what if one city block were more densely populated than another? 
We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of `total_rooms` and `population`.In the cell below, create a feature called `rooms_per_person`, and use that as the `input_feature` to `train_model()`.What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lowerthe final RMSE should be.) **NOTE**: You may find it helpful to add a few code cells below so you can try out several different learning rates and compare the results. To add a new code cell, hover your cursor directly below the center of this cell, and click **CODE**. ###Code # # YOUR CODE HERE # california_housing_dataframe["rooms_per_person"] = california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"] calibration_data = train_model( learning_rate=0.05, steps=500, batch_size=5, input_feature="rooms_per_person" ) ###Output Training model... RMSE (on training data): period 00 : 212.73 period 01 : 190.37 period 02 : 169.58 period 03 : 154.51 period 04 : 141.20 period 05 : 133.88 period 06 : 131.58 period 07 : 130.85 period 08 : 131.73 period 09 : 133.20 Model training finished. ###Markdown SolutionClick below for a solution. ###Code california_housing_dataframe["rooms_per_person"] = ( california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"]) calibration_data = train_model( learning_rate=0.05, steps=500, batch_size=5, input_feature="rooms_per_person") ###Output _____no_output_____ ###Markdown Task 2: Identify OutliersWe can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line.Use Pyplot's [`scatter()`](https://matplotlib.org/gallery/shapes_and_collections/scatter.html) to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1.Do you see any oddities? Trace these back to the source data by looking at the distribution of values in `rooms_per_person`. ###Code # YOUR CODE HERE plt.figure(figsize=(15,6)) plt.subplot(1,2,1) plt.scatter(calibration_data["predictions"], calibration_data["targets"]) ###Output _____no_output_____ ###Markdown Task 3: Clip OutliersSee if you can further improve the model fit by setting the outlier values of `rooms_per_person` to some reasonable minimum or maximum.For reference, here's a quick example of how to apply a function to a Pandas `Series`: clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))The above `clipped_feature` will have no values less than `0`. ###Code #First clip the feature california_housing_dataframe["rooms_per_person"] = ( california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5)) _ = california_housing_dataframe["rooms_per_person"].hist() ##Verify Clip calibration_data = train_model( learning_rate=0.05, steps=500, batch_size=5, input_feature="rooms_per_person" ) #Plot the new model _ = plt.scatter(calibration_data["predictions"], calibration_data["targets"]) ###Output _____no_output_____
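###Markdown As a hedged variation on Task 3 (an extra experiment, not part of the official solution): rather than hard-coding the clip value of 5, the cap can be chosen from the data itself, e.g. a high quantile of the feature.
###Code
# Recompute the raw ratio, then clip at the 99.9th percentile instead of
# the fixed value of 5 used above.
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["total_rooms"] /
    california_housing_dataframe["population"])
cap = california_housing_dataframe["rooms_per_person"].quantile(0.999)
california_housing_dataframe["rooms_per_person"] = (
    california_housing_dataframe["rooms_per_person"].apply(lambda x: min(x, cap)))
_ = california_housing_dataframe["rooms_per_person"].hist()
###Output _____no_output_____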
asi_challenge.ipynb
###Markdown CLAUDIO SCALZO USER asi17 ASI Challenge Exercise Naive Bayes Classification and Bayesian Linear Regression on the Fashion-MNIST and CIFAR-10 datasets DESCRIPTIONThis notebook presents the "from-scratch" implementations of the Naive Bayes Classification and the Bayesian Linear Regression, applied to the Fashion-MNIST and CIFAR-10 datasets.INSTRUCTIONS TO RUN THE NOTEBOOKTo be able to run the notebook the only thing to ensure is that the datasets are in the correct directories. The following structure is the correct one:- asi_challenge_claudio_scalzo.ipynb- datasets/ - Fashion-MNIST/ - fashion-mnist_train.csv - fashion-mnist_test.csv - CIFAR-10/ - data_batch_1 - data_batch_2 - data_batch_3 - data_batch_4 - data_batch_5 - test_batchCOLORSFor the sake of readability, the notebook will follow a color convention: All the cells related to the Fashion-MNIST dataset will be in green and labeled with: FASHION-MNIST All the cells related to the CIFAR-10 dataset will be in yellow and labeled with: CIFAR-10 All the blue cells are generic comments and the answers to the exercise questions are marked with: ANSWER or TASKSECTIONSThe sections numbering will follow exactly the one provided in the requirements PDF. ###Code ### LIBRARIES IMPORT # Data structures import numpy as np import pandas as pd from numpy.linalg import inv, solve # Plot import seaborn as sns import matplotlib.pyplot as plt # Utilities from time import time import pickle # SciPy, scikit-learn from sklearn.metrics import mean_squared_error, log_loss, confusion_matrix from scipy.stats import t # Warnings import warnings warnings.filterwarnings("ignore") ###Output _____no_output_____ ###Markdown 1. Datasets loading TASK1. Download the Fashion-MNIST and CIFAR-10 datasets and import them.The first step consists in the datasets import. This process will be split in two parts, one for the Fashion-MNIST dataset and another one for the CIFAR-10 dataset. While in the first case it will be very easy (being the dataset saved in csv files), in the seconds case the process will be longer, because the CIFAR datasets are saved in binary files. FASHION-MNIST Let's define the datasets location and load them in two Pandas DataFrame: mnistTrain and mnistTest. 
###Code # DIRECTORY AND CONSTANTS DEFINITION mnistPath = "./datasets/Fashion-MNIST/" height = 28 width = 28 # FILEPATHS DEFINITION mnistTrainFile = mnistPath + "fashion-mnist_train.csv" mnistTestFile = mnistPath + "fashion-mnist_test.csv" # LOAD THE MNIST AND CIFAR TRAINSET AND DATASET mnistTrain = pd.read_csv(mnistTrainFile) mnistTest = pd.read_csv(mnistTestFile) ###Output _____no_output_____ ###Markdown Now we can show some example of the loaded data: ###Code # SHOW SOME SAMPLES plt.figure(figsize=(15,10)) for i in range(6): plt.subplot(1,6,i+1) image = mnistTrain.drop(columns=["label"]).loc[i].values.reshape((height, width)) plt.imshow(image, cmap="gray") plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown CIFAR-10 First of all, we have to declare the path of the CIFAR-10 datasets and some useful values: ###Code # DIRECTORY AND CONSTANTS DEFINITION cifarPath = "./datasets/CIFAR-10/" trainfiles = 5 height = 32 width = 32 channels = 3 pixels = height * width * channels chpix = height * width ###Output _____no_output_____ ###Markdown Now, let's define a function to load a single binary file which contains a certain number of images: ###Code # FUNCTION TO LOAD A SINGLE TRAINFILE def loadImages(filename): # Load binary file file = open(filename, "rb") # Unpickle data = pickle.load(file, encoding="bytes") # Get raw images and raw classes rawImages = data[b'data'] rawClasses = data[b'labels'] return np.array(rawImages, dtype=int), np.array(rawClasses, dtype=int) ###Output _____no_output_____ ###Markdown Now it's time to use the previous function to load all the five trainsets in our directory: they will be merged in a unique Pandas DataFrame named cifarTrain. ###Code # ALLOCATE AN EMPTY ARRAY (width of number of pixels + one for the class label) images = np.empty(shape=(0, pixels + 1), dtype=int) # LOAD ALL THE TRAINFILES for i in range(trainfiles): # Load the images and classes for the "i"th trainfile newImages, newClasses = loadImages(filename = cifarPath + "data_batch_" + str(i + 1)) # Create the new batch (concatenating images and classes) newBatch = np.concatenate((np.asmatrix(newClasses).T, newImages), axis=1) # Concatenate the new batch with the previous ones images = np.concatenate((images, newBatch), axis=0) # CREATE THE TRAIN DATAFRAME attributes = [("pixel" + str(i) + "_" + str(c)) for c in ["r", "g", "b"] for i in range(height * width)] cifarTrain = pd.DataFrame(images, columns = ["label"] + attributes) ###Output _____no_output_____ ###Markdown The cifarTrain has been imported, now let's do the same for the file containing the testset: also in this case, it will be saved in a dataframe, cifarTest. 
###Code # LOAD THE IMAGES AND CLASSES newImages, newClasses = loadImages(filename = cifarPath + "test_batch") # CREATE THE IMAGES ARRAY (concatenating images and classes) images = np.concatenate((np.asmatrix(newClasses).T, newImages), axis=1) # CREATE THE TEST DATAFRAME # (channel-major column names, matching the raw CIFAR layout and the trainset) attributes = [("pixel" + str(i) + "_" + str(c)) for c in ["r", "g", "b"] for i in range(height * width)] cifarTest = pd.DataFrame(images, columns = ["label"] + attributes) ###Output _____no_output_____ ###Markdown Now we can show some examples of the loaded data: ###Code # SHOW SOME SAMPLES plt.figure(figsize=(15,10)) for i in range(0,6): plt.subplot(1,6,i+1) imageR = cifarTrain.iloc[i, 1 : chpix+1].values.reshape((height,width)) imageG = cifarTrain.iloc[i, chpix+1 : 2*chpix+1].values.reshape((height,width)) imageB = cifarTrain.iloc[i, 2*chpix+1 : 3*chpix+1].values.reshape((height,width)) image = np.dstack((imageR, imageG, imageB)) plt.imshow(image) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown Everything is loaded! We can start analyzing our data. 2. Descriptive statistics 2.1 Data description The first step is to investigate the data. Some really simple statistics are shown: they are useful to introduce and to understand the data. FASHION-MNIST ###Code # PRINT TO DESCRIBE THE TRAIN AND THE TEST print("[TRAINSET]") print("Number of rows:", mnistTrain.shape[0]) print("Attributes:", mnistTrain.drop(columns=['label']).shape[1], "(without considering the label)") print("\n[TESTSET]") print("Number of rows:", mnistTest.shape[0]) print("Attributes:", mnistTest.drop(columns=['label']).shape[1], "(without considering the label)") print("\nExample:") display(mnistTrain.head(5)) ###Output [TRAINSET] Number of rows: 60000 Attributes: 784 (without considering the label) [TESTSET] Number of rows: 10000 Attributes: 784 (without considering the label) Example: ###Markdown The number of rows is 60000, while the number of columns is 785 (784 attributes + 1 label). But what do they mean? Each row represents a picture. Each column represents a pixel (784 = 28x28). So, the value of a row "r" in a given column "c" represents the brightness (from 0 to 255) of a given pixel "c" in a given picture "r".In the testset we find the same situation but with a smaller row dimension: 10000. The number of columns is, of course, the same: 785 (784 attributes + 1 label). CIFAR-10 ###Code # PRINT TO DESCRIBE THE TRAIN print("[TRAINSET]") print("Number of rows:", cifarTrain.shape[0]) print("Attributes:", cifarTrain.drop(columns=['label']).shape[1], "(without considering the label)") print("\n[TESTSET]") print("Number of rows:", cifarTest.shape[0]) print("Attributes:", cifarTest.drop(columns=['label']).shape[1], "(without considering the label)") print("\nExample:") display(cifarTrain.head(5)) ###Output [TRAINSET] Number of rows: 50000 Attributes: 3072 (without considering the label) [TESTSET] Number of rows: 10000 Attributes: 3072 (without considering the label) Example: ###Markdown The number of rows is 50000, because we merged 5 files of 10000 rows (images) each. The number of columns is instead 3073 (3072 attributes + the label): why this number? Because each picture is 32x32 pixels with 3 channels (RGB), so each picture has 3072 pixels.The number of rows in the testset is smaller: 10000. 2.2 Data distribution analysis Now it's time to analyze the distribution of our data. In this section I'm going to analyze the distribution in the trainset, which will be useful to train the model.
FASHION-MNIST CIFAR-10 ###Code # TAKE DISTRIBUTION mnistDistribution = mnistTrain["label"].value_counts() cifarDistribution = cifarTrain["label"].value_counts() # TAKE CLASSES AND FREQUENCIES mnistClasses = np.array(mnistDistribution.index) mnistFrequencies = np.array(mnistDistribution.values) cifarClasses = np.array(cifarDistribution.index) cifarFrequencies = np.array(cifarDistribution.values) # PLOT THE DISTRIBUTION OF THE TARGET VARIABLE plt.figure(figsize=(15,5)) plt.subplot(1,2,1) plt.bar(mnistClasses, mnistFrequencies, align="center", color="green") plt.xticks(list(range(np.min(mnistClasses), np.max(mnistClasses)+1))) plt.xlabel("Class") plt.ylabel("Count") plt.title("[Fashion-MNIST]", weight="semibold"); plt.subplot(1,2,2) plt.bar(cifarClasses, cifarFrequencies, align="center", color="orange") plt.xticks(list(range(np.min(mnistClasses), np.max(mnistClasses)+1))) plt.xlabel("Class") plt.ylabel("Count") plt.title("[CIFAR-10]", weight="semibold"); plt.suptitle("Distribution of the label in the trainset", fontsize=16, weight="bold") plt.show() ###Output _____no_output_____ ###Markdown QUESTIONComment on the distribution of class labels and the dimensionality of the input and how these may affect the analysis.ANSWER- The dimensionalityFirst of all, the dimensionality is very high. As previously said, each column represents a pixel of the image! So, even a very small picture has a lot of features. A big dimensionality like this (784 attributes on the Fashion-MNIST and 3072 attributes on the CIFAR-10) can usually represent an issue, generally known as "curse of dimensionality" (source).However, the Naive Bayes classifier is usually suited when dealing with high-dimensional datasets: indeed, thanks to its simplicity and thanks also to its Naive assumptions can perform well when data dimensionality is really really high.In our case, the high dimensionality is an issue especially for the regressor.The Bayesian Linear Regression algorithm, indeed, has to find the weights (and find the regression line) basing its analysis on a big set of dimensions, which is of course harder (and computationally heavier because of the big matrices in the products).- The distributionThe distribution is uniform: each class has the same amount of images in the dataset. We'll use this fact to compute the prior probabilities in the Naive Bayes Classifier: being each prior the same for each class, the model will not be biased towards some classes, because the posterior computation will be equally influenced by this factor for each class. Before starting the new section, let's define some functions to graphically plot the confusion matrix, the errorplot and the scatter plot. This function will be useful to show the classifier and the regressor performance in the two datasets. 
###Code # FUNCTION TO PLOT THE REQUIRED CONFUSION MATRICES def plotConfusionMatrix(cm1, cm2, classes1, classes2): def plotCM(cm, classes, cmap, title): sns.heatmap(cm, cmap=cmap, annot=True, fmt="d", cbar=False) plt.ylabel('True label') plt.xlabel('Predicted label') plt.title(title) plt.figure(figsize=(16,7)) plt.subplot(1,2,1) plotCM(cm1, classes1, "Greens", "[Fashion-MNIST]") plt.subplot(1,2,2) plotCM(cm2, classes2, "Oranges", "[CIFAR-10]") plt.subplots_adjust(wspace=0.4) plt.show() print() # FUNCTION TO PLOT THE REQUIRED SCATTER PLOTS def plotScatterPlot(raw1, raw2, corr1, corr2): def plotSP(raw, corr, color, title): plt.title(title) plt.xticks(np.arange(-2,12)) plt.yticks(np.arange(0,10)) plt.ylabel('True label') plt.xlabel('Predicted continuous label value') plt.grid(linestyle=':') plt.scatter(raw, corr, color=color) plt.figure(figsize=(15,8)) plt.subplot(1,2,1) plotSP(raw1, corr1, "green", "[Fashion-MNIST]") plt.subplot(1,2,2) plotSP(raw2, corr2, "orange", "[CIFAR-10]") plt.suptitle("Scatter plot of true raw predictions versus predicted ones", weight="semibold", fontsize=14) plt.show() print() # FUNCTION TO PLOT THE REQUIRED ERROR PLOTS def plotErrorPlot(pred1, pred2, var1, var2): def plotEP(pred, var, correct, color, title): plt.errorbar(np.arange(0,30), pred[:30], yerr=t.ppf(0.997, len(pred)-1)*np.sqrt(var[:30]), ls="None", color=color, marker=".", markerfacecolor="black") # plt.scatter(np.arange(0,30), correct[:30], c="blue", alpha=0.2, linewidths=0.1) # plt.legend(["True classes", "Predictions (with error)"], loc=2) plt.yticks(np.arange(-1,13,1)) plt.ylabel('Predictive variance') plt.xlabel('Sample of dataset') plt.grid(linestyle=':') plt.title(title) plt.figure(figsize=(15,8)) plt.subplot(1,2,1) plotEP(pred1, var1, mnistCorrect, "green", "[Fashion-MNIST]") plt.subplot(1,2,2) plotEP(pred2, var2, cifarCorrect, "orange", "[CIFAR-10]") plt.suptitle("Predicted variances on a subset of the predicted data", weight="semibold", fontsize=14) plt.show() print() ###Output _____no_output_____ ###Markdown Moreover, to facilitate each model's work, we can normalize the values of our datasets (except for the class label) dividing each value by 255. Let's do it: ###Code def normalize(dataset): return dataset.apply(lambda col: col.divide(255) if(col.name != "label") else col) # NORMALIZE MNIST mnistTrainNorm = normalize(mnistTrain) mnistTestNorm = normalize(mnistTest) # NORMALIZE CIFAR cifarTrainNorm = normalize(cifarTrain) cifarTestNorm = normalize(cifarTest) # PRINT AN EXAMPLE print("Example of the normalized MNIST trainset:") display(mnistTrainNorm.head(5)) # BACKUP NON-NORMALIZED mnistTrainFull = mnistTrain mnistTestFull = mnistTest cifarTrainFull = cifarTrain cifarTestFull = cifarTest # SPLIT THE DATASETS IN 'X' AND 'y' # Fashion-MNIST mnistTrain = mnistTrainNorm.drop(columns=['label']).values mnistTarget = mnistTrainNorm['label'].values mnistTest = mnistTestNorm.drop(columns=['label']).values mnistCorrect = mnistTestNorm['label'].values # CIFAR-10 cifarTrain = cifarTrainNorm.drop(columns=['label']).values cifarTarget = cifarTrainNorm['label'].values cifarTest = cifarTestNorm.drop(columns=['label']).values cifarCorrect = cifarTestNorm['label'].values ###Output _____no_output_____ ###Markdown Now we're ready to start the classification. 3. Classification TASKa) Implement the Naive Bayes Classifier.The Naive Bayes Classifier is for sure the most basic and simple algorithm belonging to the probabilistic classifiers family. 
It is rooted in Bayes' theorem, specifically its Naive version, which considers all the features independent. This assumption has two main consequences: on the one hand it heavily simplifies the computation; on the other hand it is often too "naive", since most of the time the real dependence among features is not respected.$$P(t_{new}=k \mid \mathbf{X}, \mathbf{t}, \mathbf{x_{new}}) = \dfrac{p(\mathbf{x_{new}} \mid t_{new}=k, \mathbf{X}, \mathbf{t}) \space P(t_{new}=k)} {\sum_{j=0}^{K-1} p(\mathbf{x_{new}} \mid t_{new}=j, \mathbf{X}, \mathbf{t}) \space P(t_{new}=j) }$$The prior probability, $P(t_{new}=k)$, will be computed as the occurrence probability of each class (in this case, the same for each class, given the label distribution).The likelihood, instead, is represented by:$$p(\mathbf{x} \mid t=k, \mathbf{X}, \mathbf{t}) = \mathcal{N}(\mu_{kd}, \sigma_{kd})$$where $\mu_{kd}$ and $\sigma_{kd}$ are, respectively, the mean and the standard deviation of feature $d$ within class $k$.Given that we're only interested in the maximum posterior value among all classes for each image, we can work with the log-likelihood: in this way, numerical issues are avoided.Moreover, the denominator is just a normalization constant that is not useful in the max-search, so we can drop it.The computed expression, then, will be:$$\log p(t_{new}=k \mid \mathbf{X}, \mathbf{t}, \mathbf{x_{new}}) = \log p(\mathbf{x} \mid t=k, \mathbf{X}, \mathbf{t}) + \log P(t=k)$$ ###Code class NaiveBayesClassifier: # ----- PRIVATE METHODS ------------------------------------------------- # # MEANS AND STANDARD DEVIATIONS FOR THE LIKELIHOOD: P(X|C) def _computeMeansStds(self, train, target): # Temp DataFrame pdf = pd.DataFrame(train) pdf['label']= target smoothing = 1e-5 # Compute means and standard deviations. For example: # <means> | attr0 | attr1 | ... # <stds> | attr0 | attr1 | ... # -------------------------- # -------------------------- # class0 | 12 | 3 | ... # class0 | 0.2 | 0.03 | ... # class1 | 8 | 0 | ... # class1 | 0.07 | 0.1 | ... # ... | ... | ... | ... # ... | ... | ... | ... self.means = pdf.groupby("label").mean().values self.stds = pdf.groupby("label").std().values + smoothing # PRIORS: P(C) def _computePriors(self, target): # Compute the distribution of the label self.priors = np.bincount(target) / len(target) # LIKELIHOOD: P(X|C) def _logLikelihood(self, data, c): return np.sum(-np.log(self.stds[c, :]) - 0.5 * np.log(2 * np.pi) -0.5 * np.divide((data - self.means[c, :])**2, self.stds[c, :]**2), axis=1) # ----------------------------------------------------------------------- # # ----- PUBLIC METHODS -------------------------------------------------- # # TRAIN - LIKELIHOOD and PRIOR def fit(self, train, target): # Classes self.classes = list(np.unique(target)) # Compute priors and likelihoods self._computePriors(target) self._computeMeansStds(train, target) return self.classes # TEST - POSTERIOR: P(C|X) def predict(self, test): # The posterior array will be like: # <post> | sample0 | sample1 | ... # ----------------------------- # class0 | 0.1 | 0.4 | ... # class1 | 0.18 | 0.35 | ... # ... | ... | ... | ...
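        # NOTE: the rows computed below are *unnormalized* log-posteriors,
        # which is sufficient for the argmax; a softmax over classes, e.g.
        # np.exp(post - scipy.special.logsumexp(post, axis=0)), would be
        # needed to obtain calibrated probabilities (validate() feeds these
        # raw scores to sklearn's log_loss, which strictly expects
        # probabilities).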
self.posteriors = np.array([self._logLikelihood(test, c) + np.log(self.priors[c]) for c in self.classes]) # Select the class with max probability (and also its posteriors) for each sample return np.argmax(self.posteriors, axis=0), self.posteriors.T # VALIDATE PREDICTION def validate(self, pred, correct, prob): # Accuracy, error, confusion matrix acc = np.mean(pred == correct) ll = log_loss(correct, prob) cm = confusion_matrix(correct, pred) return acc, ll, cm # ----------------------------------------------------------------------- # ###Output _____no_output_____ ###Markdown QUESTIONb) Describe a positive and a negative feature of the classifier for these tasks.ANSWERRegarding positive features, as said before, the Naive Bayes Classifier is able to work even with really high-dimensional datasets. Thanks to its simplicity, it doesn't suffer from significant dimensionality issues. Moreover, there is no need to set (and search for the best) hyperparameters to make it work: it works at its maximum capabilities right after it's implemented.The negative feature is, of course, its Naive assumption. It assumes that all the features are independent, which is not true for most real datasets. This model is too simple for good image classification, a field in which more complex models, like Convolutional Neural Networks, are leading (source). QUESTIONc) Describe any data pre-processing that you suggest for this data and your classifier.ANSWERClassifiers (and models in general) can be hugely helped by good data pre-processing. In this case, one of the first things that comes to mind is dimensionality reduction. As said before, the Naive Bayes Classifier doesn't suffer much from high-dimensional datasets, but speaking in general terms, models are of course facilitated when they have to deal with a reduced set of features. For this reason one can think about PCA (Principal Component Analysis, source here) or LDA (Linear Discriminant Analysis, source here): in this case LDA is clearly more appropriate, because it looks for linear combinations of variables that better express the original space (like PCA) but takes the labels into consideration, making a sharper distinction between the classes of the dataset.Another thing that can be tried is to transform each picture of the CIFAR-10 dataset to grayscale, deleting the color information. This can be done with a simple weighted sum of the R, G and B components (0.21 R + 0.72 G + 0.07 B). Of course this is a dimensionality reduction, but it doesn't make much sense here because it increases the correlation between features instead of keeping the colour channels separated, making the naive assumption of independence between features even worse.Talking about two concrete pre-processing steps applied to these two datasets: images have been flattened (where they were originally loaded in the "square" shape) and the pixel values have been normalized, bringing them into the range [0.0, 1.0] instead of [0, 255]. TASKd) Apply your classifier to the two given datasets.
###Code # CLASSIFY FUNCTION def classify(train, target, test, correct): # NAIVE BAYES CLASSIFIER nbc = NaiveBayesClassifier() # TRAIN startTime = time() classes = nbc.fit(train, target) endTime = time() print("Train time: %.3f seconds" % (endTime-startTime)) # TEST startTime = time() pred, prob = nbc.predict(test) endTime = time() print("Test time: %.3f seconds\n" % (endTime-startTime)) # VALIDATION accuracy, ll, cm = nbc.validate(pred, correct, prob) print("Accuracy: %.2f%%" % (accuracy * 100)) print("LogLikelihood Loss: %.2f" % (ll)) return cm ###Output _____no_output_____ ###Markdown FASHION-MNIST Let's start the classification for the Fashion-MNIST dataset: ###Code # CLASSIFY mnistCM = classify(mnistTrain, mnistTarget, mnistTest, mnistCorrect) ###Output Train time: 1.165 seconds Test time: 1.847 seconds Accuracy: 59.16% LogLikelihood Loss: 2.14 ###Markdown CIFAR-10 Now it's time for the CIFAR-10 classification: ###Code # CLASSIFY cifarCM = classify(cifarTrain, cifarTarget, cifarTest, cifarCorrect) ###Output Train time: 6.331 seconds Test time: 7.290 seconds Accuracy: 29.76% LogLikelihood Loss: 5.85 ###Markdown TASKe) Display the confusion matrix on the test data. FASHION-MNIST CIFAR-10 ###Code # PLOT THE CONFUSION MATRICES plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses) ###Output _____no_output_____ ###Markdown QUESTIONf) Discuss the performance, compare it against a classifier that outputs random class labels, and suggest ways in which performance could be improved.ANSWERThe performance is "good", considering that our models are very simple. What is clear is that the performance on Fashion-MNIST is way better than on the CIFAR-10 dataset. One of the things that causes the model to struggle is of course the dimensionality of the CIFAR-10 dataset.The accuracies are:- [CLASSIFICATION] Fashion-MNIST accuracy: 59.16%- [CLASSIFICATION] CIFAR-10 accuracy: 29.76% Let's see what happens for a Random classifier: ###Code # RANDOM PREDICTIONS (randint's upper bound is exclusive, so 10 covers classes 0-9) mnistRandPred = np.random.randint(0, 10, mnistTest.shape[0]) cifarRandPred = np.random.randint(0, 10, cifarTest.shape[0]) # ACCURACY mnistRandAcc = np.mean(mnistRandPred == mnistCorrect) cifarRandAcc = np.mean(cifarRandPred == cifarCorrect) # SHOW print("[RANDOM Classifier] Fashion-MNIST random accuracy: %.2f%% (expected around 10%%)" % (mnistRandAcc * 100)) print("[RANDOM Classifier] CIFAR-10 random accuracy: %.2f%% (expected around 10%%)" % (cifarRandAcc * 100)) ###Output [RANDOM Classifier] Fashion-MNIST random accuracy: 10.33% (expected around 10%) [RANDOM Classifier] CIFAR-10 random accuracy: 10.40% (expected around 10%) ###Markdown The random classifier, of course, has an accuracy around 10%: the probability of getting the right class is $\frac{right \space class}{all \space classes}$, in this case: $\frac{1}{10}$.
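###Markdown Before moving on, a small diagnostic sketch (an extra step, not part of the original exercise): the per-class recall can be read off the diagonal of each confusion matrix, which shows *which* classes drag the accuracy down and hence where an improvement effort should focus.
###Code
# Per-class recall = diagonal / row sums of the confusion matrix.
mnistRecall = np.diag(mnistCM) / mnistCM.sum(axis=1)
cifarRecall = np.diag(cifarCM) / cifarCM.sum(axis=1)
###Output _____no_output_____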
Trying a different approach: grayscale CIFAR-10 CIFAR-10 (grayscale) ###Code # CREATE GRAYSCALE CIFAR-10 hl = 32 * 32 cifarGrayTrain = np.empty((cifarTrain.shape[0],hl)) cifarGrayTest = np.empty((cifarTest.shape[0],hl)) for i in range(hl): cifarGrayTrain[:,i] = (0.21 * cifarTrainFull.iloc[:,i+1] + 0.72 * cifarTrainFull.iloc[:,hl+i+1] + 0.07 * cifarTrainFull.iloc[:,2*hl+i+1]) / 255 cifarGrayTest[:,i] = (0.21 * cifarTestFull.iloc[:,i+1] + 0.72 * cifarTestFull.iloc[:,hl+i+1] + 0.07 * cifarTestFull.iloc[:,2*hl+i+1]) / 255 # CLASSIFY cifarCM = classify(cifarGrayTrain, cifarTarget, cifarGrayTest, cifarCorrect) ###Output Train time: 1.007 seconds Test time: 3.104 seconds Accuracy: 26.84% LogLikelihood Loss: 5.65 ###Markdown The grayscale approach, as expected, doesn't improve the predictions. Indeed, transforming the coloured pictures to grayscale just makes the correlations between features stronger (e.g. some pixels which were of a dark - but different - colour can now be dark and of the same - or similar - gray value), moving even farther from the Naive assumption. 4. Bayesian Regression TASK a) Implement the Bayesian Linear Regression. ###Code class BayesianLinearRegression: # ----- PRIVATE METHODS ------------------------------------------------- # # CREATE THE DESIGN MATRIX FOR THE MATRIX-FORM REGRESSION def _matricize(self, x, k): # ALLOCATE MATRIX X = np.ones(shape=(x.shape[0], 1), dtype=int) # STACK COLUMNS for i in range(k): X = np.hstack((X, np.power(x, i+1))) return X # COMPUTE THE WEIGHTS ARRAY def _weights(self, X, t): # np.linalg.solve, when feasible, is faster so: # inv(X.T.dot(X)).dot(X.T).dot(t) # becomes: return solve(X.T.dot(X), X.T.dot(t)) # RETURN THE VARIANCE def _variance(self, X, w, t): return (t - X.dot(w.T)).T.dot(t - X.dot(w.T)) / X.shape[0] # RETURN THE PREDICTED t def _target(self, X_new, w): return X_new.dot(w.T) # RETURN THE PREDICTIVE VARIANCE def _predictiveVar(self, X_new, X, var): return var * np.diag(X_new.dot(inv(X.T.dot(X))).dot(X_new.T)) # ----------------------------------------------------------------------- # # ----- PUBLIC METHODS -------------------------------------------------- # # TRAIN def fit(self, train, target, k): # Compute X, w and t self.X = self._matricize(train, k) self.w = self._weights(self.X, target) self.var = self._variance(self.X, self.w, target) return np.unique(target) # TEST def predict(self, test, k): # Compute the matrix for the test set X_new = self._matricize(test, k) # Predict the new target for the test set (as a continuous variable) t_new_raw = self._target(X_new, self.w) # Compute the predictive variance var_new = self._predictiveVar(X_new, self.X, self.var) return t_new_raw, var_new # VALIDATION def validate(self, correct, raw): # Mean squared error mse = mean_squared_error(correct, raw) return mse # ----------------------------------------------------------------------- # ###Output _____no_output_____ ###Markdown TASKb) Treat class labels as continuous and apply regression to the training data.
###Code
def regress(train, target, test, correct, k):
    # BAYESIAN LINEAR REGRESSION
    blr = BayesianLinearRegression()
    # TRAIN
    startTime = time()
    classes = blr.fit(train, target, k)
    endTime = time()
    print("Train time: %.3f seconds" % (endTime-startTime))
    # TEST
    startTime = time()
    raw, var = blr.predict(test, k)
    endTime = time()
    print("Test time: %.3f seconds\n" % (endTime-startTime))
    # VALIDATION
    mse = blr.validate(correct, raw)
    print("[RAW PREDICTIONS] Mean Squared Error (MSE): %.2f" % (mse))
    return raw, var

def validatePictures(mnistRaw, mnistVar, cifarRaw, cifarVar):
    # SCATTER PLOT
    plotScatterPlot(mnistRaw, cifarRaw, mnistCorrect, cifarCorrect)
    # ERROR PLOT
    plotErrorPlot(mnistRaw, cifarRaw, mnistVar, cifarVar)
###Output _____no_output_____
###Markdown FASHION-MNIST ###Code
# REGRESS
mnistRaw, mnistVar = regress(mnistTrain, mnistTarget, mnistTest, mnistCorrect, k = 1)
###Output Train time: 1.456 seconds Test time: 3.905 seconds [RAW PREDICTIONS] Mean Squared Error (MSE): 1.96
###Markdown CIFAR-10 ###Code
# REGRESS
cifarRaw, cifarVar = regress(cifarTrain, cifarTarget, cifarTest, cifarCorrect, k = 1)
###Output Train time: 12.835 seconds Test time: 22.006 seconds [RAW PREDICTIONS] Mean Squared Error (MSE): 8.03
###Markdown TASK c) Produce a scatter plot showing the predictions versus the true targets for the test set and compute the mean squared error on the test set. The mean squared error, shown above, is: - [Fashion-MNIST] Mean Squared Error (MSE): 1.96 - [CIFAR-10] Mean Squared Error (MSE): 8.03 FASHION-MNIST CIFAR-10 ###Code
# PLOT IMAGES
validatePictures(mnistRaw, mnistVar, cifarRaw, cifarVar)
###Output _____no_output_____
###Markdown As we can see from the previous plots, the regression predicts a set of continuous values, often outside the [0,9] range. In the second plot we can observe (on a small subset of the data) the error of each prediction: it has been computed from the predictive variance, taking the 99% confidence interval of the standard deviation. It is immediately clear, looking at the error plot, that the errors on the CIFAR-10 predictions are bigger than those on Fashion-MNIST: the model is less certain in its predictions on the CIFAR-10 dataset. QUESTION d) Suggest a way to discretize predictions and display the confusion matrix on the test data and report accuracy. ###Code
# DISCRETIZER
discretizer = np.vectorize(lambda label: 9 if label > 9 else (0 if label < 0 else round(label)))
###Output _____no_output_____
###Markdown ANSWER The predictions have been discretized in a really simple way: the continuous values have been rounded to the closest integer. Moreover, values smaller than 0 have been clipped to 0, and values bigger than 9 have been clipped to 9. More advanced approaches could have been taken, like one-hot encoding the labels and regressing on each "column" of the one-hot encoded classes: this approach will be demonstrated afterwards.
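###Markdown As a quick illustration of the clamp-and-round behaviour (the input values below are made up for the example, not taken from the actual predictions): ###Code
# HYPOTHETICAL inputs, just to show what the discretizer does
print(discretizer(np.array([-0.7, 3.4, 5.8, 12.3])))  # -> [0 3 6 9]
###Output _____no_output_____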
FASHION-MNIST ###Code
# DISCRETIZE
mnistPred = np.array(discretizer(mnistRaw), dtype=int)
# VALIDATE
accuracy = np.mean(mnistPred == mnistCorrect)
mnistCM = confusion_matrix(mnistCorrect, mnistPred)
print("[DISCRETE PREDICTIONS] Accuracy: %.2f%%" % (accuracy * 100))
###Output [DISCRETE PREDICTIONS] Accuracy: 39.19%
###Markdown CIFAR-10 ###Code
# DISCRETIZE
cifarPred = np.array(discretizer(cifarRaw), dtype=int)
# VALIDATE
accuracy = np.mean(cifarPred == cifarCorrect)
cifarCM = confusion_matrix(cifarCorrect, cifarPred)
print("[DISCRETE PREDICTIONS] Accuracy: %.2f%%" % (accuracy * 100))
###Output [DISCRETE PREDICTIONS] Accuracy: 10.95%
###Markdown The regressor performances are: - [REGRESSION] Fashion-MNIST accuracy: 39.19% - [REGRESSION] CIFAR-10 accuracy: 10.95% FASHION-MNIST CIFAR-10 ###Code
# CONFUSION MATRIX
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output _____no_output_____
###Markdown QUESTION e) Discuss regression performance with respect to classification performance. ANSWER The regression performance is, of course, very weak compared to the classification performance. Linear regression is the "wrong" tool for approaching image classification problems. Also from the point of view of computational time, on both datasets the Bayesian Regression is much slower than the Naive Bayes Classifier. QUESTION f) Describe one limitation of using regression for this particular task. ANSWER One big limitation of linear regression is that it works by trying to find a set of weights that models the relationship between the continuous data and the labels. In this case, even if it "works", it is out of context: we are trying to find a set of discrete labels (from 0 to 9) according to some pre-defined pattern. We are using a "little drill against a huge building in reinforced concrete". It would have been somewhat more meaningful if the [0,9] range had carried "ordinal" information (a gradual scale of values); here, instead, the labels are purely "nominal" values where, for example, 2 means something different from 1, not something greater than 1. Trying a different approach: one-hot encoded labels One approach to improve the Bayesian Linear Regression performance is to one-hot encode the targets and regress on them one by one. In this case, the target column becomes a 10-column matrix, and a loop can be run over its columns, using one of them as the target at a time: the result is a (10000, 10) prediction matrix, and thanks to argmax the best class is chosen. Let's try it: ###Code
from keras.utils import to_categorical

def regressOneHot(train, target, test, correct, k):
    # BAYESIAN LINEAR REGRESSION
    blr = BayesianLinearRegression()
    # FIT & PREDICT
    target_bin = to_categorical(target, len(mnistClasses))  # both datasets have 10 classes, so mnistClasses is reused here
    pred = np.zeros((test.shape[0], len(mnistClasses)))
    for i in range(10):
        blr.fit(train, target_bin[:,i], k)
        pred[:,i], _ = blr.predict(test, k)
    pred = np.argmax(pred, axis=1)
    # VALIDATION
    accuracy = np.mean(pred == correct)
    cm = confusion_matrix(correct, pred)
    print("[DISCRETE PREDICTIONS] Accuracy: %.2f%%" % (accuracy * 100))
    return cm
###Output Using TensorFlow backend.
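###Markdown In other words, `regressOneHot` performs one-vs-rest least squares (this just restates the code above in formulas): a separate weight vector is fitted on the binary indicator targets $t_c$ of each class $c$, and the predicted class is the one with the highest linear score:

$$\hat{w}_c = (X^\top X)^{-1} X^\top t_c, \qquad \hat{y} = \underset{c \in \{0,\dots,9\}}{\arg\max}\; x_{\text{new}}^\top \hat{w}_c$$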
###Markdown FASHION-MNIST ###Code
mnistCM = regressOneHot(mnistTrain, mnistTarget, mnistTest, mnistCorrect, 1)
###Output [DISCRETE PREDICTIONS] Accuracy: 82.18%
###Markdown CIFAR-10 ###Code
cifarCM = regressOneHot(cifarTrain, cifarTarget, cifarTest, cifarCorrect, 1)
###Output [DISCRETE PREDICTIONS] Accuracy: 36.37%
###Markdown FASHION-MNIST CIFAR-10 ###Code
# CONFUSION MATRIX
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output _____no_output_____
###Markdown The results are way better! The accuracies now are: - [Fashion-MNIST] Accuracy: 82.18% (before it was 39%) - [CIFAR-10] Accuracy: 36.37% (before it was 11%) The one-hot encoding actually worked. Indeed, taking this approach, we regress on each of the one-hot-encoded labels, overcoming the issue, described before, of the nominal (versus ordinal) target label. 5. Bonus question Integrating Convolutional Neural Networks (with the LeNet architecture) and the Naive Bayes Classifier Convolutional Neural Networks represent one of the most powerful methods for image classification problems (source). The simplest architecture is LeNet (source): two convolution layers, each followed by a max pooling step, then a flattening phase and a set of fully connected layers. Let's implement the model using Keras: ###Code
%%capture
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.utils import to_categorical

class LeNetCNN:

    def reshape(self, train, target, test, correct, num_classes, input_shape):
        # DESIRED INPUT SHAPE
        h, w, c = self.input_shape = input_shape
        self.num_classes = num_classes
        # RESHAPE
        # Train set
        self.train = train.reshape((train.shape[0], h, w, c)).astype('float32')
        self.target_bin = to_categorical(target, num_classes)
        # Test set
        self.test = test.reshape((test.shape[0], h, w, c)).astype('float32')
        self.correct_bin = to_categorical(correct, num_classes)
        return self.train, self.test

    def buildAndRun(self, batch_size, epochs):
        # MODEL CONSTRUCTION (LeNet architecture)
        model = Sequential()
        model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=self.input_shape))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(128, activation='relu', name="intermediate"))
        model.add(Dropout(0.5))
        model.add(Dense(self.num_classes, activation='softmax'))
        # MODEL COMPILING
        model.compile(loss="categorical_crossentropy", optimizer="adadelta", metrics=['accuracy'])
        # TRAIN
        model.fit(self.train, self.target_bin, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.1)
        # PREDICT
        score = model.evaluate(self.test, self.correct_bin, verbose=0)
        print("\nConvolutional Neural Network:")
        print(' - Loss: %.2f' % (score[0]))
        print(' - Accuracy: %.2f%%' % (score[1]*100))
        return model
###Output _____no_output_____
###Markdown Now let's run the model on our two datasets: FASHION-MNIST ###Code
# BUILD, RESHAPE THE DATASETS AND RUN THE CNN
cnn = LeNetCNN()
train, test = cnn.reshape(mnistTrain, mnistTarget, mnistTest, mnistCorrect, num_classes = 10, input_shape = (28,28,1))
model = cnn.buildAndRun(batch_size = 128, epochs = 10)
###Output Train on 54000 samples, validate on 6000 samples Epoch 1/10 54000/54000 [==============================] - 71s 1ms/step - loss: 0.7031 - acc: 0.7400 - val_loss: 0.4586 - val_acc: 0.8315 Epoch 2/10 54000/54000
[==============================] - 72s 1ms/step - loss: 0.4707 - acc: 0.8298 - val_loss: 0.3906 - val_acc: 0.8635 Epoch 3/10 54000/54000 [==============================] - 77s 1ms/step - loss: 0.4097 - acc: 0.8517 - val_loss: 0.3472 - val_acc: 0.8797 Epoch 4/10 54000/54000 [==============================] - 68s 1ms/step - loss: 0.3736 - acc: 0.8653 - val_loss: 0.3234 - val_acc: 0.8838 Epoch 5/10 54000/54000 [==============================] - 67s 1ms/step - loss: 0.3511 - acc: 0.8734 - val_loss: 0.3051 - val_acc: 0.8877 Epoch 6/10 54000/54000 [==============================] - 67s 1ms/step - loss: 0.3306 - acc: 0.8814 - val_loss: 0.2944 - val_acc: 0.8953 Epoch 7/10 54000/54000 [==============================] - 67s 1ms/step - loss: 0.3173 - acc: 0.8856 - val_loss: 0.2823 - val_acc: 0.8978 Epoch 8/10 54000/54000 [==============================] - 67s 1ms/step - loss: 0.3038 - acc: 0.8892 - val_loss: 0.2759 - val_acc: 0.9040 Epoch 9/10 54000/54000 [==============================] - 67s 1ms/step - loss: 0.2957 - acc: 0.8935 - val_loss: 0.2699 - val_acc: 0.9042 Epoch 10/10 54000/54000 [==============================] - 67s 1ms/step - loss: 0.2849 - acc: 0.8966 - val_loss: 0.2800 - val_acc: 0.9037 Convolutional Neural Network: - Loss: 0.26 - Accuracy: 90.57% ###Markdown The accuracy of the output of the neural network is not bad at all. However, I'm not interested in it, but in using the intermediate model built after the two "Convolution -> ReLU activation -> Pooling" phases, right after the outputs are flattened.Now the intermediate_model will be used to generate the intermediate trainset and testset which will be given as input to the Naive Bayes Classifier. ###Code # EXTRACT THE MODEL OF THE INTERMEDIATE LAYER model_intermediate = Model(inputs=model.input, outputs=model.get_layer("intermediate").output) # PREDICT TO GET THE INTERMEDIATE TRAINSET AND TESTSET train_intermediate = model_intermediate.predict(train) test_intermediate = model_intermediate.predict(test) # CLASSIFY MNIST mnistCM = classify(train = train_intermediate, target = mnistTarget, test = test_intermediate, correct = mnistCorrect) ###Output Train time: 0.255 seconds Test time: 0.159 seconds Accuracy: 89.81% LogLikelihood Loss: 1.30 ###Markdown The accuracy of the Naive Bayes Classifier (using as inputs the outputs of the convolutional layers) is very high. 
Let's see what happens with the CIFAR-10 dataset: CIFAR-10 ###Code
# BUILD, RESHAPE THE DATASETS AND RUN THE CNN
cnn = LeNetCNN()
train, test = cnn.reshape(cifarTrain, cifarTarget, cifarTest, cifarCorrect, num_classes = 10, input_shape = (32,32,3))
model = cnn.buildAndRun(batch_size = 128, epochs = 10)
###Output Train on 45000 samples, validate on 5000 samples Epoch 1/10 45000/45000 [==============================] - 91s 2ms/step - loss: 1.9032 - acc: 0.3108 - val_loss: 1.6068 - val_acc: 0.4172 Epoch 2/10 45000/45000 [==============================] - 86s 2ms/step - loss: 1.5847 - acc: 0.4348 - val_loss: 1.4236 - val_acc: 0.4908 Epoch 3/10 45000/45000 [==============================] - 98s 2ms/step - loss: 1.4572 - acc: 0.4832 - val_loss: 1.3557 - val_acc: 0.5252 Epoch 4/10 45000/45000 [==============================] - 105s 2ms/step - loss: 1.3718 - acc: 0.5148 - val_loss: 1.2849 - val_acc: 0.5334 Epoch 5/10 45000/45000 [==============================] - 100s 2ms/step - loss: 1.3163 - acc: 0.5366 - val_loss: 1.2414 - val_acc: 0.5712 Epoch 6/10 45000/45000 [==============================] - 93s 2ms/step - loss: 1.2624 - acc: 0.5542 - val_loss: 1.1885 - val_acc: 0.5890 Epoch 7/10 45000/45000 [==============================] - 87s 2ms/step - loss: 1.2212 - acc: 0.5742 - val_loss: 1.1503 - val_acc: 0.5972 Epoch 8/10 45000/45000 [==============================] - 87s 2ms/step - loss: 1.1852 - acc: 0.5840 - val_loss: 1.1831 - val_acc: 0.5792 Epoch 9/10 45000/45000 [==============================] - 87s 2ms/step - loss: 1.1454 - acc: 0.5998 - val_loss: 1.0722 - val_acc: 0.6280 Epoch 10/10 45000/45000 [==============================] - 87s 2ms/step - loss: 1.1217 - acc: 0.6064 - val_loss: 1.0748 - val_acc: 0.6262 Convolutional Neural Network: - Loss: 1.11 - Accuracy: 60.69%
###Markdown Also in the CIFAR-10 case (as with Fashion-MNIST), the accuracy of the neural network output is better than the one provided by the pure Naive Bayes Classifier. However, as said before, the interest is not in the network output but in the intermediate model built after the two "Convolution -> ReLU activation -> Pooling" phases, right after the outputs are flattened. Now the intermediate model will be used to generate the intermediate trainset and testset, which will be given as input to the Naive Bayes Classifier. ###Code
# EXTRACT THE MODEL OF THE INTERMEDIATE LAYER
model_intermediate = Model(inputs=model.input, outputs=model.get_layer("intermediate").output)
# PREDICT TO GET THE INTERMEDIATE TRAINSET AND TESTSET
train_intermediate = model_intermediate.predict(train)
test_intermediate = model_intermediate.predict(test)
# CLASSIFY CIFAR
cifarCM = classify(train = train_intermediate, target = cifarTarget, test = test_intermediate, correct = cifarCorrect)
###Output Train time: 0.165 seconds Test time: 0.152 seconds Accuracy: 60.07% LogLikelihood Loss: 2.69
###Markdown FASHION-MNIST CIFAR-10 ###Code
# PLOT THE CONFUSION MATRICES
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output _____no_output_____
###Markdown The performances are way better! We obtain 89.81% accuracy on the Fashion-MNIST dataset, and 60.07% on CIFAR-10. This means that even very simple models, like the Naive Bayes Classifier, can be hugely helped by prepending powerful feature extractors like CNNs! Trying the grayscale CIFAR-10 Let's try the same for the grayscale CIFAR-10.
In the first part of the notebook, the Naive Bayes Classifier performed worse with the grayscale CIFAR-10. Let's see what happens with this new hybrid model: CIFAR-10 (grayscale) ###Code
# BUILD, RESHAPE THE DATASETS AND RUN THE CNN
cnn = LeNetCNN()
train, test = cnn.reshape(cifarGrayTrain, cifarTarget, cifarGrayTest, cifarCorrect, num_classes = 10, input_shape = (32,32,1))
model = cnn.buildAndRun(batch_size = 128, epochs = 10)

# EXTRACT THE MODEL OF THE INTERMEDIATE LAYER
model_intermediate = Model(inputs=model.input, outputs=model.get_layer("intermediate").output)
# PREDICT TO GET THE INTERMEDIATE TRAINSET AND TESTSET
train_intermediate = model_intermediate.predict(train)
test_intermediate = model_intermediate.predict(test)
# CLASSIFY CIFAR
cifarCM = classify(train = train_intermediate, target = cifarTarget, test = test_intermediate, correct = cifarCorrect)
###Output Train time: 0.201 seconds Test time: 0.169 seconds Accuracy: 63.93% LogLikelihood Loss: 3.77
###Markdown This time the grayscale version actually helps: the hybrid CNN + Naive Bayes model reaches 63.93% accuracy on grayscale CIFAR-10, slightly better than the 60.07% obtained on the coloured version, whereas on raw pixels the grayscale conversion had hurt the classifier.
CNN-based-Handwritten-Hindi-Text-Recognition-main/CNN Model 2.ipynb
###Markdown ###Code from google.colab import drive drive.mount("/content/drive/") ###Output Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True). ###Markdown Importing Libraries and Importing Dataset ###Code #importing necessary libraries import os import keras import matplotlib import cv2 import numpy as np import skimage.io as io import pandas as pd import matplotlib.pyplot as plt from scipy import interp from itertools import cycle from keras.layers import * from keras.utils import * from keras.optimizers import Adam from keras.models import * from sklearn.model_selection import train_test_split from sklearn.utils import shuffle from sklearn import model_selection import sklearn.metrics as metrics from sklearn.metrics import roc_curve, auc from sklearn.metrics import roc_auc_score ###Output _____no_output_____ ###Markdown Reading the data from the disk ###Code # reading data from the disk storage data= pd.read_csv(r'/content/drive/My Drive/devanagari-character-set.csv') data.shape size=data.shape[0] # shape of the data is 92000 images # and each image is 32x32 with 28 pixels of the region representing the actual text # and 4 pixels as padding #creating a temp type array of our dataset array=data.values #X is for input values and Y is for output given on that input attributes X=array[:,0:1024].astype(float) Y=array[:,1024] ###Output _____no_output_____ ###Markdown Pre-processing for Y values ###Code #collecting the digit value from Y[i] i=0 Y_changed=np.ndarray(Y.shape) for name in Y: x = name.split('_') if(x[0]=='character'): Y_changed[i]=int(x[1]) elif x[0]=='digit': Y_changed[i]=(37 + int(x[1])) i=i+1 # copy the contents of the array to our original array Y=Y_changed #removing the extra elements after memory allocation for numpy array Y=Y[0:size].copy() print("The processed Y shape is "+str(Y.shape)) ###Output The processed Y shape is (92000,) ###Markdown Train and Test Split ###Code #size of the testing data split_size=0.20 #seed value for keeping same randomness in training and testing dataset seed=6 #splitting of the data X_train,X_test,Y_train,Y_test=model_selection.train_test_split(X,Y,test_size=split_size,random_state=seed) ###Output _____no_output_____ ###Markdown Reshaping the data ###Code # reshaping the data in order to convert the given 1D array of an image to actual grid representaion X_train = X_train.reshape((size*4)//5,32,32,1) print(X_train.shape) Y_train = Y_train.reshape((size*4)//5,1) print(Y_train.shape) X_test = X_test.reshape(size//5,32,32,1) print(X_test.shape) Y_test = Y_test.reshape(size//5,1) print(Y_test.shape) ###Output (73600, 32, 32, 1) (73600, 1) (18400, 32, 32, 1) (18400, 1) ###Markdown Creating a reference dictionary ###Code # a reference array for final classification of data # reference = {1: 'ka', 2: 'kha', 3: 'ga', 4: 'gha', 5: 'kna', 6: 'cha', 7: 'chha', 8: 'ja', 9: 'jha', 10: 'yna', 11: 'taamatar', 12: 'thaa', 13: 'daa', 14: 'dhaa', 15: 'adna', 16: 'tabala', 17: 'tha', 18: 'da', 19: 'dha', 20: 'na', 21: 'pa', 22: 'pha', 23: 'ba', 24: 'bha', 25: 'ma', 26: 'yaw', 27: 'ra', 28: 'la', 29: 'waw', 30: 'motosaw', 31: 'petchiryakha', 32: 'patalosaw', 33: 'ha', 34: 'chhya', 35: 'tra', 36: 'gya', 37: 0, 38: 1, 39: 2, 40: 3, 41: 4, 42: 5, 43: 6, 44: 7, 45: 8, 46: 9} reference = {1: 'क', 2: 'ख', 3: 'ग', 4: 'घ', 5: 'ङ', 6: 'च', 7: 'छ', 8: 'ज', 9: 'झ', 10: 'ञ', 11: 'ट', 12: 'ठ', 13: 'ड', 14: 'ढ', 15: 'ण', 16: 'त', 17: 'थ', 18: 'द', 19: 'ध', 20: 'न', 21: 'प', 22: 'फ', 23: 'ब', 24: 'भ', 
25: 'म', 26: 'य', 27: 'र', 28: 'ल', 29: 'व', 30: 'स', 31: 'ष', 32: 'श', 33: 'ह', 34: 'श्र', 35: 'त्र', 36: 'ज्ञ', 37: 0, 38: 1, 39: 2, 40: 3, 41: 4, 42: 5, 43: 6, 44: 7, 45: 8, 46: 9} labels=['क', 'ख', 'ग', 'घ', 'ङ', 'च', 'छ', 'ज', 'झ', 'ञ', 'ट', 'ठ', 'ड', 'ढ', 'ण', 'त', 'थ', 'द', 'ध', 'न', 'प', 'फ', 'ब', 'भ', 'म', 'य', 'र', 'ल', 'व', 'स', 'ष', 'श', 'ह', 'श्र', 'त्र', 'ज्ञ', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9'] print(reference) print(type(reference)) ###Output {1: 'क', 2: 'ख', 3: 'ग', 4: 'घ', 5: 'ङ', 6: 'च', 7: 'छ', 8: 'ज', 9: 'झ', 10: 'ञ', 11: 'ट', 12: 'ठ', 13: 'ड', 14: 'ढ', 15: 'ण', 16: 'त', 17: 'थ', 18: 'द', 19: 'ध', 20: 'न', 21: 'प', 22: 'फ', 23: 'ब', 24: 'भ', 25: 'म', 26: 'य', 27: 'र', 28: 'ल', 29: 'व', 30: 'स', 31: 'ष', 32: 'श', 33: 'ह', 34: 'श्र', 35: 'त्र', 36: 'ज्ञ', 37: 0, 38: 1, 39: 2, 40: 3, 41: 4, 42: 5, 43: 6, 44: 7, 45: 8, 46: 9} <class 'dict'> ###Markdown Normalization and shuffling of data ###Code #normalization of data X_train = X_train/255 X_test = X_test/255 X_train, Y_train = shuffle(X_train, Y_train, random_state = 2) X_test, Y_test = shuffle(X_test, Y_test, random_state = 2) ###Output _____no_output_____ ###Markdown Testing and Validation split ###Code X_test, X_val, Y_test, Y_val = train_test_split(X_test, Y_test, test_size = 0.6, random_state = 1) print(X_test.shape) print(X_val.shape) ###Output (7360, 32, 32, 1) (11040, 32, 32, 1) ###Markdown Splitting of Y values into 46 categories for training, testing and validation ###Code Y_test = to_categorical(Y_test) Y_val = to_categorical(Y_val) Y_train = to_categorical(Y_train) inputs = Input(shape = (32,32,1)) conv0 = Conv2D(64, 3, padding = 'same', activation = 'relu')(inputs) conv1 = Conv2D(64, 3, padding='same', activation='relu')(conv0) conv2 = Conv2D(128, 3, padding='same', activation='relu')(conv1) pool2 = MaxPooling2D((2,2))(conv2) conv3 = Conv2D(128, 3, padding='same', activation='relu')(pool2) conv4 = Conv2D(256, 5, padding='same', activation='relu')(conv3) pool4 = MaxPooling2D((2,2))(conv4) conv5 = Conv2D(256, 5, padding='same', activation='relu')(pool4) flat = Flatten()(conv5) dense0 = Dense(512, activation='relu')(flat) dense1 = Dense(128, activation='relu')(dense0) dense2 = Dense(64, activation='relu')(dense1) dense3 = Dense(47, activation='softmax')(dense2) model = Model(inputs,dense3) print(model.summary()) ###Output Model: "functional_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 32, 32, 1)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 32, 32, 64) 640 _________________________________________________________________ conv2d_1 (Conv2D) (None, 32, 32, 64) 36928 _________________________________________________________________ conv2d_2 (Conv2D) (None, 32, 32, 128) 73856 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 16, 16, 128) 0 _________________________________________________________________ conv2d_3 (Conv2D) (None, 16, 16, 128) 147584 _________________________________________________________________ conv2d_4 (Conv2D) (None, 16, 16, 256) 819456 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 256) 0 _________________________________________________________________ conv2d_5 (Conv2D) (None, 8, 8, 256) 1638656 _________________________________________________________________ flatten 
(Flatten) (None, 16384) 0 _________________________________________________________________ dense (Dense) (None, 512) 8389120 _________________________________________________________________ dense_1 (Dense) (None, 128) 65664 _________________________________________________________________ dense_2 (Dense) (None, 64) 8256 _________________________________________________________________ dense_3 (Dense) (None, 47) 3055 ================================================================= Total params: 11,183,215 Trainable params: 11,183,215 Non-trainable params: 0 _________________________________________________________________ None
###Markdown Data Augmentation: https://keras.io/api/preprocessing/image/ The full `ImageDataGenerator` signature, with its default values, is:

    tf.keras.preprocessing.image.ImageDataGenerator(
        featurewise_center=False, samplewise_center=False,
        featurewise_std_normalization=False, samplewise_std_normalization=False,
        zca_whitening=False, zca_epsilon=1e-06,
        rotation_range=0, width_shift_range=0.0, height_shift_range=0.0,
        brightness_range=None, shear_range=0.0, zoom_range=0.0,
        channel_shift_range=0.0, fill_mode="nearest", cval=0.0,
        horizontal_flip=False, vertical_flip=False, rescale=None,
        preprocessing_function=None, data_format=None,
        validation_split=0.0, dtype=None,
    )

###Code
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import *

datagen = ImageDataGenerator(
    rotation_range = 20,
    width_shift_range = 0.2,
    height_shift_range = 0.2,
    shear_range=0.2,
    zoom_range = 0.2,
    brightness_range=[0.4,1.5]
)
datagen.fit(X_train)

model.compile(Adam(lr = 10e-4), loss = 'categorical_crossentropy', metrics = ['accuracy'])  # note: 10e-4 is 1e-3
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.8, patience=3)
history = model.fit_generator(datagen.flow(X_train, Y_train, batch_size = 200), epochs = 10, validation_data = (X_val, Y_val), callbacks = [reduce_lr])

# Accuracy
print(history)
fig1, ax_acc = plt.subplots()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Model - Accuracy')
plt.legend(['Training', 'Validation'], loc='lower right')
plt.show()

# Loss
fig2, ax_loss = plt.subplots()
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Model - Loss')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['Training', 'Validation'], loc='upper right')  # legend must come after the plot calls, otherwise there are no lines to label
plt.show()
###Output _____no_output_____
###Markdown Model Testing and Accuracy check
* model.evaluate()
* Precision, Recall, F1-score, Support
* Plot ROC and compare AUC ###Code
model.evaluate(X_test, Y_test, batch_size = 400, verbose = 1)
Y_pred = model.predict(x = X_test, verbose = 1)
Y_score = model.predict(X_test)
print(Y_score)

n_classes = 47  # NOTE: label 0 is never used (classes run from 1 to 46), so its per-class curve is degenerate

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(Y_test[:, i], Y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(Y_test.ravel(), Y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at these points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
    mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])

# Plot
all ROC curves lw=2 plt.figure(1) plt.plot(fpr["micro"], tpr["micro"], label='micro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["micro"]), color='deeppink', linestyle=':', linewidth=4) plt.plot(fpr["macro"], tpr["macro"], label='macro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["macro"]), color='navy', linestyle=':', linewidth=4) colors = cycle(['aqua', 'darkorange', 'cornflowerblue']) for i, color in zip(range(n_classes), colors): plt.plot(fpr[i], tpr[i], color=color, lw=lw, label='ROC of class {0} (area = {1:0.2f})' ''.format(i, roc_auc[i])) plt.plot([0, 1], [0, 1], 'k--', lw=lw) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Some extension of Receiver operating characteristic to multi-class') plt.legend(loc="lower right") figure = plt.gcf() # get current figure figure.set_size_inches(15,10) plt.show() Y_pred = np.argmax(Y_pred, axis = 1) print(Y_pred.shape) Y_test = np.argmax(Y_test, axis = 1) print(Y_test.shape) print("Classification report for the model %s:\n%s\n" % (model, metrics.classification_report(Y_test, Y_pred))) ###Output Classification report for the model <tensorflow.python.keras.engine.functional.Functional object at 0x7f66059db6d8>: precision recall f1-score support 1 0.98 0.26 0.41 161 2 0.75 0.53 0.62 167 3 0.87 0.77 0.82 146 4 0.74 0.10 0.17 172 5 0.80 0.03 0.05 151 6 1.00 0.40 0.57 162 7 0.72 0.16 0.26 162 8 0.82 0.55 0.66 184 9 1.00 0.03 0.07 173 10 0.73 0.45 0.55 168 11 0.86 0.04 0.07 159 12 0.00 0.00 0.00 151 13 0.50 0.03 0.06 146 14 1.00 0.01 0.01 154 15 0.64 0.37 0.47 150 16 0.66 0.98 0.79 167 17 0.58 0.22 0.32 169 18 0.37 0.20 0.26 159 19 0.50 0.01 0.01 153 20 0.56 0.87 0.68 180 21 0.83 0.50 0.62 156 22 0.96 0.17 0.29 143 23 0.24 0.68 0.35 166 24 0.38 0.34 0.36 165 25 0.48 0.82 0.61 158 26 0.85 0.31 0.45 149 27 0.62 0.86 0.72 169 28 0.80 0.41 0.54 153 29 0.18 0.75 0.30 151 30 0.94 0.42 0.58 151 31 0.10 0.99 0.18 165 32 0.29 0.70 0.41 155 33 0.80 0.31 0.45 141 34 0.32 0.12 0.17 168 35 0.60 0.85 0.71 150 36 0.60 0.77 0.67 183 37 1.00 0.15 0.26 171 38 0.63 0.99 0.77 179 39 0.73 0.87 0.79 160 40 0.63 0.65 0.64 156 41 0.51 0.25 0.33 162 42 0.99 0.84 0.91 169 43 1.00 0.42 0.59 162 44 1.00 0.49 0.66 137 45 0.76 0.92 0.83 156 46 0.93 0.09 0.16 151 accuracy 0.46 7360 macro avg 0.68 0.45 0.44 7360 weighted avg 0.68 0.46 0.44 7360
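###Markdown Since the network predicts integer class codes, the `reference` dictionary defined earlier can map them back to the actual Devanagari characters. A small illustrative snippet (not part of the original notebook; `.get` guards against the unused code 0): ###Code
# Map the first few predicted codes back to characters (illustrative)
decoded = [reference.get(int(c), '?') for c in Y_pred[:10]]
print(decoded)
###Output _____no_output_____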
bin/jupyter/.ipynb_checkpoints/decision-tree-checkpoint.ipynb
###Markdown FA (weighted) Classification ###Code
df.wa = read_excel( "../../results/df-water-access.xlsx" ,sheet=1)
df.exp = read_excel("../../results/df-water-explore.xlsx" ,sheet=1)
df.cluster = read_excel("../../results/df-fa-seven-cluster-rank.xlsx" ,sheet=1)
df.wb = read_excel("../../results/df-wb.xlsx" ,sheet=1 )

df.exp$clusters <- as.factor(df.cluster$clusters)
df <- merge(x = df.exp, y = df.wb, by = c("Country"))
df <- df[, c(1:13, 17,21)]

# scaling the World Bank data similar to the DHS aggregation, out of 100
df.wb <- df[,c(9:15)]
df.wb <- data.frame(lapply(df.wb, function(x) scale(x, center = FALSE, scale = max(x, na.rm = TRUE)/100)))
df.scale <- cbind(df, df.wb)
df.scale <- df.scale[,c(1:8,15:21)]

# histogram of the cart variable, as a quick look at its distribution
df.a <- df[, c(1:6,8)]
hist(df$cart)

# correlation overview of the explanatory variables
explanatory <- df[,c(2:7, 9:15)]
chart.Correlation(explanatory, histogram=TRUE, pch=19 , tl.cex = .7 )

# Giving unique names to the typology
# "Decentralized" , "Hybrid", "Centralized"
df <- df %>% mutate(clusters=case_when(
  .$clusters=="1" ~ "Decentralized",
  .$clusters=="2" ~ "Hybrid",
  .$clusters=="3" ~ "Centralized",
))
df.scale <- df.scale %>% mutate(clusters=case_when(
  .$clusters=="1" ~ "Decentralized",
  .$clusters=="2" ~ "Hybrid",
  .$clusters=="3" ~ "Centralized",
))
df$clusters <- as.factor(df$clusters)
df.scale$clusters <- as.factor(df.scale$clusters)

write_xlsx(df , '../../results/class.xlsx')
write_xlsx(df.scale , '../../results/class-scale.xlsx')
head(df)
###Output _____no_output_____
###Markdown Tree ###Code
# Make the big (unpruned) tree
form <- as.formula(clusters ~ . - Country)
tree.fwa <- rpart(form, data=df, control=rpart.control(minsplit=4, cp=0.01, xval = nrow(df), maxsurrogate = 0, minbucket = 4 ) )
par(mar=c(1,1,1,1))
pdf(file = "../../docs/manuscript/pdf-image/cp.pdf" , width = 5, height = 5 )
plotcp(tree.fwa)
dev.off()
printcp(tree.fwa)

# size of the plot
options(repr.plot.width=10, repr.plot.height=10)
par(mar = c(1,1,1,1))
par(cex=1)

# Interactively prune the tree
tree.pru <- prune(tree.fwa, cp=0.017) # interactively trim the tree

# Development of fancy plots
pdf(file = "../../docs/manuscript/pdf-image/rpart.pdf" , width = 7, height = 7 )
fancyRpartPlot(tree.pru, main ='', sub ='', caption='' , palettes=c("Blues","Greens", "Reds" ))
dev.off()
summary(tree.pru)

# Development of the fancy variable importance plot
tree.fwa$variable.importance
var.imp = read_excel( "../../results/variable-importance.xlsx" ,sheet=1)
s <- ggplot(var.imp , aes(x = reorder(Variable, + Importance), y = Importance)) +
  geom_bar(stat="identity", fill="steelblue") +
  theme_minimal() +
  coord_flip() +
  theme(text = element_text(size=17)) + # font size
  theme(axis.text.x = element_text(size=17), axis.text.y = element_text(size=17)) + # adjusting the tick sizes
  xlab("")
pdf(file = "../../docs/manuscript/pdf-image/var-imp.pdf" , width = 12, height = 7 )
par(mar=c(1,1,1,10))
s
dev.off()
###Output _____no_output_____
DeepLearningNN.ipynb
###Markdown A simple Deep Learning Model for Classifying Sentiment ###Code
# Import libraries
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from keras.utils.np_utils import to_categorical
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
import re
import warnings; warnings.simplefilter('ignore')

# Load data
tweetsInfo = pd.read_csv('AllTweetInfo.csv')
tweetsInfo.head(2)
###Output _____no_output_____
###Markdown Create Input Features and Train Validation Split ###Code
# Get input features
ftr_col = 'text_features'
max_features = 2000  # vocabulary size cap -- ASSUMED value: the original used 'max_fatures' without ever defining it
tokenizer = Tokenizer(num_words=max_features, split=' ')
tokenizer.fit_on_texts(tweetsInfo[ftr_col].values)
X_t = tokenizer.texts_to_sequences(tweetsInfo[ftr_col].values)
X_padded = pad_sequences(X_t)

# Create train and validation split
y = pd.get_dummies(tweetsInfo['sentiment']).values
X_train, X_test, Y_train, Y_test = train_test_split(X_padded, y, test_size = 0.3, random_state = 27)
#print(X_train.shape,Y_train.shape)
#print(X_test.shape,Y_test.shape)
val_size = 100
X_validate = X_test[-val_size:]
Y_validate = Y_test[-val_size:]
X_test = X_test[:-val_size]
Y_test = Y_test[:-val_size]
###Output _____no_output_____
###Markdown A simple LSTM Network ###Code
def CreateModel(X_shape):
    lstm_out, l1, l2, em = 196, 2, 2, 56
    model = Sequential()
    model.add(Embedding(max_features, em, input_length = X_shape))
    model.add(LSTM(lstm_out, dropout=0.2))
    model.add(Dense(l1, activation='relu'))
    model.add(Dense(l2, activation='softmax'))  # softmax output (was 'relu', which cannot produce class probabilities for categorical_crossentropy)
    model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics = ['accuracy'])
    return model

# embed_dim = 56
# lstm_out1 = 196
# lstm_out2 = 196
# l1 = 2
# l2 = 2
# X_shape = X_padded.shape[1]

# Helper function for model creation and training
def CreateAndTrainModel(batch_size, epochs, X_shape, X_train, Y_train):
    #model = KerasClassifier(build_fn = CreateModel)
    currmodel = CreateModel(X_shape)
    print(currmodel.summary())
    print()
    print('Training Model')
    currmodel.fit(X_train, Y_train, epochs = epochs, batch_size=batch_size, verbose = 2)
    print()
    return currmodel

X_shape = X_padded.shape[1]
m = CreateAndTrainModel(32, 10, X_shape, X_train, Y_train)

print("Evaluation Scores")
score, acc = m.evaluate(X_test, Y_test, verbose = 2, batch_size = 32)
print("score: %.2f" % (score))
print("acc: %.2f" % (acc))

# define the grid search parameters
# param_grid = dict(batch_size=batch_size, epochs=epochs)
# grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
# grid_result = grid.fit(X_train, Y_train)
# print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
# batch_size = [10, 20, 40, 60, 80, 100]
# epochs = [10, 50, 100]
#creat_train_eval(lstm_out1, lstm_out2, l1, l2, em, batchsize, X_shape)
# NN = KerasClassifier(build_fn=CreateModel, verbose=0)
# # Default params
# batchsize = 32
# # Params to be tuned
# epochs = [5, 10]
# batches = [5, 10, 100]
# optimizers = ['rmsprop', 'adam']
# # Tune params
# hyperparameters = dict(optimizer=optimizers, epochs=epochs, batch_size=batches)
# grid = GridSearchCV(estimator=NN, param_grid=hyperparameters)
# grid_result = grid.fit(X_train, Y_train)
# grid_result.best_params_
###Output _____no_output_____
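###Markdown The commented-out block above gestures at a hyperparameter search that was never wired up. Below is a minimal sketch of how it could be connected, reusing `CreateModel` from above; the grid values and `cv=3` are illustrative assumptions, not settings from the original experiments, and running this is computationally expensive. ###Code
# HYPOTHETICAL grid-search sketch (not run in the original notebook)
nn = KerasClassifier(build_fn=CreateModel, X_shape=X_shape, verbose=0)
param_grid = dict(batch_size=[32, 64],  # illustrative values
                  epochs=[5, 10])
grid = GridSearchCV(estimator=nn, param_grid=param_grid, cv=3)
grid_result = grid.fit(X_train, Y_train)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
###Output _____no_output_____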
jupyter_book/book_template/content/03/4/Introduction_to_Tables.ipynb
###Markdown We can now apply Python to analyze data. We will work with data stored in Table structures. Tables are a fundamental way of representing data sets. A table can be viewed in two ways: * a sequence of named columns that each describe a single attribute of all entries in a data set, or * a sequence of rows that each contain all information about a single individual in a data set. We will study tables in great detail in the next several chapters. For now, we will just introduce a few methods without going into technical details. The table `cones` has been imported for us; later we will see how, but here we will just work with it. First, let's take a look at it. ###Code cones ###Output _____no_output_____ ###Markdown The table has six rows. Each row corresponds to one ice cream cone. The ice cream cones are the *individuals*. Each cone has three attributes: flavor, color, and price. Each column contains the data on one of these attributes, and so all the entries of any single column are of the same kind. Each column has a label. We will refer to columns by their labels. A table method is just like a function, but it must operate on a table. So the call looks like `name_of_table.method(arguments)` For example, if you want to see just the first two rows of a table, you can use the table method `show`. ###Code cones.show(2) ###Output _____no_output_____ ###Markdown You can replace 2 by any number of rows. If you ask for more than six, you will only get six, because `cones` only has six rows. Choosing Sets of Columns The method `select` creates a new table consisting of only the specified columns. ###Code cones.select('Flavor') ###Output _____no_output_____ ###Markdown This leaves the original table unchanged. ###Code cones ###Output _____no_output_____ ###Markdown You can select more than one column, by separating the column labels by commas. ###Code cones.select('Flavor', 'Price') ###Output _____no_output_____ ###Markdown You can also *drop* columns you don't want. The table above can be created by dropping the `Color` column. ###Code cones.drop('Color') ###Output _____no_output_____ ###Markdown You can name this new table and look at it again by just typing its name. ###Code no_colors = cones.drop('Color') no_colors ###Output _____no_output_____ ###Markdown Like `select`, the `drop` method creates a smaller table and leaves the original table unchanged. In order to explore your data, you can create any number of smaller tables by choosing or dropping columns. It will do no harm to your original data table. Sorting Rows The `sort` method creates a new table by arranging the rows of the original table in ascending order of the values in the specified column. Here the `cones` table has been sorted in ascending order of the price of the cones. ###Code cones.sort('Price') ###Output _____no_output_____ ###Markdown To sort in descending order, you can use an *optional* argument to `sort`. As the name implies, optional arguments don't have to be used, but they can be used if you want to change the default behavior of a method. By default, `sort` sorts in increasing order of the values in the specified column. To sort in decreasing order, use the optional argument `descending=True`. ###Code cones.sort('Price', descending=True) ###Output _____no_output_____ ###Markdown Like `select` and `drop`, the `sort` method leaves the original table unchanged. Selecting Rows that Satisfy a Condition The `where` method creates a new table consisting only of the rows that satisfy a given condition.
In this section we will work with a very simple condition, which is that the value in a specified column must be equal to a value that we also specify. Thus the `where` method has two arguments. The code in the cell below creates a table consisting only of the rows corresponding to chocolate cones. ###Code cones.where('Flavor', 'chocolate') ###Output _____no_output_____ ###Markdown The arguments, separated by a comma, are the label of the column and the value we are looking for in that column. The `where` method can also be used when the condition that the rows must satisfy is more complicated. In those situations the call will be a little more complicated as well. It is important to provide the value exactly. For example, if we specify `Chocolate` instead of `chocolate`, then `where` correctly finds no rows where the flavor is `Chocolate`. ###Code cones.where('Flavor', 'Chocolate') ###Output _____no_output_____ ###Markdown Like all the other table methods in this section, `where` leaves the original table unchanged. Example: Salaries in the NBA "The NBA is the highest paying professional sports league in the world," [reported CNN](http://edition.cnn.com/2015/12/04/sport/gallery/highest-paid-nba-players/) in March 2016. The table `nba` contains the [salaries of all National Basketball Association players](https://www.statcrunch.com/app/index.php?dataid=1843341) in 2015-2016. Each row represents one player. The columns are:

| **Column Label** | Description |
|--------------------|-----------------------------------------------------|
| `PLAYER` | Player's name |
| `POSITION` | Player's position on team |
| `TEAM` | Team name |
| `SALARY` | Player's salary in 2015-2016, in millions of dollars |

The code for the positions is PG (Point Guard), SG (Shooting Guard), PF (Power Forward), SF (Small Forward), and C (Center). But what follows doesn't involve details about how basketball is played. The first row shows that Paul Millsap, Power Forward for the Atlanta Hawks, had a salary of almost $\$18.7$ million in 2015-2016. ###Code nba ###Output _____no_output_____ ###Markdown Fans of Stephen Curry can find his row by using `where`. ###Code nba.where('PLAYER', 'Stephen Curry') ###Output _____no_output_____ ###Markdown We can also create a new table called `warriors` consisting of just the data for the Golden State Warriors. ###Code warriors = nba.where('TEAM', 'Golden State Warriors') warriors ###Output _____no_output_____ ###Markdown By default, the first 10 lines of a table are displayed. You can use `show` to display more or fewer. To display the entire table, use `show` with no argument in the parentheses. ###Code warriors.show() ###Output _____no_output_____ ###Markdown The `nba` table is sorted in alphabetical order of the team names. To see how the players were paid in 2015-2016, it is useful to sort the data by salary. Remember that by default, the sorting is in increasing order. ###Code nba.sort('SALARY') ###Output _____no_output_____ ###Markdown These figures are somewhat difficult to compare as some of these players changed teams during the season and received salaries from more than one team; only the salary from the last team appears in the table. The CNN report is about the other end of the salary scale – the players who are among the highest paid in the world. To identify these players we can sort in descending order of salary and look at the top few rows. ###Code nba.sort('SALARY', descending=True) ###Output _____no_output_____
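###Markdown The methods can also be combined. As one additional example (not in the original text, but using only the methods introduced above), we can first use `where` to pick out the point guards and then `sort` to rank them by salary. ###Code nba.where('POSITION', 'PG').sort('SALARY', descending=True) ###Output _____no_output_____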
notebook/data-preprocessing.ipynb
###Markdown Data Importing and Preprocessing - Women's March 2017 dataset This notebook covers reading the tweet data from `.json` files and preprocessing the information contained in those files. When the Twitter API returns the tweets retrieved through it, they are delivered as a JSON file with the many fields and metadata that make up a tweet. Much of the data contained in the JSON is unnecessary for the analysis we are aiming at, so the code snippets below keep the information needed for the analysis and discard the rest. For this work we use specific libraries beyond what the language itself provides:

- **`os`**: handles details specific to the operating system of the machine running the analysis, regardless of which operating system it is;
- **`re`**: handles regular expressions for recognizing patterns in text;
- **`glob`**: handles Unix-style file-name patterns;
- **`pandas`**: a toolkit for data analysis;
- **`json_normalize`**: a pandas submodule for handling JSON files and data. ###Code
import os
import re
import glob
import pandas as pd
from pandas.io.json import json_normalize
###Output _____no_output_____
###Markdown With the help of the **`re`** library, functions were created to convert some pieces of information present in the tweet text into simple, concise fields for later analysis. ###Code
# Helper functions
def get_hashtags(text):
    s = re.findall('(?:^|\s)[##]{1}(\w+)', text)
    return s if len(s) > 0 else ''

def get_mentions(text):
    s = re.findall('(?:^|\s?|\.?)[@@]{1}([^\s#<>[\]|{}:;,.\(\)=+]+)', text)
    return s if len(s) > 0 else ''

def get_source(text):
    s = re.findall('<a\s+?href=\"[^\"]*\"\s+?rel=\"[^\"]*\">([^<>]+)<\/a>', text)
    return s[0] if len(s) > 0 else ''

def get_urls(text):
    s = re.findall('http[s]?://(?:[a-z]|[0-9]|[$-_@.&amp;+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+', text)
    return s[0] if len(s) > 0 else ''

path = '../data/'
filenames = glob.glob(os.path.join(path, '*.json'))
filenames.sort()
###Output _____no_output_____
###Markdown Because of the enormous amount of data retrieved from the API (11,249,944 tweets), the preprocessing was done in blocks. When the data was imported from the API, the tweets were split into 16 files; on import these 16 files were read and preprocessed, and another 16 files were created with the data ready for analysis, yielding 9,170,486 instances.

The data kept for analysis amounts to:

- tweet id
- tweet creation date and time
- source device
- tweet text
- hashtags present in the tweet
- user mentions
- urls present in the tweet
- number of times the tweet was favorited
- number of times the tweet was retweeted
- user location
- user name
- username
- the user's follower count
- whether the user is verified or not

After preprocessing the tweets, where only 14 features were selected for observation, we managed to reduce our sample from an initial file of about 96 GB to a 4.8 GB file, discarding only the unnecessary metadata and the tweets in a language other than English.
###Code
for file in filenames:
    json_reader = pd.read_json(file, lines=True, chunksize=2048)
    wm_data = pd.DataFrame()
    for chunk in json_reader:
        not_truncated = chunk[chunk['truncated'] == False]
        only_english = not_truncated[not_truncated['lang'] == 'en'].reset_index()
        only_english['hashtags'] = only_english['text'].apply(get_hashtags)
        only_english['mentions'] = only_english['text'].apply(get_mentions)
        only_english['urls'] = only_english['text'].apply(get_urls)
        only_english['source'] = only_english['source'].apply(get_source)
        user_df = json_normalize(only_english['user'])
        # Selecting only few columns
        tweet_df = only_english[['id_str', 'created_at', 'source', 'text', 'hashtags', 'mentions', \
                                 'urls', 'favorite_count', 'retweet_count']]
        user_df = user_df[['location', 'name', 'screen_name', 'followers_count', 'verified']]
        frames = [tweet_df, user_df]
        df = pd.concat(frames, axis=1)
        # DataFrame.append returns a new DataFrame, so the result must be
        # reassigned or the chunk is silently lost
        wm_data = wm_data.append(df)
    # keep only the file name when building the output path
    fp = os.path.basename(file)
    filepath = '../data/clean_{}'.format(fp)
    with open(filepath, 'w') as f:
        f.write(wm_data.to_json(orient='records', lines=True))
###Output _____no_output_____
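###Markdown As a quick sanity check of the helper functions, they can be run on a made-up tweet-like string; the sample text and the expected values in the comments below are illustrative, not from the dataset. ###Code
# Illustrative only: a hand-written tweet-like string
sample = "Marching with @womensmarch today! #WomensMarch #resist https://t.co/abc123"
print(get_hashtags(sample))  # expect something like ['WomensMarch', 'resist']
print(get_mentions(sample))  # expect something like ['womensmarch']
print(get_urls(sample))      # expect something like 'https://t.co/abc123'
###Output _____no_output_____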
c4_wk1_Tensorflow_serving_in_Colab.ipynb
###Markdown **Train and deploy a model with TensorFlow Serving** ###Code
import sys

# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Not running Python 3. Use Runtime > Change runtime type'

# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0

import tensorflow as tf
from tensorflow import keras

# Helper libraries
import matplotlib.pyplot as plt
import os
import numpy as np
import subprocess

print('TensorFlow version: {}'.format(tf.__version__))
###Output Installing dependencies for Colab environment TensorFlow version: 2.6.0
###Markdown Import the MNIST dataset ###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Scale the values of the arrays below to be between 0.0 and 1.0.
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz 11493376/11490434 [==============================] - 0s 0us/step 11501568/11490434 [==============================] - 0s 0us/step
###Markdown ###Code
# Reshape the arrays below.
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)

print('\ntrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))

idx = 42
plt.imshow(test_images[idx].reshape(28,28), cmap=plt.cm.binary)
plt.title('True Label: {}'.format(test_labels[idx]), fontdict={'size': 16})
plt.show()

# Create a model.
model = keras.Sequential([
    keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3, strides=2,
                        activation='relu', name='Conv1'),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation=tf.nn.softmax, name='Softmax')
])
model.summary()

# Configure the model for training.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
epochs = 5

# Train the model.
history = model.fit(train_images, train_labels, epochs=epochs)

# Evaluate the model on the test images.
results_eval = model.evaluate(test_images, test_labels, verbose=0)

for metric, value in zip(model.metrics_names, results_eval):
    print(metric + ': {:.3}'.format(value))

import tempfile

MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))

keras.models.save_model(
    model,
    export_path,
    overwrite=True,
    include_optimizer=True,
    save_format=None,
    signatures=None,
    options=None
)

print('\nSaved model:')
!ls -l {export_path}
###Output export_path = /tmp/1
###Markdown Examine your saved model We'll use the command line utility `saved_model_cli` to look at the [MetaGraphDefs](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/MetaGraphDef) (the models) and [SignatureDefs](../signature_defs) (the methods you can call) in our SavedModel. See [this discussion of the SavedModel CLI](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#cli-to-inspect-and-execute-savedmodel) in the TensorFlow Guide.
###Code !saved_model_cli show --dir {export_path} --all ###Output MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs: signature_def['__saved_model_init_op']: The given SavedModel SignatureDef contains the following input(s): The given SavedModel SignatureDef contains the following output(s): outputs['__saved_model_init_op'] tensor_info: dtype: DT_INVALID shape: unknown_rank name: NoOp Method name is: signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['Conv1_input'] tensor_info: dtype: DT_FLOAT shape: (-1, 28, 28, 1) name: serving_default_Conv1_input:0 The given SavedModel SignatureDef contains the following output(s): outputs['Softmax'] tensor_info: dtype: DT_FLOAT shape: (-1, 10) name: StatefulPartitionedCall:0 Method name is: tensorflow/serving/predict WARNING: Logging before flag parsing goes to stderr. W1102 10:27:53.778752 139784460539776 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling __init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. Defined Functions: Function Name: '__call__' Option #1 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input') Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Option #2 Callable with: Argument #1 inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs') Argument #2 DType: bool Value: False Argument #3 DType: NoneType Value: None Option #3 Callable with: Argument #1 inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs') Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Option #4 Callable with: Argument #1 Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input') Argument #2 DType: bool Value: True Argument #3 DType: NoneType Value: None Function Name: '_default_save_signature' Traceback (most recent call last): File "/usr/local/bin/saved_model_cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 990, in main args.func(args) File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 691, in show _show_all(args.dir) File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 283, in _show_all _show_defined_functions(saved_model_dir) File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 186, in _show_defined_functions function._list_all_concrete_functions_for_serialization() # pylint: disable=protected-access AttributeError: '_WrapperFunction' object has no attribute '_list_all_concrete_functions_for_serialization' ###Markdown Add TensorFlow Serving distribution URI as a package source: ###Code import sys # We need sudo prefix if not on a Google Colab. 
if 'google.colab' not in sys.modules: SUDO_IF_NEEDED = 'sudo' else: SUDO_IF_NEEDED = '' !echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \ curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add - !{SUDO_IF_NEEDED} apt update ###Output deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 2943 100 2943 0 0 143k 0 --:--:-- --:--:-- --:--:-- 143k OK Get:1 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB] Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB] Get:4 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B] Hit:5 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease Get:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB] Hit:7 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease Get:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB] Get:9 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3,012 B] Get:10 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] Ign:11 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease Ign:12 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease Get:13 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [696 B] Hit:14 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release Get:15 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B] Get:16 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,810 kB] Get:17 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [927 kB] Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,213 kB] Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,837 kB] Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [667 kB] Get:21 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [44.7 kB] Get:23 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [341 B] Get:24 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [348 B] Get:25 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [786 kB] Get:26 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2,400 kB] Get:27 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [633 kB] Get:28 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,434 kB] Fetched 14.1 MB in 3s (4,769 kB/s) Reading package lists... Done Building dependency tree Reading state information... Done 63 packages can be upgraded. Run 'apt list --upgradable' to see them. ###Markdown Install TensorFlow Serving ###Code #!apt autoremove !{SUDO_IF_NEEDED} apt-get install tensorflow-model-server ###Output Reading package lists... 
Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: libnvidia-common-460 Use 'apt autoremove' to remove it. The following NEW packages will be installed: tensorflow-model-server 0 upgraded, 1 newly installed, 0 to remove and 63 not upgraded. Need to get 347 MB of archives. After this operation, 0 B of additional disk space will be used. Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.6.0 [347 MB] Fetched 347 MB in 5s (66.8 MB/s) Selecting previously unselected package tensorflow-model-server. (Reading database ... 155062 files and directories currently installed.) Preparing to unpack .../tensorflow-model-server_2.6.0_all.deb ... Unpacking tensorflow-model-server (2.6.0) ... Setting up tensorflow-model-server (2.6.0) ... ###Markdown Start running Tensorflow Serving ###Code os.environ["MODEL_DIR"] = MODEL_DIR %%bash --bg nohup tensorflow_model_server \ --rest_api_port=8501 \ --model_name=digits_model \ --model_base_path="${MODEL_DIR}" >server.log 2>&1 !tail server.log ###Output 2021-11-02 10:34:41.196601: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 46151 microseconds. 2021-11-02 10:34:41.197083: I tensorflow_serving/servables/tensorflow/saved_model_warmup_util.cc:59] No warmup data file found at /tmp/1/assets.extra/tf_serving_warmup_requests 2021-11-02 10:34:41.197218: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: digits_model version: 1} 2021-11-02 10:34:41.197750: I tensorflow_serving/model_servers/server_core.cc:486] Finished adding/updating models 2021-11-02 10:34:41.197794: I tensorflow_serving/model_servers/server.cc:133] Using InsecureServerCredentials 2021-11-02 10:34:41.197806: I tensorflow_serving/model_servers/server.cc:383] Profiler service is enabled 2021-11-02 10:34:41.198223: I tensorflow_serving/model_servers/server.cc:409] Running gRPC ModelServer at 0.0.0.0:8500 ... [warn] getaddrinfo: address family for nodename not supported 2021-11-02 10:34:41.198611: I tensorflow_serving/model_servers/server.cc:430] Exporting HTTP/REST API at:localhost:8501 ... [evhttp_server.cc : 245] NET_LOG: Entering the event loop ... ###Markdown Make REST requests ###Code import json import random import requests # docs_infra: no_execute !pip install -q requests headers = {"content-type": "application/json"} rando = random.randint(0,len(test_images)-5) data = json.dumps({"signature_name": "serving_default", "instances":test_images[rando:rando+5].tolist()}) json_response = requests.post('http://localhost:8501/v1/models/digits_model:predict', data=data, headers=headers) predictions = json.loads(json_response.text)['predictions'] plt.figure(figsize=(10,15)) for i in range(5): plt.subplot(1,5,i+1) plt.imshow(test_images[rando+i].reshape(28,28), cmap = plt.cm.binary) plt.axis('off') color = 'green' if np.argmax(predictions[i]) == test_labels[rando+i] else 'red' plt.title('Prediction: {}\n True Label: {}'.format(np.argmax(predictions[i]), test_labels[rando+i]), color=color) plt.show() ###Output _____no_output_____
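###Markdown The same REST port also exposes status and metadata endpoints, which are handy when a predict call fails. This is a small sketch assuming the server started above is still running as `digits_model` on port 8501. ###Code
import requests

# Model status: should report version 1 in state "AVAILABLE"
status = requests.get('http://localhost:8501/v1/models/digits_model')
print(status.json())

# Model metadata: includes the serving_default signature shown by saved_model_cli
metadata = requests.get('http://localhost:8501/v1/models/digits_model/metadata')
print(metadata.json()['metadata']['signature_def'])
###Output _____no_output_____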
a-proof-time-series/1-data_exploration.ipynb
###Markdown Data exploration from dataset without domains and labels Selecting IDs from non-annotated dataset ###Code
import pandas as pd

df = pd.read_table("./processed/covid_data_without_levels_anonimized.csv", sep=',', index_col=0)
df.head()

# number of unique patients
unique_id = df['MDN'].unique()
df['MDN'].nunique()  # 1290 unique ids

# group by unique patients
df_grouped_ind = df.groupby(['MDN']).count()
df_grouped_ind

df_filtered100 = df.groupby(['MDN']).filter(lambda x: len(x) > 100)
df_filtered100.nunique()

df_filtered500 = df.groupby(['MDN']).filter(lambda x: len(x) > 500)
df_filtered1000 = df.groupby(['MDN']).filter(lambda x: len(x) > 1000)

pd.crosstab(df_filtered1000['Notitiedatum'], df_filtered1000['MDN']).plot(title='Annotations frequency over time - over 1000 notes')
pd.crosstab(df_filtered500['Notitiedatum'], df_filtered500['MDN']).plot(legend=False, title='Annotations frequency over time - over 500 notes')

# grouped by patients and date to see the spread of notes within each patient
df.groupby(['MDN', 'Notitiedatum']).count()

# how many annotated notes per group
print(df_filtered100.annotated.value_counts())
print(df_filtered500.annotated.value_counts())
print(df_filtered1000.annotated.value_counts())

data = [{'uniqueID': df_filtered100['MDN'].nunique(),
         'annotated': df_filtered100.annotated.value_counts()[1],
         'not annotated': df_filtered100.annotated.value_counts()[0],
         'total notes': df_filtered100.shape[0]},
        {'uniqueID': df_filtered500['MDN'].nunique(),
         'annotated': df_filtered500.annotated.value_counts()[1],
         'not annotated': df_filtered500.annotated.value_counts()[0],
         'total notes': df_filtered500.shape[0]},
        {'uniqueID': df_filtered1000['MDN'].nunique(),
         'annotated': df_filtered1000.annotated.value_counts()[1],
         'not annotated': df_filtered1000.annotated.value_counts()[0],
         'total notes': df_filtered1000.shape[0]}]

# Creates DataFrame.
df_summary = pd.DataFrame(data, index=['>100 per ID', '>500 per ID', '>1000 per ID'])
df_summary

df_filtered500['MDN'].unique()
###Output _____no_output_____
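###Markdown The three cut-offs above are hard-coded. A small helper, sketched here under the assumption that `df` is the same frame loaded above, generalizes the summary table to any list of notes-per-patient thresholds. ###Code
def summarize_thresholds(df, thresholds):
    """Build the summary table above for arbitrary notes-per-patient cut-offs."""
    rows = []
    for t in thresholds:
        sub = df.groupby('MDN').filter(lambda x: len(x) > t)
        counts = sub['annotated'].value_counts()
        rows.append({'uniqueID': sub['MDN'].nunique(),
                     'annotated': counts.get(1, 0),
                     'not annotated': counts.get(0, 0),
                     'total notes': sub.shape[0]})
    return pd.DataFrame(rows, index=['>{} per ID'.format(t) for t in thresholds])

summarize_thresholds(df, [100, 500, 1000])
###Output _____no_output_____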
DSN_KOWOPE.ipynb
###Markdown ###Code !pip install catboost import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt % matplotlib inline from sklearn.base import BaseEstimator, TransformerMixin import xgboost from catboost import CatBoostClassifier from lightgbm import LGBMModel,LGBMClassifier from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier from mlxtend.classifier import StackingClassifier from sklearn.linear_model import LinearRegression from sklearn import model_selection from sklearn.model_selection import train_test_split, RandomizedSearchCV, StratifiedKFold from sklearn.metrics import roc_auc_score import os, gc, warnings warnings.filterwarnings('ignore') from google.colab import drive drive.mount('/content/drive') Train = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Train.csv') Test = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Test.csv') def fill_arbitary(col): for i in col: b = -999999 Train[i].fillna(b,inplace=True) Test[i].fillna(b,inplace= True) def model_predict(estimator,train,label,test, estimator_name): mean_train = [] mean_test_val = [] test_pred = np.zeros(test.shape[0]) val_pred = np.zeros(train.shape[0]) for count, (train_index,test_index) in enumerate(skf.split(train,label)): x_train,x_test = train.iloc[train_index],train.iloc[test_index] y_train,y_test = label.iloc[train_index],label.iloc[test_index] print(f'========================Fold{count +1}==========================') estimator.fit(x_train, y_train) train_predict = estimator.predict_proba(x_train)[:,1] test_predict = estimator.predict_proba(x_test)[:,1] val_pred[test_index] = test_predict test_pred+= estimator.predict_proba(test)[:,1] print('\nValidation scores', roc_auc_score(y_test,test_predict)) print('\nTraining scores', roc_auc_score(y_train,train_predict)) mean_train.append(roc_auc_score(y_train, train_predict)) mean_test_val.append(roc_auc_score(y_test,test_predict)) print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val)) print('Average Training ROC score for 10 folds split:',np.mean(mean_train)) print('standard Deviation for 10 folds split:',np.std(mean_test_val)) return val_pred, test_pred, estimator_name def lgbm_predict(estimator,train,label,test,estimator_name): mean_train = [] mean_test_val = [] test_pred = np.zeros(test.shape[0]) val_pred = np.zeros(train.shape[0]) for count, (train_index,test_index) in enumerate(skf.split(train,label)): x_train,x_test = train.iloc[train_index],train.iloc[test_index] y_train,y_test = label.iloc[train_index],label.iloc[test_index] print(f'========================Fold{count +1}==========================') estimator.fit(x_train,y_train,eval_set=[(x_test,y_test)],early_stopping_rounds=200, verbose=250) train_predict = estimator.predict_proba(x_train, num_iteration = estimator.best_iteration_)[:,1] test_predict = estimator.predict_proba(x_test, num_iteration = estimator.best_iteration_)[:,1] val_pred[test_index] = test_predict test_pred+= estimator.predict_proba(test, num_iteration = estimator.best_iteration_)[:,1] print('\nValidation scores', roc_auc_score(y_test,test_predict)) print('\nTraining scores', roc_auc_score(y_train,train_predict)) mean_train.append(roc_auc_score(y_train, train_predict)) mean_test_val.append(roc_auc_score(y_test,test_predict)) print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val)) print('Average Training ROC score for 10 folds split:',np.mean(mean_train)) print('standard Deviation for 10 folds 
split:',np.std(mean_test_val)) return val_pred, test_pred, estimator_name def xgb_predict(estimator,train,label,test,estimator_name): mean_train = [] mean_test_val = [] test_pred = np.zeros(test.shape[0]) val_pred = np.zeros(train.shape[0]) for count, (train_index,test_index) in enumerate(skf.split(train,label)): x_train,x_test = train.iloc[train_index],train.iloc[test_index] y_train,y_test = label.iloc[train_index],label.iloc[test_index] print(f'========================Fold{count +1}==========================') estimator.fit(x_train, y_train, early_stopping_rounds = 200, eval_metric="auc", eval_set=[(x_test, y_test)],verbose=250) train_predict = estimator.predict_proba(x_train, ntree_limit = estimator.get_booster().best_ntree_limit)[:,1] test_predict = estimator.predict_proba(x_test, ntree_limit = estimator.get_booster().best_ntree_limit)[:,1] val_pred[test_index] = test_predict test_pred+= estimator.predict_proba(test, ntree_limit = estimator.get_booster().best_ntree_limit)[:,1] print('\nTesting scores', roc_auc_score(y_test,test_predict)) print('\nTraining scores', roc_auc_score(y_train,train_predict)) mean_train.append(roc_auc_score(y_train, train_predict)) mean_test_val.append(roc_auc_score(y_test,test_predict)) print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val)) print('Average Training ROC score for 10 folds split:',np.mean(mean_train)) print('standard Deviation for 10 folds split:',np.std(mean_test_val)) return val_pred, test_pred, estimator_name def cat_predict(estimator,train,label,test,estimator_name): mean_train = [] mean_test_val = [] test_pred = np.zeros(test.shape[0]) val_pred = np.zeros(train.shape[0]) for count, (train_index,test_index) in enumerate(skf.split(train,label)): x_train,x_test = train.iloc[train_index],train.iloc[test_index] y_train,y_test = label.iloc[train_index],label.iloc[test_index] print(f'========================Fold{count +1}==========================') estimator.fit(x_train,y_train,eval_set=[(x_test,y_test)],early_stopping_rounds=200, verbose=250,use_best_model=True) train_predict = estimator.predict_proba(x_train)[:,1] test_predict = estimator.predict_proba(x_test)[:,1] val_pred[test_index] = test_predict test_pred+= estimator.predict_proba(test)[:,1] print('\nTesting scores', roc_auc_score(y_test,test_predict)) print('\nTraining scores', roc_auc_score(y_train,train_predict)) mean_train.append(roc_auc_score(y_train, train_predict)) mean_test_val.append(roc_auc_score(y_test,test_predict)) print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val)) print('Average Training ROC score for 10 folds split:',np.mean(mean_train)) print('standard Deviation for 10 folds split:',np.std(mean_test_val)) return val_pred, test_pred, estimator_name class TargetEncoder(BaseEstimator, TransformerMixin): """Target encoder. Replaces categorical column(s) with the mean target value for each category. """ def __init__(self, cols=None): """Target encoder Parameters ---------- cols : list of str Columns to target encode. Default is to target encode all categorical columns in the DataFrame. """ if isinstance(cols, str): self.cols = [cols] else: self.cols = cols def fit(self, X, y): """Fit target encoder to X and y Parameters ---------- X : pandas DataFrame, shape [n_samples, n_columns] DataFrame containing columns to encode y : pandas Series, shape = [n_samples] Target values. Returns ------- self : encoder Returns self. 
""" # Encode all categorical cols by default if self.cols is None: self.cols = [col for col in X if str(X[col].dtype)=='object'] # Check columns are in X for col in self.cols: if col not in X: raise ValueError('Column \''+col+'\' not in X') # Encode each element of each column self.maps = dict() #dict to store map for each column for col in self.cols: tmap = dict() uniques = X[col].unique() for unique in uniques: tmap[unique] = y[X[col]==unique].mean() self.maps[col] = tmap return self def transform(self, X, y=None): """Perform the target encoding transformation. Parameters ---------- X : pandas DataFrame, shape [n_samples, n_columns] DataFrame containing columns to encode Returns ------- pandas DataFrame Input DataFrame with transformed columns """ Xo = X.copy() for col, tmap in self.maps.items(): vals = np.full(X.shape[0], np.nan) for val, mean_target in tmap.items(): vals[X[col]==val] = mean_target Xo[col] = vals return Xo def fit_transform(self, X, y=None): """Fit and transform the data via target encoding. Parameters ---------- X : pandas DataFrame, shape [n_samples, n_columns] DataFrame containing columns to encode y : pandas Series, shape = [n_samples] Target values (required!). Returns ------- pandas DataFrame Input DataFrame with transformed columns """ return self.fit(X, y).transform(X, y) fill_arbitary(Train.drop(["Applicant_ID","default_status"],axis=1)) Train.default_status.replace({"yes":1,"no":0},inplace=True) encoder = TargetEncoder() a = pd.DataFrame(Train.form_field47) b = pd.DataFrame(Test.form_field47) X_target_encoded = encoder.fit(a,Train["default_status"]) Train = X_target_encoded.transform(Train) Test = X_target_encoded.transform(Test) Train = pd.get_dummies(Train, columns=['form_field47']) Test = pd.get_dummies(Test, columns=['form_field47']) train = Train.drop(["Applicant_ID","default_status"],1) target = Train["default_status"] test = Test.drop(["Applicant_ID"],1) skf = StratifiedKFold(n_splits = 10,shuffle=True,random_state=199) ###Output _____no_output_____ ###Markdown XGBOOST ###Code clf1=xgboost.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=0.3, gamma=0.0, learning_rate=0.01, max_delta_step=0, max_depth=6, min_child_weight=5, missing=None, n_estimators=700, n_jobs=1, nthread=None, objective='binary:logistic', random_state=40, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None, silent=None, subsample=0.7, verbosity=1) XGB_train, XGB_test, XGB_name = xgb_predict(clf1, train, target, test,'XGB') ###Output ========================Fold1========================== [0] validation_0-auc:0.783569 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.831124 [500] validation_0-auc:0.833935 [699] validation_0-auc:0.834871 Testing scores 0.8348708571412803 Training scores 0.8726294761950564 ========================Fold2========================== [0] validation_0-auc:0.790504 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.833055 [500] validation_0-auc:0.835176 [699] validation_0-auc:0.835863 Testing scores 0.8358631028608515 Training scores 0.8723105346951949 ========================Fold3========================== [0] validation_0-auc:0.790541 Will train until validation_0-auc hasn't improved in 200 rounds. 
[250] validation_0-auc:0.835962 [500] validation_0-auc:0.839742 [699] validation_0-auc:0.841372 Testing scores 0.841372282901621 Training scores 0.8717170006294914 ========================Fold4========================== [0] validation_0-auc:0.802071 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.842515 [500] validation_0-auc:0.845995 [699] validation_0-auc:0.847387 Testing scores 0.8473930222686983 Training scores 0.8713779819956291 ========================Fold5========================== [0] validation_0-auc:0.791245 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.834076 [500] validation_0-auc:0.838032 [699] validation_0-auc:0.839907 Testing scores 0.8399071121406688 Training scores 0.8713854048834265 ========================Fold6========================== [0] validation_0-auc:0.792897 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.842445 [500] validation_0-auc:0.844653 [699] validation_0-auc:0.845389 Testing scores 0.845461397155159 Training scores 0.8704780831967167 ========================Fold7========================== [0] validation_0-auc:0.774624 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.82707 [500] validation_0-auc:0.832472 [699] validation_0-auc:0.834565 Testing scores 0.8345746529453788 Training scores 0.8721230336554464 ========================Fold8========================== [0] validation_0-auc:0.789074 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.829003 [500] validation_0-auc:0.832207 [699] validation_0-auc:0.833425 Testing scores 0.8334248147157227 Training scores 0.8723327496975007 ========================Fold9========================== [0] validation_0-auc:0.797298 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.836494 [500] validation_0-auc:0.839825 [699] validation_0-auc:0.841088 Testing scores 0.8410876159492046 Training scores 0.8713398505796698 ========================Fold10========================== [0] validation_0-auc:0.795566 Will train until validation_0-auc hasn't improved in 200 rounds. [250] validation_0-auc:0.838427 [500] validation_0-auc:0.842228 [699] validation_0-auc:0.843474 Testing scores 0.8434780555011572 Training scores 0.8713463009901032 Average Testing ROC score for 10 folds split: 0.8397432913579742 Average Training ROC score for 10 folds split: 0.8717040416518236 standard Deviation for 10 folds split: 0.004637733147910091 ###Markdown CATBOOST ###Code clf2=CatBoostClassifier(border_count=200, max_depth=7, n_estimators=5000, l2_leaf_reg=10, learning_rate=0.03, bootstrap_type = 'Bernoulli', silent=False, use_best_model=False, eval_metric='AUC', random_seed=34) CAT_train, CAT_test, CAT_name = cat_predict(clf2, train, target, test,'CAT') ###Output ========================Fold1========================== 0: test: 0.7900518 best: 0.7900518 (0) total: 56.8ms remaining: 4m 43s 250: test: 0.8335453 best: 0.8335507 (249) total: 11s remaining: 3m 27s 500: test: 0.8353724 best: 0.8353724 (500) total: 21.7s remaining: 3m 15s 750: test: 0.8353617 best: 0.8355416 (608) total: 32.5s remaining: 3m 3s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8355416104 bestIteration = 608 Shrink model to first 609 iterations. 
Testing scores 0.8355416104184248 Training scores 0.8729225301874605 ========================Fold2========================== 0: test: 0.7961931 best: 0.7961931 (0) total: 48.7ms remaining: 4m 3s 250: test: 0.8353298 best: 0.8353809 (244) total: 11s remaining: 3m 28s 500: test: 0.8370820 best: 0.8371787 (490) total: 21.9s remaining: 3m 16s 750: test: 0.8374007 best: 0.8374126 (676) total: 32.9s remaining: 3m 6s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8374512479 bestIteration = 754 Shrink model to first 755 iterations. Testing scores 0.8374512479305218 Training scores 0.8806035948353764 ========================Fold3========================== 0: test: 0.7976162 best: 0.7976162 (0) total: 51.3ms remaining: 4m 16s 250: test: 0.8387629 best: 0.8387629 (250) total: 11.5s remaining: 3m 37s 500: test: 0.8420867 best: 0.8420881 (499) total: 22.5s remaining: 3m 22s 750: test: 0.8430277 best: 0.8430527 (744) total: 33.8s remaining: 3m 11s 1000: test: 0.8432417 best: 0.8432502 (997) total: 45.2s remaining: 3m 1250: test: 0.8431715 best: 0.8433837 (1192) total: 56.7s remaining: 2m 49s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8433836804 bestIteration = 1192 Shrink model to first 1193 iterations. Testing scores 0.8433836803606234 Training scores 0.8986255127859466 ========================Fold4========================== 0: test: 0.8023044 best: 0.8023044 (0) total: 62.7ms remaining: 5m 13s 250: test: 0.8452849 best: 0.8452849 (250) total: 13.2s remaining: 4m 9s 500: test: 0.8474455 best: 0.8476055 (484) total: 24.6s remaining: 3m 40s 750: test: 0.8481216 best: 0.8481824 (739) total: 35.7s remaining: 3m 22s 1000: test: 0.8486150 best: 0.8486481 (966) total: 46.5s remaining: 3m 5s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8486481191 bestIteration = 966 Shrink model to first 967 iterations. Testing scores 0.8486481191053611 Training scores 0.8890688236106942 ========================Fold5========================== 0: test: 0.7976873 best: 0.7976873 (0) total: 45ms remaining: 3m 44s 250: test: 0.8382213 best: 0.8382213 (250) total: 11.1s remaining: 3m 30s 500: test: 0.8406750 best: 0.8407245 (496) total: 21.9s remaining: 3m 16s 750: test: 0.8410173 best: 0.8410660 (691) total: 32.7s remaining: 3m 5s 1000: test: 0.8410251 best: 0.8411415 (817) total: 43.7s remaining: 2m 54s Stopped by overfitting detector (200 iterations wait) bestTest = 0.841141512 bestIteration = 817 Shrink model to first 818 iterations. Testing scores 0.8411415120389779 Training scores 0.8819964517276514 ========================Fold6========================== 0: test: 0.8022439 best: 0.8022439 (0) total: 50.3ms remaining: 4m 11s 250: test: 0.8456641 best: 0.8456641 (250) total: 11.1s remaining: 3m 30s 500: test: 0.8473241 best: 0.8473389 (498) total: 22.5s remaining: 3m 22s 750: test: 0.8477716 best: 0.8478324 (746) total: 33.4s remaining: 3m 9s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8478324429 bestIteration = 746 Shrink model to first 747 iterations. 
Testing scores 0.8478324428838977 Training scores 0.8783355665768353 ========================Fold7========================== 0: test: 0.7872877 best: 0.7872877 (0) total: 49.3ms remaining: 4m 6s 250: test: 0.8321991 best: 0.8321991 (250) total: 11.1s remaining: 3m 30s 500: test: 0.8361658 best: 0.8361658 (500) total: 22s remaining: 3m 17s 750: test: 0.8377947 best: 0.8378594 (740) total: 32.9s remaining: 3m 6s 1000: test: 0.8386791 best: 0.8387142 (996) total: 43.8s remaining: 2m 55s 1250: test: 0.8390111 best: 0.8392390 (1167) total: 54.7s remaining: 2m 44s 1500: test: 0.8392478 best: 0.8393998 (1476) total: 1m 5s remaining: 2m 33s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8395668816 bestIteration = 1548 Shrink model to first 1549 iterations. Testing scores 0.83956688162493 Training scores 0.9126801489200083 ========================Fold8========================== 0: test: 0.7834297 best: 0.7834297 (0) total: 47.6ms remaining: 3m 57s 250: test: 0.8312266 best: 0.8312266 (250) total: 11.2s remaining: 3m 32s 500: test: 0.8339722 best: 0.8340069 (490) total: 22.2s remaining: 3m 19s 750: test: 0.8345778 best: 0.8346993 (727) total: 33.2s remaining: 3m 7s 1000: test: 0.8351939 best: 0.8353133 (930) total: 44.2s remaining: 2m 56s 1250: test: 0.8349874 best: 0.8353923 (1075) total: 55.2s remaining: 2m 45s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8353922965 bestIteration = 1075 Shrink model to first 1076 iterations. Testing scores 0.8353922965320741 Training scores 0.8949412147380144 ========================Fold9========================== 0: test: 0.8013960 best: 0.8013960 (0) total: 49.2ms remaining: 4m 5s 250: test: 0.8399025 best: 0.8399025 (250) total: 11.1s remaining: 3m 30s 500: test: 0.8414454 best: 0.8414518 (477) total: 22s remaining: 3m 17s 750: test: 0.8420560 best: 0.8420560 (750) total: 33.1s remaining: 3m 7s 1000: test: 0.8418872 best: 0.8422984 (946) total: 45.2s remaining: 3m Stopped by overfitting detector (200 iterations wait) bestTest = 0.8422983939 bestIteration = 946 Shrink model to first 947 iterations. Testing scores 0.8422983938811368 Training scores 0.888883356485162 ========================Fold10========================== 0: test: 0.7908759 best: 0.7908759 (0) total: 52.1ms remaining: 4m 20s 250: test: 0.8400487 best: 0.8400499 (248) total: 12.6s remaining: 3m 58s 500: test: 0.8424217 best: 0.8424827 (496) total: 23.5s remaining: 3m 31s 750: test: 0.8429035 best: 0.8429035 (750) total: 34.6s remaining: 3m 15s 1000: test: 0.8431591 best: 0.8432622 (954) total: 45.7s remaining: 3m 2s 1250: test: 0.8433648 best: 0.8435077 (1208) total: 56.8s remaining: 2m 50s Stopped by overfitting detector (200 iterations wait) bestTest = 0.8435077065 bestIteration = 1208 Shrink model to first 1209 iterations. 
Testing scores 0.8435077065019819 Training scores 0.9005667464724361 Average Testing ROC score for 10 folds split: 0.841476389127793 Average Training ROC score for 10 folds split: 0.8898623946339586 standard Deviation for 10 folds split: 0.004387148691646989 ###Markdown LGBM ###Code clf3=LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=0.7, importance_type='split', learning_rate=0.05, max_depth=3, metric='auc', min_child_samples=15, min_child_weight=0.001, min_split_gain=0.0, n_estimators=5000, n_jobs=-1, num_leaves=300, num_threads=15, num_trees=500, objective=None, random_state=29, reg_alpha=4, reg_lambda=4, silent=True, subsample=0.7, subsample_for_bin=200000, subsample_freq=3) LGBM_train, LGBM_test, LGBM_name = lgbm_predict(clf3, train, target, test,'LGBM') ###Output Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.832994 [500] valid_0's auc: 0.834734 Did not meet early stopping. Best iteration is: [477] valid_0's auc: 0.834758 Validation scores 0.8347582313017391 Training scores 0.8565784467131087 ========================Fold2========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.83602 [500] valid_0's auc: 0.837135 Did not meet early stopping. Best iteration is: [485] valid_0's auc: 0.837203 Validation scores 0.8372030571447642 Training scores 0.8563558260698734 ========================Fold3========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.838294 [500] valid_0's auc: 0.841396 Did not meet early stopping. Best iteration is: [500] valid_0's auc: 0.841396 Validation scores 0.8413960843807278 Training scores 0.8560760244973092 ========================Fold4========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.844361 [500] valid_0's auc: 0.846562 Did not meet early stopping. Best iteration is: [493] valid_0's auc: 0.846591 Validation scores 0.8465908434330081 Training scores 0.8553614805435277 ========================Fold5========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.837805 [500] valid_0's auc: 0.840685 Did not meet early stopping. Best iteration is: [500] valid_0's auc: 0.840685 Validation scores 0.8406849720737937 Training scores 0.8563553449764859 ========================Fold6========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.843998 [500] valid_0's auc: 0.845937 Did not meet early stopping. Best iteration is: [500] valid_0's auc: 0.845937 Validation scores 0.8459365027265129 Training scores 0.856165139754816 ========================Fold7========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.831257 [500] valid_0's auc: 0.834918 Did not meet early stopping. Best iteration is: [486] valid_0's auc: 0.835076 Validation scores 0.8350763065058433 Training scores 0.8560600853475305 ========================Fold8========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.831024 [500] valid_0's auc: 0.834307 Did not meet early stopping. Best iteration is: [498] valid_0's auc: 0.834325 Validation scores 0.8343252914762337 Training scores 0.8571583901326212 ========================Fold9========================== Training until validation scores don't improve for 200 rounds. 
[250] valid_0's auc: 0.839066 [500] valid_0's auc: 0.841259 Did not meet early stopping. Best iteration is: [495] valid_0's auc: 0.841281 Validation scores 0.8412807784284143 Training scores 0.8560935529720934 ========================Fold10========================== Training until validation scores don't improve for 200 rounds. [250] valid_0's auc: 0.840326 [500] valid_0's auc: 0.843027 Did not meet early stopping. Best iteration is: [465] valid_0's auc: 0.843111 Validation scores 0.8431108657816418 Training scores 0.8549488062963058 Average Testing ROC score for 10 folds split: 0.8400362933252679 Average Training ROC score for 10 folds split: 0.8561153097303672 standard Deviation for 10 folds split: 0.004291250752815905 ###Markdown RANDOM FOREST ###Code clf4=RandomForestClassifier(bootstrap=False, ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=10, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=20, min_samples_split=20, min_weight_fraction_leaf=0.0, n_estimators=400, n_jobs=-1, oob_score=False, random_state=6, verbose=0, warm_start=False) RF_train, RF_test, RF_name = model_predict(clf4, train, target, test,'RF') ###Output ========================Fold1========================== Validation scores 0.8306279847787816 Training scores 0.8861992803677387 ========================Fold2========================== Validation scores 0.8313701424932463 Training scores 0.8861823952670765 ========================Fold3========================== Validation scores 0.8340843734838415 Training scores 0.8856023701333954 ========================Fold4========================== Validation scores 0.8418490886189433 Training scores 0.8852444792277329 ========================Fold5========================== Validation scores 0.832393951043807 Training scores 0.8858514945519453 ========================Fold6========================== Validation scores 0.8423337337367709 Training scores 0.8851825165239982 ========================Fold7========================== Validation scores 0.8265494027047229 Training scores 0.8858131740036257 ========================Fold8========================== Validation scores 0.8271567310530104 Training scores 0.885895204916245 ========================Fold9========================== Validation scores 0.835906189749856 Training scores 0.885141600059136 ========================Fold10========================== Validation scores 0.836353368215782 Training scores 0.8851824388210343 Average Testing ROC score for 10 folds split: 0.8338624965878761 Average Training ROC score for 10 folds split: 0.8856294953871927 standard Deviation for 10 folds split: 0.005130857811429926 ###Markdown GBM ###Code clf5 = GradientBoostingClassifier(ccp_alpha=0.0, criterion='friedman_mse', init=None, learning_rate=0.9, loss='deviance', max_depth=2, max_features=1, max_leaf_nodes=2, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=20, min_samples_split=20, min_weight_fraction_leaf=0.1, n_estimators=200, n_iter_no_change=None, presort='deprecated', random_state=67, subsample=0.7, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) GBM_train, GBM_test, GBM_name = model_predict(clf5, train, target, test,'GBM') Train_stack3 = pd.DataFrame(XGB_train) Train_stack3 = pd.concat([Train_stack3,pd.DataFrame(CAT_train),pd.DataFrame(LGBM_train), pd.DataFrame(RF_train),pd.DataFrame(GBM_train)],1) Test_stack3 = pd.DataFrame(XGB_test) Test_stack3 = 
pd.concat([Test_stack3,pd.DataFrame(CAT_test),pd.DataFrame(LGBM_test), pd.DataFrame(RF_test),pd.DataFrame(GBM_test)],1) Test_stack3.columns=[XGB_name, CAT_name, LGBM_name, RF_name, GBM_name] Train_stack3.columns=[XGB_name, CAT_name, LGBM_name, RF_name, GBM_name] Test_stack3 = Test_stack3/10 Train_stack3 Test_stack3 meta_estimator = LinearRegression() final_prediction = meta_estimator.fit(Train_stack3, target).predict(Test_stack3) Train_stack3.corr() final_prediction # Create a data frame with two columns: Applicant_ID & default_status. default_status contains your predictions Applicant_ID = np.array(Test['Applicant_ID']) Solution = pd.DataFrame(final_prediction, Applicant_ID, columns = ["default_status"]) print(Solution) # Write your solution to a csv file with the name Solution.csv Solution.to_csv("Zindi Credit Project37.csv", index_label = ["Applicant_ID"]) ###Output _____no_output_____
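###Markdown A rough sanity check of the blend, sketched here with the objects defined above: score the meta-model on the out-of-fold predictions it was fitted on. Note this is an optimistic estimate, since the meta-model has already seen these rows. ###Code
# Score the linear blend on the out-of-fold base-model predictions
oof_blend = meta_estimator.predict(Train_stack3)
print('Stacked OOF ROC AUC: {:.5f}'.format(roc_auc_score(target, oof_blend)))
###Output _____no_output_____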
c7_classification_performance_measures/03_implement_confusion_matrix_precision_and_recall.ipynb
###Markdown Implementing the confusion matrix, precision, and recall ###Code
import numpy as np
from sklearn import datasets

digits = datasets.load_digits()
X = digits.data
y = digits.target.copy()

# Make the data extremely skewed:
# split the handwritten digits into "9" and "not 9"; the class of interest is 9
y[digits.target==9] = 1
y[digits.target!=9] = 0

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)

from sklearn.linear_model import LogisticRegression

log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
log_reg.score(X_test, y_test)
###Output _____no_output_____
###Markdown Although 0.975555555551 looks high, our data is extremely skewed: even if we predicted every sample as "not 9" we would still get an accuracy of about 0.9. ###Code
y_predict = log_reg.predict(X_test)
###Output _____no_output_____
###Markdown Computing the values of TP, FP, FN, and TN ###Code
def TN(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
    return np.sum((y_true == 0) & (y_predict == 0))

TN(y_test, y_predict)

def FP(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
    return np.sum((y_true == 0) & (y_predict == 1))

FP(y_test, y_predict)

def FN(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
    return np.sum((y_true == 1) & (y_predict == 0))

FN(y_test, y_predict)

def TP(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
    return np.sum((y_true == 1) & (y_predict == 1))

TP(y_test, y_predict)

def confusion_matrix(y_true, y_predict):
    """Return a 2x2 confusion matrix"""
    return np.array([
        [TN(y_true, y_predict), FP(y_true, y_predict)],
        [FN(y_true, y_predict), TP(y_true, y_predict)]
    ])

confusion_matrix(y_test, y_predict)
###Output _____no_output_____
###Markdown Computing precision and recall from the confusion matrix ###Code
def precision_score(y_true, y_predict):
    """Compute the precision"""
    tp = TP(y_true, y_predict)
    fp = FP(y_true, y_predict)
    # return 0 when the denominator is 0 (a bare try/except does not catch
    # this for NumPy integers, which divide by zero without raising)
    if tp + fp == 0:
        return 0.0
    return tp / (tp + fp)

# precision
precision_score(y_test, y_predict)

def recall_score(y_true, y_predict):
    """Compute the recall"""
    tp = TP(y_true, y_predict)
    fn = FN(y_true, y_predict)
    # return 0 when the denominator is 0
    if tp + fn == 0:
        return 0.0
    return tp / (tp + fn)

# recall
recall_score(y_test, y_predict)
###Output _____no_output_____
###Markdown Confusion matrix, precision, and recall in scikit-learn Confusion matrix ###Code
from sklearn import metrics

metrics.confusion_matrix(y_test, y_predict)
###Output _____no_output_____
###Markdown Precision ###Code
metrics.precision_score(y_test, y_predict)
###Output _____no_output_____
###Markdown Recall ###Code
metrics.recall_score(y_test, y_predict)
###Output _____no_output_____
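###Markdown Precision and recall can also be combined into a single number. The F1 score, their harmonic mean, is a natural next step; the sketch below reuses the `precision_score` and `recall_score` functions defined above. ###Code
def f1_score(y_true, y_predict):
    """Compute the F1 score, the harmonic mean of precision and recall"""
    p = precision_score(y_true, y_predict)
    r = recall_score(y_true, y_predict)
    if p + r == 0:
        return 0.0
    return 2 * p * r / (p + r)

f1_score(y_test, y_predict)
###Output _____no_output_____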
galaxy_project/Ga) Two star test implementation.ipynb
###Markdown Two star test ###Code %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.integrate import odeint from IPython.html.widgets import interact, interactive, fixed from plotting_function import plotter from initial_velocities import velocities_m, velocities_S from DE_solver import derivs, equationsolver ###Output _____no_output_____ ###Markdown Defining some test values for a simple two star system to check if everything was working correctly: ###Code max_time_test = 1 time_step_test = 80 M_test = 1e11 S_test = 1e11 S_y_test = 70 S_x_test = -.01*S_y_test**2+25 m_x_test_1 = -3.53 m_y_test_1 = 3.53 m_x_test_2 = -3.53 m_y_test_2 = -3.53 vxS_test = velocities_S(M_test,S_test,S_x_test,S_y_test)[0] vyS_test = velocities_S(M_test,S_test,S_x_test,S_y_test)[1] vxm_test_1 = velocities_m(M_test,m_x_test_1,m_y_test_1)[0] vym_test_1 = velocities_m(M_test,m_x_test_1,m_y_test_1)[1] vxm_test_2 = velocities_m(M_test,m_x_test_2,m_y_test_2)[0] vym_test_2 = velocities_m(M_test,m_x_test_2,m_y_test_2)[1] ic_test = np.array([S_x_test,S_y_test,vxS_test,vyS_test,m_x_test_1,m_y_test_1,vxm_test_1,vym_test_1, m_x_test_2,m_y_test_2,vxm_test_2,vym_test_2]) ###Output _____no_output_____ ###Markdown Using equationsolver to solve the DE's ###Code sol_test = equationsolver(ic_test,max_time_test,time_step_test,M_test,S_test) ###Output _____no_output_____ ###Markdown Saving results and initial conditions to disk ###Code np.savez('two_star_test_sol+ic.npz',sol_test,ic_test) ###Output _____no_output_____
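###Markdown To confirm the results round-trip correctly, the saved `.npz` archive can be reloaded; `np.savez` with positional arguments stores the arrays under the default keys `arr_0` and `arr_1`. ###Code
# Reload the solution and initial conditions saved above
archive = np.load('two_star_test_sol+ic.npz')
sol_loaded, ic_loaded = archive['arr_0'], archive['arr_1']
print(sol_loaded.shape, ic_loaded.shape)
###Output _____no_output_____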
experiments/java_parsing.ipynb
###Markdown Experiments in spliting Java code ###Code import regex def split_methods(code): """Parse Java files into separate methods :param code: Java code to parse. :rtype: map """ pattern = r'(?:(?:public|private|static|protected)\s+)*\s*[\w\<\>\[\]]+\s+\w+\s*\([^{]+({(?:[^{}]+\/\*.*?\*\/|[^{}]+\/\/.*?$|[^{}]+|(?1))*+})' scanner = regex.finditer(pattern, code, regex.MULTILINE) return map(lambda match: match.group(0), scanner) file = open("experiments/fixtures/forest-fire.java", "r") code = file.read() file.close() methods = split_methods(code) for i, method in enumerate(methods): print("\n\nFunction {}\n--".format(i)) print(method) ###Output Function 0 -- private static List<String> process(List<String> land){ List<String> newLand = new LinkedList<String>(); for(int i = 0; i < land.size(); i++){ String rowAbove, thisRow = land.get(i), rowBelow; if(i == 0){//first row rowAbove = null; rowBelow = land.get(i + 1); }else if(i == land.size() - 1){//last row rowBelow = null; rowAbove = land.get(i - 1); }else{//middle rowBelow = land.get(i + 1); rowAbove = land.get(i - 1); } newLand.add(processRows(rowAbove, thisRow, rowBelow)); } return newLand; } Function 1 -- private static String processRows(String rowAbove, String thisRow, String rowBelow){ String newRow = ""; for(int i = 0; i < thisRow.length();i++){ switch(thisRow.charAt(i)){ case BURNING: newRow+= EMPTY; break; case EMPTY: newRow+= Math.random() < P ? TREE : EMPTY; break; case TREE: String neighbors = ""; if(i == 0){//first char neighbors+= rowAbove == null ? "" : rowAbove.substring(i, i + 2); neighbors+= thisRow.charAt(i + 1); neighbors+= rowBelow == null ? "" : rowBelow.substring(i, i + 2); if(neighbors.contains(Character.toString(BURNING))){ newRow+= BURNING; break; } }else if(i == thisRow.length() - 1){//last char neighbors+= rowAbove == null ? "" : rowAbove.substring(i - 1, i + 1); neighbors+= thisRow.charAt(i - 1); neighbors+= rowBelow == null ? "" : rowBelow.substring(i - 1, i + 1); if(neighbors.contains(Character.toString(BURNING))){ newRow+= BURNING; break; } }else{//middle neighbors+= rowAbove == null ? "" : rowAbove.substring(i - 1, i + 2); neighbors+= thisRow.charAt(i + 1); neighbors+= thisRow.charAt(i - 1); neighbors+= rowBelow == null ? "" : rowBelow.substring(i - 1, i + 2); if(neighbors.contains(Character.toString(BURNING))){ newRow+= BURNING; break; } } newRow+= Math.random() < F ? BURNING : TREE; } } return newRow; } Function 2 -- public static List<String> populate(int width, int height){ List<String> land = new LinkedList<String>(); for(;height > 0; height--){//height is just a copy anyway StringBuilder line = new StringBuilder(width); for(int i = width; i > 0; i--){ line.append((Math.random() < TREE_PROB) ? 
TREE : EMPTY); } land.add(line.toString()); } return land; } Function 3 -- public static void processN(List<String> land, int n){ for(int i = 0;i < n; i++){ land = process(land); } } Function 4 -- public static void processNPrint(List<String> land, int n){ for(int i = 0;i < n; i++){ land = process(land); print(land); } } Function 5 -- public static void print(List<String> land){ for(String row: land){ System.out.println(row); } System.out.println(); } Function 6 -- public static void main(String[] args){ List<String> land = Arrays.asList(".TTT.T.T.TTTT.T", "T.T.T.TT..T.T..", "TT.TTTT...T.TT.", "TTT..TTTTT.T..T", ".T.TTT....TT.TT", "...T..TTT.TT.T.", ".TT.TT...TT..TT", ".TT.T.T..T.T.T.", "..TTT.TT.T..T..", ".T....T.....TTT", "T..TTT..T..T...", "TTT....TTTTTT.T", "......TwTTT...T", "..T....TTTTTTTT", ".T.T.T....TT..."); print(land); processNPrint(land, 10); System.out.println("Random land test:"); land = populate(10, 10); print(land); processNPrint(land, 10); }
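###Markdown A small follow-up sketch: once the method bodies are split out, a second, simpler pattern can pull the method name from each block. The pattern below is an assumption that fits the signatures in this file, not a general Java parser, and it reuses the `code` string and `split_methods` from above. ###Code
# Extract the method name: the identifier between the return type and '('
name_pattern = r'[\w\<\>\[\]]+\s+(\w+)\s*\('
for method in split_methods(code):
    match = regex.search(name_pattern, method)
    if match:
        print(match.group(1))
###Output _____no_output_____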
8-Labs/Lab03/dev_src/Lab3.ipynb
###Markdown **Download** (right-click, save target as ...) this page as a jupyterlab notebook from: [Lab3](https://atomickitty.ddns.net:8000/user/sensei/files/engr-1330-webroot/engr-1330-webbook/ctds-psuedocourse/docs/8-Labs/Lab2/Lab3_Dev.ipynb?_xsrf=2%7C1b4d47c3%7C0c3aca0c53606a3f4b71c448b09296ae%7C1623531240) ___ Laboratory 3: Structures and Conditions. ###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output DESKTOP-EH6HD63 desktop-eh6hd63\farha C:\Users\Farha\Anaconda3\python.exe 3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
###Markdown Full name: R: Title of the notebook: Date: ___ ![](https://i.pinimg.com/originals/a0/f8/5c/a0f85c35e406acb5b84c13dae888d5a3.gif) Data Structures: List (Array)

A list is a collection of data that are somehow related. It is a convenient way to refer to a collection of similar things by a single name, using an index (like a subscript in math) to identify a particular item. Consider the "math-like" variable $x$ below:

\begin{gather}
x_0 = 7 \\
x_1 = 11 \\
x_2 = 5 \\
x_3 = 9 \\
x_4 = 13 \\
\dots \\
x_N = 223
\end{gather}

The variable name is $x$ and the subscripts correspond to different values. Thus the `value` of the variable named $x$ associated with subscript $3$ is the number $9$.

The figure below is a visual representation of the concept that treats a variable as a collection of cells. ![](array-image.jpg) In the figure, the variable name is `MyList`, and the subscripts are replaced by an index which identifies which cell is being referenced. The value is the cell content at the particular index. So in the figure the value of `MyList` at Index = 3 is the number 9.

In engineering and data science we use lists a lot - we often call them vectors, arrays, matrices and such, but they are ultimately just lists. To declare a list you can write the list name and assign it values. The square brackets are used to identify that the variable is a list. Like: MyList = [7,11,5,9,13,66,99,223] One can also declare a null list and use the `append()` method to fill it as needed. MyOtherList = [ ] Python indices start at ZERO. A lot of other languages start at ONE. It's just the convention. The first element in a list has an index of 0, the second an index of 1, and so on. We access the contents of a list by referring to its name and index. For example MyList[3] has a value of the number 9. ###Code
MyOtherList = []                     # Create an empty list
MyOtherList.append(765)              # Add one item to the list
print(MyOtherList)
MyList = [7,11,5,9,13,66,99,223]     # Define a list
print(MyList)
sublist = MyList[3:6]                # Slice a sublist
print("sublist is: ", sublist)
mysum = sum(sublist)                 # Sum the numbers in the sublist
print("Sum: ", mysum)
mylength = len(sublist)              # Get the length of the sublist
print("Length: ", mylength)
###Output [765] [7, 11, 5, 9, 13, 66, 99, 223] sublist is: [9, 13, 66] Sum: 88 Length: 3
###Markdown Data Structures: Special List | Tuple

A tuple is a special kind of list where the values cannot be changed after the list is created. It is useful for list-like things that are static - like days in a week, or months of a year. You declare a tuple like a list, except use round brackets instead of square brackets.
MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec") Data Structures: Special List | DictionaryA dictionary is a special kind of list where the items are related data PAIRS. It is a lot like a relational database (it probably is one in fact) where the first item in the pair is called the key, and must be unique in a dictionary, and the second item in the pair is the data.The second item could itself be a list, so a dictionary would be a meaningful way to build adatabase in Python.To declare a dictionary using `curly` brackets MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03}To declare a dictionary using the `dict()` method MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03) ___Some examples follow: ###Code MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec") MyTupleName MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03} print(MyPetsNamesAndMass) MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03) print(MyPetsNamesAndMassToo) # Tuples MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec") # Access a Tuple print ("5th element of the tuple:", MyTupleName[4]) # Dictionary MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03} # Access the Dictionary print ("Aspen's mass = ", MyPetsNamesAndMass["Aspen"]) # Change a value in a dictionary print ("Merrimee's mass" , MyPetsNamesAndMass["Merrimee"]) MyPetsNamesAndMass["Merrimee"] = 0.01 print ("Merrimee's mass" , MyPetsNamesAndMass["Merrimee"], "She lost weight !") # Alternate dictionary MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03) print ("Merrimee's mass" , MyPetsNamesAndMassToo["Merrimee"]) # Attempt to change a Tuple #MyTupleName[3]=("Fred") # Activate this line and see what happens! ###Output 5th element of the tuple: May Aspen's mass = 6.3 Merrimee's mass 0.03 Merrimee's mass 0.01 She lost weight ! Merrimee's mass 0.03 ###Markdown ___ Example: Nested DictionaryFrom the dictionary below, print "Pandemic" and "Tokyo": ###Code FD = {"Quentin":"Tarantino","2020":[2020,"COVID",19,"Pandemic"],"Bond":["James","Gun",("Paris","Tokyo","London")]} #A nested dictionary print(FD) FD['2020'][3] FD['Bond'][2][1] ###Output _____no_output_____ ###Markdown ___![](https://www.xelplus.com/wp-content/uploads/2018/08/VBA-IF.jpg) Conditional ExecutionConditional statements are logical expressions that evaluate as TRUE or FALSE and usingthese results to perform further operations based on these conditions.All flow control in a program depends on evaluating conditions. The program will proceeddiferently based on the outcome of one or more conditions - really sophisticated AI programs are a collection of conditions and correlations. Amazon knowing what you kind of want is based on correlations of your past behavior compared to other peoples similar, butmore recent behavior, and then it uses conditional statements to decide what item to offer you in your recommendation items. It's spooky, but ultimately just a program running in the background trying to make your money theirs. Conditional Execution: ComparisonThe most common conditional operation is comparison. If we wish to compare whether twovariables are the same we use the == (double equal sign).For example x == y means the program will ask whether x and y have the same value. 
###Markdown
___
![](https://www.xelplus.com/wp-content/uploads/2018/08/VBA-IF.jpg)

Conditional Execution

Conditional statements are logical expressions that evaluate as TRUE or FALSE, and we use these results to perform further operations based on these conditions. All flow control in a program depends on evaluating conditions. The program will proceed differently based on the outcome of one or more conditions - really sophisticated AI programs are a collection of conditions and correlations. Amazon knowing what you kind of want is based on correlations of your past behavior compared to other people's similar, but more recent behavior, and then it uses conditional statements to decide what item to offer you in your recommendation items. It's spooky, but ultimately just a program running in the background trying to make your money theirs.

Conditional Execution: Comparison

The most common conditional operation is comparison. If we wish to compare whether two variables are the same we use the == (double equal sign). For example x == y means the program will ask whether x and y have the same value. If they do, the result is TRUE; if not, the result is FALSE. Other comparison signs are `!=` does NOT equal, `>` larger than, `<` smaller than, `>=` greater than or equal, and `<=` less than or equal. There are also three logical operators when we want to build multiple compares (multiple conditioning); these are `and`, `or`, and `not`. The `and` operator returns TRUE if (and only if) **all** conditions are TRUE. For instance `5 == 5 and 5 < 6` will return TRUE because both conditions are true. The `or` operator returns `TRUE` if at least one condition is true. If **all** conditions are FALSE, then it will return FALSE. For instance `4 > 3 or 17 > 20 or 3 == 2` will return `TRUE` because the first condition is true. The `not` operator returns `TRUE` if the condition after the `not` keyword is false. Think of it as a way to do a logic reversal.

###Code
# Compare
x = 7
y = 10
print("x =: ",x,"y =: ",y)
print("x is equal to y : ",x==y)
print("x is not equal to y : ",x!=y)
print("x is greater than y : ",x>y)
print("x is less than y : ",x<y)
# Logical operators
print("5 == 5 and 5 < 6 ? ",5 == 5 and 5 < 6)
print("4 > 3 or 17 > 20 ",4 > 3 or 17 > 20)
print("not 5 == 5",not 5 == 5)
###Output
5 == 5 and 5 < 6 ?  True
4 > 3 or 17 > 20  True
not 5 == 5 False
###Markdown
Conditional Execution: Block `if` statement

![](https://pythonexamples.org/wp-content/uploads/2020/07/python-if.gif)

The `if` statement is a common flow control statement. It allows the program to evaluate if a certain condition is satisfied and to perform a designed action based on the result of the evaluation. The structure of an `if` statement is

    if condition1 is met:
        do A
    elif condition2 is met:
        do B
    elif condition3 is met:
        do C
    else:
        do D

The `elif` means "else if". The `:` colon is an important part of the structure; it tells where the action begins. Also there are no scope delimiters like (), or {}. Instead Python uses indentation to isolate blocks of code. This convention is hugely important - many other coding environments use delimiters (called scoping delimiters), but Python does not. The indentation itself is the scoping delimiter. The next code fragment illustrates how the `if` statements work. The program asks the user for input. The use of `input()` will let the program read any input as a string, so non-numeric results will not throw an error. The input is stored in the variable named `userInput`. Next the statement `if userInput == '1':` compares the value of `userInput` with the string `'1'`. If the value in the variable is indeed '1', then the program will execute the block of code in the indentation after the colon. In this case it will execute

    print("Hello World")
    print("How do you do? ")

Alternatively, if the value of `userInput` is the string `'2'`, then the program will execute

    print("Snakes on a plane ")

For all other values the program will execute

    print("You did not enter a valid number")

###Code
# Block if example
userInput = input('Enter the number 1 or 2')
# Use block if structure
if userInput == '1':
    print("Hello World")
    print("How do you do? ")
elif userInput == '2':
    print("Snakes on a plane ")
else:
    print("You did not enter a valid number")
###Output
Enter the number 1 or 21
Hello World
How do you do? 
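###Markdown
Block `if` statements pair naturally with the logical operators from earlier. A short sketch, reusing the x and y values from the comparison cell above (the messages are just illustrative):

```python
x = 7
y = 10
if x < y and x > 0:
    print("x is positive and smaller than y")   # this branch runs for x = 7, y = 10
elif x == y or x == 0:
    print("x equals y, or x is zero")
else:
    print("x is negative or larger than y")
```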
###Markdown
Conditional Execution: Inline `if` statement

An inline `if` statement is a simpler form of an `if` statement and is more convenient if you only need to perform a simple conditional task. The syntax is:

    do TaskA if condition is true else do TaskB

An example would be

    myInt = 3
    num1 = 12 if myInt == 0 else 13
    num1

An alternative way is to enclose the condition in brackets for some clarity, like

    myInt = 3
    num1 = 12 if (myInt == 0) else 13
    num1

In either case the result is that `num1` will have the value `13` (unless you set myInt to 0). One can also use `if` to construct extremely inefficient loops.

###Code
myInt = 0
num1 = 12 if (myInt == 0) else 13
num1
###Output
_____no_output_____
###Markdown
___
Example: Pass or Fail?

Take the following inputs from the user:
1. Grade for Lesson 1 (from 0 to 5)
2. Grade for Lesson 2 (from 0 to 5)
3. Grade for Lesson 3 (from 0 to 5)

Compute the average of the three grades. Use the result to decide whether the student will pass or fail.

###Code
Lesson1 = int(input('Enter the grade for Lesson 1'))
Lesson2 = int(input('Enter the grade for Lesson 2'))
Lesson3 = int(input('Enter the grade for Lesson 3'))
Average = (Lesson1 + Lesson2 + Lesson3)/3
print('Average Course Grade:',Average)
if Average >= 5:
    print("Passed")
else:
    print("Failed")
###Output
Enter the grade for Lesson 12
Enter the grade for Lesson 25
Enter the grade for Lesson 31
Average Course Grade: 2.6666666666666665
Failed
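###Markdown
The pass-or-fail decision above can also be written with the inline `if` from earlier. A minimal sketch, hard-coding the average from the run above purely for illustration:

```python
Average = 2.6666666666666665  # the average computed in the example run
result = "Passed" if Average >= 5 else "Failed"
print(result)  # Failed
```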
Modulo2/Ejercicios/Problemas Diversos.ipynb
###Markdown PROBLEMAS DIVERSOS ###Code def cantidad(): n=int(input("Ingrese cantidad de alumnos:")) print (n) return n def nota(): nota = float(input("Introduce la nota(0 - 10): ")) return nota def validar_nota(nota): try: c=nota() if c >=0 and c <= 10: return c # Importante romper la iteración si todo ha salido bien else: print('nota fuera del rango') print("Ingrese nota nuevamente:") na = float(input("Introduce la nota(0 - 10): ")) return na except: print("Ha ocurrido un error, introduce bien la nota") def ingresar_alumnos(n): promedio=0 aprobados=0 desaprobados=0 total = 0 lista_alumnos = [] lista =[] for i in range (n): alumno ={} nom = input("Ingrese el nombre del alumno {}:".format(i+1)) alumno['nombre']=nom alumno['nota1']=validar_nota(nota) alumno['nota2']=validar_nota(nota) alumno['nota3']=validar_nota(nota) alumno['prom']=round(((alumno['nota1']+alumno['nota2']+alumno['nota3'])/3),2) promedio = alumno['prom'] dato = str(promedio) + ", corresponde al alumno: " + nom if promedio>=4: alumno['estado']="aprobado" aprobados+=1 total+=promedio else: alumno['estado']="desaprobado" desaprobados+=1 total+=promedio # agregando alumno a lista alumnos lista_alumnos.append(alumno) lista.append(dato) print(lista_alumnos) return (aprobados,desaprobados,total,n,lista) def imprimir(x,y,z,n,lista): prom_cur=round(float(z/n),2) print ("La cantidad de aprobados son: {} \nLa cantidad de desaprobados son: {} \nEl promedio total del curso es: {} ".format(x,y,prom_cur)) return lista def promedios (lista): lista.sort() #se ordena la lista print('El Máximo Promedio es:',lista[-1]) print('El Mínimo Promedio es:',lista[0]) ###Output _____no_output_____ ###Markdown 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code n=cantidad() aprobados,desaprobados,total,n,lista=ingresar_alumnos(n) ###Output Ingrese cantidad de alumnos: 4 ###Markdown 2. y 3.*Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. Informar el promedio de nota del curso total. ###Code lista=imprimir(aprobados,desaprobados,total,n,lista) ###Output La cantidad de aprobados son: 3 La cantidad de desaprobados son: 1 El promedio total del curso es: 4.66 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. ###Code promedios (lista) ###Output El Máximo Promedio es: 6.33, corresponde al alumno: Andrea El Mínimo Promedio es: 2.33, corresponde al alumno: Saul ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? 
')) cantidad lista_alumnos = [] for i in range(3): alumno = {} # ingreso nombre nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre #ingreso de notas alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) #agrupando datos en lista lista_alumnos.append(alumno) lista_alumnos alumno for persona in lista_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output gonzalo 5.0 martina 6.0 Isabel 5.666666666666667 ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. 3.Informar el promedio de nota del curso total. 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? ')) lista_alumnos = [] for i in range(cantidad): alumno = {} nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) lista_alumnos.append(alumno) lista_alumnos alumno ###Output _____no_output_____ ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code def apro_desap(): for j in lista_alumnos: prom = sum(j['notas'])/3 if prom >= 4: print(j['nombre'], ': Aprobado') else: print(j['nombre'], ': Desaprobado') apro_desap() ###Output Miguel : Aprobado Ayelen : Desaprobado Walter : Desaprobado ###Markdown 3.Informar el promedio de nota del curso total. ###Code for n in lista_alumnos: print(n['nombre'], sum(n['notas'])/3) ###Output Miguel 8.333333333333334 Ayelen 3.6666666666666665 Walter 3.0 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 
###Code def prom_alto_bajo(): bajo = 11 alto = 0 for n in lista_alumnos: if ((sum(n['notas'])/3) <= bajo): bajo = sum(n['notas'])/3 print('El promedio mas bajo es {}'.format(bajo)) for n in lista_alumnos: if ((sum(n['notas'])/3) >= alto): alto = sum(n['notas'])/3 print('El promedio mas alto es {}'.format(alto)) prom_alto_bajo() ###Output El promedio mas bajo es 3.0 El promedio mas alto es 8.333333333333334 ###Markdown 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code alumnos=[] num=int(input("Ingrese el número de alumnos ")) listado_alumnos=[] for i in range(num): nomb=input("Ingrese el nombre completo del alumno: ") while True: try: nota1=int(input("Ingrese la nota 1 del alumno ")) except ValueError: print("Debes escribir un número.") continue if 0 < nota1 < 11: break while True: try: nota2=int(input("Ingrese la nota 2 del alumno ")) except ValueError: print("Debes escribir un número.") continue if 0 < nota2 < 11: break while True: try: nota3=int(input("Ingrese la nota 3 del alumno ")) except ValueError: print("Debes escribir un número.") continue if 0 < nota3 < 11: break alumnos={'nombre':nomb,'notas':[nota1,nota2,nota3]} listado_alumnos.append(alumnos) listado_alumnos ###Output _____no_output_____ ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code paso = 0 npaso = 0 for persona in listado_alumnos: if sum(persona['notas'])/3 >= 4: paso += 1 print(persona['nombre'],"APROBADO") else: npaso += 1 print(persona['nombre'],"DESAPROBADO") print(F"Los alumnos aprobados son {paso} alumnos reprobados son {npaso}") ###Output francis DESAPROBADO marco APROBADO atalaya APROBADO Los alumnos aprobados son 2 alumnos reprobados son 1 ###Markdown 3.Informar el promedio de nota del curso total. ###Code for persona in listado_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output francis 2.0 marco 5.0 atalaya 8.0 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. 
###Code def cargaralumnos(self, listado_alumnos): notas = [] for i in range(self.num): nombre=input(f"Ingrese el nombre completo del alumno {i+1}: ") for n in range(3): while True: try: nota = float(input(f"Ingrese la nota {n+1} del alumno: ")) if nota >= 0 and nota <= 10: notas.append(nota) break else: print("La nota debe estar comprendida entre 0 y 10") except: print("Ingrese un número valido") alumno = {'nombre' : nombre, 'notas' : [notas[0], notas[1], notas[2]]} notas.clear() listado_alumnos.append(alumno) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code #1. Declarando Lista vacia lista_alumnos = [] #2. Definiendo la función para cargar n alumnos def alumnos(lista_alumnos, cantidad): for i in range(cantidad): n = 0 alumno = {} nombre = input(f"Ingrese el nombre completo del alumno {len(lista_alumnos) + 1}: ") alumno['nombre'] = nombre while n < 3: try: nota = float(input(f"Ingresa la nota {n + 1}: ")) if nota >= 0 and nota <= 10: alumno[f'nota{n+1}'] = nota n = n+1 else: print("La nota debe estar comprendida entre 0 y 10") except: print("Ingrese una nota valida.") lista_alumnos.append(alumno) #3. Ingresando datos while True: try: cantidad = int(input("Ingrese la cantidad de alumnos a insertar")) if cantidad <= 0: print("Se debe registrar una cantidad de alumnos mayor a 0") else: break except: print("Por favor ingrese un valor de cantidad válido: ") alumnos(lista_alumnos, cantidad) #4. Imprimiendo datos lista_alumnos #------------- SOLUCIÓN DEL PROFESOR ----------------- #cantidad = int(input('¿Cuántos alumnos desea ingresar?')) #cantidad #lista_alumnos = [] #for i in range(cantidad): # alumno = {} #ingreso nombre # nombre = input(f'Ingrese el nombre del alumno {i+1}: ') # alumno['nombre'] = nombre #ingreso de notas # alumno['notas'] = [] # for n in range(3): # nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) # alumno['notas'].append(nota) #agrupando datos en lista # lista_alumnos.append(alumno) #lista_alumnos #alumno #----------------------------------------------------- ###Output _____no_output_____ ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code def promedio (lista_alumnos): for alumno in lista_alumnos: promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3 alumno['promedio'] = promedio def evaluar(lista_alumnos): aprobados = 0 desaprobados = 0 #Hallando promedio de cada alumno promedio(lista_alumnos) for alumno in lista_alumnos: if alumno['promedio'] >= 4: alumno['estado'] = 'Aprobado' aprobados += 1 else: alumno['estado'] = 'Desaprobado' desaprobados += 1 print(f'La cantidad de alumnos aprobados es de: {aprobados}') print(f'La cantidad de alumnos desaprobados es de: {desaprobados}') evaluar(lista_alumnos) ###Output La cantidad de alumnos aprobados es de: 2 La cantidad de alumnos desaprobados es de: 1 ###Markdown 3.Informar el promedio de nota del curso total. 
###Code def promedio_curso(lista_alumnos): promedio = 0 for alumno in lista_alumnos: promedio += alumno['promedio'] return promedio / len(lista_alumnos) print(f"El promedio de nota del curso total es: {promedio_curso(lista_alumnos)}") ###Output El promedio de nota del curso total es: 6.0 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. ###Code def puesto_promedio(lista_alumnos): palto = 0 pbajo = 10 for alumno in lista_alumnos: nombre = alumno['nombre'] if alumno['promedio'] >= palto: alumno_alto = alumno['nombre'] palto = alumno['promedio'] if alumno['promedio'] <= pbajo: alumno_bajo = alumno['nombre'] pbajo = alumno['promedio'] print(f"El alumno con el promedio más alto es: {alumno_alto}") print(f"El alumno con el promedio más bajo es: {alumno_bajo}") puesto_promedio(lista_alumnos) ###Output El alumno con el promedio más alto es: Eddie El alumno con el promedio más bajo es: Raúl ###Markdown 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def buscar_alumno(nombre, lista_alumnos): for alumno in lista_alumnos: if alumno['nombre'] == nombre: print(alumno) nombre = input("Ingrese el nombre del o los alumnos a buscar: ") buscar_alumno(nombre, lista_alumnos) ###Output Ingrese el nombre del o los alumnos a buscar: Eddie ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? 
')) cantidad lista_alumnos = [] for i in range(3): alumno = {} # ingreso nombre nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre #ingreso de notas alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) #agrupando datos en lista lista_alumnos.append(alumno) lista_alumnos alumno for persona in lista_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output gonzalo 5.0 martina 6.0 Isabel 5.666666666666667 ###Markdown ***************************************OTRO METODO CON LISTAS ###Code alumnos=[] num=int(input("Ingrese el número de alumnos ")) listado_alumnos=[] for i in range(num): nomb=input("Ingrese el nombre completo del alumno: ") while True: try: nota1=int(input("Ingrese la nota 1 del alumno ")) except ValueError: print("Debes escribir un número.") continue if 0 < nota1 < 11: break while True: try: nota2=int(input("Ingrese la nota 2 del alumno ")) except ValueError: print("Debes escribir un número.") continue if 0 < nota2 < 11: break while True: try: nota3=int(input("Ingrese la nota 3 del alumno ")) except ValueError: print("Debes escribir un número.") continue if 0 < nota3 < 11: break alumnos={'nombre':nomb,'notas':[nota1,nota2,nota3]} listado_alumnos.append(alumnos) ###Output Ingrese el nombre completo del alumno: diego Ingrese la nota 1 del alumno 1 Ingrese la nota 2 del alumno 2 Ingrese la nota 3 del alumno 3 Ingrese el nombre completo del alumno: chamako Ingrese la nota 1 del alumno 2 Ingrese la nota 2 del alumno 3 Ingrese la nota 3 del alumno 4 Ingrese el nombre completo del alumno: marco Ingrese la nota 1 del alumno 1 Ingrese la nota 2 del alumno 2 Ingrese la nota 3 del alumno 3 Ingrese el nombre completo del alumno: luis Ingrese la nota 1 del alumno 4 Ingrese la nota 2 del alumno 5 Ingrese la nota 3 del alumno 6 Ingrese el nombre completo del alumno: carlos Ingrese la nota 1 del alumno 7 Ingrese la nota 2 del alumno 8 Ingrese la nota 3 del alumno 9 ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code paso = 0 npaso = 0 for persona in listado_alumnos: if sum(persona['notas'])/3 >= 4: paso += 1 print(persona['nombre'],"APROBADO") else: npaso += 1 print(persona['nombre'],"DESAPROBADO") print(F"Los alumnos aprobados son {paso} alumnos reprobados son {npaso}") ###Output diego DESAPROBADO chamako DESAPROBADO marco DESAPROBADO luis APROBADO carlos APROBADO Los alumnos aprobados son 2 alumnos reprobados son 3 ###Markdown 3.Informar el promedio de nota del curso total. ###Code for persona in listado_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output diego 2.0 chamako 3.0 marco 2.0 luis 5.0 carlos 8.0 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. 
###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? ')) cantidad lista_alumnos = [] for i in range(3): alumno = {} # ingreso nombre nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre #ingreso de notas alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) #agrupando datos en lista lista_alumnos.append(alumno) lista_alumnos alumno for persona in lista_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output gonzalo 5.0 martina 6.0 Isabel 5.666666666666667 ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. 3.Informar el promedio de nota del curso total. 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code #1. Declarando Lista vacia lista_alumnos = [] #2. Definiendo la función para cargar n alumnos def alumnos(lista_alumnos, cantidad): for n in range(cantidad): n = 0 alumno = {} nombre = input(f"Ingrese el nombre completo del alumno {len(lista_alumnos) + 1}: ") alumno['nombre'] = nombre while n < 3: try: nota = float(input(f"Ingresa la nota {n + 1}: ")) if nota >= 0 and nota <= 10: alumno[f'nota{n+1}'] = nota n = n+1 else: print("La nota debe estar comprendida entre 0 y 10") except: print("Ingrese una nota valida.") lista_alumnos.append(alumno) #3. Ingresando datos while True: try: cantidad = int(input("Ingrese la cantidad de alumnos a insertar")) if cantidad <= 0: print("Se debe registrar una cantidad de alumnos mayor a 0") else: break except: print("Por favor ingrese un valor de cantidad válido: ") alumnos(lista_alumnos, cantidad) #4. 
Imprimiendo datos lista_alumnos ###Output _____no_output_____ ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code def promedio (lista_alumnos): for alumno in lista_alumnos: promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3 alumno['promedio'] = promedio def evaluar(lista_alumnos): aprobados = 0 desaprobados = 0 #Hallando promedio de cada alumno promedio(lista_alumnos) for alumno in lista_alumnos: if alumno['promedio'] >= 4: alumno['estado'] = 'Aprobado' aprobados += 1 else: alumno['estado'] = 'Desaprobado' desaprobados += 1 print(f'La cantidad de alumnos aprobados es de: {aprobados}') print(f'La cantidad de alumnos desaprobados es de: {desaprobados}') evaluar(lista_alumnos) ###Output _____no_output_____ ###Markdown 3.Informar el promedio de nota del curso total. ###Code def promedio_curso(lista_alumnos): promedio = 0 for alumno in lista_alumnos: promedio += alumno['promedio'] return promedio / len(lista_alumnos) print(f"El promedio de nota del curso total es: {promedio_curso(lista_alumnos)}") ###Output _____no_output_____ ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. ###Code def puesto_promedio(lista_alumnos): palto = 0 pbajo = 10 for alumno in lista_alumnos: nombre = alumno['nombre'] if alumno['promedio'] >= palto: alumno_alto = alumno['nombre'] palto = alumno['promedio'] if alumno['promedio'] <= pbajo: alumno_bajo = alumno['nombre'] pbajo = alumno['promedio'] print(f"El alumno con el promedio más alto es: {alumno_alto}") print(f"El alumno con el promedio más bajo es: {alumno_bajo}") puesto_promedio(lista_alumnos) ###Output _____no_output_____ ###Markdown 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def buscar_alumno(nombre, lista_alumnos): for alumno in lista_alumnos: if alumno['nombre'] == nombre: print(alumno) nombre = input("Ingrese el nombre del o los alumnos a buscar: ") buscar_alumno(nombre, lista_alumnos) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code alumnos = input("Ingrese el nombre del estudiante a añadir: ") if alumnos=="": print("el nombre no puede estar vacio") ### LAS NOTAS DEBEN ESTAR COMPRENDIDAS ENTRE O Y 10 print("INTRODUCE LA NOTA DE LA PRIMERA PC") calif1 = input() print("INTRODUCE LA NOTA DE LA SEGUNDA PC") calif2 = input() print("INTRODUCE LA NOTA DE LA SEGUNDA PC") calif3 = input() ###Output INTRODUCE LA NOTA DE LA PRIMERA PC ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. 
###Code calificacion1 = int(calif1) calificacion2 = int(calif2) calificacion3 = int(calif3) ###promedio de las 3 notas suma_de_notas = calificacion1+calificacion2+calificacion3 promed = suma_de_notas/3 print("el promedio de notas es: %d"%promed) ### tener en cuenta que se apueba con nota >=4 if promed>=4: print("aprobado") else: print("desaprobado") ###Output aprobado ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code lista = [] def alumnos(lista, cant): for i in range(cant): n = 0 alumno = {} nombre = input(f"Ingrese el nombre completo del alumno {len(lista) + 1}: ") alumno['nombre'] = nombre while n < 3: try: nota = float(input(f"Ingresa la nota {n + 1}: ")) if nota >= 0 and nota <= 10: alumno[f'nota{n+1}'] = nota n = n+1 else: print("La nota debe ser menor a 10 y mayor a 0") except: print("Ingrese una nota menor a 10 y mayor a 0") lista.append(alumno) while True: try: cant = int(input("Ingrese la cantidad de alumnos:")) if cant <= 0: print("La cantidad de alumnos debe ser mayor a 0") else: break except: print("Ingrese una nota mayor a 0") alumnos(lista, cant) lista ###Output Ingrese la cantidad de alumnos: 3 Ingrese el nombre completo del alumno 1: Dany Joel Anaya Sánchez Ingresa la nota 1: 4 Ingresa la nota 2: 5 Ingresa la nota 3: 6 Ingrese el nombre completo del alumno 2: José Alejandro Jara Piña Ingresa la nota 1: 6 Ingresa la nota 2: 7 Ingresa la nota 3: 8 Ingrese el nombre completo del alumno 3: Erick Andres Melo Villar Ingresa la nota 1: 1 Ingresa la nota 2: 2 Ingresa la nota 3: -1 ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code def promedio (lista): for alumno in lista: promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3 alumno['promedio'] = promedio def evaluar(lista): aprobados = 0 desaprobados = 0 promedio(lista) for alumno in lista: if alumno['promedio'] >= 4: alumno['estado'] = 'Aprobado' aprobados += 1 else: alumno['estado'] = 'Desaprobado' desaprobados += 1 print(f'Cantidad de alumnos aprobados: {aprobados}') print(f'Cantidad de alumnos desaprobados es de: {desaprobados}') evaluar(lista) ###Output Cantidad de alumnos aprobados: 3 Cantidad de alumnos desaprobados es de: 0 ###Markdown 3.Informar el promedio de nota del curso total. ###Code def promedio_curso(lista): promedio = 0 for alumno in lista: promedio += alumno['promedio'] return promedio / len(lista) print(f"Promedio de nota del curso total: {promedio_curso(lista)}") ###Output Promedio de nota del curso total: 5.444444444444444 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 
###Code def puesto_promedio(lista): promedioalto = 0 promediobajo = 10 for alumno in lista: nombre = alumno['nombre'] if alumno['promedio'] >= promedioalto: alumno_alto = alumno['nombre'] promedioalto = alumno['promedio'] if alumno['promedio'] <= promediobajo: alumno_bajo = alumno['nombre'] promediobajo = alumno['promedio'] print(f"Alumno con el promedio más alto: {alumno_alto}") print(f"Alumno con el promedio más bajo: {alumno_bajo}") puesto_promedio(lista) ###Output Alumno con el promedio más alto: José Alejandro Jara Piña Alumno con el promedio más bajo: Erick Andres Melo Villar ###Markdown 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def buscar_alumno(nombre, lista): for alumno in lista: if alumno['nombre'] == nombre: print(alumno) nombre = input("Nombre de alumno(s) que desea buscar: ") buscar_alumno(nombre, lista) ###Output Nombre de alumno(s) que desea buscar: Erick Andres Melo Villar ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? ')) lista_alumnos = [] for i in range(cantidad): alumno = {} nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) lista_alumnos.append(alumno) alumno ###Output _____no_output_____ ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. ###Code def apro_desap(): for j in lista_alumnos: prom = sum(j['notas'])/3 if prom >= 4: print(j['nombre'], ': Aprobado') else: print(j['nombre'], ': Desaprobado') apro_desap() ###Output Anggie : Aprobado ###Markdown 3.Informar el promedio de nota del curso total. ###Code for n in lista_alumnos: print(n['nombre'], sum(n['notas'])/3) ###Output Anggie 15.333333333333334 ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. ###Code def prom_alto_bajo(): bajo = 11 alto = 0 for n in lista_alumnos: if ((sum(n['notas'])/3) <= bajo): bajo = sum(n['notas'])/3 print('El promedio mas bajo es {}'.format(bajo)) for n in lista_alumnos: if ((sum(n['notas'])/3) >= alto): alto = sum(n['notas'])/3 print('El promedio mas alto es {}'.format(alto)) prom_alto_bajo() ###Output El promedio mas bajo es 11 El promedio mas alto es 15.333333333333334 ###Markdown 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. 
###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output Ingrese cantidad de alumnos: 1 Ingrese el nombre del alumno 1: 15 Ingrese Nota 1: 18 Ingrese Nota 2: 13 Ingrese Nota 3: 17 ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? ')) cantidad lista_alumnos = [] for i in range(3): alumno = {} # ingreso nombre nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre #ingreso de notas alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) #agrupando datos en lista lista_alumnos.append(alumno) lista_alumnos alumno for persona in lista_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output gonzalo 5.0 martina 6.0 Isabel 5.666666666666667 ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno. 3.Informar el promedio de nota del curso total. 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code cantidad = int(input('Cuantos alumnos desea ingresas? ')) cantidad lista_alumnos = [] for i in range(3): alumno = {} # ingreso nombre nombre = input(f'Ingrese el nombre del alumno {i+1}: ') alumno['nombre']= nombre #ingreso de notas alumno['notas'] = [] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) #agrupando datos en lista lista_alumnos.append(alumno) lista_alumnos alumno for persona in lista_alumnos: print(persona['nombre'], sum(persona['notas'])/3) ###Output gonzalo 5.0 martina 6.0 Isabel 5.666666666666667 ###Markdown 2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. 
La nota será el promedio de las 3 notas para cada alumno. ###Code cant=int(input("Ingrese la cantidad de alumnos: ")) lista_alum=[] for i in range(cant): alumno={} nombre=input("Ingrese nombre del estudiante: ") alumno['nombre']=nombre alumno['notas']=[] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) lista_alum.append(alumno) print(lista_alum) for estudiante in lista_alum: if (sum(estudiante['notas'])/3) >= 4: print(estudiante['nombre'], sum(estudiante['notas'])/3, "APROBADO") else: print(estudiante['nombre'], sum(estudiante['notas'])/3, "DESAPROBADO") ###Output _____no_output_____ ###Markdown 3.Informar el promedio de nota del curso total. ###Code cant=int(input("Ingrese la cantidad de alumnos: ")) lista_alum=[] for i in range(cant): alumno={} nombre=input("Ingrese nombre del estudiante: ") alumno['nombre']=nombre alumno['notas']=[] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) lista_alum.append(alumno) for estudiante in lista_alum: print(estudiante['nombre'], "Su promedio es: ",sum(estudiante['notas'])/3) ###Output _____no_output_____ ###Markdown 4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja. ###Code def prom_alto(): cant=int(input("Ingrese la cantidad de alumnos: ")) lista_alum=[] for i in range(cant): alumno={} nombre=input(f"Ingrese nombre del estudiante {i+1}: ") alumno['nombre']=nombre alumno['notas']=[] alumno['promedio']=[] for n in range(3): nota = float(input(f'Ingrese la nota {n+1} del alumno: ')) alumno['notas'].append(nota) alumno['promedio'] =sum(alumno['notas'])/3 lista_alum.append(alumno) ordenados = sorted(lista_alum, key=lambda alumno : alumno['promedio']) print(ordenados) print("El estudiante con promedio BAJO es :", ordenados[0]) print("El estudiante con promedio ALTO es :", ordenados[-1]) prom_alto() ###Output _____no_output_____ ###Markdown 5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas. ###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output _____no_output_____ ###Markdown PROBLEMAS DIVERSOS 1.Realizar una función que permita la carga de n alumnos. Por cada alumno se deberá preguntar el nombre completo y permitir el ingreso de 3 notas. Las notas deben estar comprendidas entre 0 y 10. Devolver el listado de alumnos. ###Code N=input("Ingresar su nombre compelto:") NOTA1=float(input("Ingresar primera nota:")) NOTA2=float(input("Ingresar segunda nota:")) NOTA3=float(input("Ingresar tercera nota:")) cantidad = int(input('Cuantos alumnos desea ingresas? 
'))
cantidad
lista_alumnos = []
for i in range(3):
    alumno = {}
    # ingreso nombre
    nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
    alumno['nombre'] = nombre
    # ingreso de notas
    alumno['notas'] = []
    for n in range(3):
        nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
        alumno['notas'].append(nota)
    # agrupando datos en lista
    lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
    print(persona['nombre'], sum(persona['notas'])/3)
###Output
lisseth 8.0
camila 12.333333333333334
pedro 11.0
###Markdown
2.Definir una función que dado un listado de alumnos evalúe cuántos aprobaron y cuántos desaprobaron, teniendo en cuenta que se aprueba con 4. La nota será el promedio de las 3 notas para cada alumno.

###Code
numeroCalificaciones = 0
while True:
    try:
        # keep asking until the user enters a valid integer
        numeroCalificaciones = int(input("Dame el numero de calificaciones: "))
        break
    except ValueError:
        print("Error")
suma = 0
Calificaciones = []
for i in range(0, numeroCalificaciones):
    while True:
        try:
            Calificacion = int(input("dame la calificacion" + str(i) + ":"))
            break
        except ValueError:
            print("Error:")
    Calificaciones.append(Calificacion)
    suma = suma + Calificacion
promedio = suma / numeroCalificaciones
for i in range(0, numeroCalificaciones):
    if Calificaciones[i] >= 15:
        print(str(Calificaciones[i]) + " Calificacion Aprobatoria")
    else:
        print(str(Calificaciones[i]) + " Calificacion NO Aprobatoria")
print(promedio)

# Al escanear se devuelve como cadena
promedio_como_cadena = input("Dime tu promedio: ")
# Convertir a float
promedio = float(promedio_como_cadena)
# Hacer la comparación
if promedio >= 11:
    print("Aprobado")
else:
    print("No aprobado")
###Output
Dime tu promedio: 10
No aprobado
###Markdown
3.Informar el promedio de nota del curso total.

###Code
###Output
_____no_output_____
###Markdown
4.Realizar una función que indique quién tuvo el promedio más alto y quién tuvo la nota promedio más baja.

###Code
def fun(nota):
    if nota > 7:
        return "Promociona"
    else:
        if nota < 4:
            return "Aplazado"
        else:
            if 4 <= nota <= 7:
                return "Aprobado"

aplazados = aprobados = notables = 0
while True:
    nota = float(input('Ingrese nota (0 para terminar):'))
    if nota == 0:
        break
    if nota > 10:
        continue
    else:
        if nota < 4:
            aplazados += 1
        elif nota >= 4 and nota <= 7:
            aprobados += 1
        elif nota > 7 and nota <= 10:
            notables += 1
print('\nNumero de aprobados %d' % aprobados)
print('Numero de aplazados %d' % aplazados)
print('Numero de notables %d' % notables)
###Output
Ingrese nota (0 para terminar):12
Ingrese nota (0 para terminar):12
Ingrese nota (0 para terminar):0

Numero de aprobados 0
Numero de aplazados 0
Numero de notables 0
###Markdown
5.Realizar una función que permita buscar un alumno por nombre, siendo el nombre completo o parcial, y devuelva una lista con los n alumnos que concuerden con ese nombre junto con todos sus datos, incluido el promedio de sus notas.
###Code def alumno(n): notas=[] nombre=[] for i in range(n): name= input(f'Ingrese el nombre del alumno {i+1}: ') nombre.append(name) nota_1 = float(input('Ingrese Nota 1: ')) nota_2 = float(input('Ingrese Nota 2: ')) nota_3 = float(input('Ingrese Nota 3: ')) notas.append([nota_1,nota_2,nota_3]) print("Alumnos \t Notas") for i in range(n): print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2]) n=int(input("Ingrese cantidad de alumnos: ")) alumno(n) ###Output Ingrese cantidad de alumnos: 2 Ingrese el nombre del alumno 1: camila Ingrese Nota 1: 12 Ingrese Nota 2: 13 Ingrese Nota 3: 14 Ingrese el nombre del alumno 2: pedro Ingrese Nota 1: 12 Ingrese Nota 2: 13 Ingrese Nota 3: 14 Alumnos Notas camila 12.0 13.0 14.0 pedro 12.0 13.0 14.0
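###Markdown
None of the solutions above implements the partial-name matching that problem 5 asks for. A minimal sketch using Python's `in` substring test; the helper name `buscar_parcial` and the sample list are illustrative, not from the original notebooks:

```python
def buscar_parcial(nombre, lista_alumnos):
    # Case-insensitive substring match on the 'nombre' field;
    # returns every matching student with its average added.
    resultado = []
    for alumno in lista_alumnos:
        if nombre.lower() in alumno['nombre'].lower():
            alumno['promedio'] = sum(alumno['notas']) / 3
            resultado.append(alumno)
    return resultado

# Example with the data structure used by most solutions above
lista_alumnos = [{'nombre': 'Camila Perez', 'notas': [12.0, 13.0, 14.0]},
                 {'nombre': 'Pedro Gomez', 'notas': [12.0, 13.0, 14.0]}]
print(buscar_parcial('cam', lista_alumnos))  # matches 'Camila Perez', promedio 13.0
```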
notebooks/02_basic_numerical_operations.ipynb
###Markdown
Numerical Operations in Python

###Code
from __future__ import print_function  # we will use the print function in this tutorial for python 2 - 3 compatibility
a = 4
b = 5
c = 6
# we'll declare three integers to assist us in our operations
###Output
_____no_output_____
###Markdown
If we want to add the first two together (and store the result in a variable we will call `S`):

```python
S = a + b
```

The last part of the equation (i.e `a+b`) is the numerical operation. This sums the value stored in the variable `a` with the value stored in `b`. The plus sign (`+`) is called an arithmetic operator. The equal sign is a symbol used for assigning a value to a variable. In this case the result of the operation is assigned to a new variable called `S`. The basic numeric operators in python are:

###Code
# Sum:
S = a + b
print('a + b =', S)
# Difference:
D = c - a
print('c - a =', D)
# Product:
P = b * c
print('b * c =', P)
# Quotient:
Q = c / a
print('c / a =', Q)
# Remainder:
R = c % a
print('c % a =', R)
# Floored Quotient:
F = c // a
print('c // a =', F)
# Negative:
N = -a
print('-a =', N)
# Power:
Pow = b ** a
print('b ** a =', Pow)
###Output
a + b = 9
c - a = 2
b * c = 30
c / a = 1.5
c % a = 2
c // a = 1
-a = -4
b ** a = 625
###Markdown
What is the difference between `/` and `//` ? The first performs a regular division between two numbers, while the second performs a *euclidean division* **without the remainder**. Important note: In python 2 `/` would return an integer if the two numbers participating in the division were integers. In that sense:

```python
Q = 6 / 4          # this would perform a euclidean division because both divisor and dividend are integers!
Q = 6.0 / 4        # this would perform a real division because the dividend is a float
Q = c / (a * 1.0)  # this would perform a real division because the divisor is a float
Q = c / float(a)   # this would perform a real division because the divisor is a float
```

One way to make python 2 compatible with python 3 division is to import `division` from the `__future__` package. We will do this for the remainder of this tutorial.

###Code
from __future__ import division
Q = c / a
print(Q)
###Output
1.5
###Markdown
We can combine more than one operation in a single line.

###Code
E = a + b - c
print(E)
###Output
3
###Markdown
Priorities are the same as in algebra: parentheses -> powers -> products -> sums. We can also perform more complex assignment operations:

###Code
print('a =', a)
print('S =', S)
S += a   # equivalent to S = S + a
print('+ a =', S)
S -= a   # equivalent to S = S - a
print('- a =', S)
S *= a   # equivalent to S = S * a
print('* a =', S)
S /= a   # equivalent to S = S / a
print('/ a =', S)
S %= a   # equivalent to S = S % a
print('% a =', S)
S **= a  # equivalent to S = S ** a
print('** a =', S)
S //= a  # equivalent to S = S // a
print('// a =', S)
###Output
a = 4
S = 9
+ a = 13
- a = 9
* a = 36
/ a = 9.0
% a = 1.0
** a = 1.0
// a = 0.0
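###Markdown
As a quick, illustrative check of the precedence rules mentioned above (these lines are an addition, not part of the original notebook):

```python
print(2 + 3 * 4)    # 14, not 20: the product binds tighter than the sum
print((2 + 3) * 4)  # 20: parentheses force the sum first
print(-2 ** 2)      # -4: the power binds tighter than the unary minus
print(2 * 3 ** 2)   # 18: power before product
```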
###Markdown
Other operations:

###Code
n = -3
print('n =', n)
A = abs(n)         # Absolute:
print('absolute(n) =', A)
C = complex(n, a)  # Complex: -3+4j
print('complex(n,a) =', C)
c = C.conjugate()  # Conjugate: -3-4j
print('conjugate(C) =', c)
###Output
n = -3
absolute(n) = 3
complex(n,a) = (-3+4j)
conjugate(C) = (-3-4j)
###Markdown
Bitwise operations:

Operations that first convert a number to its binary equivalent, then perform operations bit by bit, before converting the result back to its original form.

###Code
a = 3  # or 011 (in binary)
b = 5  # or 101 (in binary)
print(a | b)   # bitwise OR: 111 (binary) --> 7 (decimal)
print(a ^ b)   # exclusive OR: 110 (binary) --> 6 (decimal)
print(a & b)   # bitwise AND: 001 (binary) --> 1 (decimal)
print(b << a)  # b shifted left by a bits: 101000 (binary) --> 40 (decimal)
print(8 >> a)  # 8 shifted right by a bits: 0001 (binary - was 1000 before shift) --> 1 (decimal)
print(~a)      # bitwise NOT (two's complement): ~3 --> -4 (decimal)
###Output
7
6
1
40
1
-4
###Markdown
Built-in methods

Some data types have built in methods, for example we can check if a float variable stores an integer as follows:

###Code
a = 3.0
t = a.is_integer()
print(t)
a = 3.2
t = a.is_integer()
print(t)
###Output
True
False
###Markdown
Note that the casting operation from floats to integers just discards the decimal part (it doesn't attempt to round the number).

###Code
print(int(3.21))
print(int(3.99))
###Output
3
3
###Markdown
We can always `round` the number beforehand.

###Code
int(round(3.6))
###Output
_____no_output_____
###Markdown
Exercise

What do the following operations return?

###Code
E1 = ( 3.2 + 12 ) * 2 / ( 1 + 1 )
E2 = abs(-4 ** 3)
E3 = complex( 8 % 3, int(-2 * 1.0 / 4)-1 )
E4 = (6.0 / 4.0).is_integer()
E5 = (4 | 2) ^ (5 & 6)
###Output
_____no_output_____
###Markdown
Python's mathematical functions

Most math functions are included in a separate library called `math`.

###Code
import math
x = 4
print('exp = ', math.exp(x))       # exponent of x (e**x)
print('log = ', math.log(x))       # natural logarithm (base=e) of x
print('log2 = ', math.log(x,2))    # logarithm of x with base 2
print('log10 = ', math.log10(x))   # logarithm of x with base 10, equivalent to math.log(x,10)
print('sqrt = ', math.sqrt(x))     # square root
print('cos = ', math.cos(x))       # cosine of x (x is in radians)
print('sin = ', math.sin(x))       # sine
print('tan = ', math.tan(x))       # tangent
print('arccos = ', math.acos(.5))  # arc cosine (in radians)
print('arcsin = ', math.asin(.5))  # arc sine
print('arctan = ', math.atan(.5))  # arc tangent
# arc cosine and arc sine only accept values in [-1,1]
print('deg = ', math.degrees(x))   # converts x from radians to degrees
print('rad = ', math.radians(x))   # converts x from degrees to radians
print('e = ', math.e)              # mathematical constant e = 2.718281...
print('pi = ', math.pi)            # mathematical constant pi = 3.141592...
###Output
exp =  54.598150033144236
log =  1.3862943611198906
log2 =  2.0
log10 =  0.6020599913279624
sqrt =  2.0
cos =  -0.6536436208636119
sin =  -0.7568024953079282
tan =  1.1578212823495775
arccos =  1.0471975511965979
arcsin =  0.5235987755982989
arctan =  0.4636476090008061
deg =  229.1831180523293
rad =  0.06981317007977318
e =  2.718281828459045
pi =  3.141592653589793
###Markdown
The `math` package also provides other functions such as hyperbolic trigonometric functions, error functions, gamma functions etc.

Generating a pseudo-random number

Python has a built-in package for generating pseudo-random sequences called `random`.

###Code
import random
print(random.randint(1,10))       # Generates a random integer in [1,10]
print(random.randrange(1,100,2))  # Generates a random integer from [1,100) with step 2, i.e from 1, 3, 5, ..., 97, 99.
print(random.uniform(0,1))        # Generates a random float in [0,1]
###Output
1
21
0.7912325286049906
###Markdown
Example

Consider the complex number $3 + 4j$. Calculate its magnitude and its angle, then transform it into a tuple of its polar form.

###Code
z = 3 + 4j
###Output
_____no_output_____
###Markdown
Solution attempt 1 (analytical).
We don't know any of the built-in complex methods and we try to figure out an analytical solution. We will first calculate the real and imaginary parts of the complex number and then we will try to apply the Pythagorean theorem to calculate the magnitude.

Step 1: Find the real part of the complex number.

We will make use of the mathematical formula: $$Re(z) = \frac{1}{2} \cdot ( z + \overline{z} )$$

###Code
rl = ( z + z.conjugate() ) / 2
print(rl)
###Output
(3+0j)
###Markdown
Note that *rl* is still in complex format, even though it represents a real number...

Step 2: Find the imaginary part of the complex number.

**1st way**, like before, we use the mathematical formula: $$Im(z) = \frac{z - \overline{z}}{2i}$$

###Code
im = ( z - z.conjugate() ) / 2j
print(im)
###Output
(4+0j)
###Markdown
Same as before, `im` is in complex format, even though it represents a real number...

Step 3: Find the sum of the squares of the real and the imaginary parts: $$ S = Re(z)^2 + Im(z)^2 $$

###Code
sq_sum = rl**2 + im**2
print(sq_sum)
###Output
(25+0j)
###Markdown
Still we are in complex format. Let's try to calculate its square root to find out the magnitude:

###Code
mag = math.sqrt(sq_sum)
###Output
_____no_output_____
###Markdown
Oh... so the `math.sqrt()` method doesn't support complex numbers, even though what we're trying to use actually represents a real number. Well, let's try to cast it as an integer and then pass it into *math.sqrt()*.

###Code
sq_sum = int(sq_sum)
###Output
_____no_output_____
###Markdown
We still get an error. We're now stuck in a situation where we are trying to do something **mathematically sound** that the computer refuses to do. But what is causing this error? In math $25$ and $25+0i$ are exactly the same number. Both represent a natural number. But the computer sees them as two different entities entirely. One is an object of the *integer* data type and the other is an object of the *complex* data type. The programmer who wrote the code for the `math.sqrt()` method of the math package created it so that it can be used on *integers* and *floats* (but not *complex* numbers), even though in our instance the two are semantically the same thing. Ok, so trying our first approach didn't work out. Let's try calculating this another way. We know from complex number theory that: $$ z \cdot \overline{z} = Re(z)^2 + Im(z)^2 $$

###Code
sq_sum = z * z.conjugate()
mag = math.sqrt(sq_sum)
###Output
_____no_output_____
###Markdown
This didn't work out either...

Solution attempt 2.

We know that a complex number represents a vector in the *Re*, *Im* axes. Mathematically speaking, the absolute value of a real number is defined differently than the absolute value of a complex one. Graphically though, they can both be defined as the distance of the number from (0,0). If we wanted to calculate the absolute of a real number we should just disregard its sign and treat it as positive. On the other hand, if we wanted to do the same thing to a complex number we would need to calculate the euclidean norm of its vector (or in other words measure the distance from the complex number to (0,0), using the Pythagorean theorem). So in essence what we are looking for is the absolute value of the complex number.

Step 1: Calculate the magnitude.

###Code
mag = abs(z)
print(mag)
###Output
5.0
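###Markdown
A one-line comparison makes this concrete; the values here are arbitrary examples, not part of the original solution:

```python
print(abs(-5))      # absolute value of an integer: drop the sign --> 5
print(abs(3 + 4j))  # absolute value of a complex: euclidean norm --> 5.0
```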
Two things that have totally **different mathematical definitions** and methods of calculation (the absolute value of a complex number and that of an integer) can be calculated using the same function.
**2nd way:** As a side note, we could have calculated the magnitude using the previous approach, if we knew some of the complex numbers' built-in attributes:
###Code
rl = z.real
print('real =', rl)
im = z.imag
print('imaginary =', im)
# now that these numbers are floats we can continue and perform operations such as the square root
mag = math.sqrt(rl**2 + im**2) # mag = 5.0
print('magnitude =', mag)
###Output
real = 3.0 imaginary = 4.0 magnitude = 5.0
###Markdown
Step 2: Calculate the angle.
**1st way:** First we will calculate the cosine of the angle. The cosine is the real part divided by the magnitude.
###Code
cos_ang = rl / mag
print(cos_ang)
###Output
0.6
###Markdown
To find the angle we use the arc cosine function from the math package.
###Code
ang = math.acos(cos_ang)
print('phase in rad =', ang)
print('phase in deg =', math.degrees(ang))
###Output
phase in rad = 0.9272952180016123 phase in deg = 53.13010235415599
###Markdown
**2nd way:** Another way to find the angle (or, more correctly, the phase) of the complex number is to use a function from the `cmath` (complex math) package.
###Code
import cmath

ang = cmath.phase(z)
print('phase in rad =', ang)
###Output
phase in rad = 0.9272952180016122
###Markdown
Without needing to calculate anything beforehand (no *rl* and no *mag* needed).
Step 3: Create a tuple of the complex number's polar form:
###Code
pol = (mag, ang)
print(pol)
###Output
(5.0, 0.9272952180016122)
###Markdown
Solution attempt 3 (using Python's built-in cmath package):
###Code
pol = cmath.polar(z)
print(pol)
###Output
(5.0, 0.9272952180016122)
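###Markdown
As a closing aside (an added cell, not part of the original example, using only the standard library): `cmath` also provides the inverse conversion, `cmath.rect(r, phi)`, which rebuilds a complex number from its polar form, so we can round-trip our result:
###Code
import cmath

# Convert to polar form and back again; the round trip should
# recover (approximately) the original number 3+4j.
mag, ang = cmath.polar(3 + 4j)
print(cmath.rect(mag, ang))
###Output
_____no_output_____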
Python_Misc/TMWP_PY36_OO_Towers_of_Hanoi.ipynb
###Markdown Python 3 [conda default] Towers of Hanoi![Towers of Hanoi](https://upload.wikimedia.org/wikipedia/commons/6/60/Tower_of_Hanoi_4.gif)The "Towers of Hanoi' problem is a popular choice by computer programming training classes. An execellent write-up of it can be found here:[Python Course.eu - Towers of Hanoi](http://www.python-course.eu/towers_of_hanoi.php)Wikipedia also has some great information about the problem, its history, and related programming concerns: [Hannoi on Wikipedia](http://en.wikipedia.org/wiki/Tower_of_Hanoi) (though some of it gets highly technical). In this Notebook- [The Solution](solution): Immediately below is a solution to the problem organized as OO Python code. This code leverages concepts and ideas from the best of what is found in the research section, but creates a unique implementation that could be used to achieve multiple objectives: output the answer, store the answer, tell us different things about the answer. This code also illustrates many concepts of the Python programming language that students and non-experts may find useful.- [OO Design Considerations For The Solution](ooDesign) - notes on the object design heirarchy (what was chosen over what was rejected).- [Putting a Tracer on The Solution To Watch Recursion in Action](trace) - This section is an experiment purely for the educational value. It makes it possible to watch recursive function calls traced through the hanoi solution object.- [Related Research and Experiments](Research): This section contains code from multiple sources showing approaches to the Towers of Hanoi problem. It also contains edits to this code that help unmask things about the algorithms' inner workings, as well as enhancements and experiments that ultimately pave the way for the final solution given at the start of this notebook. Version NotesThis code was originally written in Python 2.7. It was later converted to Python 3.6. The two changes that were required in order to do this were: - import sys (was not needed under Python 2.7- .pop() worked on range objects in Python 2.7, they had to be wrapped in list() to work under Python 3.6 - Example: `(list(range(numDisks, 0, -1)), 1)`- all the rest of this code is unchanged from the original Python 2.7 experiment An Object Oriented Solution to The Towers of Hanoi ###Code ''' This solution leverages the best of the code in this notebook to attempt to create something extensible, self-contained, and capable of delivering different outputs to meet different needs. It gets longer than the more elegant solutions in the "Research" section, but the design wraps the basic functionality with different features that take into account different potential future use cases. ''' ### Verson Two: Object code import pandas as pd from warnings import warn from warnings import filterwarnings import numpy as np import sys ## added during PY 2.7 to 3.6 upgrade test class SimpleWarning(object): '''class SimpleWarning() -->\n\n configures warn() for the most common "alert the user" use case. 
''' def __init__(self, warnText, wStackLevel=1, wCategory=RuntimeWarning): self._wrnTxt = "\n%s" %(warnText) filterwarnings("once") warn(self._wrnTxt, stacklevel=wStackLevel, category=wCategory) sys.stderr.flush() # this provides warning ahead of the output instead of after it # sys is imported by warnings so we don't have to import it here # common categories to use: UserWarning, Warning, RuntimeWarning, ResourceWarning def reInitialize(self, riWarnText, riWStackLevel=1, riWCategory=RuntimeWarning): '''SimpleWarning.reInitialize(...)-->\n\nFor multiple warnings in one code procedure, this function can reinitialize the same object to be reused.''' self.__init__(self, riWarnText, riWStackLevel, riWCategory) class HanoiSolution(object): _mvListValues = ["Step", "Count", "None", "Visual"] def __init__(self, numDisks, moveList="Visual", divider=30): # conditions for warning and to help control all output that comes later: if numDisks <= 0: self._tmpTxt = "is not valid for the number of disks. Resetting number of disks to default." self._tmpTxt = "%s %s" %(numDisks, self._tmpTxt) # warn("\n%s %s" %(numDisks, self._tmpTxt)) self._hsWarn = SimpleWarning(self._tmpTxt) numDisks = 3 if numDisks > 1: self._chr1 = 's' # used to ensure printed output is plural if disks > 1 else: # set plurality conditions here and then just add self._chr1 self._chr1 = '' # instead of 's' on words throughout the code where it applies if numDisks > 9: # for 10 disks or more (issue a warning) self._tmpTxt = "disks selected. The number of steps in a solution grows at an accelerated rate" self._tmpTxt += " as the number of disks increases. \nThis program may take a while to complete.\n" self._tmpTxt += "Please be patient..." self._tmpTxt = "%d %s" %(numDisks, self._tmpTxt) # warn("\n%s %s" %(numDisks, self._tmpTxt)) self._hsWarn = SimpleWarning(self._tmpTxt) # peg structure: ( [ disks ], Peg_ID_Number ) self._peg1 = (list(range(numDisks, 0, -1)), 1) self._peg2 = ([], 2) self._peg3 = ([], 3) self.dsks = numDisks # number of disks for simulation self.moveCount = 0 # move counter self.divChars = divider # number of characters for divider used in output self.moveListDefault = moveList # what type of output from the moveList do you want? # store the answer as a default for the object to use # invalid moveList argument is reset to default and a warning is output: if moveList not in self._mvListValues: self._tmpTxt = "is not a valid moveList arg for _output_diskProgress(...). " + \ "Default will be used." 
self._tmpTxt = "'%s' %s" %(moveList, self._tmpTxt) # warn("\n'%s' %s" %(moveList, self._tmpTxt)) self._hsWarn = SimpleWarning(self._tmpTxt) self.moveListDefault = self._mvListValues[-1] # last value, by convention is default for obj class # make it default for this instance of the class else: self.moveListDefault = moveList self._moveDisks(nDisks=numDisks, source=self._peg1, target=self._peg3, auxiliary=self._peg2, moveList=self.moveListDefault) if moveList != "None": print(self.__str__()) # this outputs final answer with move count # at end of all moveList args that include printed # output # meat and potatoes of the algorithm: sumulation of moving the disks from one peg to another def _moveDisks(self, nDisks, source, target, auxiliary, moveList): if nDisks > 0: # move n-1 disks from source to auxiliary self._moveDisks(nDisks-1, source, auxiliary, target, moveList) if self.moveCount == 0: # output initial state if appropriate self._diskMovementProgression(nDisks, source, target, moveList) self.moveCount += 1 # increment counter of how many steps it takes # move the nth disk from source to target target[0].append(source[0].pop()) # in this object: outputs the moves in accordance with moveList argument self._diskMovementProgression(nDisks, source, target, moveList) # move the n-1 disks that were left on auxiliary to target self._moveDisks(nDisks-1, auxiliary, target, source, moveList) def _diskMovementProgression(self, nDisks, source, target, moveList): # this function sets up ability to over-ride the function call in the middle # of moveDisk by child objects self._output_diskProgress(nDisks, source, target, moveList) def _output_diskProgress(self, nDisks, source, target, moveList): # Display our progress (create each step of the answer and output it) if moveList == "Visual" or moveList == "Step": if moveList == "Visual": if self.moveCount > 0: print("Step %d:" %self.moveCount) else: # in this context, moveCount = 0 print("Initial State:") # used by both "Visual" and "Step" if self.moveCount == 0: pass else: print("Move disk " + str(nDisks) + " from peg " + str(source[1]) + " to peg " + str(target[1])) if moveList == "Visual": print("-"*self.divChars) print(str(self._peg1[0]) + '\n' + str(self._peg2[0]) + '\n' + str(self._peg3[0]) + '\n' + '#'*self.divChars) elif moveList == "None" or moveList == "Count": pass else: # this scenario should never occur the way this code is written. 
# if it does, we want the code to throw an error so we know to look into it raise ValueError("%s is not a valid arg for moveList in _output_diskProgress(...).") def reInitialize(self, nDisks, moveList, divider=30): # allows resetting the object for a new simulation without having to create a new instance self.__init__(nDisks, moveList, divider) def __str__(self): # what we want to see for print(HanoiSolution) return ("%d disk" + self._chr1 + " would take %d move" + self._chr1 + " to solve.") %(self.dsks, self.moveCount) class HanoiStoredSolution(HanoiSolution): dfCellDataType = np.int64 def __init__(self, numDisks, moveList="Stored", divider=30): self._solutionDF = pd.DataFrame({'disk':[],'fromPeg':[], 'toPeg':[]}, dtype=self.dfCellDataType) self._mvListValues.append("Stored") super(HanoiStoredSolution, self).__init__(numDisks, moveList, divider) # alternatively, this should also work: HanoiSolution.__init__(self, numDisks, moveList, divider) if moveList == "Stored": # print("You selected to store the movelist with this agrument: %s" %moveList) # debug statement print("The move list is stored in a dataframe accessible with `.get_solutionDF()`:") print(self.get_solutionDF()) def _store_diskProgress(self, dsk, sourceID, targetID): # Builds this: self._solutionPD = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[] }) if self.moveCount > 0: self._solutionDF = self._solutionDF.append(pd.DataFrame({ 'disk':[dsk], 'fromPeg':[sourceID],'toPeg':[targetID] }), ignore_index=True) def _diskMovementProgression(self, nDisks, source, target, moveList="Visual"): # tell us about it and store the results: if moveList == "Stored": output_moveList = "None" else: output_moveList = moveList self._output_diskProgress(nDisks, source, target, output_moveList) self._store_diskProgress(nDisks, source[1], target[1]) def get_solutionDF(self): return self._solutionDF print("Hanoi Solution Objects Loaded and ready to use.") # doc strings for SimpleWarning print(SimpleWarning.__doc__) print("-"*72) print(SimpleWarning.reInitialize.__doc__) ###Output class SimpleWarning() --> configures warn() for the most common "alert the user" use case. ------------------------------------------------------------------------ SimpleWarning.reInitialize(...)--> For multiple warnings in one code procedure, this function can reinitialize the same object to be reused. ###Markdown Tests of Hanoi Solution ObjectsThese tests are designed to test and show all the functionality built into the Hanoi solution objects. Comments in each cell indicate what is being tested ###Code # Method Resolution Order for the class objects print(HanoiSolution.__mro__) print(HanoiStoredSolution.__mro__) # pass in an invalid moveList argument ... show the warning that is displayed but code continues to execute import sys myHanoiTower = HanoiSolution(1, "Something Stupid") # pass in invalid numDisks argument ... show warning, code executes with object default # note: warning comes at the end of the output myHanoiTower = HanoiSolution(0) # default moveList = 'Visible' # reinitialize existing object with new number of disks # just output the final count myHanoiTower.reInitialize(25, "Count") # this took maybe 3 minutes to run on my computer anotherHanoiTower = HanoiSolution(13, "None") # warning kicks in if numDisks is >= 10 # this cell is part of testing the "None" option for moveList print(anotherHanoiTower) # now we can ask it for the answer anotherHanoiTower.moveCount # or obtain the final answer to send to other code # reinitialize with "Step" output ... 
using the same object but reinitializing for new number of Disks and Output request myHanoiTower.reInitialize(7, "Step") # object stores these elements once created: print(myHanoiTower.divChars) print(myHanoiTower.dsks) print(myHanoiTower.moveCount) print(myHanoiTower.moveListDefault) # build a describe or summary function for this later? myHanoiSSTower = HanoiStoredSolution(3) # access this element and reset it if your computer is not 64bit: myHanoiSSTower.dfCellDataType myHanoiSSTower = HanoiStoredSolution(3, "Count") # output stops with the move count for the solution myHanoiSSTower.get_solutionDF() # produces the df table if last line on Jupyter myHanoiSSTower = HanoiStoredSolution(3, "None") # outputs Nothing (except initialization lines) myHanoiSSTower.get_solutionDF() # produces the df table if last line on Jupyter # in production code, might turn off class object intialization lines # other stored elements in the object: print(myHanoiSSTower.divChars) print(myHanoiSSTower.dsks) print(myHanoiSSTower.moveCount) print(myHanoiSSTower.moveListDefault) myHanoiSSTower.reInitialize(10, "Count") # just showing some more of the parent code working in the child object # myHanoiSSTower.get_solutionDF() # solution DF is big, uncomment this line to view it myHanoiSSTower.get_solutionDF().tail() # this validates the count is right by showing final records in the DF # note: index runs 0 ... 1022, so 1023 is the correct count myHanoiSSTower2 = HanoiStoredSolution(5, "Visual", 72) # args: numDisks, moveList (type), divider (num chars) myHanoiSSTower2.get_solutionDF() # show stored DF in the object myHanoiSSTower2.reInitialize(7, "Step", 35) myHanoiSSTower2.get_solutionDF() # show stored solution when done # show changes to stored values: print(myHanoiSSTower2.divChars) print(myHanoiSSTower2.dsks) print(myHanoiSSTower2.moveCount) print(myHanoiSSTower2.moveListDefault) ###Output 35 7 127 Step ###Markdown OO Design ConsiderationsTheoretically, as simulations grow larger, it may be desirable to have versions of the code that store the results versus versions that do not (so as not to expend the memory storing the steps when the resulting DF is not needed). Python does support multi-inheritance, and so in theory, the objects could have followed an inheritance scheme like this:Base Object: output of moves in solution => Child that can output all disk moves (as simple steplist) => Child that can add more visual output to move list => Child that can store all disk moves in DF => multi-inherence: child that can do all output + store results in DFInstead, a simpler design that avoids multi-inheritance was selected. Multi-inheritance increases the complexity of maintenance and creates code that is harder to read and instantly see what it does. There are many use cases for which this complexity is worth what it gains you, but not in this such a design feels over-engineered. The final object model selected has just two "hanoi solution" objects in it:Base Object: can print out whatever we wish to see of the solution => Child Object: inherits all print options and stores the moves in a DF Making The Solution Traceable (Watching The Recursion)This modification to the solution is designed for purely academic reasons. One of the reasons "The Towers of Hanoi" problem is so popular in code language education programs is that it is a problem best solved through recursion. In fact, it is said that the problem is difficult to solve without recursion. 
The purpose of the coding modifications in this section are to add tracer lines into the output that make visible the method calls and recursive method calls in action.Output gets messy, but is interesting from a purely academic and educational standpoint. ###Code class HanoiStoredSolutionTron(HanoiStoredSolution): ''' HanoiStoredSolutionTron -->\n\nAdds TRON (tracer on) functionality to HanoiStoredSolution. Created as an illustration of the flow of recursive function calls.''' def __init__(self, numDisks, moveList="Stored", divider=30): print(HanoiStoredSolutionTron.__mro__) print("calling: __init__(self, " + str(numDisks) + ", " + str(moveList) + ", " + str(divider) + ")") HanoiStoredSolution.__init__(self, numDisks, moveList, divider) def _moveDisks(self, nDisks, source, target, auxiliary, moveList): print("calling: _moveDisks(self, " + str(nDisks) + ", " + str(source) + ", " + str(target) + ", "+ str(auxiliary) + ", " + str(moveList) + ")") HanoiStoredSolution._moveDisks(self, nDisks, source, target, auxiliary, moveList) def _diskMovementProgression(self, nDisks, source, target, moveList="Visual"): print("calling: _diskMovementProgression(self, " + str(nDisks) + ", " + str(source) + ", " + str(target) + ", " + str(moveList) + ")") HanoiStoredSolution._diskMovementProgression(self, nDisks, source, target, moveList) def _store_diskProgress(self, dsk, sourceID, targetID): print("calling: _store_diskProgress(self, " + str(dsk) + ", " + str(sourceID) + ", " + str(targetID) + ")") HanoiStoredSolution._store_diskProgress(self, dsk, sourceID, targetID) def _output_diskProgress(self, nDisks, source, target, moveList): print("calling: _output_diskProgress(self, " + str(nDisks) + ", " + str(source) + ", " + str(target) + ", " + str(moveList) + ")") HanoiStoredSolution._output_diskProgress(self, nDisks, source, target, moveList) def reInitialize(self, nDisks, moveList, divider=30): print("calling: reInitialize(self, " + str(nDisks) + ", " + str(moveList) + ", " + str(divider) + ")") HanoiStoredSolution.reInitialize(self, nDisks, moveList, divider) def __str__(self): print("calling: __str__(self)") return HanoiStoredSolution.__str__(self) def get_solutionDF(self): print("calling: get_solutionDF(self)") return HanoiStoredSolution.get_solutionDF(self) print("HanoiStoredSolutionTron Object Loaded.") print(HanoiStoredSolutionTron.__doc__) hsst1 = HanoiStoredSolutionTron(3, "Visual", 72) # function calls w/ full output showing hsst1.reInitialize(3, "Count") # Just function call trace and final move count hsst1.get_solutionDF() ###Output calling: get_solutionDF(self) ###Markdown Hannoi Solutions Research and ExperimentationCode presented here, when it has a source, the source is sited. Then edits and enhancements are made to this code experimenting with it in different ways as part of the research that ultimately led to the solution given at the start of this notebook. ###Code # Example 1: # source: http://www.python-course.eu/towers_of_hanoi.php ''' This code solves the puzzle, but shows us nothing in terms of how it does it. What we really want is a program that solves the puzzle and provides a solution. But this code is a good clean example of recursive programming. 
''' def hanoi(n, source, helper, target): if n > 0: # move tower of size n - 1 to helper: hanoi(n - 1, source, target, helper) # move disk from source peg to target peg if source: target.append(source.pop()) # move tower of size n-1 from helper to target hanoi(n - 1, helper, source, target) source = [4,3,2,1] target = [] helper = [] hanoi(len(source),source,helper,target) print(source, helper, target) # modified from source for Python 2.7 as well as 3.x compatibility # source: http://www.python-course.eu/towers_of_hanoi.php ''' This is better, but the solution provided is output in such a mess that its hard to see the solution from what is essentially a trace of the inner workings of the program. This code makes a good demonstration of how the recursive algorithm does its work though. ''' def hanoi(n, source, helper, target): print("hanoi( " + str(n) + str(source) + str(helper) + str(target) + " called") # modified from source for 2.7 and 3.x compatibility if n > 0: # move tower of size n - 1 to helper: hanoi(n - 1, source, target, helper) # move disk from source peg to target peg if source[0]: disk = source[0].pop() print("moving " + str(disk) + " from " + source[1] + " to " + target[1]) # modified from source for Python 2.7 and 3.x compatibility target[0].append(disk) # move tower of size n-1 from helper to target hanoi(n - 1, helper, source, target) source = ([4,3,2,1], "source") target = ([], "target") helper = ([], "helper") hanoi(len(source[0]),source,helper,target) print(source, helper, target) # modified from source for Python 2.7 as well as 3.x compatibility # let's take the previous code and modify it so we can run w/ and w/o the trace for better analysis # some other tweaks to language and output will also be made def hanoi(n, source, helper, target, tron = False, diskTrace = False): if tron == True: # tron = "Tracer On" and was the title of a popular movie set in a virtual world print("hanoi( " + str(n) + str(source) + str(helper) + str(target) + " called") # modified from source for compatibility with Python 2.7 or 3.x if n > 0: # move tower of size n - 1 to helper: hanoi(n - 1, source, target, helper, tron, diskTrace) # move disk from source peg to target peg if source[0]: disk = source[0].pop() if diskTrace == True: mv = "move disk " + str(disk) else: mv = "move" print(mv + " from " + source[1] + " to " + target[1]) target[0].append(disk) # move tower of size n-1 from helper to target hanoi(n - 1, helper, source, target, tron, diskTrace) # set up pegs source = ([4,3,2,1], "source") target = ([], "target") helper = ([], "helper") # run simulation and print results: print(source + helper + target) hanoi(len(source[0]),source,helper,target, diskTrace = True) # add final argument of True to turn full program trace back on # then output will look like previous cell # it is disabled here to demonstrate the cleaner "solution" output print(source + helper + target) # these lines modified from source for 2.7 and 3.x compatibility # source: http://www.python-course.eu/towers_of_hanoi.php # this code used as starting point and then modified and enhanced considerably to create this version # this alteration to the source will make the code a bit more self contained and will give options for which # peg gets moved to which peg. For simplicity, the story it tells is we are moving from "peg 1" to # "peg 3" (labeled simply 1, 2, 3) rather than "source", "target", etc. 
# user can chose which of the 3 pegs is source, target, and what earlier code called "helper" or "auxilliary" def hanoi(n, start=1, end=3, spare=2, tron=False, diskTrace=False): # sets up data structure(s) to pass into our recursive child function if sorted([start, end, spare]) != [1,2,3]: raise ValueError("Arguments: start, end, spare - must be unique and can only contain the values 1, 2, or 3.\n" + "This tells the program which peg (of the 3 pegs) is used for what role in the game.") hanoi_towers = [(list(range(n, 0, -1)), start), ([], end), ([], spare)] step_count = [0] # embedded child function does all the actual work: #start #spare #end def hanoiRecurModule(n, source, helper, target, tron = False, diskTrace = False): if tron == True: # tron = "Tracer On" and was the title of a popular movie set in a virtual world print("hanoiRecurModule( " + str(n) + str(source) + str(helper) + str(target) + " called") # modified from source for compatibility with Python 2.7 or 3.x if n > 0: # move tower of size n - 1 to helper: hanoiRecurModule(n - 1, source, target, helper, tron, diskTrace) # move disk from source peg to target peg if source[0]: disk = source[0].pop() if diskTrace == True: mv = "move disk " + str(disk) else: mv = "move" print(mv + " from " + str(source[1]) + " to " + str(target[1])) step_count[0] += 1 target[0].append(disk) # move tower of size n-1 from helper to target hanoiRecurModule(n - 1, helper, source, target, tron, diskTrace) #start #spare #end hanoiRecurModule(n, hanoi_towers[0], hanoi_towers[2], hanoi_towers[1], tron, diskTrace) if step_count == [1]: endSentence = " step." else: endSentence = " steps." print("Task completed in " + str(step_count)[1:-1] + endSentence) # run simulation and print results: hanoi(4, start=1, end=3, spare=2, diskTrace = True) hanoi(1, start=1, end=3, spare=2, diskTrace = True) hanoi(1, start=1, end=3, spare=2, diskTrace = False) # testing the ValueError try: hanoi(4, start=1, end=3, spare=1, diskTrace = True) except Exception as ee: print(str(type(ee))+": \n"+str(ee)) # with tracer on hanoi(3, start=1, end=3, spare=2, tron=True, diskTrace = True) # source: https://en.wikipedia.org/wiki/Tower_of_Hanoi # recursive implementation section # this solution requires slightly more steps than the above code, but is still quite elegant # it also provides the best visual metaphor for the solution in its output of any of the # code in this notebook yet A = [5,4,3,2,1] B = [] C = [] def move(n, source, target, auxiliary): if n > 0: # move n-1 disks from source to auxiliary, so they are out of the way move(n-1, source, auxiliary, target) # move the nth disk from source to target target.append(source.pop()) # Display our progress print(str(A) + '\n' + str(B) + '\n' + str(C) + '\n' + '##############') # modified from source so it will work in Python 2.7 or Python 3.x # move the n-1 disks that we left on auxiliary onto target move(n-1, auxiliary, target, source) # initiate call from source A to target C with auxiliary B move(5, A, C, B) # Solution Experiment One # modified from code presented in previous cells .. # why are we asking the user for things the code can do for us ... 
# this version is more self-contained and requires less of the user to run it def solveHanoi(numDisks): peg1 = list(range(numDisks, 0, -1)) peg2 = [] peg3 = [] def moveDisks(numDisks, source, target, auxiliary): # python allows nested functions but this may not be best practice # completing the code this way just as an experiment if numDisks > 0: # move n-1 disks from source to auxiliary, so they are out of the way moveDisks(numDisks-1, source, auxiliary, target) # move the nth disk from source to target target.append(source.pop()) # Display our progress print(str(peg1) + '\n' + str(peg2) + '\n' + str(peg3) + '\n' + '##############') # modified from source so it will work in Python 2.7 or Python 3.x # move the n-1 disks that we left on auxiliary onto target moveDisks(numDisks-1, auxiliary, target, source) moveDisks(numDisks, source=peg1, target=peg3, auxiliary=peg2) # initiate call from source A to target C with auxiliary B solveHanoi(5) ### Verson One: Object code ## Useful help topic: http://stackoverflow.com/questions/3277367/how-does-pythons-super-work-with-multiple-inheritance import pandas as pd import numpy as np class HanoiSolution_v1(object): def __init__(self, numDisks): # peg structure: ( [ disks ], Peg_ID_Number ) self._peg1 = (list(range(numDisks, 0, -1)), 1) self._peg2 = ([], 2) self._peg3 = ([], 3) self.dsks = numDisks # number of disks for simulation self.moveCount = 0 # move counter self.divChars = 25 # number of characters for divider used in output self._solutionPD = pd.DataFrame({'disk':[],'fromPeg':[], 'toPeg':[]}, dtype=np.int64) self._moveDisks(nDisks=numDisks, source=self._peg1, target=self._peg3, auxiliary=self._peg2) def _moveDisks(self, nDisks, source, target, auxiliary): if nDisks > 0: # move n-1 disks from source to auxiliary, so they are out of the way self._moveDisks(nDisks-1, source, auxiliary, target) # move the nth disk from source to target target[0].append(source[0].pop()) # Display our progress (create each step of the answer and output it) self.moveCount += 1 print("Step %d:" %self.moveCount) print("-"*self.divChars) print("Move disk " + str(nDisks) + " from " + str(source[1]) + " to " + str(target[1])) print(str(self._peg1[0]) + '\n' + str(self._peg2[0]) + '\n' + str(self._peg3[0]) + '\n' + '#'*self.divChars) self._store_diskProgress(nDisks, source[1], target[1]) # move the n-1 disks that were left on auxiliary to target self._moveDisks(nDisks-1, auxiliary, target, source) return self.moveCount def _store_diskProgress(self, dsk, sourceID, targetID): # Builds this: self._solutionPD = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[] }) self._solutionPD = self._solutionPD.append(pd.DataFrame({ 'disk':[dsk],'fromPeg':[sourceID], 'toPeg':[targetID] }), ignore_index=True) def __str__(self): return "%d disks would take %d moves to solve." %(self.dsks, self.moveCount) myHanoiTower_v1 = HanoiSolution_v1(5) print(myHanoiTower_v1) # exploration of the object structure: print(type(myHanoiTower_v1)) myHanoiTower_v1._solutionPD ###Output _____no_output_____
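###Markdown
As a quick sanity check on the move counts reported above (this cell is an addition, not part of the original experiments): the minimum number of moves for n disks is known in closed form to be 2**n - 1, which matches the simulated counts (127 moves for 7 disks, 1023 for 10).
###Code
def hanoi_min_moves(n):
    # Closed-form minimum number of moves for the Towers of Hanoi
    return 2**n - 1

print(hanoi_min_moves(7))   # 127, matching myHanoiTower above
print(hanoi_min_moves(10))  # 1023, matching the stored solution above
###Output
_____no_output_____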
tour_model_eval/Compare user mode mapping effect with outputs.ipynb
###Markdown
This compares the effect of the `same_mode` mapping change on the staging database
TODO: Extend to the other databases as well
This assumes that the models have been built (using `build_save_model.py`) for the "before" values and `bin/build_label_model.py -a` for the "after" values.
They have been renamed to `user_label_first_round_.before` and `user_label_first_round_.after`, and `locations_first_round_.before` and `locations_first_round_.after`
A sample script that could be used for this renaming is: `for f in user_labels_first_round_*; do mv $f $f.before; done`
This script reads those files and works with them.
###Code
import os
os.environ["EMISSION_SERVER_HOME"] = "/Users/kshankar/e-mission/e-mission-server"
MODEL_DIR = os.getenv("EMISSION_SERVER_HOME"); MODEL_DIR
import emission.analysis.modelling.tour_model_first_only.load_predict as lp

label_result_list = []
for l in os.listdir(MODEL_DIR):
    if l.startswith("user_labels_first_round") and not l.endswith(".after"):
        uuid = l.split("_")[4]
        before_ui_map = lp.loadModel(MODEL_DIR+"/"+l)
        after_ui_map = lp.loadModel(MODEL_DIR+"/"+l+".after")
        for cluster_label in before_ui_map:
            before_cluster_options = before_ui_map[cluster_label]
            after_cluster_options = after_ui_map[cluster_label]
            # pick the highest-probability label combination on each side
            # (the lambda parameter is named opt so it does not shadow the lp module)
            before_max_p = sorted(before_cluster_options, key=lambda opt: opt["p"])[-1]["p"]
            after_max_p = sorted(after_cluster_options, key=lambda opt: opt["p"])[-1]["p"]
            label_result_list.append({"user_id": uuid, "cluster_label": cluster_label,
                                      "before_unique_combo_len": len(before_cluster_options),
                                      "after_unique_combo_len": len(after_cluster_options),
                                      "before_max_p": before_max_p, "after_max_p": after_max_p})

import pandas as pd
label_result_df = pd.DataFrame(label_result_list); label_result_df
mismatched_df = label_result_df.query("before_max_p != after_max_p"); mismatched_df
len(mismatched_df)
print(mismatched_df.drop("user_id", axis=1).head().to_markdown())
ax = mismatched_df.user_id.value_counts().plot(kind="bar")
ax.set_xticklabels(list(range(len(mismatched_df))))
label_result_df[["before_max_p", "after_max_p"]].plot.box(by="user_id")
label_result_df.query("before_max_p < 1")[["before_max_p", "after_max_p"]].plot.box(by="user_id")
label_result_df.query("before_max_p < 1").after_max_p.describe()
###Output
_____no_output_____
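###Markdown
A possible follow-up (not part of the original analysis): compute, per user, the fraction of clusters whose most probable label combination changed after the mapping update.
###Code
# Hypothetical extension: per-user fraction of clusters with a changed top prediction
per_user_total = label_result_df.groupby("user_id").size()
per_user_changed = mismatched_df.groupby("user_id").size()
change_fraction = (per_user_changed / per_user_total).fillna(0)
change_fraction.sort_values(ascending=False)
###Output
_____no_output_____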
assignment2/160575/160575.ipynb
###Markdown
EM Algorithm
Batch EM
**Import necessary libraries**
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
import scipy.io
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Load Data**
###Code
data = scipy.io.loadmat('mnist_small.mat')
X = data['X']
Y = data['Y']
###Output
_____no_output_____
###Markdown
**Print Data Shape**
###Code
print(X.shape, Y.shape)
###Output
(10000, 784) (10000, 1)
###Markdown
**GMM Algorithm**
###Code
def gmm(X, K):
    [N, D] = X.shape
    if K >= N:
        print('you are trying to make too many clusters!')
        return
    numIter = 200             # maximum number of iterations to run
    si2 = 1                   # initialize si2 dumbly
    pk = np.ones(K) / K       # initialize pk uniformly
    mu = np.random.rand(K, D) # initialize means randomly
    z = np.zeros((N, K))
    for iteration in range(numIter):
        # in the first step, we do assignments:
        # each point is probabilistically assigned to each center
        for n in range(N):
            for k in range(K):
                # compute z(n,k) = log probability that
                # the nth data point belongs to cluster k
                z[n][k] = np.log(pk[k]) - np.linalg.norm(X[n] - mu[k])**2 / (2*si2)
            # turn log probabilities into actual probabilities
            maxZ = np.max(z[n])
            z[n] = np.exp(z[n] - maxZ - np.log(np.sum(np.exp(z[n] - maxZ))))
        nk = np.sum(z, axis=0)
        # re-estimate pk
        pk = nk/N
        # re-estimate the means
        mu = z.T@X
        mu = np.array([mu[k]/nk[k] for k in range(K)])
        # re-estimate the variance
        si2 = np.sum(np.square(X - z@mu))/(N*D)
    return mu, pk, z, si2
###Output
_____no_output_____
###Markdown
**Running GMM for k = 5, 10, 15, 20**
###Code
for k in [5, 10, 15, 20]:
    mu, pk, z, si2 = gmm(X, k) # calling the function
    # plotting the mean of each cluster as an image
    for i in range(k):
        plt.imshow(mu[i].reshape((28, 28)), cmap='gray')
        plt.savefig('figure '+str(i+1)+' for k_'+str(k))
        plt.show()
###Output
_____no_output_____
###Markdown
Online EM
**Online GMM algorithm**
###Code
def online_gmm(X, K):
    batch_size = 100     # the batch size for onlineEM
    kappa = 0.55         # kappa for the learning rate
    numIter = 200        # total number of iterations
    np.random.shuffle(X) # randomly shuffle X to include examples from all digits
    X = X[:batch_size]   # select the first batch of 100 examples
    [N, D] = X.shape     # N and D from X
    if K >= N:
        print('you are trying to make too many clusters!')
        return
    si2 = 1                   # initialize si2 dumbly
    pk = np.ones(K) / K       # initialize pk uniformly
    mu = np.random.rand(K, D) # we initialize the means totally randomly
    z = np.zeros((N, K))
    for iteration in range(numIter):
        learning_rate = (iteration + 1)**(-kappa) # learning rate for this iteration
        for n in range(N):
            for k in range(K):
                # compute z(n,k) = log probability that
                # the nth data point belongs to cluster k
                z[n][k] = np.log(pk[k]) - np.linalg.norm(mu[k] - X[n])**2 / (2*si2)
            maxZ = np.max(z[n])
            # turn log probabilities into actual probabilities
            z[n] = np.exp(z[n] - maxZ - np.log(np.sum(np.exp(z[n] - maxZ))))
        nk = np.sum(z, axis=0)
        # re-estimate pk as a running average
        pk = (1-learning_rate)*pk + learning_rate*nk/N
        # re-estimate the means as a running average
        # (keep the previous mean for any empty cluster)
        mu_prev = mu
        mu = z.T@X
        mu = (1-learning_rate)*mu_prev + learning_rate*np.array(
            [mu[k]/nk[k] if nk[k] != 0 else mu_prev[k] for k in range(K)])
        # re-estimate the variance
        si2 = np.sum(np.square(X - z@mu))/(N*D)
    return mu, pk, si2
###Output
_____no_output_____
###Markdown
**Running Online GMM for k = 5, 10, 15, 20**
###Code
for k in [5, 10, 15, 20]:
    mu, pk, si2 = online_gmm(X, k) # calling the function
    # plotting the mean of each cluster as an image
    for i in range(k):
        plt.imshow(mu[i].reshape((28, 28)), cmap='gray')
        # plt.savefig('onlineEM_figure '+str(i+1)+' for k_'+str(k))
        plt.show()
###Output
_____no_output_____
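###Markdown
A small illustration (an added cell, not part of the assignment) of the log-sum-exp normalization used inside both `gmm` and `online_gmm`: subtracting the maximum log probability before exponentiating avoids numerical underflow while producing the same normalized probabilities.
###Code
log_p = np.array([-1000.0, -1001.0, -1002.0]) # naive np.exp(log_p) would underflow to all zeros
max_log_p = np.max(log_p)
p = np.exp(log_p - max_log_p - np.log(np.sum(np.exp(log_p - max_log_p))))
print(p)       # well-defined probabilities
print(p.sum()) # 1.0
###Output
_____no_output_____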
plot_mlp_losses.ipynb
###Markdown
Arguments
###Code
# Imports and the fold count are assumed here; they are not shown in the
# original excerpt. n_folds = 4 matches the 2x2 plot grid used below.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

n_folds = 4

subject = 'F'
voxel_num = 500
loss_type = 'Train'

def collect_mlp_losses(n_folds, encoding_model, subject, voxel_num, loss_type):
    # Load the per-epoch losses of one voxel for every cross-validation fold
    fold_losses = []
    for fold in range(n_folds):
        curr_fold_losses = np.load("{}/mlp_fold_{}_losses/subject_{}/fold_{}.npy".format(encoding_model, loss_type, subject, fold))
        curr_fold_losses = curr_fold_losses[voxel_num]
        fold_losses.append(curr_fold_losses)
    fold_losses = np.array(fold_losses)
    return fold_losses

X = np.arange(1,11)
mlp_initial_losses = collect_mlp_losses(n_folds, 'mlp_initial', subject, voxel_num, loss_type)
mlp_smallerhiddensize_losses = collect_mlp_losses(n_folds, 'mlp_smallerhiddensize', subject, voxel_num, loss_type)
mlp_largerhiddensize_losses = collect_mlp_losses(n_folds, 'mlp_largerhiddensize', subject, voxel_num, loss_type)
mlp_additionalhiddenlayer_losses = collect_mlp_losses(n_folds, 'mlp_additionalhiddenlayer', subject, voxel_num, loss_type)

fig, axs = plt.subplots(2, 2, figsize=(14,8))
for fold in range(n_folds):
    axs_x, axs_y = fold // 2, fold % 2
    axs[axs_x, axs_y].plot(X, mlp_initial_losses[fold], color='green')
    axs[axs_x, axs_y].plot(X, mlp_smallerhiddensize_losses[fold], color='blue')
    axs[axs_x, axs_y].plot(X, mlp_largerhiddensize_losses[fold], color='red')
    axs[axs_x, axs_y].plot(X, mlp_additionalhiddenlayer_losses[fold], color='black')
    axs[axs_x, axs_y].set_title('{} Losses: Subject {} - Voxel {} - Fold {}'.format(loss_type, subject, voxel_num, fold+1))

for i, ax in enumerate(axs.flat):
    if i // 2 == 0:
        ax.set(ylabel='Loss')
    else:
        ax.set(xlabel='Epoch', ylabel='Loss')

green_patch = mpatches.Patch(color='green', label='mlp_initial')
blue_patch = mpatches.Patch(color='blue', label='mlp_smallerhiddensize')
red_patch = mpatches.Patch(color='red', label='mlp_largerhiddensize')
black_patch = mpatches.Patch(color='black', label='mlp_additionalhiddenlayer')
plt.legend(handles=[green_patch, blue_patch, red_patch, black_patch])
plt.show()
###Output
_____no_output_____
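###Markdown
A possible extension (not in the original notebook): average each model's loss curve across folds to compare the overall trends in a single plot.
###Code
plt.figure(figsize=(8, 5))
# Mean loss per epoch across the cross-validation folds for each architecture
plt.plot(X, mlp_initial_losses.mean(axis=0), color='green', label='mlp_initial')
plt.plot(X, mlp_smallerhiddensize_losses.mean(axis=0), color='blue', label='mlp_smallerhiddensize')
plt.plot(X, mlp_largerhiddensize_losses.mean(axis=0), color='red', label='mlp_largerhiddensize')
plt.plot(X, mlp_additionalhiddenlayer_losses.mean(axis=0), color='black', label='mlp_additionalhiddenlayer')
plt.xlabel('Epoch')
plt.ylabel('Mean {} loss across folds'.format(loss_type))
plt.legend()
plt.show()
###Output
_____no_output_____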
finding-relationships-data-python/02/demos/demo-06-HistogramsKDEPlotsRugPlots.ipynb
###Markdown
Automobile Dataset
Source: https://www.kaggle.com/toramky/automobile-dataset
* symboling - Rating corresponds to the degree to which the auto is more risky than its price indicates. Cars are initially assigned a risk factor symbol associated with their price. Then, if a car is more risky (or less), this symbol is adjusted by moving it up (or down) the scale. Actuaries call this process "symboling"
 * 3 -> Risky
 * -3 -> pretty safe
* normalized-losses - The relative average loss payment per insured vehicle year. This value is normalized for all autos within a particular size classification (two-door small, station wagons, sports/speciality, etc.), and represents the average loss per car per year.
* make - manufacturer
* fuel-type - Type of fuel
* aspiration -
* num-of-doors
* body-style
* drive-wheels
* engine-location
* wheel-base
* length
* width
* height
* curb-weight
* engine-type
* num-of-cylinders
* engine-size
* fuel-system
* bore
* stroke
* compression-ratio
* horsepower
* peak-rpm
* city-mpg
* highway-mpg
* price
Import the data
###Code
# imports assumed by this notebook (not shown in the original excerpt)
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

automobile_data = pd.read_csv('datasets/Automobile_data.csv', na_values = '?')
automobile_data.head(5)
automobile_data.shape
automobile_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Cleaning
###Code
automobile_data.dropna(inplace=True)
automobile_data.shape
###Output
_____no_output_____
###Markdown
Saving back to dataset folder for future use
###Code
automobile_data.to_csv('datasets/automobile_data_processed.csv', index = False)
automobile_data.dtypes
###Output
_____no_output_____
###Markdown
Describing the data
###Code
automobile_data.describe().transpose()
###Output
_____no_output_____
###Markdown
* From here we can see the distribution of price. Most of the vehicles have a price in the range of 5000-10000
###Code
plt.figure(figsize=(12, 8))
sns.distplot(automobile_data['price'], color='red')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* If we add more bins, we can see the exact range for the price
###Code
plt.figure(figsize=(12, 8))
sns.distplot(automobile_data['price'], bins=20, color='red')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* This is the distplot without the histogram, showing just the estimated distribution
###Code
plt.figure(figsize=(12, 8))
sns.distplot(automobile_data['price'], hist=False, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* We can also add a rug plot to show where the individual observations fall
###Code
plt.figure(figsize=(12,8))
sns.distplot(automobile_data['price'], hist=False, rug=True, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
Rug plot
###Code
plt.figure(figsize=(12,8))
sns.rugplot(automobile_data['price'], height=0.5, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* The KDE plot shows the estimated density over each range of the price
###Code
plt.figure(figsize=(12,8))
sns.kdeplot(automobile_data['price'], shade=True, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
Scatterplot
* Now let's plot horsepower against price from the automobile data. Broadly, price increases with horsepower
###Code
plt.figure(figsize=(12, 8))
sns.scatterplot(x='horsepower', y='price', data=automobile_data, s=120)
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* If we color the points by the number of cylinders along with the
price and horsepower, we see that most of the cars have 4 cylinders.
* Also, as horsepower increases, the price increases
###Code
plt.figure(figsize=(12, 8))
sns.scatterplot(x='horsepower', y='price', data=automobile_data, hue='num-of-cylinders', s=120)
plt.title('Automobile Data')
plt.show()

sns.regplot(x='horsepower', y='price', data=automobile_data)
plt.show()

sns.regplot(x='highway-mpg', y='price', data=automobile_data)
plt.show()
###Output
_____no_output_____
###Markdown
* Now let's see the relationship between horsepower and price
###Code
sns.jointplot(x='horsepower', y='price', data=automobile_data)
plt.show()

sns.jointplot(x='horsepower', y='price', data=automobile_data, kind='reg')
plt.show()
###Output
_____no_output_____
###Markdown
* We can also view just the density
###Code
sns.jointplot(x='horsepower', y='price', data=automobile_data, kind='kde')
plt.show()
###Output
_____no_output_____
###Markdown
* This gives a better representation of the density. Now it is very clear which horsepower and price combinations are most dense
###Code
sns.jointplot(x='horsepower', y='price', data=automobile_data, kind='hex')
plt.show()
###Output
_____no_output_____
###Markdown
* We can also see the rug plot and KDE plot together, to see the distribution range
* From here we can see that the horsepower range 50-60 has the most density, and the price for the high-density region is 5000-10000
* The rug plots along each axis help us see where the individual observations fall
###Code
f, ax = plt.subplots(figsize=(6, 6))
sns.kdeplot(automobile_data['horsepower'], automobile_data['price'], ax=ax)
sns.rugplot(automobile_data['horsepower'], color="limegreen", ax=ax)
sns.rugplot(automobile_data['price'], color="red", vertical=True, ax=ax)
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
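###Markdown
* As a numeric complement to the plots above (an added cell, not in the original walkthrough), we can quantify these relationships with Pearson correlations:
###Code
# Correlation matrix for the variables plotted in this notebook
automobile_data[['horsepower', 'highway-mpg', 'price']].corr()
###Output
_____no_output_____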
Session 04 - Language Models.ipynb
###Markdown
Language Modelling
The Natural Language Toolkit has data types and functions that make life easier for us when we want to count bigrams and compute their probabilities.
###Code
# Needed imports
import nltk
%matplotlib notebook
###Output
_____no_output_____
###Markdown
**Import the Brown corpus**
The Brown University Standard Corpus of Present-Day American English, or just Brown Corpus (https://en.wikipedia.org/wiki/Brown_Corpus), is a general corpus containing 500 samples of English-language text, totaling roughly one million words, compiled from works published in the United States in 1961.
###Code
from nltk.corpus import brown
brown.categories()
###Output
_____no_output_____
###Markdown
We can access the words of the Brown corpus, either all of them or those belonging to any of its categories.
###Code
print(brown.words())
print(brown.words(categories='mystery'))
###Output
[u'The', u'Fulton', u'County', u'Grand', u'Jury', ...] [u'There', u'were', u'thirty-eight', u'patients', ...]
###Markdown
We compute the word frequencies by using the `FreqDist` function of NLTK (an nltk.FreqDist() is like a dictionary, but it is ordered by frequency). The following uses this function to compute the frequencies and inspect the 20 most frequent words.
1. Frequency Distribution
###Code
freq_brown = nltk.FreqDist(brown.words())
list(freq_brown.keys())[:20]
freq_brown.most_common(20)
###Output
_____no_output_____
###Markdown
We can draw the frequency distribution by plotting it
###Code
freq_brown.plot(30)
###Output
_____no_output_____
###Markdown
We can see that they are mostly stopwords and punctuation signs.
From NLTK we can access a list of stopwords from different languages. This is helpful if we want to remove them.
###Code
from nltk.corpus import stopwords
print(stopwords.words('english'))
###Output
[u'i', u'me', u'my', u'myself', u'we', u'our', u'ours', u'ourselves', u'you', u"you're", u"you've", u"you'll", u"you'd", u'your', u'yours', u'yourself', u'yourselves', u'he', u'him', u'his', u'himself', u'she', u"she's", u'her', u'hers', u'herself', u'it', u"it's", u'its', u'itself', u'they', u'them', u'their', u'theirs', u'themselves', u'what', u'which', u'who', u'whom', u'this', u'that', u"that'll", u'these', u'those', u'am', u'is', u'are', u'was', u'were', u'be', u'been', u'being', u'have', u'has', u'had', u'having', u'do', u'does', u'did', u'doing', u'a', u'an', u'the', u'and', u'but', u'if', u'or', u'because', u'as', u'until', u'while', u'of', u'at', u'by', u'for', u'with', u'about', u'against', u'between', u'into', u'through', u'during', u'before', u'after', u'above', u'below', u'to', u'from', u'up', u'down', u'in', u'out', u'on', u'off', u'over', u'under', u'again', u'further', u'then', u'once', u'here', u'there', u'when', u'where', u'why', u'how', u'all', u'any', u'both', u'each', u'few', u'more', u'most', u'other', u'some', u'such', u'no', u'nor', u'not', u'only', u'own', u'same', u'so', u'than', u'too', u'very', u's', u't', u'can', u'will', u'just', u'don', u"don't", u'should', u"should've", u'now', u'd', u'll', u'm', u'o', u're', u've', u'y', u'ain', u'aren', u"aren't", u'couldn', u"couldn't", u'didn', u"didn't", u'doesn', u"doesn't", u'hadn', u"hadn't", u'hasn', u"hasn't", u'haven', u"haven't", u'isn', u"isn't", u'ma', u'mightn', u"mightn't", u'mustn', u"mustn't", u'needn', u"needn't", u'shan', u"shan't", u'shouldn', u"shouldn't", u'wasn', u"wasn't", u'weren', u"weren't", u'won', u"won't", u'wouldn', u"wouldn't"]
###Markdown
**But should we remove them?
Why?** No, just think about what we are trying to do here. We are trying to use the dataset to create a model of the language to, given a set of words, predict the most probable next word. For this process, stopwords, as well as punctuation and other signs, are needed.For the same reason, we shall not stem/lemmatize, nor normalize the words. We need all these variations to learn a proper language model (i.e., `the` != `The`)As we will discuss in the coming lessons, both stemming and stopword removal could be useful in other tasks such as Text Classification. 2. Bigram ModelWe'll start small and we will create a language model based on bi-grams. To that end, we will use the `ConditionalFreqDist` function of NLTK. `nltk.ConditionalFreqDist()` counts frequencies of pairs. When given a list of bigrams, it maps each first word of a bigram to a FreqDist over the second words of the bigram.If you remember the theoretical session, we are applying the Markov assumption: the next element (word in our case) of a sequence can be predicted by just focusing on the previous one.The following code creates these bi-gram counts.If we print the `conditions` we can see the antecedents of the bi-grams. (`conditions()` in a `ConditionalFreqDist` are like `keys()` in a dictionary). ###Code cfreq_brown_2gram = nltk.ConditionalFreqDist(nltk.bigrams(brown.words())) cfreq_brown_2gram.conditions()[:20] ###Output _____no_output_____ ###Markdown Let's see the most frequent terms after the word `my`. ###Code # the cfreq_brown_2gram entry for "my" is a FreqDist (i.e., a dictionary mapping each word to its frequency count). my_terms = cfreq_brown_2gram["my"] # Sort (desc) the terms by frequency and print the 25 most common sorted(my_terms.items(), key=lambda x: -x[1])[:25] ###Output _____no_output_____ ###Markdown We can do the same with the `most_common` function ###Code cfreq_brown_2gram["my"].most_common(25) ###Output _____no_output_____ ###Markdown With `nltk.ConditionalProbDist()`, pairs are mapped to probabilities instead of counts. ###Code cprob_brown_2gram = nltk.ConditionalProbDist(cfreq_brown_2gram, nltk.MLEProbDist) # Uses a Maximum Likelihood Estimation (MLE) estimator ###Output _____no_output_____ ###Markdown This again has `conditions()` which are like dictionary keys ###Code cprob_brown_2gram.conditions() ###Output _____no_output_____ ###Markdown We can also find the words that can come after `my` by using the function `samples()` ###Code cprob_brown_2gram["my"].samples() ###Output _____no_output_____ ###Markdown In addition, you can see the probability of a particular pair ###Code cprob_brown_2gram["my"].prob("own") cprob_brown_2gram["my"].prob("leg") ###Output _____no_output_____ ###Markdown 3.
Compute the probability of a sentence Create a function to compute the probability of a word from its frequency ###Code def unigram_prob(word): len_brown = len(brown.words()) return float(freq_brown[word]) / float(len_brown) unigram_prob("night") ###Output _____no_output_____ ###Markdown We can now ask for the probability of a word sequence.For instance: `P(how do you do) = P(how) * P(do|how) * P(you|do) * P(do|you)` ###Code unigram_prob("how") * cprob_brown_2gram["how"].prob("do") * cprob_brown_2gram["do"].prob("you") * cprob_brown_2gram["you"].prob("do") ###Output _____no_output_____ ###Markdown Compare it with the probability of another, less common sentence: `how do you dance` ###Code unigram_prob("how") * cprob_brown_2gram["how"].prob("do") * cprob_brown_2gram["do"].prob("you") * cprob_brown_2gram["you"].prob("dance") ###Output _____no_output_____ ###Markdown As expected, one order of magnitude less probable 4. Generate Language With our bi-gram language model already generated, we can now use it to generate text and see what our model has learned. ###Code cprob_brown_2gram["my"].generate() ###Output _____no_output_____ ###Markdown Let's see if the model creates valid text or just gibberish ###Code word = "my" text = "" for index in range(20): text += word + " " word = cprob_brown_2gram[ word].generate() print(text) ###Output my burning arcs which is bounded up in a line of him in her back of a Democratic duties normally ###Markdown It is not a valid sentence, but it makes some kind of sense. Remember that we are just learning from bigrams! **We can also train language models on different datasets.**In particular we are going to import the book dataset of NLTK, which includes the text of different books. The following function takes a text (i.e., the text of a given book) to learn a language model, an initial word to start the generation, and the number of words that have to be generated. ###Code # Here is how to do this with NLTK books: from nltk.book import * def generate_text(text, initialword, numwords): bigrams = list(nltk.ngrams(text, 2)) cpd = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(bigrams), nltk.MLEProbDist) word = initialword text = "" for i in range(numwords): text += word + " " word = cpd[ word].generate() print(text) ###Output *** Introductory Examples for the NLTK Book *** Loading text1, ..., text9 and sent1, ..., sent9 Type the name of the text or sentence to view it. Type: 'texts()' or 'sents()' to list the materials. text1: Moby Dick by Herman Melville 1851 text2: Sense and Sensibility by Jane Austen 1811 text3: The Book of Genesis text4: Inaugural Address Corpus text5: Chat Corpus text6: Monty Python and the Holy Grail text7: Wall Street Journal text8: Personals Corpus text9: The Man Who Was Thursday by G . K . Chesterton 1908 ###Markdown We use different books to generate text ###Code # Holy Grail generate_text(text6, "I", 25) # sense and sensibility generate_text(text2, "I", 25) ###Output I can it had passed it all , with my exchange , on remaining half so important Tuesday came only be sure you know where ###Markdown 5. TriGrams Let's try a more advanced model using tri-grams to see if it is able to generate better language.We cannot use the `ConditionalFreqDist` as before. `nltk.ConditionalFreqDist` expects its data as a sequence of `(condition, item)` tuples. `nltk.trigrams` returns tuples of length 3. Therefore, we have to adapt the trigrams output.
###Code def generate_text(text, initialword, numwords): trigrams = list(nltk.ngrams(text, 3, pad_right=True, pad_left=True)) trigram_pairs = (((w0, w1), w2) for w0, w1, w2 in trigrams) cpd = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(trigram_pairs), nltk.MLEProbDist) word = initialword for i in range(numwords): w = cpd[(word[i], word[i+1])].generate() word += [w] print(" ".join(word)) generate_text(text2, ["I", "am"], 25) ###Output I am afraid , Miss Dashwood was above with her increase of emotion , her eyes were red and swollen ; and without selfishness -- without encouraging ###Markdown As expected, it creates a better LM.Can we go on with more n-grams? Let's see 6. N-grams ###Code def generate_text(text, initialword, numwords): ngrams = list(nltk.ngrams(text, 4, pad_right=True, pad_left=True)) ngram_pairs = (((w0, w1, w2), w3) for w0, w1, w2, w3 in ngrams) cpd = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(ngram_pairs), nltk.MLEProbDist) word = initialword for i in range(numwords): w = cpd[(word[i], word[i+1], word[i+2])].generate() word += [w] print(" ".join(word)) generate_text(text2, ["I", "am", "very"], 25) ###Output I am very sure that Colonel Brandon would give me a living ." " No ," answered Elinor , without knowing what she said . I have many ###Markdown As we make the n-grams larger we get more accurate language models. However, if we create large n-grams we are not going to have enough data to train our models: we will never see enough data (enough sequences of n-grams) to train the model 7. Star Wars Let's try to generate some text based on the dialogues from the Star Wars scripts (episodes IV, V, and VI).All the information for this exercise was retrieved from the [Visualizing Star Wars Movie Scripts](https://github.com/gastonstat/StarWars) project.We start by reading all the dialogue lines from the scripts, which are labeled with the character speaking. We are only considering Luke, Leia, Han Solo and Vader. We left Chewbacca out of the example for obvious reasons...We read all the lines of each character and combine them into one single string. We tokenize this string using the `WordPunctTokenizer` and use these tokens to create an NLTK Text object.__NOTE__: some warnings may appear when executing this part (something like *Skipping line...*), due to some minor parsing errors when generating the dataframe. You can ignore them.
###Code import nltk from nltk import word_tokenize, WordPunctTokenizer import pandas wpt = WordPunctTokenizer() c3po_string = "" vader_string = "" solo_string = "" luke_string = "" leia_string = "" def read_lines(path): lines = pandas.read_csv(path, delim_whitespace=True, error_bad_lines=False) solo_lines = lines.loc[lines['Char'] == 'HAN']['Text'] vader_lines = lines.loc[lines['Char'] == 'VADER']['Text'] luke_lines = lines.loc[lines['Char'] == 'LUKE']['Text'] leia_lines = lines.loc[lines['Char'] == 'LEIA']['Text'] global vader_string, c3po_string, solo_string, luke_string, leia_string solo_string = solo_string + " " + " ".join(solo_lines) vader_string = vader_string + " " + " ".join(vader_lines) luke_string = luke_string + " " + " ".join(luke_lines) leia_string = leia_string + " " + " ".join(leia_lines) read_lines('files/SW_EpisodeIV.txt') read_lines('files/SW_EpisodeV.txt') read_lines('files/SW_EpisodeVI.txt') solo_text = nltk.Text(wpt.tokenize(solo_string)) vader_text = nltk.Text(wpt.tokenize(vader_string)) luke_text = nltk.Text(wpt.tokenize(luke_string)) leia_text = nltk.Text(wpt.tokenize(leia_string)) ###Output Skipping line 555: expected 3 fields, saw 9 Skipping line 54: expected 3 fields, saw 4 Skipping line 191: expected 3 fields, saw 4 Skipping line 285: expected 3 fields, saw 10 ###Markdown Using these Text objects, we can proceed in the same way as in the previous examples to generate texts. The following `generate_text_backoff` tries to generate a new word based on a 4-gram probability. If this fails, it tries the tri-gram one and then the bi-gram. If none of them are successful, it just stops. Recalling from the POS tagging session, this is known as a backoff strategy.This function takes as parameters some training text, a list of initial words to start the sentence, and the length of the text to be generated. ###Code def generate_text_backoff(text, initialwords, numwords): #4-gram ngrams = list(nltk.ngrams(text, 4, pad_right=True, pad_left=True)) ngram_pairs = (((w0, w1, w2), w3) for w0, w1, w2, w3 in ngrams) cpdNgram = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(ngram_pairs), nltk.MLEProbDist) #trigram trigrams = list(nltk.ngrams(text, 3, pad_right=True, pad_left=True)) trigram_pairs = (((w0, w1), w2) for w0, w1, w2 in trigrams) cpd3gram = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(trigram_pairs), nltk.MLEProbDist) #bigram bigrams = list(nltk.ngrams(text, 2)) cpd2gram = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(bigrams), nltk.MLEProbDist) word = initialwords for i in range(numwords): #try 4-gram if (word[i], word[i+1], word[i+2]) in cpdNgram: w = cpdNgram[(word[i], word[i+1], word[i+2])].generate()#.max() #try 3-gram elif (word[i+1], word[i+2]) in cpd3gram: w = cpd3gram[(word[i+1], word[i+2])].generate()#.max() #try 2-gram elif word[i+2] in cpd2gram: w = cpd2gram[word[i+2]].generate()#.max() #at least we tried... else: break word += [w] return " ".join(word) ###Output _____no_output_____ ###Markdown Now that we have our function ready, let's try to generate some texts and check how they vary from one character to another, using different starting tuples.
###Code print("Han Solo: " + generate_text_backoff(solo_text, ["Chewie", "come", "here"], 25) + "\n") print("Leia: " + generate_text_backoff(leia_text, ["My", "name", "is"], 25) + "\n") print("Luke: " + generate_text_backoff(luke_text, ["It", "sure", "is"], 25) + "\n") print("Vader: " + generate_text_backoff(vader_text, ["It", "sure", "is"], 25) + "\n") print("Vader: " + generate_text_backoff(vader_text, ["I", "am", "your"], 25) + "\n") ###Output _____no_output_____
pi/device_info.ipynb
###Markdown Device Info & Maintenance Software Update```bashsudo apt updatesudo apt -y full-upgradesource /home/pi/.venv/jns/bin/activate pip3 list --outdatedcd ~/iot49git pull``` System ###Code !uname -a !cat /etc/os-release ###Output PRETTY_NAME="Raspbian GNU/Linux 10 (buster)" NAME="Raspbian GNU/Linux" VERSION_ID="10" VERSION="10 (buster)" VERSION_CODENAME=buster ID=raspbian ID_LIKE=debian HOME_URL="http://www.raspbian.org/" SUPPORT_URL="http://www.raspbian.org/RaspbianForums" BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs" ###Markdown Disk ###Code !df -h ###Output Filesystem Size Used Avail Use% Mounted on /dev/root 29G 3.8G 24G 14% / devtmpfs 430M 0 430M 0% /dev tmpfs 463M 0 463M 0% /dev/shm tmpfs 463M 12M 451M 3% /run tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 463M 0 463M 0% /sys/fs/cgroup /dev/mmcblk0p1 253M 49M 204M 20% /boot ###Markdown apt Packages ###Code # largest installed packages !dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -nr | head -n 20 # remove package # !sudo apt purge -y packagename ###Output _____no_output_____ ###Markdown Pip ###Code !pip list ###Output Package Version --------------------------------- --------- anyio 2.2.0 argon2-cffi 20.1.0 astroid 2.5.1 async-generator 1.10 attrs 20.3.0 Automat 20.2.0 autopep8 1.5.5 Babel 2.9.0 backcall 0.2.0 bleach 3.3.0 bleak 0.10.0 certifi 2020.12.5 cffi 1.14.5 chardet 4.0.0 colorzero 1.1 constantly 15.1.0 cryptography 3.4.6 cycler 0.10.0 decorator 4.4.2 defusedxml 0.7.1 entrypoints 0.3 flake8 3.8.4 gpiozero 1.5.1 hyperlink 21.0.0 hypothesis 6.8.0 idna 2.10 ifaddr 0.1.7 importlib-metadata 3.7.2 incremental 21.3.0 iniconfig 1.1.1 iot-device 0.4.6 iot-kernel 0.4.6 ipykernel 5.5.0 ipython 7.21.0 ipython-genutils 0.2.0 isort 5.7.0 jedi 0.17.2 Jinja2 2.11.3 json5 0.9.5 jsonschema 3.2.0 jupyter-client 6.1.11 jupyter-contrib-core 0.3.3 jupyter-contrib-nbextensions 0.5.1 jupyter-core 4.7.1 jupyter-highlight-selected-word 0.2.0 jupyter-latex-envs 1.4.6 jupyter-lsp 1.1.4 jupyter-nbextensions-configurator 0.4.1 jupyter-packaging 0.7.12 jupyter-server 1.4.1 jupyterlab 3.0.10 jupyterlab-pygments 0.1.2 jupyterlab-server 2.3.0 kiwisolver 1.3.1 lazy-object-proxy 1.5.2 lxml 4.6.2 MarkupSafe 1.1.1 matplotlib 3.3.4 mccabe 0.6.1 mistune 0.8.4 mpmath 1.2.1 nbclassic 0.2.6 nbclient 0.5.3 nbconvert 6.0.7 nbformat 5.1.2 nest-asyncio 1.5.1 notebook 6.2.0 numpy 1.20.1 packaging 20.9 pandas 1.2.3 pandocfilters 1.4.3 parso 0.7.1 pexpect 4.8.0 picamera 1.13 pickleshare 0.7.5 Pillow 8.1.2 pip 21.0.1 pluggy 0.13.1 prometheus-client 0.9.0 prompt-toolkit 3.0.17 ptyprocess 0.7.0 py 1.10.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycodestyle 2.6.0 pycparser 2.20 pycurl 7.43.0.6 pydocstyle 5.1.1 pyflakes 2.2.0 Pygments 2.8.1 pylint 2.7.2 pyOpenSSL 20.0.1 pyparsing 2.4.7 pyrsistent 0.17.3 pyserial 3.5 pytest 6.2.2 python-dateutil 2.8.1 python-jsonrpc-server 0.4.0 python-language-server 0.36.2 pytz 2021.1 PyYAML 5.4.1 pyzmq 22.0.3 readline 6.2.4.1 requests 2.25.1 rope 0.18.0 scipy 1.6.1 Send2Trash 1.5.0 service-identity 18.1.0 setuptools 52.0.0 six 1.15.0 sniffio 1.2.0 snowballstemmer 2.1.0 sortedcontainers 2.3.0 sympy 1.7.1 termcolor 1.1.0 terminado 0.9.2 testpath 0.4.4 toml 0.10.2 tornado 6.1 traitlets 5.0.5 Twisted 21.2.0 txdbus 1.1.2 typed-ast 1.4.2 typing-extensions 3.7.4.3 ujson 4.0.2 urllib3 1.26.3 wcwidth 0.2.5 webencodings 0.5.1 websocket-client 0.58.0 wheel 0.36.2 wrapt 1.12.1 yapf 0.30.0 zeroconf 0.28.8 zipp 3.4.1 zope.interface 5.2.0
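###Markdown As a pure-Python alternative to the shell commands above, the standard library can report the same disk figures; a minimal sketch using `shutil.disk_usage` (values are bytes for the filesystem containing the given path): ###Code
import shutil

# Disk usage of the root filesystem, roughly the numbers `df -h` shows for /
total, used, free = shutil.disk_usage('/')
gib = 1024 ** 3
print(f"total: {total/gib:.1f} GiB, used: {used/gib:.1f} GiB, free: {free/gib:.1f} GiB")
###Output _____no_output_____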
docs/examples/idealized.ipynb
###Markdown Idealized Synthetic Data*Under development* ###Code import sys; sys.path.append("../../") import numpy as np import pandas as pd import xarray as xr from melodies_monet import driver an = driver.analysis() an.control = "control_idealized.yaml" an.read_control() an ###Output _____no_output_____ ###Markdown ````{admonition} Note: This is the complete file that was loaded.:class: dropdown```{literalinclude} control_idealized.yaml:caption::linenos:``````` Generate data Model ###Code rs = np.random.RandomState(42) control = an.control_dict nlat = 100 nlon = 200 lon = np.linspace(-161, -60, nlon) lat = np.linspace(18, 60, nlat) Lon, Lat = np.meshgrid(lon, lat) time = pd.date_range(control['analysis']['start_time'], control['analysis']['end_time'], freq="3H") ntime = time.size # Generate translating and expanding Gaussian x_ = np.linspace(-1, 1, lon.size) y_ = np.linspace(-1, 1, lat.size) x, y = np.meshgrid(x_, y_) mu = np.linspace(-0.5, 0.5, ntime) sigma = np.linspace(0.3, 1, ntime) g = np.exp( -( ( (x[np.newaxis, ...] - mu[:, np.newaxis, np.newaxis])**2 + y[np.newaxis, ...]**2 ) / ( 2 * sigma[:, np.newaxis, np.newaxis]**2 ) ) ) # Coordinates lat_da = xr.DataArray(lat, dims="lat", attrs={'longname': 'latitude', 'units': 'degN'}, name="lat") lon_da = xr.DataArray(lon, dims="lon", attrs={'longname': 'longitude', 'units': 'degE'}, name="lon") time_da = xr.DataArray(time, dims="time", name="time") # Generate dataset field_names = control['model']['test_model']['variables'].keys() ds_dict = dict() for field_name in field_names: units = control['model']['test_model']['variables'][field_name]['units'] # data = rs.rand(ntime, nlat, nlon) data = g da = xr.DataArray( data, # coords={"lat": lat_da, "lon": lon_da, "time": time_da}, coords=[time_da, lat_da, lon_da], dims=['time', 'lat', 'lon'], attrs={'units': units}, ) ds_dict[field_name] = da ds = xr.Dataset(ds_dict).expand_dims("z", axis=1) ds["z"] = [1] ds_mod = ds ds_mod ds.squeeze("z").A.plot(col="time") ds.to_netcdf(control['model']['test_model']['files']) ###Output _____no_output_____ ###Markdown Obs ###Code # Generate positions # TODO: only within land boundaries n = 500 lats = rs.uniform(lat[0], lat[-1], n)#[np.newaxis, :] lons = rs.uniform(lon[0], lon[-1], n)#[np.newaxis, :] siteid = np.arange(n)[np.newaxis, :].astype(str) # Generate dataset field_names = control['model']['test_model']['variables'].keys() ds_dict = dict() for field_name0 in field_names: field_name = control['model']['test_model']['mapping']['test_obs'][field_name0] units = control['model']['test_model']['variables'][field_name0]['units'] values = ( ds_mod.A.squeeze().interp(lat=xr.DataArray(lats), lon=xr.DataArray(lons)).values + rs.normal(scale=0.3, size=(ntime, n)) )[:, np.newaxis] da = xr.DataArray( values, coords={ "x": ("x", np.arange(n)), # !!! 
"time": ("time", time), "latitude": (("y", "x"), lats[np.newaxis, :], lat_da.attrs), "longitude": (("y", "x"), lons[np.newaxis, :], lon_da.attrs), "siteid": (("y", "x"), siteid), }, dims=("time", "y", "x"), attrs={'units': units}, ) ds_dict[field_name] = da ds = xr.Dataset(ds_dict) ds ds.to_netcdf(control['obs']['test_obs']['filename']) ###Output _____no_output_____ ###Markdown Load data ###Code an.open_models() an.models['test_model'].obj an.open_obs() an.obs['test_obs'].obj %%time an.pair_data() an.paired an.paired['test_obs_test_model'].obj an.paired['test_obs_test_model'].obj.dims ###Output _____no_output_____ ###Markdown Plot ###Code %%time an.plotting() ###Output Warning: variables dict for A_obs not provided, so defaults used Warning: variables dict for B_obs not provided, so defaults used Wall time: 4.19 s
datasets/switchboard-corpus/convert.ipynb
###Markdown Code to Convert the Switchboard dataset into Convokit format ###Code import os os.chdir("../../") # import convokit from convokit import Corpus, Speaker, Utterance os.chdir("datasets/switchboard-corpus") # then come back for swda from swda import Transcript import glob ###Output _____no_output_____ ###Markdown Create UsersEach caller is considered a user, and there are a total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:- Gender (str): MALE or FEMALE- Education (int): 0, 1, 2, 3, 9- Birth Year (int): YYYY- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN ###Code files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files user_meta = {} for file in files: trans = Transcript(file, './swda/swda-metadata.csv') user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex, "education": trans.from_caller_education, "birth_year": trans.from_caller_birth_year, "dialect_area": trans.from_caller_dialect_area} user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex, "education": trans.to_caller_education, "birth_year": trans.to_caller_birth_year, "dialect_area": trans.to_caller_dialect_area} ###Output _____no_output_____ ###Markdown Create a Speaker object for each unique user in the dataset ###Code corpus_users = {k: Speaker(name = k, meta = v) for k,v in user_meta.items()} ###Output _____no_output_____ ###Markdown Check number of users in the dataset ###Code print("Number of users in the data = {}".format(len(corpus_users))) # Example metadata from user 1632 corpus_users['1632'].meta ###Output _____no_output_____ ###Markdown Create UtterancesUtterances are found in the "text" field of each Transcript object.
There are 221,616 utterances in total.Each Utterance object has the following fields:- id (str): the unique id of the utterance- user (Speaker): the Speaker giving the utterance- root (str): id of the root utterance of the conversation- reply_to (str): id of the utterance this replies to- timestamp: timestamp of the utterance (not applicable in Switchboard)- text (str): text of the utterance- metadata - tag (str): the DAMSL act-tag of the utterance - pos (str): the part-of-speech tagged portion of the utterance - trees (nltk Tree): parsed tree of the utterance ###Code utterance_corpus = {} # Iterate thru each transcript for file in files: trans = Transcript(file, './swda/swda-metadata.csv') utts = trans.utterances root = str(trans.conversation_no) + "-0" # Get id of root utterance recent_A = None recent_B = None # Iterate thru each utterance in transcript last_speaker = '' cur_speaker = '' all_text = '' text_pos = '' text_tag_list = [] counter = 0 first_utt = True for i, utt in enumerate(utts): idx = str(utt.conversation_no) + "-" + str(counter) text = utt.text # Check which user is talking if 'A' in utt.caller: recent_A = idx; user = str(trans.from_caller) cur_speaker = user else: recent_B = idx; user = str(trans.to_caller) cur_speaker = user # Only add as an utterance if the user has finished talking if cur_speaker != last_speaker and i > 0: # Put act-tag and POS information into metadata meta = {'tag': text_tag_list, } # For reply_to, find the most recent utterance from the other caller if first_utt: reply_to = None first_utt = False elif 'A' in utt.caller: reply_to = recent_B else: reply_to = recent_A utterance_corpus[idx] = Utterance(idx, corpus_users[user], root, reply_to, None, all_text, meta) # Update with the current utterance information # This is the first utterance of the next statement all_text = utt.text text_pos = utt.pos text_tag_list = [(utt.text, utt.act_tag)] counter += 1 else: # Otherwise, combine all the text from the user all_text += utt.text text_pos += utt.pos text_tag_list.append((utt.text, utt.act_tag)) last_speaker = cur_speaker last_speaker_idx = idx utterance_list = [utterance for k,utterance in utterance_corpus.items()] ###Output _____no_output_____ ###Markdown Check number of utterances in the dataset ###Code print("Number of utterances in the data = {}".format(len(utterance_corpus))) # Example utterance object utterance_corpus['4325-2'] ###Output _____no_output_____ ###Markdown Create corpus from list of utterances ###Code switchboard_corpus = Corpus(utterances=utterance_list, version=1) print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids()))) ###Output number of conversations in the dataset = 1155 ###Markdown Create Conversations ###Code # Set conversation Metadata for i, c in enumerate(switchboard_corpus.conversations): trans = Transcript(files[i], './swda/swda-metadata.csv') idx = str(trans.conversation_no) convo = switchboard_corpus.conversations[c] convo.meta['filename'] = files[i] date = trans.talk_day convo_date = "%d-%d-%d" % (date.year, date.month, date.day) convo.meta['talk_day'] = convo_date convo.meta['topic_description'] = trans.topic_description convo.meta['length'] = trans.length convo.meta['prompt'] = trans.prompt convo.meta['from_caller'] = str(trans.from_caller) convo.meta['to_caller'] = str(trans.to_caller) print(switchboard_corpus.conversations['4384-0'].meta) ###Output {'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 
'length': 5, 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'} ###Markdown Update corpus level metadata ###Code switchboard_meta = {} for file in files: trans = Transcript(file, './swda/swda-metadata.csv') idx = str(trans.conversation_no) switchboard_meta[idx] = {} switchboard_corpus.meta['metadata'] = switchboard_meta switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus" switchboard_corpus.meta['metadata']['4325'] ###Output _____no_output_____ ###Markdown Save created corpus ###Code switchboard_corpus.dump("corpus", base_path = "./") ###Output _____no_output_____ ###Markdown Check if available info from dataset can be viewed directly ###Code from convokit import meta_index meta_index(filename = "./corpus") switchboard_corpus = Corpus(filename = "./corpus") switchboard_corpus.print_summary_stats() ###Output Number of Users: 440 Number of Utterances: 122646 Number of Conversations: 1155 ###Markdown Code to Convert the Switchboard dataset into Convokit format ###Code import os os.chdir("../../") # import convokit from convokit import Corpus, User, Utterance os.chdir("datasets/switchboard-corpus") # then come back for swda from swda import Transcript import glob ###Output _____no_output_____ ###Markdown Create UsersEach caller is considered a user, and there are a total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:- Gender (str): MALE or FEMALE- Education (int): 0, 1, 2, 3, 9- Birth Year (int): YYYY- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN ###Code files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files user_meta = {} for file in files: trans = Transcript(file, './swda/swda-metadata.csv') user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex, "education": trans.from_caller_education, "birth_year": trans.from_caller_birth_year, "dialect_area": trans.from_caller_dialect_area} user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex, "education": trans.to_caller_education, "birth_year": trans.to_caller_birth_year, "dialect_area": trans.to_caller_dialect_area} ###Output _____no_output_____ ###Markdown Create a User object for each unique user in the dataset ###Code corpus_users = {k: User(name = k, meta = v) for k,v in user_meta.items()} ###Output _____no_output_____ ###Markdown Check number of users in the dataset ###Code print("Number of users in the data = {}".format(len(corpus_users))) # Example metadata from user 1632 corpus_users['1632'].meta ###Output _____no_output_____ ###Markdown Create UtterancesUtterances are found in the "text" field of each Transcript object.
There are 221,616 utterances in total.Each Utterance object has the following fields:- id (str): the unique id of the utterance- user (User): the User giving the utterance- root (str): id of the root utterance of the conversation- reply_to (str): id of the utterance this replies to- timestamp: timestamp of the utterance (not applicable in Switchboard)- text (str): text of the utterance- metadata - tag (str): the DAMSL act-tag of the utterance - pos (str): the part-of-speech tagged portion of the utterance - trees (nltk Tree): parsed tree of the utterance ###Code utterance_corpus = {} # Iterate thru each transcript for file in files: trans = Transcript(file, './swda/swda-metadata.csv') utts = trans.utterances root = str(trans.conversation_no) + "-0" # Get id of root utterance recent_A = None recent_B = None # Iterate thru each utterance in transcript last_speaker = '' cur_speaker = '' all_text = '' text_pos = '' text_tag_list = [] counter = 0 first_utt = True for i, utt in enumerate(utts): idx = str(utt.conversation_no) + "-" + str(counter) text = utt.text # Check which user is talking if 'A' in utt.caller: recent_A = idx; user = str(trans.from_caller) cur_speaker = user else: recent_B = idx; user = str(trans.to_caller) cur_speaker = user # Only add as an utterance if the user has finished talking if cur_speaker != last_speaker and i > 0: # Put act-tag and POS information into metadata meta = {'tag': text_tag_list, } # For reply_to, find the most recent utterance from the other caller if first_utt: reply_to = None first_utt = False elif 'A' in utt.caller: reply_to = recent_B else: reply_to = recent_A utterance_corpus[idx] = Utterance(idx, corpus_users[user], root, reply_to, None, all_text, meta) # Update with the current utterance information # This is the first utterance of the next statement all_text = utt.text text_pos = utt.pos text_tag_list = [(utt.text, utt.act_tag)] counter += 1 else: # Otherwise, combine all the text from the user all_text += utt.text text_pos += utt.pos text_tag_list.append((utt.text, utt.act_tag)) last_speaker = cur_speaker last_speaker_idx = idx utterance_list = [utterance for k,utterance in utterance_corpus.items()] ###Output _____no_output_____ ###Markdown Check number of utterances in the dataset ###Code print("Number of utterances in the data = {}".format(len(utterance_corpus))) # Example utterance object utterance_corpus['4325-2'] ###Output _____no_output_____ ###Markdown Create corpus from list of utterances ###Code switchboard_corpus = Corpus(utterances=utterance_list, version=1) print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids()))) ###Output number of conversations in the dataset = 1155 ###Markdown Create Conversations ###Code # Set conversation Metadata for i, c in enumerate(switchboard_corpus.conversations): trans = Transcript(files[i], './swda/swda-metadata.csv') idx = str(trans.conversation_no) convo = switchboard_corpus.conversations[c] convo.meta['filename'] = files[i] date = trans.talk_day convo_date = "%d-%d-%d" % (date.year, date.month, date.day) convo.meta['talk_day'] = convo_date convo.meta['topic_description'] = trans.topic_description convo.meta['length'] = trans.length convo.meta['prompt'] = trans.prompt convo.meta['from_caller'] = str(trans.from_caller) convo.meta['to_caller'] = str(trans.to_caller) print(switchboard_corpus.conversations['4384-0'].meta) ###Output {'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 'length': 5, 
'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'} ###Markdown Update corpus level metadata ###Code switchboard_meta = {} for file in files: trans = Transcript(file, './swda/swda-metadata.csv') idx = str(trans.conversation_no) switchboard_meta[idx] = {} switchboard_corpus.meta['metadata'] = switchboard_meta switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus" switchboard_corpus.meta['metadata']['4325'] ###Output _____no_output_____ ###Markdown Save created corpus ###Code switchboard_corpus.dump("corpus", base_path = "./") ###Output _____no_output_____ ###Markdown Check if available info from dataset can be viewed directly ###Code from convokit import meta_index meta_index(filename = "./corpus") switchboard_corpus = Corpus(filename = "./corpus") switchboard_corpus.print_summary_stats() ###Output Number of Users: 440 Number of Utterances: 122646 Number of Conversations: 1155 ###Markdown Code to Convert the Switchboard dataset into Convokit format ###Code import os os.chdir("../../") # import convokit from convokit import Corpus, Speaker, Utterance os.chdir("datasets/switchboard-corpus") # then come back for swda from swda import Transcript import glob ###Output _____no_output_____ ###Markdown Create SpeakersEach caller is considered a user, and there are a total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:- Gender (str): MALE or FEMALE- Education (int): 0, 1, 2, 3, 9- Birth Year (int): YYYY- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN ###Code files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files user_meta = {} for file in files: trans = Transcript(file, './swda/swda-metadata.csv') user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex, "education": trans.from_caller_education, "birth_year": trans.from_caller_birth_year, "dialect_area": trans.from_caller_dialect_area} user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex, "education": trans.to_caller_education, "birth_year": trans.to_caller_birth_year, "dialect_area": trans.to_caller_dialect_area} ###Output _____no_output_____ ###Markdown Create a Speaker object for each unique user in the dataset ###Code corpus_speakers = {k: Speaker(id = k, meta = v) for k,v in user_meta.items()} ###Output _____no_output_____ ###Markdown Check number of users in the dataset ###Code print("Number of users in the data = {}".format(len(corpus_speakers))) # Example metadata from user 1632 corpus_speakers['1632'].meta ###Output _____no_output_____ ###Markdown Create UtterancesUtterances are found in the "text" field of each Transcript object.
There are 221,616 utterances in total.Each Utterance object has the following fields:- id (str): the unique id of the utterance- user (Speaker): the Speaker giving the utterance- root (str): id of the root utterance of the conversation- reply_to (str): id of the utterance this replies to- timestamp: timestamp of the utterance (not applicable in Switchboard)- text (str): text of the utterance- metadata - tag (str): the DAMSL act-tag of the utterance - pos (str): the part-of-speech tagged portion of the utterance - trees (nltk Tree): parsed tree of the utterance ###Code utterance_corpus = {} # Iterate thru each transcript for file in files: trans = Transcript(file, './swda/swda-metadata.csv') utts = trans.utterances root = str(trans.conversation_no) + "-0" # Get id of root utterance recent_A = None recent_B = None # Iterate thru each utterance in transcript last_speaker = '' cur_speaker = '' all_text = '' text_pos = '' text_tag_list = [] counter = 0 first_utt = True for i, utt in enumerate(utts): idx = str(utt.conversation_no) + "-" + str(counter) text = utt.text # Check which user is talking if 'A' in utt.caller: recent_A = idx; user = str(trans.from_caller) cur_speaker = user else: recent_B = idx; user = str(trans.to_caller) cur_speaker = user # Only add as an utterance if the user has finished talking if cur_speaker != last_speaker and i > 0: # Put act-tag and POS information into metadata meta = {'tag': text_tag_list, } # For reply_to, find the most recent utterance from the other caller if first_utt: reply_to = None first_utt = False elif 'A' in utt.caller: reply_to = recent_B else: reply_to = recent_A utterance_corpus[idx] = Utterance(idx, corpus_speakers[user], root, reply_to, None, all_text, meta) # Update with the current utterance information # This is the first utterance of the next statement all_text = utt.text text_pos = utt.pos text_tag_list = [(utt.text, utt.act_tag)] counter += 1 else: # Otherwise, combine all the text from the user all_text += utt.text text_pos += utt.pos text_tag_list.append((utt.text, utt.act_tag)) last_speaker = cur_speaker last_speaker_idx = idx utterance_list = [utterance for k,utterance in utterance_corpus.items()] ###Output _____no_output_____ ###Markdown Check number of utterances in the dataset ###Code print("Number of utterances in the data = {}".format(len(utterance_corpus))) # Example utterance object utterance_corpus['4325-2'] ###Output _____no_output_____ ###Markdown Create corpus from list of utterances ###Code switchboard_corpus = Corpus(utterances=utterance_list, version=1) print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids()))) ###Output number of conversations in the dataset = 1155 ###Markdown Create Conversations ###Code # Set conversation Metadata for i, c in enumerate(switchboard_corpus.conversations): trans = Transcript(files[i], './swda/swda-metadata.csv') idx = str(trans.conversation_no) convo = switchboard_corpus.conversations[c] convo.meta['filename'] = files[i] date = trans.talk_day convo_date = "%d-%d-%d" % (date.year, date.month, date.day) convo.meta['talk_day'] = convo_date convo.meta['topic_description'] = trans.topic_description convo.meta['length'] = trans.length convo.meta['prompt'] = trans.prompt convo.meta['from_caller'] = str(trans.from_caller) convo.meta['to_caller'] = str(trans.to_caller) print(switchboard_corpus.conversations['4384-0'].meta) ###Output {'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 
'length': 5, 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'} ###Markdown Update corpus level metadata ###Code switchboard_meta = {} for file in files: trans = Transcript(file, './swda/swda-metadata.csv') idx = str(trans.conversation_no) switchboard_meta[idx] = {} switchboard_corpus.meta['metadata'] = switchboard_meta switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus" switchboard_corpus.meta['metadata']['4325'] ###Output _____no_output_____ ###Markdown Save created corpus ###Code switchboard_corpus.dump("corpus", base_path = "./") ###Output _____no_output_____ ###Markdown Check if available info from dataset can be viewed directly ###Code from convokit import meta_index meta_index(filename = "./corpus") switchboard_corpus = Corpus(filename = "./corpus") switchboard_corpus.print_summary_stats() ###Output Number of Speakers: 440 Number of Utterances: 122646 Number of Conversations: 1155
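###Markdown As a quick sanity check on the threading, we can walk an utterance's `reply_to` chain back to its conversation root. A minimal sketch, assuming the corpus loaded above exposes a `get_utterance` accessor (the exact Speaker/User API differs across ConvoKit versions, as the variants above show): ###Code
# Follow the reply chain from the example utterance back to the root
utt = switchboard_corpus.get_utterance('4325-2')
while utt.reply_to is not None:
    utt = switchboard_corpus.get_utterance(utt.reply_to)
print(utt.id)  # expected: '4325-0', the root of conversation 4325
###Output _____no_output_____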
RECOMMENDER SYSTEM/USER-BASED COLLABORATIVE FILTERING.ipynb
###Markdown PROBLEM STATEMENT - This notebook implements a movie recommender system. - Recommender systems are used to suggest movies or songs to users based on their interests or usage history. - For example, Netflix recommends movies to watch based on the previous movies you've watched. - In this example, we will use an item-based collaborative filter - Dataset MovieLens: https://grouplens.org/datasets/movielens/100k/ - Photo Credit: https://pxhere.com/en/photo/1588369 ![image.png](attachment:image.png) STEP 0: LIBRARIES IMPORT ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline ###Output _____no_output_____ ###Markdown STEP 1: IMPORT DATASET ###Code # Two datasets are available, let's load the first one: movie_titles_df = pd.read_csv("Movie_Id_Titles") movie_titles_df.head(20) # Let's load the second one! movies_rating_df = pd.read_csv('u.data', sep='\t', names=['user_id', 'item_id', 'rating', 'timestamp']) movies_rating_df.head(10) movies_rating_df.tail() # Let's drop the timestamp movies_rating_df.drop(['timestamp'], axis = 1, inplace = True) movies_rating_df movies_rating_df.describe() movies_rating_df.info() # Let's merge both dataframes together so we can have ID with the movie name movies_rating_df = pd.merge(movies_rating_df, movie_titles_df, on = 'item_id') movies_rating_df movies_rating_df.shape ###Output _____no_output_____ ###Markdown STEP 2: VISUALIZE DATASET ###Code movies_rating_df.groupby('title')['rating'].describe() ratings_df_mean = movies_rating_df.groupby('title')['rating'].describe()['mean'] ratings_df_count = movies_rating_df.groupby('title')['rating'].describe()['count'] ratings_df_count ratings_mean_count_df = pd.concat([ratings_df_count, ratings_df_mean], axis = 1) ratings_mean_count_df.reset_index() ratings_mean_count_df['mean'].plot(bins=100, kind='hist', color = 'r') ratings_mean_count_df['count'].plot(bins=100, kind='hist', color = 'r') # Let's see the highest rated movies!
# Apparently these movies do not have many reviews (i.e.: a small number of ratings) ratings_mean_count_df[ratings_mean_count_df['mean'] == 5] # List all the movies that are most rated # Please note that they do not necessarily have the highest rating (mean) ratings_mean_count_df.sort_values('count', ascending = False).head(100) ###Output _____no_output_____ ###Markdown STEP 3: PERFORM ITEM-BASED COLLABORATIVE FILTERING ON ONE MOVIE SAMPLE ###Code userid_movietitle_matrix = movies_rating_df.pivot_table(index = 'user_id', columns = 'title', values = 'rating') userid_movietitle_matrix titanic = userid_movietitle_matrix['Titanic (1997)'] titanic # Let's calculate the correlations titanic_correlations = pd.DataFrame(userid_movietitle_matrix.corrwith(titanic), columns=['Correlation']) titanic_correlations = titanic_correlations.join(ratings_mean_count_df['count']) titanic_correlations titanic_correlations.dropna(inplace=True) titanic_correlations # Let's sort the correlations vector titanic_correlations.sort_values('Correlation', ascending=False) titanic_correlations[titanic_correlations['count']>80].sort_values('Correlation',ascending=False).head() # Pick the Star Wars movie and repeat the exercise ###Output _____no_output_____ ###Markdown STEP 4: CREATE AN ITEM-BASED COLLABORATIVE FILTER ON THE ENTIRE DATASET ###Code # Recall this matrix that we created earlier of all movies and their user ID/ratings userid_movietitle_matrix movie_correlations = userid_movietitle_matrix.corr(method = 'pearson', min_periods = 80) # pearson : standard correlation coefficient # Obtain the correlations between all movies in the dataframe movie_correlations # Let's create our own dataframe with our own ratings! myRatings = pd.read_csv("My_Ratings.csv") #myRatings.reset_index myRatings len(myRatings.index) myRatings['Movie Name'][0] similar_movies_list = pd.Series() for i in range(0, 2): similar_movie = movie_correlations[myRatings['Movie Name'][i]].dropna() # Get the correlations of all movies with this rated movie similar_movie = similar_movie.map(lambda x: x * myRatings['Ratings'][i]) # Scale the similarity by your given ratings similar_movies_list = similar_movies_list.append(similar_movie) similar_movies_list.sort_values(inplace = True, ascending = False) print (similar_movies_list.head(10)) ###Output Liar Liar (1997) 5.000000 Con Air (1997) 2.349141 Pretty Woman (1990) 2.348951 Michael (1996) 2.210110 Indiana Jones and the Last Crusade (1989) 2.072136 Top Gun (1986) 2.028602 G.I. Jane (1997) 1.989656 Multiplicity (1996) 1.984302 Grumpier Old Men (1995) 1.953494 Ghost and the Darkness, The (1996) 1.895376 dtype: float64
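###Markdown Note that the top entry above is just a movie we rated ourselves (its correlation with itself is 1, scaled by our 5-star rating). Before presenting recommendations, it is worth dropping the titles we already rated and collapsing any duplicate titles. A minimal sketch, assuming `similar_movies_list` and `myRatings` from above: ###Code
# Drop already-rated titles, then keep the best score per remaining title
recommendations = similar_movies_list.drop(labels=list(myRatings['Movie Name']), errors='ignore')
recommendations = recommendations.groupby(level=0).max().sort_values(ascending=False)
print(recommendations.head(10))
###Output _____no_output_____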
08-north-korean-news-odonnchadha.ipynb
###Markdown North Korean NewsScrape the North Korean news agency http://kcna.kpSave a CSV called `nk-news.csv`. This file should include:* The **article headline*** The value of **`onclick`** (they don't have normal links)* The **article ID** (for example, the article ID for `fn_showArticle("AR0125885", "", "NT00", "L")` is `AR0125885`The last part is easiest using pandas. Be sure you don't save the index!* _**Tip:** If you're using requests+BeautifulSoup, you can always look at response.text to see if the page looks like what you think it looks like_* _**Tip:** Check your URL to make sure it is what you think it should be!_* _**Tip:** Does it look different if you scrape with BeautifulSoup compared to if you scrape it with Selenium?_* _**Tip:** For the last part, how do you pull out part of a string from a longer string?_* _**Tip:** `expand=False` is helpful if you want to assign a single new column when extracting_* _**Tip:** `(` and `)` mean something special in regular expressions, so you have to say "no really seriously I mean `(`" by using `\(` instead_* _**Tip:** if your `.*` is taking up too much stuff, you can try `.*?` instead, which instead of "take as much as possible" it means "take only as much as needed"_ ###Code import requests import re from bs4 import BeautifulSoup url = "http://kcna.kp/kcna.user.home.retrieveHomeInfoList.kcmsf" raw_html = requests.get(url).content soup_doc = BeautifulSoup(raw_html, "html.parser") print(type(soup_doc)) ### TEST VIEWS OF DATA # raw_html # print(soup_doc) print(soup_doc.prettify()) soup_doc.find_all('h3') # soup_doc.select('div#events-horizontal') # upcoming_events_div = soup.select_one('div#events-horizontal') # article_area > div.harticle15 > ul:nth-child(9) > li:nth-child(2) > h3 > strong > font > a.titlebet # //*[@id="article_area"]/div[1]/ul[2] ###Output _____no_output_____
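###Markdown A hedged sketch of the remaining steps, assuming the headlines come back as `<a class="titlebet">` tags (as the commented CSS selector above suggests) with `onclick` values like `fn_showArticle("AR0125885", "", "NT00", "L")`; the real markup may differ: ###Code
import pandas as pd

# Collect the headline text and raw onclick value for each article link
links = soup_doc.find_all('a', class_='titlebet')
df = pd.DataFrame({
    'headline': [link.text.strip() for link in links],
    'onclick': [link.get('onclick') for link in links],
})

# Pull the article ID out of fn_showArticle("AR0125885", ...). The \( escapes
# the literal parenthesis and .*? keeps the match non-greedy, per the tips above.
df['article_id'] = df['onclick'].str.extract(r'fn_showArticle\("(.*?)"', expand=False)

df.to_csv('nk-news.csv', index=False)
df.head()
###Output _____no_output_____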
Image_Captioning.ipynb
###Markdown ###Code from google.colab import drive drive.mount('/content/drive') ###Output Mounted at /content/drive ###Markdown Download the required data : Annotations,Captions,Images ###Code import os import sys from pycocotools.coco import COCO import urllib import zipfile os.makedirs('opt' , exist_ok=True) os.chdir( '/content/opt' ) !git clone 'https://github.com/cocodataset/cocoapi.git' ###Output Cloning into 'cocoapi'... remote: Enumerating objects: 975, done. remote: Total 975 (delta 0), reused 0 (delta 0), pack-reused 975 Receiving objects: 100% (975/975), 11.72 MiB | 29.57 MiB/s, done. Resolving deltas: 100% (575/575), done. ###Markdown Download the Annotations and Captions : ###Code os.chdir('/content/opt/cocoapi') # Download the annotation : annotations_trainval2014 = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip' image_info_test2014 = 'http://images.cocodataset.org/annotations/image_info_test2014.zip' urllib.request.urlretrieve(annotations_trainval2014 , filename = 'annotations_trainval2014.zip' ) urllib.request.urlretrieve(image_info_test2014 , filename= 'image_info_test2014.zip' ) ###Output _____no_output_____ ###Markdown Extract Annotations from ZIP file ###Code with zipfile.ZipFile('annotations_trainval2014.zip' , 'r') as zip_ref: zip_ref.extractall( '/content/opt/cocoapi' ) try: os.remove( 'annotations_trainval2014.zip' ) print('zip removed') except: None with zipfile.ZipFile('image_info_test2014.zip' , 'r') as zip_ref: zip_ref.extractall( '/content/opt/cocoapi' ) try: os.remove( 'image_info_test2014.zip' ) print('zip removed') except: None ###Output zip removed zip removed ###Markdown Initialize and verify the loaded data ###Code os.chdir('/content/opt/cocoapi/annotations') # initialize COCO API for instance annotations dataType = 'val2014' instances_annFile = 'instances_{}.json'.format(dataType) print(instances_annFile) coco = COCO(instances_annFile) # initialize COCO API for caption annotations captions_annFile = 'captions_{}.json'.format(dataType) coco_caps = COCO(captions_annFile) # get image ids ids = list(coco.anns.keys()) ###Output instances_val2014.json loading annotations into memory... Done (t=4.81s) creating index... index created! loading annotations into memory... Done (t=0.33s) creating index... index created! ###Markdown plot a sample Image ###Code import matplotlib.pyplot as plt import skimage.io as io import numpy as np %matplotlib inline #Pick a random annotation id and display img of that annotation : ann_id = np.random.choice( ids ) img_id = coco.anns[ann_id]['image_id'] img = coco.loadImgs( img_id )[0] url = img['coco_url'] print(url) I = io.imread(url) plt.imshow(I) # Display captions for that annotation id : ann_ids = coco_caps.getAnnIds( img_id ) print('Number of annotations i.e captions for the image: ' , ann_ids) print() anns = coco_caps.loadAnns( ann_ids ) coco_caps.showAnns(anns) ###Output http://images.cocodataset.org/val2014/COCO_val2014_000000454382.jpg Number of annotations i.e captions for the image: [168868, 216949, 219721, 231967, 238819] The blue dump truck rides down the street next to the houses. A blue dump truck traveling down a street past tall houses. A blue truck parked on the road near houses. Dump truck alone on road with buildings and bare trees and shrubs behind it. A blue dump truck sits parked on a residential street. 
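###Markdown Before downloading the full image sets, we can sanity-check the indices the COCO API just built; `coco.anns`, `coco_caps.anns`, and `coco.imgs` are the dictionaries populated by `createIndex`: ###Code
# Quick counts from the val2014 annotation indices built above
print('instance annotations:', len(coco.anns))
print('caption annotations :', len(coco_caps.anns))
print('images indexed      :', len(coco.imgs))
###Output _____no_output_____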
###Markdown Download Train , Test , Val Images : ###Code os.chdir('/content/opt/cocoapi') train2014 = 'http://images.cocodataset.org/zips/train2014.zip' test2014 = 'http://images.cocodataset.org/zips/test2014.zip' val2014 = 'http://images.cocodataset.org/zips/val2014.zip' urllib.request.urlretrieve( train2014 , 'train2014' ) urllib.request.urlretrieve( test2014 , 'test2014' ) #urllib.request.urlretrieve( val2014 , 'val2014' ) ###Output _____no_output_____ ###Markdown unzip the download image zip files ###Code os.chdir('/content/opt/cocoapi') with zipfile.ZipFile( 'train2014' , 'r' ) as zip_ref: zip_ref.extractall( 'images' ) try: os.remove( 'train2014' ) print('zip removed') except: None os.chdir('/content/opt/cocoapi') with zipfile.ZipFile( 'test2014' , 'r' ) as zip_ref: zip_ref.extractall( 'images' ) try: os.remove( 'test2014' ) print('zip removed') except: None ###Output zip removed zip removed ###Markdown Step1 Explore the DataLoader Vocabulary.py ###Code # vocabulary.py ------------------------------------------------------------- import nltk import pickle import os.path from pycocotools.coco import COCO from collections import Counter class Vocabulary(object): def __init__(self, vocab_threshold, vocab_file='./vocab.pkl', start_word="<start>", end_word="<end>", unk_word="<unk>", annotations_file='../cocoapi/annotations/captions_train2014.json', vocab_from_file=False): """Initialize the vocabulary. Args: vocab_threshold: Minimum word count threshold. vocab_file: File containing the vocabulary. start_word: Special word denoting sentence start. end_word: Special word denoting sentence end. unk_word: Special word denoting unknown words. annotations_file: Path for train annotation file. vocab_from_file: If False, create vocab from scratch & override any existing vocab_file If True, load vocab from from existing vocab_file, if it exists """ self.vocab_threshold = vocab_threshold self.vocab_file = vocab_file self.start_word = start_word self.end_word = end_word self.unk_word = unk_word self.annotations_file = annotations_file self.vocab_from_file = vocab_from_file self.get_vocab() def get_vocab(self): """Load the vocabulary from file OR build the vocabulary from scratch.""" if os.path.exists(self.vocab_file) & self.vocab_from_file: with open(self.vocab_file, 'rb') as f: vocab = pickle.load(f) self.word2idx = vocab.word2idx self.idx2word = vocab.idx2word print('Vocabulary successfully loaded from vocab.pkl file!') else: self.build_vocab() with open(self.vocab_file, 'wb') as f: pickle.dump(self, f) def build_vocab(self): """Populate the dictionaries for converting tokens to integers (and vice-versa).""" self.init_vocab() self.add_word(self.start_word) self.add_word(self.end_word) self.add_word(self.unk_word) self.add_captions() def init_vocab(self): """Initialize the dictionaries for converting tokens to integers (and vice-versa).""" self.word2idx = {} self.idx2word = {} self.idx = 0 def add_word(self, word): """Add a token to the vocabulary.""" if not word in self.word2idx: self.word2idx[word] = self.idx self.idx2word[self.idx] = word self.idx += 1 def add_captions(self): """Loop over training captions and add all tokens to the vocabulary that meet or exceed the threshold.""" coco = COCO(self.annotations_file) counter = Counter() ids = coco.anns.keys() for i, id in enumerate(ids): caption = str(coco.anns[id]['caption']) tokens = nltk.tokenize.word_tokenize(caption.lower()) counter.update(tokens) if i % 100000 == 0: print("[%d/%d] Tokenizing captions..." 
% (i, len(ids))) words = [word for word, cnt in counter.items() if cnt >= self.vocab_threshold] for i, word in enumerate(words): self.add_word(word) def __call__(self, word): if not word in self.word2idx: return self.word2idx[self.unk_word] return self.word2idx[word] def __len__(self): return len(self.word2idx) ###Output _____no_output_____ ###Markdown data_loader.py ###Code # Data Loader --------------------------------------------------------------------------------------------- import nltk import os import torch import torch.utils.data as data from PIL import Image from pycocotools.coco import COCO import numpy as np from tqdm import tqdm import random import json def get_loader(transform, mode='train', batch_size=1, vocab_threshold=None, vocab_file='./vocab.pkl', start_word="<start>", end_word="<end>", unk_word="<unk>", vocab_from_file=True, num_workers=0, cocoapi_loc='/opt'): """Returns the data loader. Args: transform: Image transform. mode: One of 'train' or 'test'. batch_size: Batch size (if in testing mode, must have batch_size=1). vocab_threshold: Minimum word count threshold. vocab_file: File containing the vocabulary. start_word: Special word denoting sentence start. end_word: Special word denoting sentence end. unk_word: Special word denoting unknown words. vocab_from_file: If False, create vocab from scratch & override any existing vocab_file. If True, load vocab from from existing vocab_file, if it exists. num_workers: Number of subprocesses to use for data loading cocoapi_loc: The location of the folder containing the COCO API: https://github.com/cocodataset/cocoapi """ assert mode in ['train', 'test'], "mode must be one of 'train' or 'test'." if vocab_from_file==False: assert mode=='train', "To generate vocab from captions file, must be in training mode (mode='train')." # Based on mode (train, val, test), obtain img_folder and annotations_file. if mode == 'train': if vocab_from_file==True: assert os.path.exists(vocab_file), "vocab_file does not exist. Change vocab_from_file to False to create vocab_file." img_folder = os.path.join(cocoapi_loc, 'cocoapi/images/train2014/') annotations_file = os.path.join(cocoapi_loc, 'cocoapi/annotations/captions_train2014.json') if mode == 'test': assert batch_size==1, "Please change batch_size to 1 if testing your model." assert os.path.exists(vocab_file), "Must first generate vocab.pkl from training data." assert vocab_from_file==True, "Change vocab_from_file to True." img_folder = os.path.join(cocoapi_loc, 'cocoapi/images/test2014/') annotations_file = os.path.join(cocoapi_loc, 'cocoapi/annotations/image_info_test2014.json') # COCO caption dataset. dataset = CoCoDataset(transform=transform, mode=mode, batch_size=batch_size, vocab_threshold=vocab_threshold, vocab_file=vocab_file, start_word=start_word, end_word=end_word, unk_word=unk_word, annotations_file=annotations_file, vocab_from_file=vocab_from_file, img_folder=img_folder) if mode == 'train': # Randomly sample a caption length, and sample indices with that length. indices = dataset.get_train_indices() # Create and assign a batch sampler to retrieve a batch with the sampled indices. initial_sampler = data.sampler.SubsetRandomSampler(indices=indices) # data loader for COCO dataset. 
data_loader = data.DataLoader(dataset=dataset, num_workers=num_workers, batch_sampler=data.sampler.BatchSampler(sampler=initial_sampler, batch_size=dataset.batch_size, drop_last=False)) else: data_loader = data.DataLoader(dataset=dataset, batch_size=dataset.batch_size, shuffle=True, num_workers=num_workers) return data_loader class CoCoDataset(data.Dataset): def __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word, end_word, unk_word, annotations_file, vocab_from_file, img_folder): self.transform = transform self.mode = mode self.batch_size = batch_size self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word, end_word, unk_word, annotations_file, vocab_from_file) self.img_folder = img_folder if self.mode == 'train': self.coco = COCO(annotations_file) self.ids = list(self.coco.anns.keys()) print('Obtaining caption lengths...') all_tokens = [nltk.tokenize.word_tokenize(str(self.coco.anns[self.ids[index]]['caption']).lower()) for index in tqdm(np.arange(len(self.ids)))] self.caption_lengths = [len(token) for token in all_tokens] else: test_info = json.loads(open(annotations_file).read()) self.paths = [item['file_name'] for item in test_info['images']] def __getitem__(self, index): # obtain image and caption if in training mode if self.mode == 'train': ann_id = self.ids[index] caption = self.coco.anns[ann_id]['caption'] img_id = self.coco.anns[ann_id]['image_id'] path = self.coco.loadImgs(img_id)[0]['file_name'] # Convert image to tensor and pre-process using transform image = Image.open(os.path.join(self.img_folder, path)).convert('RGB') image = self.transform(image) # Convert caption to tensor of word ids. tokens = nltk.tokenize.word_tokenize(str(caption).lower()) caption = [] caption.append(self.vocab(self.vocab.start_word)) caption.extend([self.vocab(token) for token in tokens]) caption.append(self.vocab(self.vocab.end_word)) caption = torch.Tensor(caption).long() # return pre-processed image and caption tensors return image, caption # obtain image if in test mode else: path = self.paths[index] # Convert image to tensor and pre-process using transform PIL_image = Image.open(os.path.join(self.img_folder, path)).convert('RGB') orig_image = np.array(PIL_image) image = self.transform(PIL_image) # return original image and pre-processed image tensor return orig_image, image def get_train_indices(self): sel_length = np.random.choice(self.caption_lengths) all_indices = np.where([self.caption_lengths[i] == sel_length for i in np.arange(len(self.caption_lengths))])[0] indices = list(np.random.choice(all_indices, size=self.batch_size)) return indices def __len__(self): if self.mode == 'train': return len(self.ids) else: return len(self.paths) ###Output _____no_output_____ ###Markdown Dataloader creation ###Code import sys from pycocotools.coco import COCO !pip install nltk import nltk nltk.download('punkt') from torchvision import transforms # Define a transform to pre-process the training images. transform_train = transforms.Compose([ transforms.Resize(256), # smaller edge of image resized to 256 transforms.RandomCrop(224), # get 224x224 crop from random location transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5 transforms.ToTensor(), # convert the PIL Image to a tensor transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model (0.229, 0.224, 0.225))]) # Set the minimum word count threshold. vocab_threshold = 8 # Specify the batch size. batch_size = 200 # Obtain the data loader. 
data_loader_train = get_loader(transform=transform_train, mode='train', batch_size=batch_size, vocab_threshold=vocab_threshold, vocab_from_file=False, cocoapi_loc = '/content/opt') import torch import numpy as np import torch.utils.data as data
# Exploring the DataLoader now:
sample_caption = 'A person doing a trick xxxx on a rail while riding a skateboard.' sample_tokens = nltk.tokenize.word_tokenize( sample_caption.lower() ) sample_caption = [] start_word = data_loader_train.dataset.vocab.start_word end_word = data_loader_train.dataset.vocab.end_word sample_tokens.insert(0 , start_word) sample_tokens.append(end_word) sample_caption.extend( [ data_loader_train.dataset.vocab(token) for token in sample_tokens ] ) sample_caption = torch.Tensor( sample_caption ).long() print('Find below the sample tokens and the idx values of those tokens in word2idx' , '\n') print(sample_tokens) print(sample_caption ) print('Find index values for words below \n') print('Start idx {} , End idx {} , unknown idx {}'.format( 0,1,2 ))
# Let's check word2idx in vocab
print('First few vocab' , dict(list(data_loader_train.dataset.vocab.word2idx.items())[:10]))
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader_train.dataset.vocab))
###Output First few vocab {'<start>': 0, '<end>': 1, '<unk>': 2, 'a': 3, 'very': 4, 'clean': 5, 'and': 6, 'well': 7, 'decorated': 8, 'empty': 9} Total number of tokens in vocabulary: 7073
###Markdown Step 2: Use the Data Loader to Obtain Batches
The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption). In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10. Likewise, very short and very long captions are quite rare.
###Code from collections import Counter counter = Counter(data_loader_train.dataset.caption_lengths) lengths = sorted( counter.items() , key = lambda pair : pair[1] , reverse=True ) for val,count in lengths: print( 'value %2d count %5d' %(val,count) ) if count < 10000: break
###Output value 10 count 86334 value 11 count 79948 value 9 count 71934 value 12 count 57637 value 13 count 37645 value 14 count 22335 value 8 count 20771 value 15 count 12841 value 16 count 7729
###Markdown To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance.
Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`.
These indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.
###Code # Randomly sample a caption length, and sample indices with that length. indices = data_loader_train.dataset.get_train_indices() print('Sample Indices:' , indices ) # Create and assign a batch sampler to retrieve a batch with the sampled indices. sampler = data.sampler.SubsetRandomSampler( indices ) data_loader_train.batch_sampler.sampler = sampler # obtain images, caption : images , captions = next(iter(data_loader_train)) print(images.shape , captions.shape) ###Output Sample Indices: [364220, 241543, 319815, 116354, 114582, 307649, 115217, 16948, 51787, 226827, 73848, 126963, 250676, 9538, 61102, 127666, 185651, 59314, 133641, 261485, 264340, 289678, 149341, 402152, 335108, 407115, 157272, 6151, 6600, 372761, 311533, 28604, 192585, 289947, 354326, 165509, 51134, 60859, 165878, 61715, 91975, 311726, 243462, 156881, 380643, 398269, 123678, 47498, 338653, 147094, 162088, 413379, 216311, 198913, 376596, 358961, 122811, 26997, 376488, 14894, 202376, 58856, 308987, 271161, 68161, 19618, 396538, 156274, 309753, 45759, 211793, 305514, 337269, 292970, 331635, 311510, 208640, 105570, 293107, 108782, 191947, 132584, 367952, 208657, 220552, 84165, 267140, 355447, 210245, 255111, 119437, 173160, 60367, 241446, 4949, 52803, 405757, 310024, 90704, 411894, 408404, 290443, 298771, 242154, 140971, 199808, 236390, 253064, 9524, 21141, 8932, 307443, 28445, 371693, 202967, 176705, 75601, 323405, 97186, 381356, 362725, 166656, 118944, 115961, 388047, 239326, 378820, 162684, 217240, 222029, 120129, 269512, 110314, 186867, 299294, 37371, 52729, 351248, 136968, 35254, 396989, 172400, 239099, 241661, 36358, 413430, 400403, 101006, 212381, 397283, 342339, 316051, 397098, 401370, 279713, 74279, 18483, 332961, 238322, 299761, 407369, 108212, 44403, 331635, 72893, 98197, 307528, 308098, 348520, 117081, 96016, 138362, 225536, 393645, 282158, 298562, 50680, 156576, 311336, 148936, 308964, 394994, 53333, 179381, 84165, 158379, 31342, 92272, 130109, 81364, 180549, 322327, 413562, 347379, 286183, 292989, 89276, 208379, 280488, 294320] torch.Size([200, 3, 224, 224]) torch.Size([200, 13]) ###Markdown Step 3: Experiment with the CNN EncoderThe encoder uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding. ###Code import torch import torch.nn as nn import torchvision.models as models class EncoderCNN(nn.Module): def __init__(self, embed_size): super(EncoderCNN, self).__init__() resnet = models.resnet50(pretrained=True) for param in resnet.parameters(): param.requires_grad_(False) modules = list(resnet.children())[:-1] self.resnet = nn.Sequential(*modules) self.embed = nn.Linear(resnet.fc.in_features, embed_size) def forward(self, images): features = self.resnet(images) features = features.view(features.size(0), -1) features = self.embed(features) return features # specify dim of image embedding device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') embed_size = 256 encoder = EncoderCNN( embed_size ) encoder.to(device) images= images.to(device) # images from step2 features = encoder(images) print(type(features) , features.shape , images.shape) assert( type(features) == torch.Tensor ) , 'Encoder output should be pytorch tensor' assert (features.shape[0] == batch_size) & (features.shape[1] == embed_size) , "The shape of the encoder output is incorrect." 
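# Added check (a sketch, not in the original run): with the ResNet backbone
# frozen above, only the new embedding layer should still require gradients,
# so every trainable parameter name should start with 'embed'.
trainable = [name for name, p in encoder.named_parameters() if p.requires_grad]
assert all(name.startswith('embed') for name in trainable)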
###Output Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
###Markdown Step 4: Implement the RNN Decoder
In the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary. In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) loss function in PyTorch.
###Code import os import torch.utils.data as data import torch import math import pickle import matplotlib.pyplot as plt %matplotlib inline class DecoderRNN(nn.Module): def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1): super( DecoderRNN , self).__init__() self.embed_size = embed_size self.hidden_size = hidden_size self.vocab_size = vocab_size self.num_layers = num_layers self.word_embedding = nn.Embedding( self.vocab_size , self.embed_size ) self.lstm = nn.LSTM( input_size = self.embed_size , hidden_size = self.hidden_size, num_layers = self.num_layers , batch_first = True ) self.fc = nn.Linear( self.hidden_size , self.vocab_size ) def init_hidden( self, batch_size ): return ( torch.zeros( self.num_layers , batch_size , self.hidden_size ).to(device), torch.zeros( self.num_layers , batch_size , self.hidden_size ).to(device) ) def forward(self, features, captions): captions = captions[:,:-1] self.batch_size = features.shape[0] self.hidden = self.init_hidden( self.batch_size ) embeds = self.word_embedding( captions ) inputs = torch.cat( ( features.unsqueeze(dim=1) , embeds ) , dim =1 ) lstm_out , self.hidden = self.lstm(inputs , self.hidden) outputs = self.fc( lstm_out ) return outputs def predict(self, inputs, max_len=20): final_output = [] batch_size = inputs.shape[0] hidden = self.init_hidden(batch_size) while True: lstm_out, hidden = self.lstm(inputs, hidden) outputs = self.fc(lstm_out) outputs = outputs.squeeze(1) _, max_idx = torch.max(outputs, dim=1) final_output.append(max_idx.cpu().numpy()[0].item()) if (max_idx == 1 or len(final_output) >= max_len): break inputs = self.word_embedding(max_idx) inputs = inputs.unsqueeze(1) return final_output embed_size = 256 hidden_size = 100 num_layers =1 num_epochs = 4 print_every = 150 save_every = 1 vocab_size = len(data_loader_train.dataset.vocab) total_step = math.ceil( len(data_loader_train.dataset.caption_lengths) / data_loader_train.batch_sampler.batch_size ) decoder = DecoderRNN( embed_size , hidden_size, vocab_size ,num_layers) criterion = nn.CrossEntropyLoss() lr = 0.001 all_params = list(decoder.parameters()) + list( encoder.embed.parameters() ) optimizer = torch.optim.Adam( params = all_params , lr = lr ) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model_save_path = '/content/drive/My Drive/Colab Notebooks/ComputerVision/RNN_LSTM/image_caption/CVND---Image-Captioning-Project/checkpoint' os.makedirs( model_save_path , exist_ok=True)
# Save the params needed to create the model:
decoder_input_params = {'embed_size' : embed_size , 'hidden_size' : hidden_size , 'num_layers' : num_layers, 'lr' : lr , 'vocab_size' : vocab_size } with open( os.path.join(model_save_path , 'decoder_input_params_12_20_2019.pickle'), 'wb') as handle: pickle.dump(decoder_input_params, handle,
protocol=pickle.HIGHEST_PROTOCOL) import sys for e in range(num_epochs): for step in range(total_step): indices = data_loader_train.dataset.get_train_indices() new_sampler = data.sampler.SubsetRandomSampler( indices ) data_loader_train.batch_sampler.sampler = new_sampler images,captions = next(iter(data_loader_train)) images , captions = images.to(device) , captions.to(device) encoder , decoder = encoder.to(device) , decoder.to(device) encoder.zero_grad() decoder.zero_grad() features = encoder(images) output = decoder( features , captions ) loss = criterion( output.view(-1, vocab_size) , captions.view(-1) ) loss.backward() optimizer.step() stat_vals = 'Epochs [%d/%d] Step [%d/%d] Loss [%.4f] ' %( e+1,num_epochs,step,total_step,loss.item() ) if step % print_every == 0 : print(stat_vals) sys.stdout.flush() if e % save_every == 0: torch.save( encoder.state_dict() , os.path.join( model_save_path , 'encoderdata_{}.pkl'.format(e+1) ) ) torch.save( decoder.state_dict() , os.path.join( model_save_path , 'decoderdata_{}.pkl'.format(e+1) ) )
###Output Epochs [1/4] Step [0/2071] Loss [8.8806] Epochs [1/4] Step [150/2071] Loss [4.0232] Epochs [1/4] Step [300/2071] Loss [3.5489]
###Markdown Load the saved checkpoint
###Code model_save_path = '/content/drive/My Drive/Colab Notebooks/ComputerVision/RNN_LSTM/image_caption/CVND---Image-Captioning-Project/checkpoint' os.makedirs( model_save_path , exist_ok=True) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') with open( os.path.join(model_save_path , 'decoder_input_params_12_20_2019.pickle'), 'rb') as handle: decoder_input_params = pickle.load(handle) embed_size = decoder_input_params['embed_size'] hidden_size= decoder_input_params['hidden_size'] vocab_size = decoder_input_params['vocab_size'] num_layers = decoder_input_params['num_layers'] encoder = EncoderCNN( embed_size ) encoder.load_state_dict( torch.load( os.path.join( model_save_path , 'encoderdata_{}.pkl'.format(1) ) ) ) decoder = DecoderRNN( embed_size , hidden_size , vocab_size , num_layers ) decoder.load_state_dict( torch.load( os.path.join( model_save_path , 'decoderdata_{}.pkl'.format(1) ) ) )
###Output _____no_output_____
###Markdown Create a DataLoader for the test data:
###Code from torchvision import transforms
# Define a transform to pre-process the test images.
transform_test = transforms.Compose([ transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Obtain the data loader.
data_loader_test = get_loader(transform=transform_test, mode='test', cocoapi_loc = '/content/opt') data_iter = iter(data_loader_test) def get_sentences( original_img, all_predictions ): sentence = ' ' plt.imshow(original_img.squeeze()) return sentence.join([data_loader_test.dataset.vocab.idx2word[idx] for idx in all_predictions[1:-1] ] ) encoder.to(device) decoder.to(device) encoder.eval() decoder.eval() original_img , processed_img = next( data_iter ) features = encoder(processed_img.to(device) ).unsqueeze(1) final_output = decoder.predict( features , max_len=20) get_sentences(original_img, final_output) ###Output _____no_output_____ ###Markdown Features/weights of all images - transfer learning: strip off last layer of CNN - probably a fully connected layer with softmax activation, for classification - take the weights (4096 x 1) and feed into an RNN (specifically LSTM)- greedy search vs beam search for image caption- think of a tree structure - greedy search: given a word, choose the most likely next word; then, given the first two words, choose the most likely third word, etc.- greedy search may not result in globally optimal outcome- beam search: given a word, limit to top N most likely next words....- other extreme: form every possible caption and choose the best- model architecture of CNN: VGG (Visual Geometry Group) model, which is pretrained on the ImageNet dataset, has 16 layers- reshape each of 8,000 color images ![caption_tree.png](caption_tree.png) ###Code def extract_features(directory): """Modify VGG and pass all images through modified VGG; collect results in a dictionary""" # load the CNN model; need to import VGG model = VGG16() # pop off the last layer of this model model.layers.pop() print(model.summary()) # output is the new last layer of the model; is this step necessary? # need to import Model model = Model(inputs = model.inputs, outputs = model.layers[-1].output) # view architecture / parameters print(model.summary()) # pass all 8K images through the model and collect weights in a dictionary features = {} # need to import listdir for name in listdir(directory): filename = directory + '/' + name # load and reshape image # shouldn't target_size = (3,224,224)? image = load_img(filename, target_size = (224,224)) # convert the image pixels to a (3 dimensional?) numpy array, then to a 4 dimensional array image = img_to_array(image) image = image.reshape((1,image.shape[0],image.shape[1],image.shape[2])) # preprocess image in a black box before passing into model image = preprocess_input(image) feature = model.predict(image, verbose = 0) # image_id - all but .jpg - will be a key in features dictionary image_id = name.split('.')[0] features[image_id] = feature print('>%s' % name) return features # imports from os import listdir # will dump the features dictionary into a .pkl file from pickle import dump, load from keras.applications.vgg16 import VGG16, preprocess_input # from keras.applications.vgg19 import VGG19 from keras.preprocessing.image import load_img, img_to_array from keras.models import Model, load_model # used after copying model from EC2 instance # checking the functionality of listdir listdir('../Flicker8k_Dataset/')[:5] directory = 'Flicker8k_Dataset/' features = extract_features(directory) print('Extracted features for %d images' % len(features)) dump(features, open('features.pkl','wb')) # why 'wb' and not just 'w'? 
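# Note (added): pickle serializes to raw bytes, so the file must be opened in
# binary mode, hence 'wb' here (and 'rb' when reading it back), e.g.:
# features = load(open('features.pkl', 'rb'))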
###Output _____no_output_____ ###Markdown Images with multiple descriptions (human captions) ###Code def load_doc(filename): """Open and read text file containing human captions - load into memory""" # open the file in read mode file = open(filename, 'r') # read all the human captions doc = file.read() # close the context manager file.close() return doc filename_captions = '../Flickr8k_text/Flickr8k.token.txt' doc = load_doc(filename_captions) def load_descriptions(doc): """Dictionary of photo identifier (aka image_id) to list of 5 textual descriptions""" descriptions = {} # iterate through lines of doc for line in doc.split('\n'): tokens = line.split() # tokens is a list, split by whitespace if len(tokens) < 2: continue # move on to next line; continue vs pass? image_id, image_desc = tokens[0], tokens[1:] image_id = image_id.split('.')[0] # again, drop the .jpg # re-join the description after previously splitting image_desc = ' '.join(image_desc) if image_id not in descriptions.keys(): descriptions[image_id] = [] descriptions[image_id].append(image_desc) # .append for lists, .update for sets return descriptions descriptions = load_descriptions(doc) print(len(descriptions)) # this means there are 92 images not included in any of train, dev, and test sets ###Output 8092 ###Markdown Clean the descriptions and reduce the size of the vocab- convert all words to lowercase- remove all punctuation; what's the easiest way to do this?- remove words with fewer than 2 characters, e.g. "a"- remove words containing at least one number ###Code def clean_descriptions(descriptions): """Clean textual descriptions through a series of list comprehensions""" # make a translation table to filter out punctuation table = str.maketrans('', '', string.punctuation) # why can't it be ", " ??!! for key, desc_list in descriptions.items(): # for desc in desc_list: for i in range(len(desc_list)): desc = desc_list[i] # tokenize the description desc = desc.split() # convert to lowercase via list comprehension desc = [word.lower() for word in desc] # probably can remove punctuation before converting to lowercase desc = [word.translate(table) for word in desc] desc = [word for word in desc if len(word) > 1] desc = [word for word in desc if word.isalpha()] # overwrite desc_list[i] desc_list[i] = ' '.join(desc) import string clean_descriptions(descriptions) string.punctuation def to_vocabulary(descriptions): """Determine the size of the vocabulary: the number of unique words""" vocab = set() for key, desc_list in descriptions.items(): for desc in desc_list: vocab.update(desc.split()) return vocab # vocab = [] # for key, desc_list in descriptions.items(): # for desc in desc_list: # vocab.append(word for word in desc.split()) # return set(vocab) vocabulary = to_vocabulary(descriptions) print('Size of vocabulary: %d' % len(vocabulary)) def save_descriptions(descriptions, filename): """One line per description, not one line per image!""" lines = [] for key, desc_list in descriptions.items(): for desc in desc_list: lines.append(key + ' ' + desc) print(len(lines)) data = '\n'.join(lines) file = open(filename, 'w') # why not "wb"? "wb" only for .pkl file.write(data) file.close() save_descriptions(descriptions, 'descriptions.txt') ###Output 40460 ###Markdown Note that $40460 = 8092\times 5$. 
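###Markdown To make the cleaning rules concrete, here is a small added walk-through on a made-up caption (the sentence below is illustrative, not taken from the dataset); it applies the same lowercase, punctuation, length, and alphabetic filters as `clean_descriptions`. ###Code
import string

table = str.maketrans('', '', string.punctuation)
sample = 'A dog, 2 cats and I walked up the stairs .'
tokens = sample.split()
tokens = [w.lower() for w in tokens]           # convert to lowercase
tokens = [w.translate(table) for w in tokens]  # strip punctuation
tokens = [w for w in tokens if len(w) > 1]     # drop words with fewer than 2 characters
tokens = [w for w in tokens if w.isalpha()]    # drop tokens containing digits
print(' '.join(tokens))  # -> 'dog cats and walked up the stairs'
###Output _____no_output_____
###Markdown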
Just the training images and descriptions ###Code def load_set(filename): """Obtain list of image_id's for training images for filtering purposes""" doc = load_doc(filename) dataset = [] for line in doc.split('\n'): if len(line) < 1: continue # will there be any line with zero characters ?! identifier = line.split('.')[0] dataset.append(identifier) return set(dataset) # why are we allowed to de-duplicate only at the very end? def load_clean_descriptions(filename, dataset): """Load RELEVANT clean descriptions into memory, wrapped in startseq, endseq""" descriptions = {} doc = load_doc(filename) for line in doc.split('\n'): tokens = line.split() image_id, image_desc = tokens[0], tokens[1:] # done this before if image_id in dataset: if image_id not in descriptions.keys(): descriptions[image_id] = [] # wrap description in startseq, endseq image_desc = 'startseq ' + ' '.join(image_desc) + ' endseq' descriptions[image_id].append(image_desc) return descriptions def load_photo_features(filename, dataset): """Load FEATURES of relevant photos, as a dictionary""" all_features = load(open(filename, 'rb')) # filter based on image_id's with a dictionary comprehension features = {image_id: all_features[image_id] for image_id in dataset} return features filename_training = '../Flickr8k_text/Flickr_8k.trainImages.txt' train = load_set(filename_training) print('Number of training images: %d' % len(train)) train_descriptions = load_clean_descriptions('descriptions.txt', train) print(len(train_descriptions)) train_features = load_photo_features('features.pkl', train) print(len(train_features)) def to_lines(descriptions): """All descriptions, of training images, in a list - prior to encoding""" all_desc = [] for key, desc_list in descriptions.items(): for desc in desc_list: all_desc.append(desc) # keys not included in all_desc return all_desc def create_tokenizer(descriptions): """Fit Keras tokenizer on training descriptions""" all_desc = to_lines(descriptions) tokenizer = Tokenizer() tokenizer.fit_on_texts(all_desc) return tokenizer # return fitted tokenizer tokenizer = create_tokenizer(train_descriptions) training_vocab_size = len(tokenizer.word_index) + 1 # add 1 due to zero indexing # tokenizer.word_index is a dictionary with keys being the (unique) words in the training vocabulary # training vocab contains the words "startseq", "endseq" print('Size of vocabulary - training images: %d' % training_vocab_size) import numpy as np def max_length(descriptions): """Return maximum length across all training descriptions""" all_desc = to_lines(descriptions) return max(len(desc.split()) for desc in all_desc) max_length = max_length(train_descriptions) print('Length of longest caption among training images: %d' % max_length) def create_sequences(tokenizer,max_length,descriptions,photos): # more like create_arrays """Input - output pairs for each image""" X1, X2, y = [], [], [] for key, desc_list in descriptions.items(): for desc in desc_list: # encode each description; recall: each description begins with "startseq" and ends with "endseq" seq = tokenizer.texts_to_sequences([desc])[0] # already fitted tokenizer on training descriptions # convert seq into several X2, y pairs for i in range(1,len(seq)): in_seq, out_seq = seq[:i], seq[i] # add zeros to the front of in_seq so that len(in_seq) = max_length in_seq = pad_sequences([in_seq], maxlen = max_length)[0] # encode (one-hot-encode) out_seq out_seq = to_categorical([out_seq], num_classes = training_vocab_size)[0] X1.append(photos[key][0]) # why not just photos[key] 
??? X2.append(in_seq) y.append(out_seq) return np.array(X1), np.array(X2), np.array(y) # return numpy arrays for model training X1train, X2train, ytrain = create_sequences(tokenizer, max_length, train_descriptions, train_features) print(X1train.shape) print(X2train.shape) print(ytrain.shape) ###Output _____no_output_____ ###Markdown Model structure and training ###Code def define_model(max_length, training_vocab_size): """Model which feeds photo features into an LSTM layer/cell and generates captions one word at a time""" input_1 = Input(shape = (4096,)) f1 = Dropout(0.5)(input_1) # for regularization # fully connected layer with 256 nodes, 256 = 2 ** 8, 4096 = 2 ** 12 f2 = Dense(256, activation = 'relu')(f1) # input_shape = , "leaky relu" input_2 = Input(shape = (max_length,)) # recall that after padding, len(in_seq) = max_length # 5 human captions per image s1 = Embedding(input_dim = training_vocab_size, output_dim = 256, mask_zero = True)(input_2) # embed each word as a vector with 256 components s2 = Dropout(0.5)(s1) s3 = LSTM(256)(s2) decoder1 = add([f2,s3]) # f2 + s3 decoder2 = Dense(256, activation = 'relu')(decoder1) outputs = Dense(training_vocab_size, activation = 'softmax')(decoder2) model = Model(inputs = [input_1, input_2], outputs = outputs) model.compile(loss = 'categorical_crossentropy', optimizer = 'adam') # model.fit, model.predict # categorical_crossentropy vs BLEU score # can't directly optimize for BLEU score print(model.summary()) # plot_model(model, to_file = 'model.png', show_shapes = True) return model ###Output _____no_output_____ ###Markdown - 6,000 training images - 30,000 training captions- ~7,500 unique words in training captions - this is training_vocab_size- after tokenizing, think of tokenizer.word_index dictionary- values in this dictionary range from 1 to training_vocab_size- from the documentation: If mask_zero is set to True (ignore zeros added during padding), input_dim should equal size of vocabulary + 1. 
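###Markdown Before wiring up the model, here is a tiny added illustration of what `create_sequences` above produces for a single caption (the integer ids are made up): an encoded caption of length L yields L-1 (input prefix, next word) training pairs. ###Code
# Hypothetical encoded caption: [startseq, 'a', 'dog', 'runs', endseq]
seq = [1, 4, 27, 93, 2]
for i in range(1, len(seq)):
    in_seq, out_seq = seq[:i], seq[i]
    print('input:', in_seq, '-> predict:', out_seq)
###Output _____no_output_____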
###Code # imports from keras.utils.vis_utils import plot_model from keras.layers import Dense, Embedding, Input, LSTM, Dropout from keras.layers.merge import add from keras.callbacks import ModelCheckpoint model = define_model(max_length, training_vocab_size) ###Output __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) (None, 34) 0 __________________________________________________________________________________________________ input_1 (InputLayer) (None, 4096) 0 __________________________________________________________________________________________________ embedding_1 (Embedding) (None, 34, 256) 1940224 input_2[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 4096) 0 input_1[0][0] __________________________________________________________________________________________________ dropout_2 (Dropout) (None, 34, 256) 0 embedding_1[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 256) 1048832 dropout_1[0][0] __________________________________________________________________________________________________ lstm_1 (LSTM) (None, 256) 525312 dropout_2[0][0] __________________________________________________________________________________________________ add_1 (Add) (None, 256) 0 dense_1[0][0] lstm_1[0][0] __________________________________________________________________________________________________ dense_2 (Dense) (None, 256) 65792 add_1[0][0] __________________________________________________________________________________________________ dense_3 (Dense) (None, 7579) 1947803 dense_2[0][0] ================================================================================================== Total params: 5,527,963 Trainable params: 5,527,963 Non-trainable params: 0 __________________________________________________________________________________________________ None ###Markdown - for embedding layer, $1940224 = 256\times 7579$- $1048832 = (256\times 4096) + 256$- for LSTM layer/cell, $525312 = 4(256^2 + (256\times 256) + 256)$- $65792 = (256\times 256) + 256$- $1947803 = (256\times 7579) + 7579$ ![Plot-of-the-Caption-Generation-Deep-Learning-Model.png](Plot-of-the-Caption-Generation-Deep-Learning-Model.png) ###Code # check validation loss after each epoch and save models which improve val_loss filepath = 'model-ep{epoch:02d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5' # .hdf5 checkpoint = ModelCheckpoint(filepath, monitor = 'val_loss', verbose = 1, save_best_only = True, mode = 'min') # dev images, i.e. 
validation images filename_dev = '../Flickr8k_text/Flickr_8k.devImages.txt' dev = load_set(filename_dev) print('Number of images in dev dataset: %d' % len(dev)) # include only descriptions pertaining to dev images dev_descriptions = load_clean_descriptions('descriptions.txt', dev) print(len(dev_descriptions)) # include only features pertaining to dev images dev_features = load_photo_features('features.pkl', dev) print(len(dev_features)) # same max_length = 34, same tokenizer trained on training captions X1dev, X2dev, ydev = create_sequences(tokenizer, max_length, dev_descriptions, dev_features) print(X1dev.shape) print(X2dev.shape) print(ydev.shape) # finally, let's fit the captioning model which was defined by define_model # why 20 epochs? verbose = 2 more or less verbose than verbose = 1? model.fit([X1train,X2train], ytrain, epochs=20, verbose=2, callbacks=[checkpoint], validation_data=([X1dev,X2dev], ydev)) ###Output _____no_output_____ ###Markdown Model evaluation by BLEU scores So far, we have used the training images to fit the captioning model, and the development images to determine val_loss. Now we will use the *test* images for the first time, to evaluate the trained model. ###Code def word_from_id(integer, tokenizer): """Convert integer (value) to corresponding vocabulary word (key) using tokenizer.word_index dictionary""" for word, index in tokenizer.word_index.items(): if index == integer: return word return None def generate_caption(model, photo, tokenizer, max_length): """Given a photo feature vector, generate a caption, word by word, using the model just trained""" # caption begins with "startseq" in_text = 'startseq' # iterate over maximum potential length of caption for i in range(max_length): # encode in_text using tokenizer.word_index sequence = tokenizer.texts_to_sequences([in_text])[0] # pad this sequence so that its length is max_length = 34 sequence = pad_sequences([sequence], maxlen = max_length) # predict next word in the sequence; y_vec is vector of probabilities with 7579 components y_vec = model.predict([photo,sequence], verbose = 0) # pick out the position of the word with greatest probability y_int = np.argmax(y_vec) # convert this position into English word by means of the function we just wrote word = word_from_id(y_int, tokenizer) if word is None: break # recursion: append word as input for generating the next word in_text += ' ' + word if word == 'endseq': break return in_text def evaluate_model(model, photos, descriptions, tokenizer, max_length): """Compare the generated caption with the 5 human descriptions across the whole test set""" actual, generated = [], [] for key, desc_list in descriptions.items(): yhat = generate_caption(model, photos[key], tokenizer, max_length) # each desc begins with "startseq" and ends with "endseq" # split_desc is a list of 5 sublists split_desc = [desc.split() for desc in desc_list] # actual is a list of lists of lists actual.append(split_desc) # generated is a list of lists generated.append(yhat.split()) print(len(actual)) print(len(generated)) # compute BLEU scores print('BLEU-1: %f' % corpus_bleu(actual, generated, weights = (1.0,0,0,0))) print('BLEU-2: %f' % corpus_bleu(actual, generated, weights = (0.5,0.5,0,0))) print('BLEU-3: %f' % corpus_bleu(actual, generated, weights = (0.33,0.33,0.33,0))) print('BLEU-4: %f' % corpus_bleu(actual, generated, weights = (0.25,0.25,0.25,0.25))) %%bash pip install nltk from nltk.translate.bleu_score import corpus_bleu # test images, previously unused # shouldn't there be 1,092 test 
images? filename_test = '../Flickr8k_text/Flickr_8k.testImages.txt' test = load_set(filename_test) print('Number of images in test dataset: %d' % len(test)) # include only descriptions pertaining to test images test_descriptions = load_clean_descriptions('descriptions.txt', test) print(len(test_descriptions)) # include only features pertaining to test images test_features = load_photo_features('features.pkl', test) print(len(test_features)) # load the model which was trained on an AWS EC2 instance filename_model = '../model-ep3-loss3.664-val_loss3.839.h5' model = load_model(filename_model) evaluate_model(model, test_features, test_descriptions, tokenizer, max_length) ###Output 1000 1000 BLEU-1: 0.340888 BLEU-2: 0.175420 BLEU-3: 0.099687 BLEU-4: 0.051633 ###Markdown BLEU scores range from 0 (worst) to 1 (best). **SHOULD GO BACK AND RETRAIN THE MODEL FOR MORE THAN 3 EPOCHS!!** Generate captions for entirely new images ###Code dump(tokenizer, open('tokenizer.pkl', 'wb'), protocol=3) type(tokenizer) def extract_features_2(filename): """Extract features for just one photo, unlike extract_features""" # instantiate Visual Geometry Group's CNN model model = VGG16() model.layers.pop() model = Model(inputs = model.inputs, outputs = model.layers[-1].output) # not strictly necessary # reshape image before passing through pretrained VGG model image = load_img(filename, target_size=(224,224)) image = img_to_array(image) print(image.shape) image = image.reshape((1,image.shape[0],image.shape[1],image.shape[2])) print(image.shape) image = preprocess_input(image) features_2 = model.predict(image, verbose = 0) # the prediction is a vector with 4096 components return features_2 photo = extract_features_2('example.jpg') caption = generate_caption(model, photo, tokenizer, max_length) caption = caption.split() caption = ' '.join(caption[1:-1]) print(caption) ###Output black dog is running through the water ###Markdown ![example.jpg](example.jpg) ###Code import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format='retina' dog = plt.imread('example.jpg') plt.imshow(dog); ###Output _____no_output_____ ###Markdown Coding practice - data structures*Cracking the Coding Interview* Class of nodes for binary trees, and functions for traversal ###Code class Node: def __init__(self, value): self.val = value self.left = None self.right = None def trav(self): if self.left: self.left.trav() print(self.val) if self.right: self.right.trav() def preorder(self): print(self.val) if self.left: self.left.preorder() if self.right: self.right.preorder() def postorder(self): if self.left: self.left.postorder() if self.right: self.right.postorder() print(self.val) node_8 = Node(8) node_3 = Node(3) node_10 = Node(10) node_8.left = node_3 node_8.right = node_10 node_1 = Node(1) node_6 = Node(6) node_3.left = node_1 node_3.right = node_6 node_4 = Node(4) node_7 = Node(7) node_6.left = node_4 node_6.right = node_7 node_14 = Node(14) node_13 = Node(13) node_10.right = node_14 node_14.left = node_13 ###Output _____no_output_____ ###Markdown Function to create minimal / balanced BST from sorted array ###Code def min_bst_helper(start,end,arr): if start > end: return mid = (start + end) // 2 n = Node(arr[mid]) # print(n.val) n.left = min_bst_helper(start,mid - 1,arr) n.right = min_bst_helper(mid + 1,end,arr) return n def min_bst(sort_arr): return min_bst_helper(0,len(sort_arr) - 1,sort_arr) sort_arr = [1,3,4,6,7,8,10,13,14] min_bst(sort_arr).val min_bst(sort_arr).left.val min_bst(sort_arr).right.val test_list = [] 
test_set = set() test_list.append(4) test_list.append(3) # test_list.insert(0,3) test_list test_list.append(3) test_set.update([3]) test_list test_set test_list.append(3) test_set.update([3]) test_list test_set test_list.append(4) test_list test_list.pop() # the last thing that was appended gets popped off, like a stack test_list ###Output _____no_output_____ ###Markdown Heaps - specifically, min heaps ###Code from heapq import heappush, heappop test_heap = [] heappush(test_heap, 3) heappush(test_heap, 4) heappush(test_heap, 2) heappush(test_heap, 5) heappush(test_heap, 1) heappush(test_heap, 7) heappush(test_heap, 8) heappush(test_heap, 6) test_heap print(test_heap[0]) print(min(test_heap)) ###Output 1 1 ###Markdown Class of stacks, which are basically just Python lists - LIFO! ###Code class Stack: def __init__(self): self.stack = [] def stackpop(self): if len(self.stack) == 0: return "Can't pop since it's empty!" else: return self.stack.pop() def stackpush(self,val): return self.stack.append(val) def stackpeak(self): if len(self.stack) == 0: return "Can't peek since it's empty" else: return self.stack[-1] test_stack = Stack() test_stack.stack test_stack.stackpop() test_stack.stackpush(3) test_stack.stack ###Output _____no_output_____ ###Markdown Towers of Hanoi, a meta-class problem (OOP) ###Code class Tower: def __init__(self, i): self.disks = Stack() self.index = i # def index(self): # return self.index def add(self, d): # d is the value of the disk we are trying to place if len(self.disks.stack) != 0 and self.disks.stackpeak() <= d: print("Error placing disk " + str(d)) else: self.disks.stackpush(d) def move_top_to(self, t): # t is the index of another tower top = self.disks.stackpop() t.add(top) def move_disks(self, n, destination, buffer): # destination, buffer are indices for the other two towers if n > 0: self.move_disks(n-1, buffer, destination) self.move_top_to(destination) buffer.move_disks(n-1, destination, self) def hanoi(n): # n is the number of disks towers = [] for i in range(3): # towers[i] = Tower(i) towers.append(Tower(i)) for j in range(n, 0, -1): towers[0].add(j) # populating Tower(0) with the n disks towers[0].move_disks(n, towers[2], towers[1]) return towers towers = hanoi(5) towers[0].disks.stack towers[2].disks.stack towers[1].disks.stack ###Output _____no_output_____ ###Markdown Making change - RECURSION ###Code def count_ways(amount): denoms = [100,50,25,10,5,1] return count_ways_helper(amount, denoms, 0) def count_ways_helper(amount, denoms, index): if index >= len(denoms) - 1 or amount == 0: return 1 # not ways += 1? don't increment index by 1. denom_amount = denoms[index] ways = 0 # clearing and resetting ways?! for i in range(amount): if i * denom_amount > amount: break amount_remaining = amount - (i * denom_amount) ways += count_ways_helper(amount_remaining, denoms, index + 1) return ways count_ways(100) def num_ways(amount): ways = 0 for i in range(amount + 1): for j in range((amount // 5) + 1): for k in range((amount // 10) + 1): for l in range((amount // 25) + 1): if i + 5*j + 10*k + 25*l == amount: ways += 1 return ways num_ways(500) ###Output _____no_output_____ ###Markdown Class of queues - FIFO! 
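One added note before the list-backed implementation below: `insert(0, val)` on a Python list is O(n), so production code usually builds a queue on `collections.deque`, which enqueues and dequeues in O(1). A minimal sketch (not used elsewhere in this notebook): ###Code
from collections import deque

q = deque()
q.append(2)         # enqueue
q.append(3)
print(q.popleft())  # dequeue -> 2, FIFO order
###Output _____no_output_____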
###Code class Queue: def __init__(self): self.queue = [] def queuepop(self): # dequeue if len(self.queue) == 0: return "Can't pop since it's empty" else: self.queue.pop() def queuepush(self,val): # enqueue # return self.queue.append(val) return self.queue.insert(0,val) test_queue = Queue() test_queue.queue test_queue.queuepop() test_queue.queuepush(2) test_queue.queuepush(3) test_queue.queue test_queue.queuepop() test_queue.queue ###Output _____no_output_____ ###Markdown Class of nodes for singly linked lists ###Code class Node_LL: def __init__(self,value): self.val = value self.next = None def traverse(self): node = self while node != None: print(node.val) node = node.next def trav_recursive(self): print(self.val) if self.next: self.next.trav_recursive() node1 = Node_LL(12) # the head node node2 = Node_LL(99) node3 = Node_LL(37) node1.next = node2 node2.next = node3 node1.traverse() node1.trav_recursive() ###Output 12 99 37 ###Markdown Class of nodes for doubly linked lists ###Code class Node_DLL: def __init__(self,value): self.val = value self.next = None self.prev = None def traverse_forward(self): node = self while node != None: print(node.val) node = node.next def traverse_backward(self): node = self while node != None: print(node.val) node = node.prev def delete(self): self.prev.next = self.next self.next.prev = self.prev node1 = Node_DLL(12) node2 = Node_DLL(99) node3 = Node_DLL(37) node1.next = node2 node2.next = node3 node3.prev = node2 node2.prev = node1 node1.traverse_forward() node3.traverse_backward() node2.delete() node1.next.val node3.prev.val ###Output _____no_output_____ ###Markdown Breadth first search / traversal for binary trees (w/o queues) ###Code def bfs(node): result = [] current_level = [node] while current_level != []: next_level = [] for node in current_level: result.append(node.val) if node.left: next_level.append(node.left) if node.right: next_level.append(node.right) current_level = next_level return result bfs(node_8) node_8.trav() node_8.preorder() node_8.postorder() ###Output 1 4 7 6 3 13 14 10 8 ###Markdown Miscellaneous functions ###Code def word_count_helper(string): output_dict = {} for word in string.split(' '): if word in output_dict.keys(): output_dict[word] += 1 # value is count, not a list else: output_dict[word] = 1 return output_dict word_count_helper('hello hello world') def max_profit(prices): if not prices: print('There are no prices!') else: max_profit = 0 max_price = prices[-1] # min_price = prices[0] for price in prices[::-1]: if max_price - price > max_profit: max_profit = max_price - price if price > max_price: max_price = price # if max_profit < price - min_price: # max_profit = price - min_price # if price < min_price: # min_price = price return max_profit prices = [3,-1,4,9.5,0] max_profit(prices) max_profit([]) def magic_slow(sort_arr): magic_indices = [] for i in range(len(sort_arr)): if sort_arr[i] == i: magic_indices.append(i) if len(magic_indices) == 0: return "There are no magic indices" else: return magic_indices magic_slow([0,1,2,3]) magic_slow([1,2,3,4]) magic_slow([-40,-20,-1,1,2,3,5,7,9,12,13]) def magic_fast_helper(arr, start, end): if start > end: return mid = (start + end) // 2 if arr[mid] == mid: magic_indices.append(mid) elif arr[mid] > mid: return magic_fast_helper(arr, start, mid - 1) else: return magic_fast_helper(arr, mid + 1, end) def magic_fast(sort_arr): return magic_fast_helper(sort_arr, 0, len(sort_arr) - 1) magic_indices = [] magic_fast([-40,-20,-1,1,2,3,5,7,9,12,13]) print(magic_indices) def power(n): if n == 
0: return [[]] if n == 1: return [[], [1]] temp_list = [] for subset in power(n-1): temp_list.append(subset + [n]) return power(n-1) + temp_list power(5) def kaprekar(number): if len(str(number)) != 4 or len(set(str(number))) == 1: return "Invalid input" else: ascending = int(''.join(sorted(str(number)))) descending = int(''.join(sorted(str(number), reverse = True))) output = descending - ascending count = 1 while output != 6174: ascending = int(''.join(sorted(str(output)))) descending = int(''.join(sorted(str(output), reverse = True))) output = descending - ascending count += 1 return count kaprekar(5790)
###Output _____no_output_____
###Markdown Fibonacci: iteration & recursion
###Code def fibonacci(n): """Return the nth Fibonacci number iteratively (bottom-up)""" if n < 0: print('Value Error: input must be nonnegative integer!') else: if n == 0: return 0 if n == 1: return 1 a = 0 b = 1 for i in range(2,n): c = a + b a = b b = c return a + b fibonacci(-1) fibonacci(35) def fib(n): """Return the nth Fibonacci number using recursion""" if n < 0: print('Value Error: input must be nonnegative integer!') else: if n == 0: return 0 elif n == 1: return 1 else: return fib(n-1) + fib(n-2) fib(35) fib(-1)
###Output Value Error: input must be nonnegative integer!
###Markdown ![Model-of-Image-Caption-Generator-python-project.png](Model-of-Image-Caption-Generator-python-project.png)
P0pFvOE2maeqFp/+jh+77i7o69gYiL/zzrZEX/vleSWdLmhs9Pt2Stkp6TtINoQduUviRW65xHOcUhQP7Jb2SmG6nt/Ph0L0/PlVSk1FSeaQTCXsV6rPlDiVnSPr34ML2k6FHbjlFj9zyfnm8DUZxeZ3T2+nIsfdJel3So5LuCNz2xRuN8tofOT37Zkf3rZfqAwAAAAAAAABAfuQigPP+lJ84joyqmXI6tinwu/9+RtJP773v/s8uWbLkoq1bt6mmpkbhcET/8z//0y3pBrnBEFkHniinx5918EaSysrKVFxcrEgkEnurSHHDpzmOo5KSYj3z9NO9+9rbjYMOWjK7vz+QPHOKvNrb3i5J/ui3FdnUGvt4k6SmZN8zLY/KyiplFRWppKRM1TUzNGvOQi088FD56mbJ6y1Rh3+P5Dg69sQ3qqRytu645RrZtlRaXnWzGeq7ORIKqqSsTF3d+0ttOUN6EDmh/qui+xSQFEyRFW2SVjr9+yXJkhQZIdv2Svp/CgflhIMl0XIRTFj3bqdn3yVyA3Z+qg4w8SzLIhPGl5Phcgb7AwCYInySVkhaJqk+eh/QIqk5+i8wrcS1MwAAAIyLXARwPpX6I0dmzTz1rvlQt6Tf3v6HP7zj0EMPvWjHjp0yTVMVFeX64Q9/GN63b98/JN0uSUbtQhkzDpTT2zGqxHi9Xnk8Htl2rFOM5kZvPIZovqH5ZU+R13fg4sUVO3ZsT7ouwzC0deuWiKQ9mWzbtIp08lkX6MCDjlRJsbfUMC3H6y2Wt7hEhmGov69XfT371d3VIcMwZEjGvr3bjZLiot6SijrNP/a9WnLAjEVHzg+fXFVVddQrbVv/8LtfXf+0HepLtrn9iW9U1cxQV8feIe/VzjtahxxzeuTwJXNONYtK+h964P4nXmy9K7aDkjOs/a4/zW4SvAEAAIAkNUhqjF5rN0Tfi/+/X1Ls6ae2uFdr9F8U/vFdn3Av5ZMbzFkmaaWk1WTTlNAYPd7x9Tf2vhLqsKJ12B9Xl6nPAAAAeTLWAM6Bkg5O9aFROVOhtv/Ifu6uuy654LzKi975zg+3t++TJJWXlysUCuu6667rlPQbSSFJsuYdLqNqlpw9o7sGrKioUGVlhfbubY+9tURuDxTZtq0ZM2Zof9d+3X77n7accebZhwyOfjaUaZnq6+vTnj172iXtSrfd8vJKXfahT2vB4sPVv3/X58uKgl8JR5yw4/QbcnrlOFJluaRyYyDbTdMwKqrKjVtv+fMlgR5/S21Zv46qr7x+UXXn230zqnRfy4bzwqG+tynaM6moqEiz5x+krg43nmQYkqeoRIuXHKZDjzheZrFP969fp+IiU2WVtZox71BVzzpES+Z6bppX2fmhqtq56u085Csvtt71PUmS4+jIY09RkbdYWza9ovY922UYhiqqfLIsjwzDUGl5jfbu2aFgXxe1BRgDy7Iao40g2VoyxpviBkkbsvxOq6QTOGqYBuiFBGSvSW6jbixwMxKfhjYAx2uTtE5uTw4afwvT2jTHeFX0moGeOJNPvdwgXOMIdTTZd+qj/29MUp9boq91ZC8AAEDujDWAc0bqjxwZlTMV+O3X95VIL/70V7e+z7ZtBQIBmaap6uoqPfTQw2pra3ta0r2xb1n1Jw98fzTtJTU1Ne63B3uWnDawbsuS11ukT3zi45skdZ133rkHdPiTdygpKS7WPn+Htm3duk3SlpG2WVZWoY984guaOWu2dmx9RRVl5kyPVzW2E1FpaalKSyvcfUnYHY9lqqy8Sju3b/pBJBw4vaf9lZ7S4iOd/v6g9nf3qbe3/yRJhyoawDnsDWfpzW//sPbubFPEdlTsMRR2iuSrqZLsoLr6DRWXlGnxkiN16DFnqsO/S/v8e1VeVP3OutpqVdVUyAzu+a6kf0l6SJJOPfttmjt/kXbt2Ka217eprLRIvhqfZBjyeAyV+ep1+9qfa+vz3JcBE6RRbuPWaC0bxXcayHYAQByf3MDNCqUP2mSqPm59y8nighMbMi2dJhHAmWzXlSuUedAmm/rcFH3RMwsAACCHxhrAOSrlJ6ZHCuyXXrrz6WXnnXFQbW1N/c6du2Sa5kBw5b777pPcJ9J7B7429zA5gW6N9mHXpUvfLGnI2LTvi/09f/48/fOf/4zcdNNND5562pkHz541s2Tr1q0yjKHbchxHVZWVevGlVxUMBp6TtCPV9qqqavTBpv9Wbd0M7dyxVTIs2bZ6whGpqrpWjz/x5N5HH320wzCM2sRJfQxDsixPmd/fcYCkWtvx9IQiTsCJTmthWVaPYkOaGabqjzhd/n27ZDuWwrYjjyP19YcU2r1LJV5TZmmdtm16QZ3+PZq/pEGRcER93X5dd/1vaw4+oPafoZB97ONPPF4i6URJDx1yzBkqrZyp1ze9IhlFmjFnsYosW5FIUJGIo4hjS709PHcM5MZoG7wmIoADAEDMCuU2cJOolSwuSPUZLsd1xuSQr8BNMgT0AAAAcmisAZyUF/ZGcbn6t74kKfLa5U2fPlxyAyOGYcjr9cpxHN1zz9/2yO0N4iqukCpqpWDfyFt1HMm0ZJRWSwnhhQMPPDD+zyWSDo0Fb7Zv3663ve2Cf8mw9r/73Ze+Yd++diWLThiGIRmWHn/8ibCk+5ViiJWamjp9cPnn5PPVafeu7TJNS5G4EdnKy8u1Y8eu4pdfevFn0ZvT4iSr8cod4s1vmEZiaga2e/Ab3qyq2vklob5tvuIi89ASjzPfMlXsKVXANLQ14ujZ/oDd7shQsdcjr9ejUE9f3bGHzDqqb+8B25956vHfybGrS4qL/9Lf33eDb+Z8NZx+gcIB/0nlxUZZKBLub/d3PVJRolm+cp0eceQrspzekoqS5yPh8NNUFWDcGkKS3XCPVsMYtgsAmN4aJK1R/ntl0tgL5I9P7lB3TeO0vfh5rwAAAJADYw3gLEz5SWm1Ik/+tmvxbF/xOedecFh3d89AT5eysjLt2rVbzz///GuSXo59xaycIRVXyrFDqbcYCcuoqJVZM0/hvb+XpJL4jxcvXjzQw8dxnA9I0vz587Rr1241NBz/dE9P778+89mrlnksFff09Mo0zWGbqKqs0ObNW/TkE60vyw3gDFNU5NUll39MtXUztWv7VpmWlTy5kVClpL9WzVj84oIDj1SgL5oPpqU9W19UZ/tWSVJxSbkOPKRBfT1dqqkavp759cfKDvcefsRC7+O1NZWyLEuOY8i2I+rs6lJ/INTd7Zi/jDjWp/fs2KQu/175yoouOvSAshsPrb9UXW8/X/PmzQ//5ubf3vH3v93ZU+2braqqGtPn2fGvKp9Z0ht0VOIJ/2nerMq3zKgt9zq2VFlRrj/d1fLE7rZHGyXto7oAE3bjvUyjG0+8kewDAIzCMrnBG1+et8Pk5/m7dmiIvmJznBijODaZIABXuMYrCDutyoOV4r4fnE+ng7iRbgAA42isAZwDUn1gWEXS9jb/EQcuLC0uK6vp2rN34LOSkhI9//zzam9vf11S3AdVMkoq5fR2DF9hNChjzjtMTs8+BW7+dHvo4d8UKTo/jCRVVl
bqsMMO1f793YpEIhXV1dVXVVSU68EHHwxceOHbX923z//rd1z87jOPOfqwo7Zs2Zo0eGNHIvLV1unv6/8hyfm3pJeS/or76jRz9ly1792VMnjT1dWlRQsX6tTTz/7q8aee/1RN3RwjGOiTxzJMq2zW7r/cfutNTz3otskefWKjfDPnKdC/Taoanq5IX7sqfSc91x14dcuO555fuLu9S+FwSOXlZTri8MNVWeGpmFXqXHHOGScV//1vf2l6/skHdOGyD2x69vl/dpaXl1b7fLXq7+v29PV2f0HS77s69jzX19st02d12Y5dIieiYw6ZddH2ne36x31PB8vLy7zVNbW6fd2vF0XC9n+JySiBib4BH00dXDbGbfIEJQBMP01yG33HA43/uVEvt1Ex9m8uGuzXyQ3ipOvJ20z2F+y143rlPwibiGtHcD4FACDHxhrAqRrx03AgUF1d7UiSbQ+OLWYYkt+/T5K2xy9ulPskj3cgWDPAjsgorZbhm6/Iyw90B+/4Zre948XnJf1c0m2xxb797e+orKxMW7du04IF86+XVPbDH/6w88orr3xO0g1vffs7T7rgrW++cNvWLUmT6ziOqqqrtGXLNue+f923Q9IfUu3ajJlz5Ckqkh0/ZlqC9vZ2HXfcG3Teeede3t3lvzwYbJdZZam6qlxbt7+i5x79y9OSNkhS7cx5CgcDMrxm0nU9dt9abXyhNbj99RePl923StJmub1ijHnzF590xRWfeE9RUdg6+8xTPvavf67/3fOPt/wj0Ndx70tPPfC2t7zl/Ja3v/0ibzgckaQiSSWW5ZFhGHIcBW3HUU11jV7dtKO3ufmGHR3+vU9H13+IpL7oxT+AsRnLDfQyuRPCZnvz0TBB6QUATE7jGbyRaOzNhfXKX4/bSzRyEGCleMirEE1U8EYiKAvOpwAA5NxYAzhBSWXJP3Ikw5RhGkHJDdrEx2U6Ozul+N43kozSSsmyNDD1i2M7kiFz9kFSKKDgHd9oD/3rxv2S/iLpGkmvx7770Y99Qp/97GcUCga0YMH8N27evGXxxRe/4y8bNrTul3TrBz+8/KIzTjvxI9u3bVUkYg8M55aY5lmzZuvHP1kT6e/rbZF0d6odnzlnrjzWyNnnzoGzo//5F14qNuKmt6kor9C2bdsUDvV/W9L5ktTXs18yzNTrqvSpdk69wuHA3pm+srsPO+zws301lYslo3LHjm3ejg5/2DKqrZKSUs2eM++Kza+3/eOlpx6QpAcW1R95d6C/50JvcZkkBSSFhxQCj0em5dFvfvObjg7/3jslfV9ucM2SO0dPH1UFyMnN9GjVR1/ZDDPDpMIAgGx/p0YbvGnVYI+NNg0GZnxxv3+xedmWabBhmcbewtYqd07RFdHjVi93jpMWuT1vOH6Fp16jD97Ejm1r9JU4xGHsetSnwQeFGuO2xfw3AAAAeTDWAM4uSTXJPnAcSSUVZfu7d3fIjeYYA4EZSdFhQ7uGfCkSHozy2GGppLLGnLFYkWf/3hm8a3XY3v7CM5JukHRL7CunnnaW5hxwpOYtOUF//OsDOv6oA/Xc0xueu+id73pXOBw558ijj6u/4IK3XnXgAfPP2b5tqyK2kzR4E4nYmj9/rv7zWGvo0Uce3iLpppF2fM7cBQoE+kfMnLq6Oj3070e77/7rX9ZK6pRUHLfzsyX9J7ZsOBxMOqSbJJmWR0ef8V4dfdQRF8wt3/e9mgrP0bNm1ikUCikYCsvj8Wjz5tfV09ur6poSmZZ5kKTyqpqZPae++f2qm7Wgu6d7n6IBnGGKPB75OzrV1dW5T9J3NBhYi4jgDVAolklaneXyAABkwidp7Si+1xz9bUr1gEGsQVhx/y7X4LA0zH9T+Pxye9qsJCsmhbXKPnjTFq3HzRks15bimrNR9N4GAADIi7EGcF6XdGjST4Ld0gHHzNm49Zkdob7u14uLixf39Q3GAioqKqTEyd+KSmVU1En93ZKnWIZp7Q7d+b1wsOVHfkl3Sfo/Sa9KUklpuS5/X5MaTjhOr23t0PMvvaL7/v205s2q1Z1/uHF3ZVXNsne8Y9nlJ5xw3IWhYK+2b98hwzCTBm9s21ZNTZX6A2H95te/7pacmyX9K9VOl5aWa978A9Tb2502gwxphqTbJf0p9UKGamrnKBjoS9qfqW7uoVq4cP5Rs6yX/1IiS/v3F+uBBx7sfHXjq2ZPT2/YjoT6zz7rrMojjjqyIhSKyLZtS1J5ZVVdz6L6I9W7f3NxRU3qefZiMTOPp2irEnpFARg3rRq5l042PXjSDZ+2TgR4AACDVij9XCfx2uQOrzXap+1bRO8NIB/1ONse3yuV3QNCqa4rGUoPAAAgT8YawNmc6gOnt1PGEad6Nj5x09bXnn3irvrjTvtkb2/vQAClpqZakiqHfGffFtk7X5H6OmXUzFPw79d9LnRfsyHpUUk/jC1XN2O2PvjRK+Xz1Zivb9pY3tntVNRUlMyqqfAe7bWCSz99xSfeOLu2bKHHMrVn9w7ZtpOyd4tt2yovL1NNjU/fv/qajs5O/38kXTvSTh9+1Bvkq5uhPbu2KzEGNWR/nIHtGiOt7+AjTtbC+qO1besWqcyMT5sjya7yzdFBcz3fLLGl0spZ+sdf79p7551/fip64/uApI3nvOncfxdZZkUoFJEcJ+x+Paze7k5VlBlpD6RhGDIMw5RkSrKpGkDOpXsq0a+RJwuODTnjz2Bby9Jsp1XpAzgNonENAKaDerkNv5lqlbQ0w98jAIVZj/3ResyQZwAAAAVurAGcF1N+Eg6qdOFB6t0bPuy2X9/4p6+deOYnHccdvqyvr1+LF9ertrZ21r59+wa+Etm0QX2rzpEsjwzDlNO/f7ekKyXtji1TUlKij3/yCvlqa7R7146DSr3mhjk+o9Q05PGYpizLlKOwurq6onPdKMV8N9HgTVmZ6upm6KZf/KrzpReff13SF9PdkB55TIOCwYA7IpyRUT4Vjbi+489WoG+/TEMyDHkdSR6PJduOWJJ6Pd4ilZWVz1SPKRmOenu7qyR9zzfnkH8sPGCJDjn4oMOXHLhwfnv7PlVW1chjWYakSCgUUGRgaLZIJuk0Mt4jANlK90RkiwbnB0ilUZk94diQZjsAAMQQvAGmRj3OdAgzgjcAAACTyFgDOCNc9DkyHUdqOP/Dv77lpm9/7fpfPlZXV3fi3r171dPTo9mzZ+r44xsOa2lZP/Rr4YAUDsTPlrM7/uM3vqlRixYt1KbXNqrIYxWZhipj8ZmII4VDUqwDiTFCKMK2bVVWVKjG59Nvb/1dx0MP3r9d0pckPT3SDs+aPVcHLF6i7q4OGebwDRjudk3LMuWxLJnmyPGQo05cqnmLj9a+PVtlestVXGwYRZ6A+vt7dfTRR5c9/dRTs/fs2PLCi207ftlwkO/M/t4enXnGGd458xZ9bM6iIxvramsWzaw2Lw8He9Tf3yevt0hz582d2db2qhkI9MmORGSZlmVZjjyWJSMhmmWahseyLFkWnW6AApCuZ0wmARxfmnVMxM16bK6Dxri/47VosGdQbBLsfIpNoh0LmDUkp
KUtLi3j3Ug5Utpix69tHPMK43Osk00C3ZLFMY6tq1GDk0zHl+lYecll/Y9Pd4OGTlafqsy25Pkc5JPUlKL+tCbkq6jbA+lqynBZv9z5ayYyeFOI5W40xzyWvpYJ+q0pRIV2rTBSOhPTOtHH1JdFPVa0Hhd68GYyXA9NdFko5DyK3Y8siztXS5kP2TeZzvXT6hxoWVbKYxuJRFYLAJAXYw3gPCI3WpJkfDJDkfatKn3nlXWvfvkXZ170xtPO/dM//72vsrJS+/fvlySdd955x7W0rC+X1JPJxmbPnqv/ajxXO7Zvl2FYchxFHA3O4ZIp245o1qxZMq0iNd9w077H/vPvnZK+Kunukb5nGIYueMd7VTdznhzHTBrAidhSebEqyrxBlVVUqshbMmJa6uYeoh07tqunq0OBsLRVxq8Xz555YWlJmZY2Nmrb9l2f+9tdf77v5z/8yk1tbzznv95y3tLL5i9YqCOOPPLd/T2dCgQDeuKJF7Sx7fVQ43+9sajWV6MLL7p4ztNPP3NSb3fn3TIt1c6aXVNm7lNZRZWKirxeSbI8Rar2zVZFVYevrNxUxAjJNE2PJIdqAUyYdDcdjRmsI93QaC0ZricXN5VN0Zcvw/1aJmlVNB+alX4y3dGkaUWaRo74vFmlwUl9890Qs0yZjV3fEH3F8mpdNI08RTuxN9zr0yxjJDRqrBqhHPriGoVWRI/v6jGW6di6WuQ23LWNYV9jN+2ZPukdX2alzCfLznXdjqWjKZqG5RrskbgiekxGslSj68FY6HU7m/nQJupcM5HlLpP6ndggmU15XDaK35rRXqs7GZ6jRrMtYwx1dyKuFUZzXBuj223I4piOV2NmNvW4WYU9X814nzMnY1kYzzzKJH8uSShTse3Vj+JYjPe5fjzPp4V2vzSexxYAMEZjDeD0S1or6d1JPw31ySqtknnRt352x5++tOgbX/nS177x3f/9diQSUSRi641vfOOM6A/u/Zls7MKL3qmioiL19/ennNNmJLZtq7jYqzlzFuq1Ta8HfvvbW3pea3v1JUnflHRPuu9blkc7d+/V7vX3KBQKJl2myFui7q49P3791aeP9XiKGve1t0vSjFTrfPhvN8m2I5IMGYYUcfTHDYsXr1owf97Kjg6/XmtrO0vSm2QH/vHPe++5/InHN+xcfMCiD5eWlVc7th32d3QEX3rx+V2Sbnv5pRdKDjhg0RV+v7+ov7+vKRIOtTzx8F9CuzbN++zmlzf81FPkPX337l0RSb4u/249+8SDzr+2Pt20v3P3dZGIU9XV2VEhqVhSH1UDyJ3ok0qZSHdDFXu6rjWDi/tkYk9tZRLAGe2FuS96U7liDFnWIGlNdB0rc9TQkEkDbbJ9WRW9WVmap+JRH93X0QbVYje6uZiEGPlTH21UaIjeLPuyLION0Zvo+MbdpuhnvizS0Shpg7IfOieXN+yxMr9MYwsmxdftFVnmQ330OGRTb9qmaN3OtOHXr9wH1SdzucvHb01TtJ5Ph4B8oV4rpLIqy7QmXj/k+yGQbHrfFOq1wmQ5Z05kWSjUPKpPKItrpsG5frqcA8d6bAEAOWI4zug7XJR96V+SdLIc+xE5tpI+bGBasmbXq/uHyz6nTY9e27xmzT0fa2p6c19fv0pLS3T00cdc++yzz3wu3baOfcNx+mjTJ7Vj+7b4tw/zevRCuu86jiPLsjRr1kwFg2E98uh/Om+77Xe9kXDoIUnfVpph00bpQElHRjPlRUmvZPn9d0g6QNJmSf+S1B732ZGSjpVUKrcH1BYNPhX6VklHSHpBbo+i2OQ3h0o6RFJY0hOSdsat702SSiT1SnpIUiiTBI6l7ADTiWVZ2Tzdt0EjP1GX7qZrpIrZHL2ZyaSBqUXZBy0a5Ab1c/1U1urofo/2Bmmtxt7rqFXpn3Qc+G3NosEl2wZ4pTm+K7NoHMj3k9WTQS7yoCFab0cSa7RZP4bjvU5u424ubqL9kk7IomFjNI3SmaZjLPMwrFF2DZfJLNdgz5xc1YOJrtvZnB/3jcN5eLQKodxlkkex38tclMdM0paPC/Dx7IFTCNcK43lc8z1vVDb1OHYdWGgm8pw5WcrCROZRuvNA7N4kk2uTZPcxE3muH8/zaSHeL+X02DKEGgDkz5h64BiVMyXpUUnbna5d82RHJCOhZ0w4JHt/u8qX//L/er579t1Ny5efu33Hjr9//etfXypJK1as+OD73vfeL2qEoEFVdY3eefEl2t/VJcdxZBjp7xEcx5FpGCorK1NVdbXCEVuPbXii+9577w1uem3j65J+JemnkoJ5ytvXoq/Run2Ez56LvpL5a/SV6KXoK5l/UBWAgpEuUNCo1AGcZRmsW3lqRGiM3oz48rDu2NP1o2l0WK/MAy/pbrZy3VixJg/rlAqzcWaq19lMys+KMdaP+Dluxlp2fNF1ZBqkbclT3sUCrCeM4ryUi8by2HrapmndziawPRFDLhVCucu0XK7IUXn0RX+3puoE84VyrZDpcV2Vg+MaO/+vzGOeTnSdmsznzMlQFibD70q9Rh+EKcRrjKl+DhyvYwsAyBFzLF8O3nKlgrd8VqH71nzOqJ4jJQusmKaczt1yDFNlV6x9RKV1Nd/4xjfe/L6L33Zzh79D733v5TWHHHLI5cO+tuQUlVzxB0nSaSe8QXPmzlNnp39Y8MYwDBmGIdMw5LEsFRcXq6qqUvPmzdWMmbPUub87/M/7/tVz3XXXd/38xuaXN7228cboBcsPlb/gDQBkK3ZjMZZ5cNIFGWI3SLkeTqAhjzcjY7l5XaPcB15ydfO2Jo/5tILqVHBy9dTsqhyWnUZl3hjVqvwNQzKahoFcNZbHp2E61u1s9nsiggmFVu5G+g3MZeOWbxx+UydCoV4rjFSfc1XnVih/8w9mU48Lbe6byXLOnMiyMFnyaCwPqUyWc/10Owfm4tgCAHJkTEOoxQdTSj97x0Zz/lH19u6NkmkNXzgSkjlriezdG/f1/fOGY/X4H7bONPXpO+6777qKmjq9+c1LS3fu3NkfW7zkE7+Tdeg56rlyjpbMm6Urv/Id9fT0KBx2O+o4jiPTNI+pqih9ypEjx3YUDIbU29ujjs5Oe9eu3fbzzz8ffPGF53f19/e9Iuk/kv4kJnrOGYZQAzJjWVYmF9OxCbIzHY4p2dNqIw2/1ir3CbTYzWC6Id0yHULNF93ueE1muVyZzcOwLHqTNCG/rWnya2OGN0L+hN+shiy+l8nwWAyhlrs8mKw/iPHnhXRG6vHSquFPt2ZaXmOWKLMGnEzOkeNdrwutbmdqrTKbA2c0Q2rmSiGUu4mq36mGw5mMQ6gV4rXCeB/X+GEwcynTepzN+X48TMbrofEuC4WSR+nypy2Lup1qKOiJOteP1/m0UO+XcnpsGUINAPLHk6sVBW7/+uWln/nTv1VcLgX7hvfGsYpk726TUTGjtvyyH7xuv/kzH9/zu29cf9pZ5/zyAxec8/D8WbX/b+fOnVdIss1ZS2Qd/ibZr7dKnmJt3L5bd9zxZ5106lnq6uyIzmnjkWE4m/92153NnV1dTcFgsKe3p6e3
s7Oju7OzY7fcOWeelDvU2BOSdnG4AUzgTWqmYjcpI32nQcMDOPUaubdJtk9dZvp0YDaTjvqjN23r4m6ilkVv2LLZ3jqlHwoh26ft/NE8bY3L42V5KAuZ9MTwR29wk914ZTJhe2xiVIZSQzoN0fqbSUCgRYONK63RetiqkYc+WRYti5n0hGtSZsPKZFu32+LOGb5omuqp21n9Nk3k0DOFUu7Gkv5YXmfbI3RFtKy0TYFzTaFeK4ynZdFykI+5rDI9F2Zb/sbac8HgemhMZWGy5FEuflMn+7l+qp4D6wUAKAg564EjSUX/9akfed/xrU/Zrz+RfDi1+O9WzpBRXvuXnkfWfUF/uvYlBTcfK+kpWUUqef9PZR1/kexXHlLfmsulQLe7fm+pHMeWJJmGqYgdUSQc9Ej6sKQOSd3Rf3dLel0jzKuDsaMHDpAZy7IyuQmO71WT7mnKZE9Dp+vlc4IGgxOZ9MBJd+Mdu6jfmGE2pJu4NZv5LFI9vZdpXmS6vtiwC9kGcowx5FcmE65m0gPBL/dpw5Fu3OiBMzE9cGKNLfFBhSaNboiKxHXFbvKzaShIV58S6+lqZdcQmOlTp23RMjuSTM9dsbxZruTB6xWjyG9jjOfC8azbmdqXYR5kU0byYaLLnTPKetmcg9+UZL1wEhvwVimzBsylaa4r8nWOLNRrhWyPa0v0mK6L269sh6DK9Kn4bGzM8HyfbT3OZwBnsl4PjWdZKKQ8yuVN/0jlcCLO9eNxPi3Uc2DOjy09cAAgfzy5XFno3h9fYdWftNQ6ovEQe9uzkpl69U7Xbjn7976trOHtb9MJF7ZEnvrrdfaOF/o9h//XS8bcQ+XsekVGRe3Q9Qf7kq0qnIcLYQCYSK0auYGnMcUN2kg3ZvkYPjLTG1W/3KEi/GluZOuV2ZNlsaeSU60vm7kxRrqBboumO1eTpWeSXyszOFat0TQ3pbmhXcbvY8FJ1tjij2uwWJvlupINe7Iu2niQ6VAd9VnWl9Hsc7PSNwTWK31voKYx5HO80eT3VKzbk2VM+4kud9n+fqdqfIv9pmTzkEGyp8ZbkuxrJiZqAvtCvVbIRnOSctgWPTZtWRzPfDzNPhmfkJ/M10PjVRYm+zVjW9z1SKbnq4k414/H+XQqnAPHemwBAGNk5nqFgV99/Gx7+wshc85hkh1OvaDhbtrZ+5qc9tcbrSPe9Gfv+V980Zi15BWnc+ffVVzxm8jrTxymcICjBGCyy/bmOpObgsY0f8fL16S1mT5FnOnTdJk+teUbYdvphpJLvPHN5GZ1uXLTuJcuv9qyuHnO5Jg2UPUKznKlbmxZl2WDwCUjlEt/FvUpn41/jcpusub6Udb7ZOeS1gzqUK4aq6jbhSWX5W40dTxdI1azMh/KxzcFjnchXitko00jNyw3Z3Hupu5O7nPmeJaFyfy7slJub5eV0fyIf+XygbKJPNdPp3PgkGMbiUSWRCKRlZFIpCXhxVzTAJBHnlyv0An27ez/2bvPLv3kuofNOYfI3vmKZJpK2YPaMCXHkdO5K9Z/8yCZnoNUXKrwY+t8ioTeI2k/hwrAJJbJDUP8BXu28+DUp9nGaJ+6HWl87mXK/OntTG8wYzd2mdxENoxwM5cJv7JrvF2t7IZlG01+ZRNoG02QrxCsz8E6lk7S80BbBsd4XYbHrTWDMhB7IjVduRtrg0bsSdBY8DTbiYVTnddGW57bsmjciB9zn7o9+X5Xx6PcZaNZmTdONivzYfwalZ9etOOhUK8VspHJ+STTczfzSUzuc+Z4lYXJnEe5HiawEM/10+0cmK9jCwDIgicfK3X27/133/XvWFqy/LfrzQOOl739eclx0s6LM4QdkVFU0ihpnqSXOFQAprhkXfqXZXhBPtJyfo2+B85INzqZNkJkO4HmugxvNhonIF1jCeBksk/LcnyjVYgNRdO54TmTephpI22mDRCtGeT5aMpJbN6eXJfZXDRCrMvxMaFuF865ZCLKnfJUnmK/zU05LPuFqFCvFSbq3F0/CY5FIZzLC/WcOV5lYbLmUWxuoKl+rp9u58BcHlsAwCh58rVip7ejpe/6d55R/O6r/+o5+bJqZ+8mOT37JNPKfCWmtVdSiMMEYBpKF8BpzPBGL1lDby6GA8v0hiDbp4bHeuPbkEX+ZmOs4zo3ZLhPuW5gyeXcDhgbfw7LfzZjtOc6aLZCmfccyKV81W1N47rdmkX6J9pElbtsf7ezXT6TAM5kmatoMl0rTNS5Ox8mUz2e7OfM8SoLkzWPctHAPxnO9dPtHJirYwsAGAMzr2sPBx4K3PyZE4O//9IjMiRz7mGS5ZEcO9M1OBwiAFPAaC6e0zUE+eLW25jlenLRoJ/pjVW228rmBqZhDHndOgnKwGTeLqbmeWyDMhuWLR981O2cbzfTQGDDNC53+dQ6CfJ/ql4rTCWZ1uPGUay3JcWrbYx1muuhqZlHLWPc9lQ810+Vc2CLAAATyhyHbbwS+tcNp/Zd/85rwk/9VUb1XJmzD5YsbzaBHACYzEZzQ9SWwcV8JmNBr8vTPmV6M5Btz5VslveNIa/bJkEZAApFg9yGlYYJTkM+zjnTuW5n2gDkm6BjXwjlLp/assj/yXzuKMRrhakkXw25zXLnnEv2GsvT+FwPTd088o/hPDFVz/VT5Rzop1oCwMQyx2tDTvvmzwd+/YlTA83v/1t4wx9klPtkzj1cRs08yVsmGaY7Tw4ATD+pGnHSBV9iE3tqhO+P5YK7gUMDTHs+SWs19RtB+e0Z2TLKHUA9BjjXAwAw/jzjvL1HIm2PnB9pe+Qd5oJffMA69Kw3mwc0lJqzlsiomSfDWyonHJBsR0bNXKmoxBLDqAGYvjff6Z6qbNTIAZpWsnbaY8gDjNUKZf40cGxi9tboeS2+/HE9V3h1O5vfiGWSVlLugIKTTT1ukrRa0/Npeq6HCjOPONcDAJABzwRt93Z767O321uffYOki4zqOaeZ84880lp47DxVzpRRViOjbpGc7vZaSSUcJgCTlWVZYxkGId2NVKNGfvJyXR53za/MnpbL9om6bJZvHUO6cjGZrbJMKz2aJIMsmFTq5TauZGKdpOXKX8NgmzJr5KFuZ5f2bPJ1hdzG3+lU7vKpUOd1mg7XClNJNvXYp8EgzkSml+sh8mi6nOs5BwIAcsIzwdt/UtKTTudOT6Rz55LI8/cukbRY0lxJsyVVSGKiHACT/eZkLBf9LRp5mLT6NDf0+by5bMzD/jdkmT+jTVeDxreRN5MbzpWa2EYVIFGmw+00y21YyadMGyip29lZp8wb0FZEl2+bRuUunwp1XqfpcK0w1WRbj1s0cY26XA+RR9PpXM85EACQE2OaA8dxnFy9wo7jvOQ4zl2O4/zEcZyvOY7T5DjOZdH3xavwXgDGxWiHM0jX+yaTG/eRnu7KtAEv2ycIG8aYL5mmqzHLdDWO8Ti2jsM2gFzLtEyOx9BamTY2ZjPHQ8M4pauQ63Y2k5GP11wFhVTuslGfp/2czE9PF+q1wlSTbT1eo4mbc4TrIfJosp/rOQcCAMadSRYAwITz5+HCu2UM28zk5iC
bxtRsGgmWjXH/2vKUrrHeKLdluI16qgMKSEOGdXE8nu7Mpm7XZ7Gspnndbsvyd6ZB+Q/iFFK5y0a2vxNNGS6XqwDORDTYF+q1wlQzmnq8foLKBNdD5NFkP9dncz7lHAgAyAkCOAAw8TcnrWk+y/bGpU35f2I3mxuCTBupGjX2J8oyTZdP0qoMl63PYh9SyXQ+ojVUGRSQXDXu5SJQks2cXmszPDc3jWO6CrluZzsMT6Pcxt+GaVDusrEiy2Uz3c9cNcBNxBP7hXqtMBVlW48bJG2YgHLB9RB5NNnP9dmcTzkHAgByggAOABS+bC++141Dmtqy2M6KDG80VmWx7VQBqmzm/mnK4GYpV0MG+TM8jo0iiIPJpSHDuqYc1KF1WaRppHqby14kU6Futyi7IZhiebghuk+jeQrcp7E1uo1XuctGfYbHeFkWv3frlP4hjkyvEZomoGwV6rXCVL1WbB5FmV0/hnrM9RB5NFWuMfJ1PuUcCADICQI4ADA5bsqzMV4X65k2FGQy3voaZf402eocpSu23VRPkjdJ2qjcPWWe6Q1ck0b3VGxDFjd/QKY3/5nU7xVp6liunvDOpm4vi9bfVdH/N0b/XROtX7lsrJwKdXulMg9+JztPboimMdmTwY3R14ro8dggaZ9S95QqtHKXbX6sHaF8rVBmPcRG+/s/ksbotuOPT/04/GYU6rXCVLRSoxtuKlk9rk9xLlqWg7rF9RB5NNnP9dmcTzkHAgDGzGAyegDIH8uyYg1W6W64R7rIro/eWGfCL6k2g+XWZ3DD0yZpSQ7WE5+21XIbpFo1+AR2UxY3I62STsjgBmijsn+63q/B4NdYbgaNET7bkOXNcuzJvbYkN7oNcWltiNvf5RneLDo52J/JLld5kMl60tXzfKwrk/PPSPu3Vpn3klgZV1YVV7ezqUuZ7Nd6TUxjTboyUEh1e7QmYl6MEzT8oYNCK3ejvVlq0WAApiGaJl+WZWRJBsvFhrTTGNK5NM/nyEK8Vsj1b2Ch/KaOtTyM13lzKl8P5Wp9hZJH+SrbhXiNka/zaSGeA0d1bCORCDf4ADABPGQBAORVLp5si92IZfLEeKZP7GXyhGYm21sevcHMpFEqm3lnRtpepjc+2W7Lp/w3DK/M8sawXtnNq5DpcQMy0aLMG1dW5aB+Z3oOyPScM56mQt1uje7HeA7J06DhAZxCLHej0TjG35TlWRy3QleI1wpT+by9XIU/tBbXQ+RRIZ7r83U+5RwIABgThlADgPzK5EI9kyEEMg3MZDrcSq5uUNqiN5jjYXkW6V6dh5swfw7W0TIO+cUQasiVdTkq97mU63NOrvZvqtTtZkmXjONxr58k5W68rc7i9zyb+aGmSr3N1bXCVNUczYdCrkdcD5FHhXiuz9f5lHMgAGBMCOAAQGHcLKST6YV4ywSkP9ZQkO+bkWyHDsp1I+QlOVrPauV3GKRGqhRyeG4qxDHUc3XOWancNnJMlbq9Tu4QMOPRANQwicqdxjH/V46iThS6Qr1WmKqao/W4rYDTyPXQ9M6jQr7G4BwIACgoBHAAYHLIJDAzkU+y5eupbX90vaO5GWmT23iRizQt1+BY1Lm6wcrnk3j0wkGu5KLxyJ9hec+m3I71nLNa+Wk4mip1uzV6/sz3E8ONk6zcpcuzXP2WjuYaYTIEvQrxWmEqi82DUchlg+uh6Z1HhXiuz+f5lHMgAGBUCOAAQH7l6qbIr/RBnHz0vsnmybx10YaCXA090Cx3AuexrK81uo6WMeR7/A2RP4fHfHU0v/Jx3JgHB7m0cgyNAq1ZnBeynddmXbR+Z5O2WGB3ZZb7kI2pUrdjT0cvUX4bhRomWblLZSy9lmINkMsnqJ6Op0K8VpjKYmUrVo/943icp9s5M5+mch4V4rk+n+dTzoEAgKwRwAGA/MrlRNvL5TYQpXoVwoV7m9yAx9JR3vj4425EcjV+uz+anuXKbiiRZDdE/hwf89hT7pfk6Pit02BvISBX/Bp8AtifxXdWym2kaIu+0n23fgxpq43+u07DG9FjT9Mu1fCAbmOG28jWVKrbbXF5vDKHvzXr0pyXC7ncpdr2CVmmN/Zbk6teEstH8VvHtcL0EKvHS+LOlbkUm6/lEkmGsu9JxvXQ9M2jQj3X5/N8yjkQAJAVw3EccgEAJoBlWdNhN31yG0jro//6NPRp65a4f9s0PkGoBknLov/WR19tca9WTdxwdL5o2uqj6UvML8WlMzH/mLAU41lGG+PqkBLq8UTWoWzUS9qYwXK5GLd+KtbtxiT7kuocH///VmXfYDiR5S7TmyUj4e9lcb9/sXzyR9M5HvUk9jvXELf9+OPgjzsWrQVwXim0a4XpIlk9ji8v8Vqj5cYf9//WPJUfroemZx4V6jVGvs+nk+YcGIlEOGsCwAQggAMAE2SaBHAAoFA1SVqTwXIrNTnmF0F+jDaAAwDAlEIABwAmBkOoAQAAYDpaluFy9G4DAAAAAEwIAjgAAACYblYos/lvJOaUAgAAAABMEA9ZAAAAgElmRfTfZmU/Dn6TpFUZLstcGwAAAACACUMPHAAAAEw2DXKDMPvkzmPTpOSTbserl7RWmc17E0PvGwAAAADAhDEcxyEXAGACWJZFJgDA6OxT8oBNq9weOfGBl3q5w6XVZ7mNNklLyOppL9ObJYOsAgBMZZFIhEwAgAlAAAcAJggBHAAYlQZJG8ZhO5eIIdRAAAcAAEkEcABgojCEGgAAACaTxnHYxmoRvAEAAAAATDAPWQAAuWNZ1lxJJ5MTAJA378nz+v8u6RFJF5HVyALlBQAw1e91R/p4RyQSeZRcAoDcYwg1AMjtRe1Fkm4nJwAAAAAA08QdkUjkIrIBAHKPIdQAAAAAAAAAAAAKDAEcAAAAAAAAAACAAsMQagAwSaUZgxgApotGSQ2S6qOv2Hvx2qIvv6TWuJd/IhIciUQ4agAAAACAtAjgAAAAAAAAAAAAFBiGUAMAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAA
AAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMAQwAEAAAAAAAAAACgwBHAAAAAAAAAAAAAKDAEcAAAAAAAAAACAAkMABwAAAAAAAAAAoMB4yAIAAAAAAAAUMq/XSybk3npJjQnvGVNxR4PBIEeb88e0rZuU/8mNAA4AAAAAAJj0vF7vCkn10ZcS/h/jl9Qa929rMBhsIfeAlHyS1khaFq0zy6P/AgDGgeE4DrkAAAAAAAAmNa/XO9oGDr+kdZJWB4PBNnKyYI8vmZB7mTzlv1Zu8Ca+vtROth2lBwLnjylYNyn/0wQ9cAAAAAAAk0KBNsA0SprMPTjGLf0F3IDkk9Qkqcnr9a4MBoOrqW3TxmSvv+NhWZL6Qr4BwDghgAMAAAAAQOYa5DZexl7S5JozYrKnPxt+uY3M8cM9xYZVa0zxnVVer7c+GAwup6hTfyFJatPwoQjpqQYA44QADgAAAAAAmVul1I3/pL+wtAaDwUuSfeD1emO9blbI7VEQr8nr9fqDweBKijv1F7pE7nBOsXqyXARwAGDcmGQBAAAAAACYToLBoD86VNoSJZ+QfYXX66WhH3DrR62kpdF/m8kSABg/BHAAAAAAAFONT+5QSciNKRvICAaDfrkN08l6FKzg0AMDWuQOSwiMG6/X6/N6vQ1er1djefHbjMmMAA4AAAAAYKrwyW103ygaNnIplqdNGj7c2KQXDeIkm/Om0ev11nP4AWB8RQM3/J5P499mDCKAAwAAAACY7OolrZHbkLFKmTVkNMZ9x4l7rY+uI74Hz4q4z5M1JDkJrxUjpHNFdLvr47a9Mfr3igzSviFhW40J6dwQtx+5TH98Hq/QFGssCgaDLXJ7GCRalu67Xq+33uv1rvB6veu9Xq8Tfe2L/r0iOt9ONk+Lp11fmu9viPue4/V6G+M+WxH3+foR1tHo9XrXeL3ejQnrWu/1eld5vd6J7uE2WetvujqcbL82jLLONUlam7CuWDqzCUyuT5JHE7lf8evbkOTY18ctM5Aer9e7gZ/Kwhc9/2X7ex5fP+PL674c/bbmuvzmqm5O+d9muDxkAQAAAABgkoo12DRl8R2f3MaOVA3zjdHXCrnzo+Risu4GjTx5en30FdvuSqWeZ8KfYp/Wa3yGjfNF92WFpNXRdE6VYZXWJTlGI+Zp9AnxVSnyaaAseb3e5dH1p5PR+uT2GFqXhzKScf3wer1LgsHgeE9mP9nr70jHJ9V+NURfKyRdouSBxsTl16Q41o0J6cylfO9XLK/XpDgWsX1riu5bWwbpQ4GI9nbM9vd8PM6buSy/+aybU/m3edqjBw4AAAAAYLKJf+I0sbGnTW4jTaoG1PXKoFdFdD25apz2K/MhYNI1UCezVvkN3lwit0HJn5DOVRp86ncqDDXWmqKsJRV9SnxVhsd0rdI3TOZ6faMpIxnXjwkI3kzV+tuQ4X75MjiOsXVlGqjL53kjl/sVW9+GDI5FbN+ahIKX0OMm29/zfJ4381F+c103p8tv87RnOI5DLgAAAAAACl6aJ3Rb5TbyjNTQk+wp3VYNfSI39hTsag0+BduowcaUJg1vEFmZZJ2JT+NuiK4j9lmsgblebgNRYqOkX24PgsQnaNcnLLtSyRuvWiQtzWH6Fff9VI1CzdF8S9pwHgwG810+Ehs4WoLB4NIxrkPBYNBIslyystSswYneY09m+zI4pqnK5mjXN5oyknX9CAaDK8e5/k+F+pvs+PiV3bBHicctxhdNZ7K6uU6DAcplGrlx2EiT3mTL5HO/YvuWqvE7tm++6L6larDO+nyAif09DwaDzXHLZ/qbnqvzZq7Lbz7qZla/zRMUdEcOEMABAAAAABQ0r9cbG1Yk2ZPXLXIbazMZviTWCBv/3VQNofVK3iMj08bMRE1yG3xTpTNZQ9TK6L6l235Mm9zGIH/01ZzD9CdaFt2nZGlJGsiZKgGcaMPjxoTFlsc3NkaX80WX86U5pknXl+T4Zbq+0ZaREetH7PhF96s+GAy2jvN5YCrU35GOT2t0+XVx5WKVkvcAOCHJ/sWGT0pc5/IkyzYp9fwiuQrg5Gq/UuWvP3r8WzNYdlTnA0zM73l0TrLE72mCzpu5KL/5qpuj+m0mkDP5MIQaAAAAAKAgRSdSX6/kDSvNchtLliqz4I00/MnWVN/zK3kjzFg0p0lnsobexizXv0SDDVPNeT4866J5v1TD5xRokttglu8hmiZKYkPcusTgjSQFg0G/hvfuWJHJ+lIcv0zXN9oyklH9CAaD/vEO3kyD+tuapC61KfXcGonr9ml4TwZ/9PutKfZn9Tgcs7Hu10jlfGmKfVutzOabQoH9ngeDwaXJgjeZnofzcN7MRfkd77qZ9rfZ6/Wu8Xq9DZTKyYMADgAAAACgoIzQ0BPrMbBEyZ9cTSdxyJRUTwFPlMQGofosvrd8AtN8SfSYJDaeNcnt1bG+wPJ5LGVGGv7k9UiNxYmf+TQ8EJHr9Y22jBR6/Ziq9VdyG2xTTTierFG6PkkZ8iX5Xtsot5krY92vkfatNc12MUl+z4PB4PJRBIXH47yZz/Kb77qZ9rfZ6/Wuj/aIQoHzkAUAAAAAgEIRncw42dOqsR4DY2nUWJew7ti8Ci3Rz9Yp/w2a0uCcHI0auSEp0wbg5gI4dLHJpldqcCx+X9z+Nnq93pXBYHB1AZe9VMPnxC/ToOGNcf4sG8Ea4tabdH3KLjDRoPTBzOax1g+v17su2qtookzV+hvbt1RaMli3L8PvJSvfjXk+ZmPZr1Tvrctgv1Dgv+djOJ+M13kzF+V3ouvmpP9tBgEcAAAAAEBhSTbBeHMwGPSnmMQ4GyuV/GnY2MTna5Ri/pYciU3YXJ/j9RbScEHxwbbYWP6TtewpSTlI1li/Psvt+PK4vrGUkbT1w+v1TuQcClO1/mZSp9JJNZ9IIcu08b5hEu4b0vyej3G943XezEX5LYi6Gc3z1dFz+GT7bZ72GEINAAAAAFDIVkhaEZ04fqz8cocTGanxJDbsV1MO98EXXecqjX/j70Sojx23SZbutD1wNPZGP+V5feNSP7xeb1Mhp4/6O6X4yAJ+zykTYxfN98n42zztEcABAAAAABSSxAmofXIbGzbKfcJ+rA0/frmT+y5X6oZgX3RbuZrkd62ST8C+XNIJkozoa7I/Vd4QzbeNGjpMi+T2jCjYicWjDVvLknw03SZDz7h+TNAk2NRfYAr8nnu93jU5ejADGfw2R4ezm3S/z
XARwAEAAAAAFIxgMNgSDAZHmnh3o9wG1bGODd8styF4idxhXZINh5KLp1Qbk6Q1tu10E3FPFo1yh69J1vOhOZrHyydo2K1MJRtOJtmcL/4kZdbI5KXBhv7VI60vbrlMX/mYu2C86gf1NzdaU6R9Kmibwvs27X/PvV7v2iznECvk82ah1c20v83BYLDQf5shAjgAAAAAgAIUDAbbgsHgciVv+Fkmt1FivZL3mshGm9yGnCUa3lCYi0aWZOtYOUUOUyygtj5hP/1xebpc+ZmPJGeiw4ElK0fNSd5rTfL9sZTBZI17ywooe/JdP0T9zVk+ZFuOfJocgZDRBHAahEn1e+71etdncS4t9PPmRNfNtL/NBG4mFwI4AAAAAICClabhp1Fub5yNGvucF/4k6/elWC7RSI2FDRmuw6fxaXTMNv3J0plqSDu/3MbtWK+Igm8c8nq9K6L7kWh1MBhsSVIeW5Ps1wqv1zvaORmSrk+FN8dDpvWj0NM32etvKsmGb2tKk6bJMon5uhT7NlL5axIm5e+51+vdmMEcW5PlvDmedTOj3+ZgMLiSwM3kQwAHAAAAAFDwog0OsYafxCGT6pV67pV4KzRyo0niE6/JnvJN9l5iY0n89jMd2mmVxqfxKdv0x+dxrHEocTL3+MDN6hT7XDC8Xm+91+td4fV6Y/uSLI9GGl4nseGxQe4T5Onmc6jPZn1KP99TruePyEX9yCfqb2ptSt5jbL2GBzPq5Qa+J0uQo03DG8F9I9SRFSKAU/C/53GBnKS/59FAzkjB8UI5b0503czotzkYDK5OMiQoJgnDcRxyAQAAAABQ8Lxeb/yfPrmNHMkCNn65wZ7EJ7c3ym3ciDUIxj+F2qThDTsrNbwhf5ncBpaRLNVgg2OTkvfwaI7bfrJtD9y3J/ydOCRKsmVGkm36Fc3jZIGOWMNUszII2gSDwXyXDydJOUhssG/QyA3trZKWpmvo8nq9G5Q8mNAazbvY9+ujr9iytSlWmcv1jbaMZF0/gsHg6nGs/1Oh/o72+CSW7ZZoOuPVR8tRqp5HrRreU8ifZPnRnnPytV+xershxTpix8IXPb6pjkVLMBhcyi9pQf62j+b3vBDOm5mW33zVzYx/m/P9+4v8IoADAAAAAJgUEgI4MakafhIbb0dqAEymVW5DTLKG/FSNRjGJAZB0y8e0KH0D0lgDOKNJf+I2Y/OONGez0QkI4GRrtdyh0/wZbCvWAyDbYbOSzdUSK8e5Wt9oysio6sd4PdHt9XqnSv0d7fHJtKE4kwBVTOzp/DU5Oufkc7+k1MG0VMc/8ZgRwCn83/hMf88L5byZTfkdj7qZ8reZAM7kxhBqAAAAAIDJbGBSXg0fiiVeNsOmrFPqxl/JfRo4m4br5Uo/3NRyjTxsVy5lm/6Y+GHsmqdQ+WnW4NwAGeVLdLmlozhmDSOkI5fry1bW9WOch+Oh/uZm32NiAa7JNBdGc4bHbqRGdBSwYDDoj/bqS/d7XijnzUKpm1PxtxlxPGQBAAAAAGAKiAVyViv5BNfr5DZuLJP7xGriUFotchtO1il9Y22rpBOi24kfsic2FEpbkuWXxi0fazyKDQW1Ovr/ZI1KDcr9XCOjSf86FX7DUHyvicYkn7dFX7H9bA0Ggy2j3Vg0gLHS6/U2x5Wr+hT52RbNw7Y0ZXhlNJ9zsb5sZFw/gsHgeM99o2AwuM7r9VJ/M68HsbxaFpdXbXH5FKvLyYZ08qlw57Fqjh7jpmg5aIw7FrH9SlWnmbh9koieW1dLWu31eps08rCXE3nenOi6OVl+mzFGDKEGAAAAAACAgpZiCEUgGZ+kfQnvjeucTQCQKwyhBgAAAAAAAGCqWJbkPXrgAJiU6IEDAAAAAACAgkYPHGTIJ2mDhs+bVDvO8zYBQE7QAwcAAAAAAABAIWuUtEYjT0DfIGm9hgdvmgneAJis6IEDAAAAAACAgkYPnGlvlaQV0f/HJqZvif7tkxvgSRbc8Us6IRgMMoQagEmJAA4AAAAAAAAKGgGcaW+jhvesSccvaaXcHjjkIIBJiSHUAAAAAAAAABQq3yi+0yJpqaRmsg/AZEYPHAAAAAAAABQ0euBA7hBpjdGXT0OHTGvR0KHVWuO/SA8cAJMVARwAAAAAAAAAAIACwxBqAAAAAAAAAAAABYYADgAAAAAAAAAAQIEhgAMAAAAAAAAAAFBgCOAAAAAAAAAAAAAUGAI4AAAAAABkyDCM0bx+aBiGHf036++DckG5oCxRlgDqGqZpWXUch1wAAAAAACCTm+jsG2D+JuncuL/vkXReNivgvp1yQbmgLFGWAOoapmlZpeAAAAAAAJDhTXR2DT4lkvqSvF8qqT/TlXDfTrmgXFCWKEsAdQ3TE0OoAQAAAACQH1/N8n1QLgDKEkBdAwbQAwcAAAAAgExvojN/YvdwSU9K8ib5LCTpWEkvZLIi7tspF5QLyhJlCaCuYXqiBw4AAAAAALllSPqpkjf2SFKRpJ9FlwPlgnIByhJAXQOSIoADAAAAAEBufUDS2WmWOSu6HCgXlAtQlgDqGpAUQ6gBAAAAAJDpTXT6IVfqJL0U/TeddkmHSdo70kLct1MuKBeUJcoSQF3D9EQPHAAAAAAAcudqDTb2tKVYJvZ+naTvk2WUC8oFKEsAdQ1IhgAOAAAAAAC5caakD8X9/akUy30y7v8fkjv8CigXlAtQlgDqGjAEARwAAAAAAMbOK2lN3N9rJf0txbL3SLot7u+fKfUEyaBcAJQlgLqGaYoADgAAAAAAY/d5SYdH/98l6co0y38uupyi3/sCWUi5oFyAsgRQ14B4BpMnAQAAAACQ4U108kmPl0h6VlJJ9O8rJP04+v9kN92xlXwybrl+SUdJ2pi4MPftlAvKBWWJsgRQ1zBNyyoFBwAAAACADG+ikzf4/E3SudH/PybpFEl29O+RGnxMSY9IOjH693pJb05cmPt2ygXlgrJEWQKoa5ieGEINAAAAAIDRu1SDjT0RScs12NiTji2pKfo9SVoq6d1kKeWCcgHKEkBdAyQCOAAAAAAAjFa1pGvj/r5O0hNZruPJ6Pdiro2uF5QLygUoSwB1DdMcQ6gBAAAAAJDpTfTQIVd+IukT0f9vlXSEpP0JXxlpyJWYCkkvSFoQ/funcsfTd1fAfTvlgnJBWaIsAdQ1TM+ySsEBAAAAACDDm+jBBp+T5I53H3vjHZL+lOQrmTT4SNJFkm6P+86pkh6VaPChXFAuKEuUJYC6hmlbVik4AAAAAABkeBPtNvh4JG2QdGz07b9IenuKr2Ta4CNJd8St5ylJJ0gKc99OuaBcUJYoSwB1DdMTc+AAAAAAAJCdz2iwsadX0hU5Wu+nJfVE/3+spM+S1ZQLygVlibIEUNcwfRHAAQAAAAAgcwslfSvu769L2pyjdW+W9I24v78paRFZTrmgXFCWKEsAdQ3TE0OoAQAAAACQ6U20YfxR7vj4kvS0pAZJ4RG+ks2QK9Lw4Vz+6DjOxeQ85YJyQVmiLAHUNUzDskoABwAAAACADG+iDSMsyZLbkHOa3ImPR5Jtg48knSLp4ehytuM4
FjlPuaBcUJYoSwB1DdMPQ6gBAAAAAJC5myTZcodGeSRP23gkuv6IpBvIcsoF5YKyRFkCqGuYnuiBAwAAAABAvm66DWPYTbfjOAY5Q7mgXICyBFDXgHTogQMAAAAAAAAAAFBgCOAAAAAAAAAAAAAUGAI4AAAAAAAAAAAABYYADgAAAAAAAAAAQIEhgAMAAAAAAAAAAFBgCOAAAAAAAAAAAAAUGAI4AAAAAAAAAAAABYYADgAAAAAAAAAAQIEhgAMAAAAAAAAAAFBgCOAAAAAAAAAAAAAUGAI4AAAAAAAAAAAABYYADgAAAAAAAAAAQIEhgAMAAAAAAAAAAFBgCOAAAAAAAAAAAAAUGAI4AAAAAAAAAAAABYYADgAAAAAAAAAAQIHxkAUAMD1ZlkUmACNrlLQ+4T2DbEEkEiETAAAAAAB5Rw8cAMBUUC9pjdzG9g2SnCSvDZLWSmqS5CPLxqQhLp/Xkp+j0ihpRbTMbowrp/ui761JUVaTle3GDLeZ+D2N47oBAAAAAECWDMfhHhsApqMp1gMnWU+JkfglrY6+kL0NcoM4MSunaF7mowdOk9zATX2Gy6+TdEnc38ku3FolnZDBupw0+5LPdU8p9MABkNVNt2EMO786jkOPTsoF5QKUJYC6BqRFDxwAwHTkk7RKbi+HQtMgt4F/rYYGSQotjfEaKVJp1csNfK1R5sEbSWrJoszkszwCAAAAAIBxRgAHADBVXSJpafR1idweIv6EZZpUWMGHfXIb+VdJWqbCHZqsOeHvForbiGJDziULyPmj+bdSgz2ZWuLKaqZ5m02vnmzlc90AMB3cJKkj+v+O6N8A5QKUJYC6BqTFEGoAME1NgyHUknVlboguFx8YSRyiaiIl/igvVWEGR3waDH6t0/CAznQrV+nyaqOGB+NaNBisSWVZNH9HKiPKoiyPZgi1XK17SmEINQAAAADAeKAHDgBgOmmV28sh3jKyJWuxOYSWauoGb3JljYYHb5qVWXBuXZrPWyS1JZTlXPUoy+e6AQAAAABABgjgAACmm2SN4jRMIx8aNTxA2CxpeQ63kRhAWzNJ1g0AAAAAANIggAMAmG78aT534l6xydt90f9vTHg/UWy59QnrWR99P9mcNivilkuUuJ7GHKS1Pvre2oT1b0yTzlTbzTRdii63Ru58MI7cOX/Wyx2OLRujyedklkXTs28M60i3/sSytzKHZblebpDFn+T4FvK6AQAAAABABgjgAACmm2QN8/40y6+XtEojT+TeJDcIskrDAxqN0fc3Kr+9fdKldUVcGhOHxKqPS+cGufMF5UJjNF2xgFFT3Lp9GhrUySRokot8juXT2uj6fCnWMdY8SAxMNSt9ADEb9Roczi7xONcX8LoBAAAAAEAGCOAAAKabZL0iWlMsG2vMT9eQv0LJ5zpJFAtkNORhvzJNaybq5QY4fDlK23qln2uoIZo3+c7nWPCmMYN1rBrj8UjUkqcy3ayh89WMNe3jtW4AAAAAADACAjgAgOmkQcMbn0dqVK9X+uG9liWs0y93jpNaSYakJRo6l0gsuBCzTu6E9kuTrHtl3GdLlTrQlGla12mwV8XSuDTWRtPsT0hnLobLatRgIKUtuu2VSj0XUWOe8jkmWZCrNbquWD4352C/k/VSac1TuU7WU2aZctPbK5/rBgAAAAAAIzAcxyEXAGAasixrKu1Oo9xeFfFqNRiQqJfb6JxsbpOlGhrESfbDGJu7JBYA8cWte6OGNtafoOQN9Ws1tBfKJRoexHDSpE1plk+XViX5O16Thk5U74/mo7JMZ7J0NcsNkoy0vVTL5Sqf66PryWR7DXKHdRt27ZRhmVyh4cFCIwdl3RlhnYl51CY3sJXJd/O97iklEonwIwIAAAAAyDt64AAApqr4ieljc6YkBm9WKv2wVn4N9srwx70nuQGI+EbtZqXuZZHYiyEfPRhGSqtS/B0vMaDkU27mO1mn5EGSZg3v7ZJs2LNc5XPiMG5t0TKQTGuSdRW6xDzOpFdWIawbAAAAAAAkQQAHADAdxXqpZNJAP1KwIDHYMFIwqDXNd3NhpLQmapTbS2S93ACXIzfolShXAZxUWjLIl1zlc2OSdPmzSFuha0mS5mSBy0JbNwAAAAAASMJDFgAAppFWuY328T1UMvlOKonBjbVZpKUhT/uXTmwumfpxzHf/KD/LdT4nBnBa8lzWEjUq/0Gh5Ro6TJxPbk+Z1dFtNxbougFME68+90cyYRI76Mh3Un5A+cG0LD+g/lH/MFEI4AAApqoTNNg7wK/RTyA/UoCh0Bqs0wVDks05k5g/hdgIPxkDA20p9qNlHLa7UkPn31mh4UPVFdq6AUwzBx91MZOxTgOO42Q9J5phGJQNysaYUY4oRxgf1DXqGvKPIdQAAFNVqwaHfWotwPT5x3l7jRoevFktdyL6Wrlz5yydguVgpHzO5/BfbRoexGnS+Aw51pyw7VhPmUJfNwAAAAAAiEMABwCA0UsMDC2VZGT4qh3ntCY2sq+W25uibRrlcybz7cQba7ClOcn6Vo1DfsXmeBrp+BfiugEAAAAAQBwCOAAAjF5iQGBZAac1k/lffFM8nxMDQel6xIw1MJFsrqUmucOO5du6hHyrV+7mPcrnugEAAAAAQBQBHAAARi8xsDDWxvnE3jC5DAglBiqSNbg3TfF8bkmSJ+uVPIizSmOfeydZb5XYutdnsP6xbn95Bse8ENcNAAAAAAAkecgCAABGrUVuL4v4wMeq6N/NGt7jo0FuQ3eb3CHMErVqaEN4U/Tvlui/y8eQ1lYNHTJsldwAw7rouleosAM4ucjn2JxIjQnLbozmQ5vcYM4y5S4g0RzdRmLeNkZfbdH0x/bBF12+Ifr/Wo1+vqQ2uQGkfAzbls91A5imXnn2D2TCJHbwURdTNjDuZYNyRDkCv+egrk11BHAAABiblRpscI+p18gN2y1KHsBp0fBeN7GGfmn40FXZWC1pbdzfvoS/Y/wqzKHUcpXPl8gN2PgS8qIpxfel3PSEaUuR1tjwY6l6WzVGj/toxQJf+eghk891AwAAAAAw7TGEGgAAY+OXdIKSB2RSSRUQaNbwie9zZV0GaVyex+0XSj77JS3V8F47yfLrkhymf3U0/dkGY+pzkG8r83hMVnIKAAAAAAAgPwjgAACQGyslLYn+26Lhw161ym28jy2XyvLoK34dbRoc4musaVyasK5WucGFJXKDN4npbpiC+dwqN5gSy+eY2JByS+UGb/w5TntrdL2x9CfrUdWqwZ5Dlyg3AbWx9NyayHUDAAAAADCtGY7jkAsAMA1ZlkUmAMAoRCIRMgEYpVef+6Mk6eCjLh52I8qY+ZNbsjHzHccxsl2PYRiUDcrGmFGOKEcYH9Q16hryjx44AAAAAAAAAAAABYYADgAAAAAAAAAAQIEhgAMAAAAAAAAAAFBgPGQBAAAAAGCy6Ovr1333P67/bHhWt9y2XpK0eNFsnXbqMTrphKN00glHqq6uOufbvfueh3VPy7919z2PSJLOP/cUXXv1VRwQAAAA5A0BHAAAAADApPD4Ey/qS1/7kTZt3jX
k/U2bd2nT5vW65bb1uvEnX9aZZxyX0+3e9Ks7tPqam4e8d/c9j+jaqzkmhYTgHihHwOTS3t6pO+68T08/++pA2T/5xMN1/BsOU8Nxh+uEhsNVWlpCXcO0ZjiOQy4AwDRkWRaZAACjEIlEyARglF597o+SpIOPunjYjegrz/5hxO+2t3fqsg98ZUjw5vxzT5GvplL+jv0DDTFPPvqbnDb2PPDgE/roJ7838PcnPvYOHXrI4uj2T5PkNvi+8OImbWzbqlNOPloLF8yedsf24KMuHvae4zhGtusxDCPrsiGlDu7FG6/gniS99PQ6KnyOywbliHI03uUI+atrkhtIufIL/zfiMg//88acBkypa9S1yYgeOAAAAACAgnfHnfcNNKqef+4p+trKjw5p1Pnfb/VrQ+sLOX9S9x//emzg/yuueq8+/IELhy3zhpPfN5jO39MtZ7y1t3cOa3RPFtw7oeHwnG73gQefGNIQGB/ciyG4RzmiHAHDPf7Ei8OCN5ddulSStLFtqx597AWdf+4pOQ3eUNcwWRHAAQAAAAAUvNvWrR/4//sve+uwRp3S0pKcPxUvSQ//++mB/5926rEciAJEcA+UI2ByufZHtwwp9++55Nwhdau9vVOvb95BXQNEAAcAAAAAMAmMNKTRVNwuMkdwD5QjYPLYsnWXHn3shYG/E4M3klRXV53zuaaoa5isCOAAAAAAAAre+eeeMjCE0a9v+asOP2xx1k/Cv/jyJj3876d03/2tevSxF7R40Wydf+5pajju8GENs9/8bvPABOYxF77rCwP/f+npdWmXueP3V+uwQxbrxZc3Dbx/7dWf0/nnnqYHHnxCf7jjH7r7nkd08omH6z2XnKdzzjp+YJ/iJ1lO9nm8LVt36ZFHn9HzL7YNGXpm8aK5A/sWCEYUDLpzeN33wOP69v/+fOD7v7npm5o3Z4ZM0x3iPhAI6IMf/aY2bd6piB3RVZ95tz78gbdnfcwMw12fecDxKvnoL+X0+CU7nPOyQXAPlCNg8ujp7aOuAVkggAMAAAAAKHjnNp46EMC5+55H9MILr+nKT1+mk044Mu1Tun19/Vpz4x/10xtuH/L+ps27ou/drvPPPUX/+61P5Xx4pGQSJ1F+9LEX9OhjL+iyS5fqi//9fn3pf348sK+Jn3/9K01D1rVl6y41vuWKYdsY/P7t+sTH3qHlH7tUhuHIcaRzzjxOzz57pu6862FJ0i9/fYe+/MUPy7bduaj/fOf9at+3X5WV5TrzjGO17OJz1dMbkmWZ8npNmUZhzWVMcG9swb2h5WbopOItd/1oyBwQfX39umjZ5wcaQlMNQzQZTeZyJEmHHrNMkgbOEy++vEl/u+dh/fSG27V40WxdumypLrzgnIHz5eNPvKi/3HW/brltfdLPKUfIlwMWzhny961r78n6+FPXqGvTCQEcAAAAAEDBO//c0/TSy5sGgjCbNu8aaLRYcdV7UzaGSBoWvFlx1Xs1d85M7di5ZyCQEmu4vfbqqyRJbzr7RJ10wlFDGkZi34tJt8zMOt+wtMSWXbxotk479Rg9/O+nBxpWbrlt/cBk6ak+f9PZJw5pwFm4YLYuu3SpjjisXnNm12nmLHebzzzzqm78xZ8GglQHHXSgzj7zeDlujEaXv/utevyJl7RtR7v+9cBTOv3UJ3TWGcepbdM2Nd/0Z0nS/Dl1+vD73YYe27Zl27aCIcljWfIWmfJ4zIIoGwT3xhbcu/LTlw2pZ//Z8OxAQ+Y1P7x5oE5IbkNr/DwxU6khcCqVowcefEIf/eT3hqRj9TU367Z16/WndT/QrWvvGVLOEj+PTyPlCLlWWlqiyy5dOlA+Vl9zs+67v1Uf+9A7dELD4SPWEeoadW06MpzY1RsAYFqxLItMAIBRiEQiZAIwSq8+90dJ0sFHXTzsRvSVZ/+Q0ToSnzaNWbxotr668iPDnk6N7+EgDX86NbHB5MaffHnIOmJP2kpDn8CNl26ZxDTENy719fUPa5T/xMfeoeUffefA58uv+N7AfAHJGupTiW9Yeuv5Z+qbX1uu+CaA+x98Qt/7/i8lSfPn1ukn163QDb/400DPnC9/8YM6KyE/DUMyJDnR68mSYkumaejgoy5O2uYgSebiBncIte59aYdQ6/v+f42qbFx7/S3DGvSk9MG9xO8lC+7FjlmsUeyBB59Qd09fysBdrBfNSMvEggKJZSNWlhODd7E0pAruJSu7kvvk+UjBPWmw509Me3unLvvAV4Z9Hp/WxYtm68affW1IXUolWdlwHCev3bgMw5hW5SjxPBRz2aVLB57iTyxHqT5P9oT+dC1HyF9dS/xtiz//Xfnpy1L2KqSuUdemZT0jgAMA0xMBHAAYHQI4wOjlIoAjuY0V/7jvsSENGjG3/urbOv64wwb+jm/sSTV0SHyvh8SnW/MRwEkMIiU+wfvwP28c0lAcH7RavGi27rnzR8PS0NfXr/vuf1wvvbxJmzbvGBIQkqTi4hI99M8bldgE8KOf3TYQsDnrjGN1/4NPSZIueMtpuuLjl6ZuTDDcl+NIXm+Rjjp+WdLFJMmsP1kly2+W075Fbuhn2DKWJFOS+q45L6D4IdocRy89+TuZGTQdEdwbXXAvvqEzVZn707of6Pv/79cDT3knNiCOZDIFcCZrOUpcJrGxNrFn18knHq7/+/5VA+eZxEbxl55eRzlC3utaqt40sTL63W9+akhdoq5R16YrkywAAAAAAEwmdXXVWnZxo+6580e69urPDfns2h/dMuTvu+95eOD/bzjm0KTri38/WUNSriU+ARt7yjZ+/+IdeOC8gf8nm4T57nse1kXLPq8rv/B/+ukNtw8L3ozkYx+6SPPn1knSQPBm/tw6fexDF434PceRbNv9fygUUnlFZepld70s7dsqo6JWGv4QqVdSvaTFkhYb1bNlVM0afFXP1tZuUzt6TPn7DQVHiKGff+5pevifN+o7X1+uxYtmD8mzj37ye3r8iReHLP+3uLKx4qr3DjsuCxfM1oqr3jvwd+sTL+S9bFz12fcOPHVeWlqiiy9805DP33fZW4d8/p5Lzhv47OF/P510nX19/br7nod17fW36MovXKNDj1k2JGCYrLycf+5puuzSpQP596X/+fFAQ+Blly7NuCFwMpoK5ejKT182JB0XXnBOQr1/x5DzzMXv+K8hn2/ZuotyhLwrLS3RlZ++THf8/uohdURyh4f86Me/rfb2TuoadW3aI4ADAAAAAJi0zj/3NN34ky8P/P3oYy8MafCJD3iUlScfDz/V+5NB7Knb+AmKb/zJl3XH768eFtxKpri4WMsubhzy3llnHq/i4uKMtu847qu8tFiVVcmHl3J6/Ar948cyKurSr0ySZEqG6f4rQ44jBSNSV9BwAzmB1D3Jcx3cO/Low2WU1cgorZ52wb0v/vf7BwIYse8tXjRbX/zv90/588pkDxLHl4tk5SaxXCWWu57ePsoRxs1hhyzWhz9woZ589DdDAjCbNu/SHXfeR12jrk17BHAAAACmjka5Y9PEvwBgyksc0mhPu3/g//FP0Pf29Cf9fvz7J594+K
Ta92uvH2xMvvbqz+nDH7hQZ55xnA47ZPGwhqVkdu5q17U/uk2SO8SuZVla+4d79eJLm2QYmY2Y4jhSxHZUVlKs6pqapMuEHvu97O3Py6ialawXTpzoDDuO7f5rmJJVJFmegSXcQI6ldEPCjyW4F7Kl7d2mej2VUplPMiff8MNjDe6Vlpboowk9sc4/97RxmRi8kBAkphxhfJSWlujDH7hwSBDnvvtbqWvUtWmPAA4AIB8aJG1w77q1VpJviu0fjeQTx0nyahzldwEAU0R8Y6okzawbvPSIHzbkyadfSvr9+PfPOath0uz3lq27Eia5HzpESqqAVWz+mkAgqJt+9eeB9y+64HT19vaqt7df/3v1LxQMhuTxeGRZpjKJ5YQjEZUUF6uysir554/cIqM89WWhUVEnRUJyOnZEX9vldO6Q0dMuJ9DnBnIMU3JsBR1LO7rTp2m0wT3bMRSypb7ewECmNRx/yKSqF2MN7m3Zuktf/eaaIe/99Ibbhw0hNh0QJKYcYfycduqxA/+PzfFFXaOuTWcEcAAA+bBGbhBHkpZJaiJLkEeryAIAmPpu+tUdeuDBJ9TXN7TRpq+vX99edePA3yefePiQIUwajhtswFl9zc3Dxpt/4MEnhkw4HN9wlKn4RqVnnnl13PIkceiVxEDWX+66f/Dm3zRlWVY0IOORaVq6/8En9ODDz8jj8ejCt52ly99zrg49eJ56e3v0/AsbteaG29TV1aVQKCzTdHvnpBMOh1VeXqbS0rJhn4Uevln29hdklNUk/3JRcXRynbBkR9xXOKjZ1cWq8wbk6e+I9oQxJDuskKdc7V2BEdMz2uBekek+6/HCS69JhuQ4ts4+4/hJU19GG9yLr1fX/HCwXnziY+8Y+P+XvvajYfVwqiNITDnC+Nmz2z/kN526Rl2b7gjgAADyIfFKqZEsQZ7L24oCT9vaJPViquX/VN7HpCzLarAsa4VlWWsty2qgKgL5ddu69froJ7+nN5z8Pl35hWv0ze8265vfbdYbTn7fkDHir7zisiHfO/OM4wYm9pWkxrdcoZt+dYfuvudh3fSrO4ZMDLziqvfqsEMWZ522+IaWr35zzUD6Xnx5U17z5LBDFg8JHn3ui9fo7nse1t33PKz3f+R/BiYwlqRgMKBwoE89+zvVu79TzzzznL7+zR+pY98e+ffu1kVvcffh81e+T47jaPbs2bplbYu2bt0hjyV1dXUqHA6rqMgjwxy5O04oFFJ1dZWOfUND7ZAP7IjsVx6UUT0nOkRaAjuiZF19iouLVFlRpnm1JSoJdw0OpxYJqVulav75H4cE9+zonDk7OoL66jW3yaiaJaNqto474zRFSn3qCBgKRpIH9yKO5O831N5vakPrC7rhpj9LkYgMy6ujG94gf8Cdg8cfsCRzaJOK7Uh9YUP7+g3t6jW1o8fUwsOOkFE5U0Z5rR57erPsceoDnE1wL5k773pwoF5ddulSXfnpywYaUjdt3qU1N/5xyp1jCBJTjjB+vvnd5qQ9Q7Zs3aXvrPr5wN/xQRjqGnVtuvKQBQCAPGjW0F43LWQJ8myFpHWS2gooTfs0dPjA5imY79NhH5OyLGva7jswERKfTE02qe/iRbP11ZUf0fHHHTbss9gEvrGARnwDT8wnPvYOveeSc0eVvvPOPW3IZMmx9F267M15z5uvrvzIQBDq0cdeGBxuxrT0+as+oB9cd5tkWrIdWwHHo/6IrVA4pGt/+ieFDK8k6Qufe598M2eosqxIc2bX6eMfvUg//9VfdeCBB+qaH/5OP//ZlzV3Tp16evrU3d2tsrJyyZIikUjKdEUiEb338stmbNr0Wm9nx76BFvHwM3+T59T3SqYneRBnBIZhaGalV1t6wu5QanZE8ni19o4HtGXTrZKk8889RdU+n3qtKv3lrofcKXU8xZKk93/gneoLu0GWzoChw088SZddunSgXDS+5QpdddUHVTpzgXbv2ecGbyTJiehjTcs0/5Aj1RUMu+m2itx9sIMD6esOmfL3S+5G3X/OPOcU3br275Ijfff62/XgE22aUVmk973z9FEFCzMVC+7F6s3nvniN3nPJeZKkW9f+bciwRIlefHnTkGF4rvj4pZLc4Oh7PvA1Se6wPGedcXzS+jZZ3bZu/UB+nX/uKfLVVA45b8SkChLHl6MVV71Xc+fM1I6de4acb8YSJI6dY776zTV66JGn5Kup1KXL3kw5wqT8Tb/ltvUDdSb2kIW/Y/+Q3/fFi2brwgvOoa5R16Y9AjgAgHxYKbchvVFuozoNm8g3n9yh1C4psDRNh3yfzmUOwDhZuGC2Wu76kR559Bk9/2LbkAbVyy5dqiMOq9ebzjlxyFPx8UpLS/T1rzTpTWefqNYnXhhonFm8aLbOP/c0nXfuaWNqmDnskMW69Vff1l/uun8gbeefe4rKy0rznjdnnnGcbv3Vt/WrW+/W3+59QkaRV+e86RQtXXqmTmg4XC2PbdJTz2yUYxja1mPJ55ulP91+r57c3COjZp4WzJuhM9/2VoU8tkpL3YDK+y57q+6+52Ft2bJFCxcu1J/ufETvffdSlZdL5eWlat/XpeLiUlmWlTKIY9u26up8Ou+88+fc9rvfboq9H2n7j+zXH5cx93A53Xuz3l/TNFRs2grYbu+XHTv3auvuwclw7r7nEckwZNTMd4MssjV/Tp0+2fQuHXnkwW4PHzssOY56woY+8LHLJQ020l9z3e+iPYQGu8q855KluuD8M9zAjWlKstyh3IyhPXCKLccN6him+33H1llnHq9bf/+P6M6H9a8Hn5KsIl30llPyXjZSBvfkNm4mC2T29fXre6tvGvj7O19fPlCvjj/uMH3iY+8YqD9f+tqP9Kd1P5gSE2QTJKYcYfw8+9zGIX8nBkklt6fbd7/5qWG/69Q16tp0RAAHAJAPfkmroy8gX1ok1UdfkjvfUqPo8QUAU9LCBbO1cIE7tMnXvzK66fXOPOM4nXnGcbry05dl/J2Xnl6X0XLHH3eYjj/usKRpO+yQxSOuZyyfByKGFh52hK762hH6768O//z73/tM3F+GFAnr4refrYvffrakaJDCMhUK9Kuzq1/VVRWqq6vWPXf+SJIUChsqKyuX4zgyrSKFQ72qq61S1/5eWZZXlmUqEknek2b7tm0668zTvQ8//JBvy+ZNA5MaRF5/QkVLTh1VAEeSSjxSIOCmfe7sOv3y59/Sxqef1KttW9xGPceR09uht11wlg46YJZOP+ko+XwVcvbvVMQqjs7B47hDo5XX6nOfff9gcO/nf5bTsV3zF8zROee9WWeecZzq6xfKDPfL6N0rOzoGmi1DioSGpKvYclRqhNS7f78MOyxTtg6e4dG1X3m3Wv79gu78x5NSJKKzTj9akRKfgsFQXutMLLj361v+OtAIef65p+jiC9+kM884Tvfd3zrQQNje3qm6umrduvaegfcWL5qtC95yxpB1xoJ7mzbvGhiWJ5v6VMjnF4LElCOMj/PPPU2zZ9Xqyadf0tPPvjpQrhYvmq3TTj1GJ51wlM456/ikgQbqGnVtOjIcxyEXAGAay
mQSWqTUKCnxMSGDbBkX8RcuLdHXqrj32iQtyeC743HMEre3VPkPLo132ZyIfZxwkUhElmUN2/dIJELwEEjj1efcMdgPPuriYTeirzz7BzIoC/1hQ11BQ33hHKzM9MgJ9qm2qF9VleUJ5zxbxSUVbocSx5FlmQr0d8s0DXV398m0vIrYES19yxXDVvupT33q0AUL5uuhhx61f/WrmzbGfjc8x12o4suvlb27TZKKJR0gSUbVLPVdvfQlp3PniGWjK2jI3x/9ebOK5PR3q644pMqKsiHLdXZ1yzAMlZWWyDRNt/ON46irJ6Auozo6BFuRjL5OLaz1yoibf6ejs1udVq3kRCSzSEZ/p6o9IXm9RXIcR7G2lNLSEplxcwI5jqOOjv0qKSmW1+uRGZ0nJxyOqL3XUcBTIUXCkuVRaaBds2orpnxZPfioi4dfQDhOXq/DDMPgHEM5wjigrlHXkH8mWQAAU4NlWU7Cq3GkZaM30LFX4wirrpfbQL5e7nwXjqQNcicsb1LyYYTSrT/+s/jJ5xslrYmu34lub72GzqeTidh6NsZtZ0N0P2K9NVYkpCPX4te9Ju79pmjexefliiT5WJ+wDxujf9dnsO16DU4qvz4uHRujf69QdsM/5To/fdHl1yd8J9u01csdns+fZN/HarRpXDFCHqzPot6lsix6LPaNId/GWkbGso+5LJujOTeN9RivkOQkCd5I0vpMz8EAMBa2I+3tM7WrN0fBmzjxAYzB60ZT+/d3DgQibNtRcUmlQqGIKipKFQ4HUz4YZJqm9u7dqyOPPMKsqvbVDOzDrpfl9HREhzjLzWVXsodTq6sqVFVZLtOyFHYMBW1DYdtUabFXRiTkPuLg2LJNj/oDwSHfrayscH8WHPfnwXYMFRV5VFparLKyEpWXl6q8vHRI8CaWhz5flUpLiyXDUsg2FIgYckxLpd7ovD1yJMNQv20N9OgBAABIhiHUAAAjaZLbQJrYkNkQfS2L/j2WOW4ao99fE7e+GF/088ZoWpZqaGO9kiy/SskDPrE0N8mdo6dhHPOxPpq2tRreoB2frksktabI9/ro+8uiy6V60n+FhvZISUxHfVx+xrY3nvmZqkwp7livSLOP8fsTG65vVUIerJPbGyeX5X40acyVVOUnMU2ZDFuYyzKSjVxud6znpkI8xgCQVm/YUHufofFu8y8p9igSicgwDNm2LdM0VFpWpf7+LlVVlqprfyDlfDiBQFAzZ87SqaeeWnPP3+7yS5LdvllO124Z5T45keyHEQsljtjmOMOCTyHbUFfQVJ/tUcQeDMQMzFtjhKPz1DgyDFPBYEilJcUD3w/LiAZbBmUSbOkLG+oOW4PBGceWZLhz7ziO26MnmmZbpkKhsIqLiyjcAAAgKXrgAABSiT3pn+4p9nU52NZ6DQ/eJGqQ24Cdbj3peuv4ovvVMM75uV6Z9XRKl++xhvz6MaanPpom3zjm54oMy1RsHzM9Rs0aGqyJBZ5GI19pHAtfBuVnrPs92jKSD+m2O9ZzUyEeYwBIqytoaE/v+AdvJLdXiR0JDvw/HHaDOabpzk9Q7I0FRZJ/t7e3R4cdekiRJPcLgR7ZW5+RSqtHlZ7+sBG/AckOy4wL4PSELW3vNtUdlCLhaIDINCXTcpc3jGGJtRPn8RlFPu/rN7S711Rv0JEdDrubMC132wPbHcq2bQo3AABIiQAOACCVxIbgZrlzixiSTpDb62KdRu4Rk4lGDTaQtsntQRBbd7JlG0dIb2JDa1t0XUujr/jeCfXjmJfx+9gaTVNzkryLDVUmDfYsWa3hPUl8Sh1YWRf33aWSaqPHrFbS8oRtxoaQGo/8XJZQpvzR9MTSt0RDe0vEGs8z4dfwnifLlP0QZblI47q4/EkUn3dLlXnPlmTHojWatti6sukFN9YyMtp9zGXZHO25aazHOB/HFwDS/9D1x835kpVooMIwNfYp0WzZjhtsMAxDkUhExcXF6u0Lqri4SMFQWGaKbfT09Gr27Nmq8dVWxt5z9m6SUVScdSoijhS24/bPcWTaIXk87gAjgYipvb1xI31aRVI4IPV1SN17ZezfI/VER940BptExhoX6wgY2h80FBseTaYpI9gr9fql/XtkdO+V09eVo2MBAACmC4ZQAwAkExvOKN7yuP+3KveNk80J25DcIMWahPeWafiQRsnmPUm2vtik9+snKF+Xa2jjcLOG9zTwRfM2fri41dHlGhLyYWWSbbTJbYRObLz2x207cU6eleOQn4mN7okN3G3R9fs02BurPvr/dRmWnxUJ5XZNNC8ylYs0tin10G2tyn44rtjQeZkci2a5c8CkM9YyMtp9zFXZHMu5aazHuE1SWyQSSTbfQ2skEmG4NQA55+831BXMpsHfGOxpIrlDeDm2xh6icGQ4zsB6nejUYRUV1YqEe9Xd3a1aX7UbYUkQCoVUW1unuXPnlHb497nf7+scVSr29cc9h2paUjigIoVVVOQ2b/hDHknhgXww+zpkBrtVUuJVcYlXlseUHGlPyJFjGFntfyq2I3UGousyDDds1rNXHscdlq2orEiWx1IwGJbftKRImIINAAAyQgAHAJBMst4UDcrfE+XrNLxBWnIbdWPzrMSnI1Hi8Gt+JQ9uSG7j8mrlZqL7bDRreC+JVg0GHuKt1tBG7livhbUJx8in5L0M/GnyOr6R3BddV1se87MpoUw1j1CWVidsv1GZD9O3XEODSbHgRya9U8YrjdlKPBZtIxyL1izK9ljLyGiNdbtjOTcV6jEGEHXwUReTCQmM4nKpPIORLB25gQPLI8dxpFC/nECvFOyLzr1iyyitkkqrJDtJ8MD0SME+OV27RtxMTU2NamvrhjYqeCxt2rRJ4VBIhmFKGj4kmOM4KiryaPbsud4Xnn/eIyns9Prd+W8MMxpgSl82jOIKqbxmYJ+NomI5+/e4PVtieVa7yE2DYUl2WI5/W/K8rV0YXYklhYNyOncktJZ4ZVTNcv9vFUl9XXK621O0rBTLqJqZflmrSEbNPHceHKtI6u2U07OPgs45BqCuAUiJIdQAAMm0aHhD63oN7+GQK+vSpCVesgBO4lBZyYYnG2md45Wnmb6/LsPl0s3NEZuIfb2kjXKbd5K1EtTnOT8bsli+Nct9TFxv4rpTTVQ/UWnMVmOSspHLsj3aMpKL/RrNdsdybirUYwwAyXm86YM3sZHCLI9kWnL6e+T4t8vp3Cn1d6mytEimHXKDNna0V8oYBAKBJAmQZsyok+SoPxBQqk4tdiSiBQsWmpK8kuT0+KVwcOi8MKYlOUl6upiWjLKaocEbT5GcUL+c/v1xCxoa0lPGSd5rxigud5dz4r42fKm49diS5U2dMYaRUQcno6xGsiNDvwcAADACAjgAgFQSn/KPTZC+UZlNqJ4N/yg/i0k2P0ih8Y/j95fFHadV0WOVTQN8rvMzcdtrNdjklOw1UlrSWZ6k3MZ6cLUUSBqzkVjPchV8HGsZmcjtjvbcVKjHGACSMirqRl4g1uumqFgKBeR07JCzf7cUCaq0pEQLFy6UY1iyY0EMw3J7foxBOJzYe8eQ
4zgqKS6RYRgKBgLRXjjDhcIhlZeXSdGRQJy+/dGhxAaDGE73voFgjVFeK6NyhozKmTKq50glFQNnZ8PjlWPbcjp3JQRpnMHePLFeLt6yuOQabvCmrCaWfCXOhTMgFvBy5AZdPEXud03LfXnLZJREp/QJByVPkbtsJCwVlbrbHmh5sWSUVkue4sHdjR4/AACAkRDAAQCk0izpEiUPHDRqsAG2ECQ+nuqfxsetSW7DdH2SPEnWQ2U88rNxHPc/2RBjK5S+F04jZSSrMjKR2x3tualRADBJGGU1bpAgFUeSYcqwiuT0+N3hv8IBeTyWfD6fFi5aJNu21d0d1zvF8qQdquz/s/ff4XadZ4H3/33KKrueqi5ZbonsJI7TQ4qSAElIIIFEmQAzMAww4BDeUKYwwDADYTrTE368DBnI/OghGQQMZSgGO1ESbCe2bEcuim3FRV2n7rP3Xu0p7x9rH+lIPqqWXKTnc1370jm7rL3Ws56199G6133fZ2OtxZ+yDO89SmuklFhrEacJSjjn0XX/MHl8fU59bjmot7s5AUkbdApRMnqeAKURKsaXQ/zCgdXLwWVLdUk4AGcR3bWIyS2IsQ2I7vp62VKfeI53dbBFnVJh3tk6MLOcFeQdorO8rPV1ybSkdWIZpjwxxkIgJjbVt7H1iO46aHSOZ0rVlwrYOqATBEEQBEFwBiGAEwRBEJzJZ6ibjv8Uq/e/+Eme/V4y52LiCt1fb+fkHiJQ9/K4Dpikbtj+jhfYeF5I8OgTp8zXlVk4z5d1fK7m9qWaI8/2+z7bn01XclA4CIJnm5B1tsnpLGfeaI3vzxzvoZKmKVu3Xs2aNWsoy5IDBw48fbn+ma2acw5rT87i8d4jhDht4OakVRArzkOoqL5JJcTkFuT01ci11yOURggQSiF0hFAxyKhef1PUPW8WD4+yd1YZnrwHg3mEjhE6rgdMiDqDJk5HGUsZZD2EikAohJCIsQ1PX1bvKDiDiFKQGu9tHaCRdSBpZeaOnz8Ipqyfq3Qd9JGq7o8TpfXzsx4Uw3odABE362BdEARBEATBaegwBEEQBJeti1UOaZ76ROsvUJ+E/Q+cXE7oJ0ePPZfuPmWdXs2V2XD81CDFL3D6hvfP5nieurx3cGn7EM2PtvvTp4zNvufROp6rWzk5c+Rs+2LiWZojz9XcfCafTc/XfRwEV6RH9vx+qB11ipf/xvjyjxuAzurPqst9yQ03UO78l7763K/1pyfH5//y1r/pvPKVr5zIspxGI+UVr3iFAxaBY2LyKho//kf4vL8RU64aGRLjG7APf5biNz58ZPQ6ADqdDrt338uWLVuYmzvequxqRn1sAJIkIU0TXvHKV/U3bdoU7Xj/tyWzs09vayYEJ8q5AeRLiEYHpDLmzk/l5EspgH7N39l2/AXW4IcL+MVD3h19zLnZJwyQAwOgf3yFrnsJylse27e3HqVsUcirX7VR3fj1LbluW12OzuS4I4/gnrzXV1/8LYezi/pV74v09u/rUObYvZ+luvXjiz7rHeV4qMtD0W+oV71vo7z6NUqObxxl2xS44Tz2gb/G/O1vLQBHweMXDyf69d+5UW59VSTWXY9IO/jhPP7Qw94++rfe3PenBujF3/wTHfWi7YlfmsF86dNUf/tbhx75yv9eutzn+PUv3fGsvZf3/mmfMY8+sDN80IT5EzwLx9pqwvEXBBcuBHCCIAguH/s4OWizas8GpdQzyUS4lbp00WMr7pugPnn6XJ4IPfXE7AepT9ye7qr5D16mc+Bc+qVMPAfjeesqy7vU8+UznBz8uPZ5uI7nOrdX7tdbzrIvbnmW5shzNTefyWfT83UfB0EQnPSnGqcN3gDOIddfj7njU7763K8NN61fu/jZz39x4rrrruvs33+AzZs38Uu/9Ev+vvvuy4E5gOh1H0R01uKXZlbv9XIG3W6XVqt1au+bk07WNRoNZmdnmJmZ4eUvv1las3pmjJISUxkYBUfcwYfIP/ZtgDD28S8fos7ItJzIE1p+Hze6GaACivpvWs2Gzddw402v46WveCNSSh66/y6+cPsf01uY8fah2w7Yh27rAo0V5z78aBklsFjt+iTmjt8x3vsUU7jRYyfxw8Ws+utfehIYE2kn8dYIqsyP1gkgO/FkV1R3/M6T3PE7XXSciqQl/XARvFtedg70i9/7Z4uiNbnWD+b0KdscBEEQBEFwkhDACYIguHzczcknqW9RSn3CWnu8Ab1Savkq9XMxweoniVfLYniuywt9hpNPXF872s4PrfLcW7i05bSeS6eeAL/2NNv/bI/nrZxczmo5G+ZSZ259iJNP6F/7LK/jqUHVCwkYnLpeE9Q9Xt6xynH3Hzh7n5eLNUfOdxsv5vte6GfTxd7HF2P/BsEVJ1xBfSZ/A9A97cPeI7pr8AsHKD71j0tg4a9v/9z4dddd196//wDj4+MYY/nZn/3Z5ewbC6CueS1+MH/ewRuAZrNJkiQry6YJVpRi996Tpglfuf9+Pzc7y/XXX6fyvFj95EMUMzM7C8cDHx77+PE/UyvgyGqvk0rTbHZQUUSaNhkbn2bt+i1suWYbE1NrieOUhflj4D03v/brSTvr+KPf+S84B41WtyerrGerkrTZpNdfEg5/UrDEV/mx0TZ5Th9IqYAZn59TkowF5jEl3pSj5j1PW7b1g7lD1AE7G46N8PkTBOH4C4LgdEIAJwiC4PJxK0/PhPiyUmq5H8irOb/Mk8c4kcVwK/WJ0Gt5el+JfdTBo+d6208tNXXL6PdfGK3j8tX4t1zGc+DUzJn/MNpvn1mx7255DsbzVuq+NLecsm63jO4/df68erS+FyOA8lOcW9DyUqzj04Kqo99vHf37oQvcF69ecXwu74sPcm5lEy/WHDnfbbyY73uhn00Xex+vFjQ/vu3W2g8RBEFw/sZO/5BHjm9k+Cvf54DeH/z+77e3bdvWPnToMFJK2u0WH/vYx/zc3NyQUXkxMbkFMX0NfrhwQSsTxzFaa5xbTjZBjW4n+cT//ESpo1hdc/XV8tChg6suSwjB/v1PwShgcTZSRbz+Le/hmutfSprEQkhFHCfESYoQgjwbkg2W6PcW6h48wNzMQdIk8ml7ik03fzfXbZ3WL91kGt1uN35k3/7+p379FwtXZau9nTv1ju74NL2FmZPum9x4Ey9++Zu48br1qYxS/4VdnysevvvPljeQU2JD9U47c3aNDVM+CIIgCIIzCQGcIAiCy4S19hOj8minlk479aTo8knTM50s/SAnGr+f7aTqTz1PhuBDwJc5+Ur/a3l64/TlMZi4DKfBL3By35eJU34/n+2/2OP5U6O5+epTlnem4MqtPPMsneUT9ucS3LjY67haUPXtnAjGLAchzma5NNjEKfv2ltO8J5w+E+dizpHz2caL9b7P9LPpYu7jM267Uuoz1tqQkRMEwfmIWNFb5lSis4Zq3124B/5s8O3veZd8344dY8u9ZlqtFlVl+PjHP+6AHqOggdp4I6K7Fn9s3wWtULvdptNpMzMzu3zX8fVzzjE9Pc1Sb4k/+IM/NG/e/tZolTgIAFJJsiz
j2LFjlroU2hm1Wh3+3vf9CJuvvpF86chEMyqnjPXe+xz8EO+h0wJa4vhpDSkF7W6L3/2d/3OoGMwPJ5s5L7u2s/aqscX2xHSX22/9cstU2QFGQZMoili36Xp6C8fq8RWgo5Srr7uBbS95FTKZ4HN/9RmSSNLsTDK9cRtja1/MdRv0+o2dxW53cgPDxRfPPHz3n9U7wXteevPXEcUJTz3+CLPHDiKEoN2dQCmNEIJGa5yZY4cos16Y7UEQBEEQnJMQwAmCILi8fIj6pOjpTlbvoz4ZfLYyS68+j/f7zPNk2/dRl5U60/YzWt/PsPrJ4xe6z1CfaP7Js+yza8/ynEsxnvPAa6hPlP/kOW7P2y/CmMxTn7T/9Dk+92Ku4ydGx9ItF2Eb3kEdPHv1WfbFh86yrRdzjpzPNl6s932mn00XbR+PguYXY/8GQRAsa5z+IY/orKH47Z+zKZS//Ou/23XOURQFUkrGxrp84QtfZN++fQUwXH6Vuvb1x19/SuuaczI+Pl6/+kRmSXp82UoRxxEf/vAPVYB917u+qbkwv3pV3TRJmJtf4MD+/YazBHCazTb/8MM/wZq16zi0/xHaTal0jHTe0mg0aDTa9bacsjlaSZqtLocPPr7GmuKpwewjrpG8lDwvWepnDId5Sh2AygBueMVbeOe3fj8zh/dhnSfRAuMjJsa74Ep6uSBJm1x93UvZ9vLtLMwfYW5+hlY01p6aHKM73kaWx6ZHy8sA3vDW97Jh01UcOXSAfU8coNmImBifACHQWtCcuJY/+PSvsf/BEN8PgiAIguDchABOEATBZcRae7dS6jWcKHe1fOJxudTQJ6y186MyP2fyU5woebR8m6A+qb9vtKzl8k3PJ3cD11GfmF1ZMm6eE+WTTi1Htfz45eKnRtu4nI117Whclrd/H08/cf1qVs8EuRTj+VOj131w9LrlubXyPZdLX12s4OBnTrOez8Y6fmj0vA+echzdfZ7Hz93UgYdbVqzXavvi2Z4j57ONF+N9L9Zn00XZx9baDymlLsb+DYIggDNk3yA1FEuw90+KD77rzfHk5Hh0+PARpJTHgyu333471MGb49EWueEGfNHnQoI3AO94xzuXP++W7+ou/75p00Zuu+02PvnJT2ZveOP2eN3aNWL//v0IcfJ7ee/pdjo8vPdRyrIoOEMAp9sd53tv+cdMTk1z+NB+EArn8MZCd2ySe3bfa++8804nhJB1zswJQoBSWszPL0SAdF67ynrvRwXflFLu+NgIybUveRPzc0dwXmGcR3vI8orq6BHSWCIbUxx4/CEW54+x6bpXY40l68/z8V/87UdftHVyc1W55J7d90jqoFb24pe/mUZnDU88/giIiOn1VxMph7Ul1nqsdzAcXOiuCIIgCILgChUCOEEQBJcZa+089ZXuv3CG53xGKXW2/z4uZ1ZcCPEMH1/p1gs463C2slunBrDOt4fPrRdxG89n+85nmbeeZXx+4Tkcz30XsA4XMndWesd5Pv+ZrOOpPjG68Swt6x3PwRw51228GO/7TD6bLvo+ttZezP0bBMGVLTrtl1/SIt+/F7Dmu275kRjqwIgQgjiO8d7zF3/x55ZRJggASRvak1BmZ35X70EqRGPsad+z11xzzanrFy8Hbw4ePMh73/ueIUK57/zO70jm5mZX/ZoWQoBQ3HPPbn/S+p1ifHyK7/3QP2JiYoqjRw4ipcKuqMjWarU4dOiI+OrehxeA4gx/EwjACSlO+4QXveKddCc3iSo7oJJIxqn2WkmEbuCloLKeMi+c9QiSWBPHmmqQqZtfvDbOZrbar9x3zxLeqTRJ+nmeLU6s2cSr3/QeTDGfthIhK2vc7Hwvb6eoiRYN61GR8i5tp6U1pghTPQiCIAiCcxUCOEEQBMGV6NS+FXeHIQnjGQRBEATPsdMGcGiMYe/9bXf1ugnxtm96T9zvD45nujSbTY4cOcqDDz5YAeXyS2RnGpIO3lWnf0drEO1J5PhGzMz/hlOCIldfffXxDB/vfRdg06aNHDlylFe/+lXFYDDMfvTH/klbK8RgMERK+bS36HbaPPnkU9y7++6K0wRwoijm27/rB5mcWsORg/uRSq2+uraSwKA7fXW5+ZqXUmSjcZCKY/sfZnF2PwBJ2uKaF7+abNBjvPv05Wy69macGcYv2RJvnRzvoJTCe4FzlsVej7yoXN/LnvXq6LFDj9Obn2GiGbW3bW2u23btd9D71nezceMm/5u/9dv85Z//iRubWEe3O86EPrSlOyHFsPSk2vQ3ru20pidbwjvotFv84Z/dWhzdd+d+Rn14giAIgiAIziYEcIIgCILLyds5exmpX+HppbQ+E4YujGcQBEEQPF//fy5UBAf32Zdcs0UkzabsHZs5/liapjz44IPMzs5WrAwMpF1E2sEPF56+wFFQRm68AT+Yo/itH7HVF39TrHx9p9Phhhu2sbTUx1orx8bGJtvtFp///Of9t33bt1Zzc/O993/gOxsvv+mG5Kmn9q8avHHWMjE5xV/+1d8APmNFgGmliYkp1qzbwOzMkdMGb3q9Hldt2cIb3vTWyVe94d3F+NR6URYZWglUc6394z/43cX7Pl//CXLTa9/OxJqNFPkB6D59vWw2S2fidWW/eNQceuBBfXS2hzEVrVaTl9x4I522lmsbfvxtb36d+Ms//+MjD967i2/74D+o9jx4m2u1GnJiYpI864ts2J8AlnoLx8ps2EdOKOe8U3jLy1+8tn3w8Cx/c/v9vtVqirHxSf7gM7+hrXFNYClM9yAIgiAIntEfiEEQBEHwAvRpTvQEWe5DsWy52fip5b5uJWSMhPEMgiAIgueeOuOjpvBjY2MAOHeitpgQMD8/B6f0lhGtCdDx8WDNcc4iGmOIiU3Yr+5y5R/9vHOHHi6BRVYEFv71v/43NJtN9u8/wObNm9YC4mMf+5j78R//8QJY/JZv3ZG+51ve2T6w/6lVV9d7T3esy1NPHeD2z95ugP7pNm16zXp0FOFW1kw7xezsLK985St417u+qdvvzVOWs8iuYqzbYv/BR3jgzj8ugBxgcs1GTFkgYrnqsr50+6d57KG7/cEnHn4Cl02Pxs4CbNx0dfqRj3y4G0WGt27/urHP3vZXSw/ec+uwyBaGe+/bdeCbv/ndm7/1W98njLFQZyxJpTRCCLzHO+8ZHxvn0ccP+U984n+ahfmZAqioexx5YLBiVT4G/Ajwi0KIHzvfCeNP3bfBZeHUPlLn6PhcAsJcCoJwrAWXkRDACYIgCC4Xyw3EJ6gDC+dinroBexDGMwiCIAiea57T9nXxICRCCg910GblOaDFxUU4pSyXaHRAqdFiAe88COS666EqKP/oo7b67K866oDCPHWQAYAf+MEP82M/9qNUZcHmzZuaTz75lP7AB94/+PKX73ZA73u//0PtN7/xtWMHD+zHWneak2CetWvX8Uv/76/4PBsOOTlwcZI16zeg1ZlPT9Q9cA75Bx/ae9LbtVttDhw4gKnyKeAAQDZYAiFPv6zOBJPrr8WYwq6ZaA5uuOHG5sR4JwIhDh06IBYW5r0SYyJNG6
xbv3H8ySf2Dffetwsgu+ralw6KfNCOk+byPjvpbJzWGqk0v/mbv2kX5mf6o7FdDq6JFc9/J/Cjo59/FPhT4C/DYRBcgDCXgiAca8FlLARwgiAIgsvF28/z+bcC3z76T3UQxjMIgiAIztvFvHq38c/+2lBnaTyN90Dalkv9o6MUlZVxgOXfOTl9xZoTUR5nIO0oOX01ds9fuvLPfsG7gw8VnJJ184Y3voX1W1/Kxutew84/3cWrXnYND9z/5eJ9O/7OIWNs46U3vTJ6z3u+ZeKarZuaBw/sxzq/6hhY69i0aQN3feluf+cdXzSj9zmt9Rs2UxT5GQdtamqKL/ztne7//ukfD0bbuvKNFaPsGwBjylVLugFIpbnpzd/NTS97SWtDa256vK2TtWumqKqKsjJorXnyyScYDIeMjadIJSNAdsfXuDe883uYWrvZDfpzjAI4TxNpzfzCIr3eogXmODmwtrzTUuB/n/LS/w2sXbkdQXAOwlwKgnCsBZe5EMAJgiAILhe/QB1EePvodi0nl/faN7rdSijzFcYzCIIgCJ4bZ7p697QBHMo+bH25emz/V0yV9askSaIsy44/3G634dTsnaiBaE9B3gedIKSy1Z/8O1/e+v+znJJ1kzZafNffv4VXv+aVfG3/Ag/ufYTb//Z+Nq6d5E9+/1dtpzveef/7P9h5zWte2a7KIQcPHkIIuWrwxjnH+HiXvDD85m/8hgPfA7LTDUij0WLjpq0Mh/2zDp6oAzV9zlCODSEYn1xPWWSwSoxlasM2tmzZlKxVX92UolhaSti16/Pu0cceZTAYemcr/9a3vEW+5GUvlVVlcc4JQHS6U1x17UsZLj0p2uOnD9wtx8y0jo6XZVvFvwA6p9zXGd3/L8JhEpyHMJeCIBxrwWUuBHCCIAiuUNbay3Gz7h7dfiHs4TCeQRAEQfA8c7ard6vTvdAPFxEveYN4bPcnzdf27B5c+8o3jg+Hw+MBlPHxMTglgOPnnsIdfgSyRcT4Rsq//PjR6vZPMHqv4xmzU9Pr+N4f+HEmJsZ54vHH5GLfy/F2qsbbcRKrsvkjH/lwc91kU2slOXb0EM7502a3OOdotZqMj0/wH//Tf3GLi/M5sHCmQbnxZa9gYmqaY0cOctoKctR9A073viu96CWvZ8u1N3Fg/1PQlCetG0B3Yj3Xb9BTqYNGZy1/86d/Zv/kT/5PAQypA03V277hm7ZESsqqsscbFjhnGPYXaTfPnnUlhECcPj3rRuAnTvPYPwN+G3goHC7BOQhzKQjCsRZcAUIAJwiCIAiCIAiCIAguvbNdvVue9pWmpLHleoYzJv693/jV/r987fZx7+vyZVmWc/XV1zI5Oann5uaOv8Q+/mWy//A2UBohJD5fssBRVmSFpGnKD/3wR5iYHOfokUNxI5ZXrZ8QQgqElhKlJB5Dr9cb9bo5fdk45xytZpOpqWk++b9+3e19+MEKmOH0WSgAvPTlr6YsizN2ADrFGZ/10le9lSJbQgoQAuEBrRXOWQE4HUc0my3FQILwDId9CcxNrH/xcMvW63jxi66Pr7tmi56dnaPTHUcrBUBVFdjjpdku+EIoAfwyp8u0ggj4H8DbOKW3ThCEuRQE4VgLrkwyDEEQBEEQBEEQBEEQXFJnu3r3RqA4/cs90nt49bu7v/E7v5ODz6emprDWMhgMWLduDa961auffnLJFFAM8PnxNjcnRR6+/hvezlVXbWH22EEiLZECqSVCCrAeigrKyuFcHbw5HeccnXabyakpfvt3P+W+8PnPGergTXGmQVm7bgNbr76Ofm8BIcUoQLTiRv2vUhKtFFKeOcLzste+g41X30RRlsi4RZI0ibQiz4fcdNNNIk0b+tihp3h436FeqzNBPhyw/c1vFn/vu7937Pu+/wenv+ODO9a/862vuLrVjAVCEscRGzZuUABFkeGsRUklltfn1GCWlAKlFEqd9lTLPwDeepa58pbR84LgTMJcCoJwrAVXiBDACYIgCIIgCIIgCIJL55yu3jVf+I38TIuws/tp7Phx9eicaLzv6994IEliOp0ORVHHSN71rncl5/N//HXrNvCNb/8mDh08iBAK7+tLh52vb/4cryN2zrJ27RpanS6f+J+ftLf99a3LwZvBGQdFCN7z/u9mas1GWu0JuuPTT79NTDM2Pi2bzTbNdocoTs+4LlMbXsyhQweZm1tgYWGe/Ufme1HcopE2ecfb387bvuEd43MHH+DXPvYzi5/Z+adLxsKmzVv4tve+u3Pz9WOTa9pFd/fue/jDP/5Ln6RNJifG+bb3fUB3uuPpsL8IUjG5duNofbpEUSwAlI4Ym1hHuzuhmq02jWZ7tXJvU8B/Psfd85+B6XDoBKeb6mEuBUE41oIrRyihFgRBEARBEARBEASXzjldvVt94de/R7/pe/6Cp5dZq1UZqtFFvu9frfujP/zpfR/9mZ+e+ei//ffT1lqsdXz913+9AhLqPi5n9W3v20EUReR5fk69ZU7lnCNJYtav38LXHn/C//Zv/4772r5HK2CWswRvAJTSHD46w9G/+guqavXqcVGc0u8dW3ji0fsTraPm3OwsgDrdMr/455/EOQvU2TzW0//y1VfPbd60cXJhYZ6v7dvXBJq4YnjbX//Fod33fNlcvfWqsUazJb1zfn5hwe99+EELLH1170Ni69arxufn50WeZ2PWVMPdX/xjf+Txjcee/OqX1+oobhw9egRA9eaPsmf35/ns/vuPLC0eXWutl73FBUkdvFsOhf0n6pOBAPuAa1fZhOX7p4D/CHx/OHyCVYS5FAThWAuuIML7UJ4vCIIgCIIgCIIgCM77P9TirE1bpoC9nDgBdCaz6vo33Bx/4N+28I5VW71IhVp3Lf2PffAYj985/4lf+ZXNP3jLLc0sy2k0Um666eXze/Z85djZ3ujmV7ySH7jlhzl08MDKu+NYc/XZXuu9RynF2rVrKEvDHXfe5X7v9z7lrKkyYI6zlE27QBEnMpgqztQvaHXt0TIq6gDXylJyMXXgS1IHWwwwHD3WGj1ecnJQKh4tj9H2mhWPNTkRuMlG/24HPrfiOe8G/u8q6/ku4M9X/P7WU173tH0RXHGfK2EuBUE41oIrTCihFgRBEARBEARBEASXxqlX765m+f4p++jf/mvRXWcQkjqIcwpT4ZZmaX3o/7+G9tr4lg99aP/P//zPDxuNurTYT/7kT46xauTnhO7YODs+8O0s9XrnfCLJe48AWs0mGzduZGp6DV/68m73X//bf7e/89u/WVlTzQOHuTTBG6gDL4PRrbyA1/eB+dG/9pTHSmAJWAR6nAjeMHq/+dG/eO+Xb6X3fjC6mVPGcTh6/pA6eBMDv7Li8U9z8sm+lf4C+L0Vv/8PTl96L7jyhLkUBOFYC65AIYATBEEQBEEQBEEQBBffduD7Vvz+/5zmeT+84ufvy/79W18sxtbDalcGS4lfPIoXkuZHPn0VjSn50Y9+dP/f/8B7ewvzC3z3d3+XfPGLX/y0Emzyuq8j/cjvA/DG17yC9Rs2srg4/7Srj4UQCCGQQqCVIkkSut0OG
74tn9RGOtZ5Sk6EChlQNRTLJ77/De4zbuJYRBDhKEFASRYsulU9Qmqswfm0dmhoPP205zS4PBqI1IBckwJc0NeSJRWhAEEh0IhBI0t1SoNac5e7zNzBUxiXZIDzKH9f4CWks2759gevsE2lU5e+I8Jw8vkA8h0AFmkBDLBnEYk9oMJxVBKLDGsbzkuP3Lj3Pbm7eRSc3IDBESjDXgBcPeiJXVhMcePs1oZKmGMYPBkN2XTvHGd1xOEKesdxdAaqJA4IXBeA1SIIVHYHHC4jVIJFoIlBd4fBGDpnNAoJB4LxBotFYIJB7QzmGsQ3hAehwWa1OcB49HConSOVIGxWvlHCqAzC2zbWeVt//tW3nfez/M1GSLnTsP8plP3EGvE4CPGKSLvPDlB7nqmgOEusLJk+d4+P5nOHlsnbHaNmxq+fiHv8p4q8aN11/H7NwsUlX5nd/8OF+/8wFCDYFQbN22lWuuO8DkxBZGgwErK4tk2QipFFkOSMmbv/s2rrlxN6NknSgUfO/3v5lBt8/v/vbvc/zUYS4srjMz1yCebDHeqHPu1DKL8318ViPPDYFWuNzjjaLXTvnoh27nmUPHOX92hagSkWUpmzaN8aY338bey7ag1JCkb2nWmtzxlbs5M3+M217zEl75muezcLbDyacFn/3kp0izVQ5etovtOxr0ByEnjp/m67c/ShiOEUURW7dv4fSJIaHU/MSPvYudu3bzxKFHMMOMVn2ChVOLzC90ET5A+pB+P8Urw8ymGXrdgCAI8NYXQptWRWwaAuEcXmmEVzhjyW2OEoowCME78jzDGotAIoUsLhcCKRVKq+KT7mDH1i1cf901HD82z3+983NY5UmTPpnr0WjWkT5gYX6dcLTO9j17+Z43fTf3PvIgX/n6nYzVG1x12QHkSCAISY2h1Rhndm6aMNTENUmoBFoLcgPLyylHj53jkUefYOl8F+lCmmM1TGrodAfUqgG3Pm8/+3btZNvWGVw2YjAa0B6M6A9ivPMkJsOYHGsdxnkQRaSa1oowiFBaMRoE5VGnpKTkrxWTE61vd7ErR6akpKSkpKSkpKSkpKSk5K+e51TAqTUqCCEBNmLRBGGoCJVCSkVRSOMR2uK8Z3J2jKuCvXz9K0+RZiFhqACPcx7vi4li7wDviOIq/YHl8KFFXrLzUrwaMRqk1OMYL3KshVDFxEoXEWPOgbCkzhPYIlKt2DFwArzzeAHWO4R1eA84cN7jPAgvLkaVAUXMl4A0MaSiR2O8woHn7cALC8oxWB+AEDhnEdaihMcrt1HiDhJHmg+pz9TYecUWnnnoCBfO1GnuqzAcJpA6nHNIKQGPjjSViuGSG2fZdlmT9nJOFFboX+jw5F2nMB1LtVHH+nyjF8gjswqPPXiW/ZeFbDqwhUHWxzlHnmVEYUi70+f2z97P2tmYMKwUsXahZd9VE1TGcpbOrRPEVbxKMCLA2RHOe5TShKHGiAxjili6jaEs+oQQhXMJBR4kGoECFI4E4RUIh/c5xlqcdXjncRiszXEItFboQAMa4ZOL20YUE+NBrAjqbb7/793GQ/cs8omP3UN7xSO8JHMLvOVtN/O6N7wA53OkVFz2vHFe9PKDfO2Lj/HRD95PIDazZdOVfO7T9/HUE8e56qpruXChz6c/eSdT05vop23i6AKvft27iFTI2VOLrLXXqVU1y2vLZMIhKzFTrQlaszXml59gcroBeojXTe6483ZWV7p0ejmzMw2s9hw9fYydW8a54eYr+dqXH+bc/ALVSpP+SBGpOoNuzHt/90sMhkMCFeH0kIm5nMuu2skNN17Lwcu20B/1SYYjokDixYDFxRPs3DWLigL+y6/+Jvfd/RiBavHi11zDTbdeRr3usC4lGUrmTyc8/OBTJKOEwTBnNFrB4pjdPsHDT9/Db3/g93AObrjxOpbW5hn5iDyP0bqKzSOsH2GFQUUOHQuqcYAUAusLkRMMQkiUEEilEIRYYyH1IBwqsEgpkDpAuBgpIqQI0ErhvAFpcSIny0Z4YWg2q/RHPd733q9xYXXEd7zuGk6vHmKqMs1LX3YD/+MPbqfXi7gQ5Tx0/5PcYCaI8t0kgyN4lzPWbDKPQYiINE8Y5Rmzc9MEusLS+Q7nl85w/tw5Tp1Y5MzpFZZX+sTVJo3aJBJFnjiEzjhwYJpXvfyF7NoyiRSONE0Y9C1RJSJuaUZphDWGzBi891jnyG2Od4XzKI5jwrCIPkyTUsApKSkpKSk5ctdH/59+/vtveVP5JigpKSkpKSkpKSkp+XPxnAo4M1MNoijG5DnOW7QqelyUAmc3pBAB1hmUUtQrIdtamzHdEfc98AxWjSGEIIpCnLWYLEeiwFucy4grVVYvdLjji4/z8ldfTdzMgYgwrABgU4/3lkBqbG7oVUcM0xwlFToOL9bseCGwG0JEsXrekDtLkqRI51FS4iSFE2fjuQkhC1FBCVACj0XrAGQNjaAaSBAeYxxWGAIcQm04cEKBwGOdwdmUbQdjTjwF54+OaO2OibVAuehifJz3npAImUGyNqBaCwlmNGjL7I5J5jZP8NDnjtM936ExViPzAV44ooojyySPPLDG1Pbt5KZHvzskjkN0tcbM9AzGHGMwgEY9QFhDZdzR2ATdUbsYnMwhdeGOyawhyzK00ngfI4XBWoM1Do8rotBEIdZJIZFZERmHdEghAUWaj5Be4zF48uJ18h5nLdYYHAKpFZEPi8lwkxcuKFzh4hGgpCDUkjCO+PSHn+Cx+9exeY4MBFt2Vnjl617I7r2znF06jvcSrSGKAoJAcdt3biNsrPDB3zlJr1en0Zji5HHB4UP30ZyssXP/djrdJbbsrnD9zXsZn1Ysnxxic4twntFwiMkN9VqTZDgkDmJWl9eJ6hIrzrN5c8yRZ+b53CcfpBrsBJfQXmtz+Y1XcONtN1DVisfuOcTYuOLqa57H44/MY/OQTq/N2vp54rCODgK8Trjx5t285o0HmJqewmSWcxeexuEIA0k9qrO42OXksTYz4/t59IlVzq0v8PI3XMuLX3I11VrIevsCncSSJhkIz5Z9Ibsu20OWWYQKAIkQmjRz9AcpanIHK4vr1OcMN1+5h7jqOfl0zl13pOigSa3ZwpGw1jvHaGgxvkIURQghcN4ihEd5gVYBWqSF+CkcPnBIJTDC432OFQ4vQCARBISqilYSFSqk9LhhF6UzVvptPvnhL3P3/c8QtRwvfNkeVtsh0zNzTM3WyYYjnA3ZtGOca6+Y5WN/+GmypMn+y/Zyy63XEUaaLPWM+gaExEnHfQ8+wuOPneLk0VWGg5zeKCcIQuoVxdaZOWqVJkvtFdaHXSYmp7j+BTu58rpZ6hVHu7eK85bUj8jCBBUUQmW1IXFWUHUKL4vPq7FFD46zFq1TpLYIAXGt7MApKSkpKfnzE4YxtVoF5+Gf/szP7vqWq4UArxV5moPzWDx2mJELQTbMSHtDbzygJdSi4qvvKCu+YhoHOqhy+tQRHr3zQyTDb07OkzuuJf6B38UP1sGZ/+3nIipjDH/2eeWB8H8D770oR6GkpKSkpKSkpKTk/y2eUw
Fnx0yLuBKTJBm5SQiDgCgIUQq8B6UUQoCxDuc9Gokwjhffsh/vh9x5zxnGxlobXTQbaR5Cg3RAhsMRVWLOHh/w+IPHed2bXkCSGLRWhbPDOCIZFJ06uaMz7NHPErRUeAR+o3tFCIG1Duct3jpSY+iMRgykvdiZ42XhxvEbP0X0GlgV4BUEyhIHEdji7DgKN7arLLnMEXgqUXAxQg1RdL0kNkdvDtl1YI7jh1a5vFdn27YZsv6GBUgIhPM4WzhyvKhgBhahLIEIkH3D1p11pr77IJ/+4L301hRRpQa+cLXE8TinTqxz7MkzXHLVduJ6jUolplKroVTAWKvGBZUgnMD5Po1mRlRTZJlASUlAAFQZmYTcCDwhXmhGI08QhFgrcNYVZirAY4vJciHA+qIPxA+xPkMQ0qiHCJ+Tmxwpi84h7zzWFS4sqRQoRY4jzzf6QoTHGEOWJuQ2p1mv0ZQN7vz8Ce796hrVeohhyLXXj/P6t15BXA1ZWVkmz3O89wShIjYhgdJ0+hlXPX+KSmPIFz56nt6yoBrVCbREhQZCySXXzvC33vU8qrUa557o0mknrLfbTE9Pcer0EZJ+QjIoXE5zrWl01qS/OmJyCrSv87EP3cOg26SxGUI8+w5O8+o3XMr5tucDv3k/Tz92ms1bJ+l2c7btibnq6n0M0g6nTs2T55pGvcHevVs5eNlWdDDi7Lln8D6lUotw3rLeNTz+4IAHb7/AqFNnyS8RTCm+/++/kpktiuGgzdqFDt55sIpRmmNcwtA4orBOrTaFNSOkKvqYVOxoxYKpuSmGnSpPHzpCpbaLqckWFxZPMxwOaNUmiSuOzHYZ5RWGWUZqBgRDRRQHG06xwunmrEN4RyUIi9dzQ6wBj7EJ1mXoUOLJSFNP6FtIGdFe7DDoD8lTx+J8wpMPdzl5eMDYRJWXvb7F3sthV74bIRyHDj1JJVQMXJ8t20LCiQQXGypWMKbgyOEn6PUtjz7UR1cqBDJm/jT89yfuAmGJa4IgCtk61kQgGQwzllZGeJ9RmUp4/i1jXHntdiYmWiyvnmW176nKmFGWMMxyjBBU6zXiYMNRIyk6nITAeYf1FqstTrqiu2lkCaOQctalpKSkpOQvgpQSrYro2VaVb2vjNI6gGoEUbHTRgXWQG3ySi6QzZLDe9500x24crktKSkpKSkpKSkpKSkr+hvCcCjh7ZqfQYYCzjixLio4LqRFeIKQgCiMQAuNS8jzFOYtJcxAJb3/zy8jaX+aOh44zOTeF9AatgiIKjWcFg0IyqDeaPHzfGS7ftYdXvfxazi8vMvICEUE1jKhUqmRpglaWMRcgALchcAgpQUBuDBCgpCR3luZQM6qHOOUwLic1GQ5QMigcB65whCgpkVISKIjDiDwDiSIOo8LR48A5j8JvRMJBFIUoGW5sF1wkuPTSbTz9wGmG5xy7rtpEV7dBSIwZgZB4J8kzQ2ocxgmcFFR1SCwVSWfAju1NXvSyy/jD338YHelCmJKyGNNc8vXPHGMqmmPPnt08eO8h+ulT7Ny1j7wtwCSIOMAKw8REnYnWOBKB8WnRY+IEwmu0EhjrSDKLxSFNitYe4Q3OKkIdoLTaiLkLiOKAIPB4q0lMjiBk/YKlUrFUGwFpCkJ4hAOsxDtD4XEqhDUlFUpLvM+wuUHk0KzWmGyOccdnF/nyp04zNb6JbnqSm148y+vfci2jZIW1CyCEQjgY5X0yYZFhiyhoIoxGCcPVV+3n6Yckz6ymgKValWipiIkw65aP/Le7mJzcRK+nOfL0WSqVKn07ZLXfYzRymKTPpq0t6jWBQiPSMSJf5wsfP8r9d69w1RUHiVvrzG2fo9bscOrUQzzy4Dxnj6wzObUTHWmW1k9z/c1bcdESU5NjzG65jLlNc8SRxPkhSbaI9yGVpqTfF5w4lvDYA+c5cewk6515phr7cGYKLeD6528l1EP67QhrJWFUQ6DIEwgjR+THUUoiJaRJhncWJcEZgfU5yXBYOETiCtdccxl5asBZZmcmkMIipSeueGamm7QamkCGpGmCMyOy1KKFxFrDYJDT72Y4C2PNKs1Wk2rNUokChNQoV6WzLlg4m7K+IrlwfsT66kna7R7dbp90kGBzQbdjiKNpwqDO9GSTnZuv5cE7ulxYOcFNL9lCNvAM2hnb53Yxf+Ic68s5e7cdZOHICufPr/Mr//kxnI2JgjnCaIzc5DgvqdQ0jWaDUTKg3V4lzxLyrMf4dI2dl02xefsYm3Y0md4Sk6RdltfWGCYZWEsiJGmW45wGoRkOPMQVstwg8IRBSJKNyGVKbnNAEwQhxjqyYc7Ck/NUo83wneWBp6SkpKTkz4f3Rf+i95D9GSYY921ai7RCtEIqrRqV6aaYHOX0Vru+0xsxCjaSjEtKSkpKSkpKSkpKSkr+evOcCjjTzVmU1kghsTYHL/BI8B4pBUpphBRYl2JMgrUGU7HkeU4ravB33/ZG1rvv46mzK0xOzYKzIJ89exUbEUweGXqitMlnP/51brn2UvZt3sG5pSV0GG+UonuEduhK0b3ybHRacWYMDo9zhQvI45FSMl41pHkGgWdkEobJACkEQRChhCxOqB3EOiSShQNBKkVuLFIUAk5uDJnJEMLgpbvo3gmCACUVmc3JfYoTnvH9Vb4yoTh6dJXXv7rJ2ERAmnucHwFF3JW1hizPC/HJe6IgLOLI/DSxc1x/2RZunzjO2lqPsYkWDoPwgkrYwAwdn/vow4yNHWVlLSGsjrhw2tNfzYjDAOcMDseBvQdp6TrtdhstixgzhaemqgilMD4nyVIIoB6HZEmKNRmVWhWTGZwYoUNJtdKgWmnRWct57MGTnD59gSzNOX+uzdXP28Y112/FiRytA9I8xzkJaKS0SAdiw7nhjMd7gbSKibEG9do4X/r0A3zhUwu0xrbTz89wxXVTvOS2S+kOugyHAXiPw+EJEaoGOsVLjww89cok7SXDF+48xJFD52g0tjEa9qlEllFvndVzlrHmOKNhgvDz+IrCiIxRbumORlTCiNxpgkARVwPmz51G6x4zM3Pc+eVz3HXXE9SrEwyzLuPVGlhNNmzwyN0rnDkxoFVv4FWISTyz49shabJr/xaMM5xbWCaYq5F2czpdgTERaZKyvtbn9IkOTzx0AW8jrn7e83jJy76HJx6f5/OffJKdE7uoRoK56WmSkcHoDCFCvNcEkSZWAqxGeIWQOVb0MS5BIZBCIJxE+Q3BjBCTS6Q0OGA0TBEuQDiJViHNZpNqLSPQYE1MlgYkoxwpPJW4QqgiYgVx3CSuVAl0SJrnnHxmhaWlZTrdjDMn1jh7ckCWVOh2UoxN0IEnkIIojAlCSa2SkiRnGWtVMR7e/76jXH7Ztbz05dezcGqeO790nko0xvh4jaWzK0ShxSYr5EAqG0TBXgg0SkiGwzZKa4SSJPkK7YWTNKotDh6YZduOaaamazSnYGzCEVZTMmPor3XxXhKJCiJUZGlKZrLiMh0R6LjojLISnxU9TjaXpMOcvh+RC4cWnjhSeA/NWoPD66s89Nhp+NflgaekpKSk5LnHuuIHIAoQt
ZhmsyKaY1U6C2t+dZhi6n9WNVuRi1sOZElJSUlJSUlJSUlJyV8hz6mAM9HYjN4QcIoVhAIhfFF47gE2IrS8w1iDM3kRS+YsCsW2TQ1+4Z//MD//ax/gnkdPMjU9gX9WefEe8Agh8CanXtEsr6f8ym9/jJ/7Z3+PzTPbGSYepSQmN1TiIs6reGC/0ckCubNYawnDkNwYjMlRSqOFxtgcMCR5QhYlaK0Iwxgl1Ibg46joKtWgVnR8eE+e50gZEMUxxuYk6QAwIMGIZ8+HBdYWj2u9I8+HTE43ePGt1/A7H7ub5fMZt153KaurbXQgAXHReSSVJM8NzlmU0jgPoPAWZrds5rte1+e//saHEHVLEEqMVxgToCNPb2RoD3poFZJ1KwyGHYRyBJEkzVM8ns1TO7hs+0E6re6GQ0mhNrpKPILV1WVWkzUqusqZY2f4+h2PMD4xyXe8/FZmZ6fJ3YBut83pZxZ45unDPPPkBU4cXicMKtQbNZyNOPJQl6RzmGtv2EWtVcVlWfG6eIeSDuUl0imMNeRZhhOO6ZlNIDTv/W9f5pknBzQb43QHp7nquh28+OX7GQ0E1iiMychMjnGeMAwJwgjSIXHcYLguuO+hIzx27wqjnmNyYhPpqMPlV29jy7YGUjgunHcceuwI2/aMc+5sSpYkNBqa9W4fH1epBFWiSkSrWcFJy7FT82TpArWTZzG5QSmJj1JW1zqY1OLzPkIGDJKc3DfJrMW7ebKBobfU4PRT89x7+/1MzVQZa81w15efpt/OWFsdwoZIN+p7AjWBlJrNuyzt7mPceccSC6cF01OTnFs4zkP3ROzcupmwYsEZPAF4gfIOIcPic+MVHovwGu8kgfIo4ciFJ9IhQmkUEik0MoiIw4jFhUVcXkUClbBJszJHLRoxtEO88NSimDQ0ZHlGnjk0EpePOHlmmXZ7wOL5dc7PG9aWM9I0Yzgc4RxUaxFjLcnOqxu0xieZmR2jNV5BBzlxReOtot8bsnPXVuJYkbuMMKyzcHaZ97/3dgbtBuNTEWeXjjPR3EQYKc6urFKrN2h3R1gjGQzXCAJHqzFGljmytMvVV+9k565Jdu7cxuxcA0HR3dMfdWl3V7GZQMmYWGpAFDGNWuCEYZAMsN4iCYlVhTiKsd4ROIkzDiU1aEE2UoRKEAVFJKTSkrH6OFqvMByul0edkpKSv1YMhqNve3E5Mv93YSzkBrSCzROMtWqieXaFpaU+3dz6b6/TSA1BXCyg+pMIQAHyG/7/jVvxQA64cvRLSkpKSkpKSkpKSkr+13lOBZxqqAsHjC9ixHSgCyHAFUXfCIcxBu89oQQC8N5cjIsYjhI2zczy73/iR/jxf/1uHjo5z8zUFM65PzbQeI/0Ai8c45MzPPDwcT766dv5sb/z/SydX0ZKiVXFJGpxcuo3OnWKGDSzIYTIDVGluF4hfYxA4m1Gno4wUYaKI+JqDe8cubE4a7HWEFdqxFFElmYkMiUKY2r1OibPSaMBeFsIVcLDRj65sxZjPdYKbJDRCCUvv/XFfOzzT/CVz97BS66+ki0z28kzhxAeB3gvis4gYzC5QUiQoohyC1RMSMR3vf42Lpxb5IMf+izVsRnCRgOhM4w36EihhQJvcE7g8EgBUjsCBN4F3PW1h9GmcCGNRqNCILMOZz29To9DTx7CZTkWzaFTiwySHG8vcN89i+zZsxlrDe12j/MLawxHGbObKrzq1Qe47JKdTE3MkowyxsZqNMdi4kpEqAOcsSRJQmYygrAo0/FSkGYpLnZMTLXoD1Le97uf5dD9HSbGZ+itLXLzjft57WtfRL0VkeeQ5zl5NsRayzBNCMKQVitGK8vZM2t88mOPMH9myMT4FEIknFtY48Yb9/PDP/oOPvrRj9JsVGirdWananznd74WmzYYdFcQKkFXGjzw2NN89fa7aDbHUGGNCxd6tCYnMJmn2zbUaw1UYMmMJE2HjEYJwijSzCBCSWOsRne0xpXXbmOsEdJdH7G+CouLffrtCt21HkrGTDabjNWhUvNMTNbJc02zFTA5VaPTXSMMt3L6xID15Tabt9XYtHU786c6fPYjj/H677yFQA5xNsf6ZOP9bFGERdRfIEEHeN9ESnDe0huMkDgEikjHKKXwQtNvw8pSThRWqY/FOKf4yheexEvH+tqIdNin0YrIU8+g7+i2M7KBY9hP6bS7ZHlKEEKaWCpxxPRUzMzcNJu31qk3La2JGjt3baU5LqnWQsIgRm8IJ2FYp72e8vGPfIlkGBIE45w+fYzTp08htGRyTjMaDkkTwWK/i7EJw+GI4WgIeMIwZvOmMZrNKovnegx7htYUvOktNzEz3SBNPckw2fgMpwjrqEeVDYFLIqmhQ4U1xaSVChV1NUGSjbBZTigCKrqKVBqjx9BCIYAkGjEWJ6AEUQDOpYxPjHNyPufYUxeoVuLyqFNSUvLXCue+bae8KUfm/z6e7cYZJIUj58AW5lp9WT17Il8cDf6kkOeXjsDaPKI+ge+tfKsbJwR2/M/eXkAGjIAukH67Gx2+8yPli/NtOHDrm8tBKCkpKSkpKSkpKSl5bgUc4UAYj/MO5ywIQZ7nCALCsHCxCOvwzl3sswGQQmO8I4qqdDoDGvUKP/dPf4R/8rPv5uiFdepjYxdj0LwALxXOW5RzTLWmed9HvsTzr7qOm668guWVVUIpUUoXawS9uyjgWFGEsIVhgPcggwApFc6DJUY4UChkqLEmx4WaqFLF+0I4ssaRm5QwDBFCosKQWhijdYBSIaDwUuCFQXpP4ME5h9Ya78HaHCc9uauBc1x32Qxveum1fOD97+ex172E2257FZ31FHxKIeEUkVd4T55lCCWxG+fHSgs8gnq9xo/90N9m6dw8p5fWmV9dJbWCVnMaLxwq2FhFqVIkIUJ4pMipVAKc0Tz15DwP3XuEaiVGBwG5Mdg8w9mMaqS46oo9bN01jlCSN775jezcuZNjx45w6PEnWVlZY3xqiqnLDvLEE49x3bVX8/Z3fBfjYzWczTEuLeaEnCXUDUYDz+rqGs4YqpNVKrUIGcqiX0dKjLXk1vD0M8f5r7/6UY4fW2J8bAbnhvzQ930Pt73iZobDPoiYoBFvTEwY8izHWsv49BRB4PnDD3+CD/3RnYzykLlt21jtnEUy4pZbn8+73vkuDj1xhHu//gR1mfH2t3wXJ2pnmIg0L3jNSyAZ0F9v8+SJeb521wMkI0ul4gnjCB1FdDp9giCg0lQY3y/EQO+pN0PCKKDfh8gYZmcavOoVL+PA/n3MzrWAjP6gSzoaYVJBpdbEmJw4jje6gAw68CgAEZC5EWFsqTebpMOYd/+H9yF9xr7du3nn972cfnfE+lqPer2Kkh4pc9K8x3AwwBFiraDbHjDoj+gNEtrrQ9bXh/QHhk63T5IMQCvqlSrOeowZoULD/Jkh1bqmWtecOHGSe+5ZJK5VyTNLJQoZjvo465EiwlkJ3tCaiJmeaRKpccKK4QW3HmRiSlGthGzfsZM4Lu43HCQI6alVNXGlwqhnOLfUod0e8sRDKxx64ixnz87jfEgYVQnjKlEw
jvF9umsdQlknEIYDl04xPT2OFIZ6PWJ20xxRPWL33i1cWFrmP777vQxGml3VCepByKDTQfgqAXWE1BghUF4RBRU8fqPDSYODOFAopRAIrPVUZAWjUoS0KC8IRRUVRURBAAJGYkA9SlFaIKSiXqtjc8PnPvZR1lYdY82gPOqUlJSUlPyVIgSkeeHK2T6RN195y/7g8MMTC0n/wje5Zfxgnfwrv0b4tv9cCDh/Gt6DkBDGiA3XPTYHm0vwMYgYGEfIvzQbaupgaIqo5Vboyxe9pKSkpKSkpKSkpORvLM+pgGNsijGe3OQAWKux1hPqGOtjsizFufxiN4xAIqTGWY9zoLQkd5b5C4vs2bGVn/rR7+XH/81/YjAYUa3VEMLiEEVvjSqcPtVKzKA75D2/+T52/ew/oxnH9AZdhAyQLsBLhcTDxuNJIRG2cAQJKdBKFSFrLqPwAVmscFhhwXpsmhQRU0LgnSUQoH3RuCK8I4o0WhWdP84bhHBFhLj3eFdEtylZxKIJ4XCy6HjBKaIw5O1veS2f/+xX+epX7uP1r34NlSgi8wAGrEN6gZCyiOryYD1I6RHKIaUiT1LqUcy/+Zf/hMR6Hnr0KT7+yS9y6swF1vs9hklGHMfUJ0ICAKtACpxTZNYRxTHVSpU4jslyQ+QkUgqcSdHaEzUmmN1zkP17trNzboaJVpPrr9yFe+Mr0LICcQTSs74wz9fvvocTh49jXYVnjp6gP+oQhYLhsE+3m9DrZnTaHXKTMz7WYnp6kupYjdwk1OoVqvU6Tx16hvvuf4xezzC1ZY527zybNk/ytu9+K/XJFmZ1hTyTSFFFeU9qhiQiY3J2mu5al//0G7/Np798F3E4xlgV+qun2TnX4M2vfzNvefs7OLuwzG988IPEjQYvet717Nl+kMceO8LKSpdDDz7IqWNnePCRJ3j86aN005zZ6Tm8V0gRIiV01x24PrWGRwWCXmdEoAJm5yaZmp6gEuUsLp4vJvidZ/7kAk8/dpzUDMjSPlkmcFaQ5iP6yQAhFVoJQOKVIFAKLTVaVdCBJ65ILpy/wGj9FAcPbKY71Hz6049j0i55mpFnYsPpZsiSjMXFRaTWtHsZSxeWMLnB5JJuNyHNPNYFDHo9arFCaEmWZggEQuZs3jrGWH2W9qjN6vkFqhou2bWNifEJRumAMydPc8n+/UxPT6KDgEAFTEw12bxtgulWi1rcJM0HbN7RxOuEJCli4YRVtCqbmazGdHsJx54+y4nTxzl54jRnT54jTSyjnkbpMRoTexACJAlauEKQMhnTU5N4CzfdchNvessLEViczYgjjVSS1OagDO1lRRRFBLFn4dwKJw6tcP31l9AfZIBCqAhjYzKfI5VGIPDKbjgBKcRWPEopZKAxeYrTKY4Ubx3eapQI0CLEGkOgCheRFJJGrQFRzHve9zEefXqRrbPbqDdq5VGnpKSkpOT/CN4XC5KmpiYRQuKcJcsyhsMhxljEn9FfIwQ4D2vdlN1bpyr/8B/84I5ffPcvnHI2/ya1I3/gj9Av+ruIsU347oU/vRNHyiLC1VoQEhHXEdVxvM3w6wvFAiqTjouxOXxn8Tkbk9zBciLJPaACGHaIa5I4Css3TElJSUlJSUlJSUnJ30ieUwFnrbuAc44sy1FSIpVCqyo6rKKzpBBehCO3GVmaEOiYICwcAAIQGxFlQjmOnz7M9j3T/ND3von//N8+Tp6FBFFRwI53RHEFqRTJKGFybIKjp5f5d7/0G/zMP/peqtWAUX8EwoAO0VoRBBolBB6HyS3IQmTJscVkrcsx1pAai7EW5zxKFBO4gXZIpRACvHWkdoRUGpzDycIFYp0tHEYbJ7reeTJTbNvm/mJKuLMCpEAqWO/32LvvEt70hu/mw+97P4/e8yhXP/9KVgYOITRyI4LNAUIWzztwhkAV4oD0Eqkk3jua9QYt4E2veimvvOk6jp44yakLbc6cWeXJQ0d48sxxMmuJhMaaAG8lWoITQ2RgQUkiLcizIlJD6xBrLLff/ShfuP1BmtUqYzHUKiG1WoNKtcZYs0EcStI0oztMOXX2PKvtDs4J+oOEMNTFZLxUOO8Jowo6KN6C+YkzZKMcISXWJnhyvNAIr5mcnmR6WwQiI0az2l3mH/7kT3Lztc/jppuvZ+/BS9HNcVhZw6aGyW2znDt6lJ/6Zz/H46fOMz43ixsN2DQ2zne98ft47W0vRkxtprdwmHe/+xc4v3SWVmucT37lK3z4Ix9nmFo+c/tDOAvD4YggCNm8ZTNjzYBOv4+QEficSqzYtLnJoDfAk1GJAryzrK+tsb7eprlwAZBEYcTCwhr/5Vd+h36/T6hDlPLkeUq/53EmpDVRZWRH5FYQS/AoEmOJ4oA8zZFCU6mOEcoBN1zZ4LKDm5hfzbn9zof5/T+YZ7bVQkuHtTmZTTB5hhSSaiVmrNVgZbUNQBxq5mZnmdo+gXGWeqPOts1zVAI23qse6xz1sQa9fo/PfPpOlIqZbdb54b/7nezbfQDhC/fc7V/7GldfdTVbtm7Be0egA0CgtNqYRDJ44egPhtjEsWWiTmUsJM8tZ08v8fBjT3HnnQ/xyJOn6I9SoiikWq8TBorx8ZRusk7iqlSqNUKpWT51kl2btvD3fujH2Lp9FpyhVo/I8i5ZlqGVJk0KR591grGxMZbPnGB1JSEOquTeUtUzjEfbGHXPIoAoaGCVJCAj1LqIViTD+rT4iEpZCDM6AqFxKke4FOdTnHd4L5EoQhVgfU4UVvDGo4WiURnjV9/7YT779cNMTG2i3ggQvqyVKCkpKSn5X+PZ75TPSihSa4yxfPWrt1uEUCCYnJxg//79NOo1hsMB3W7vm76PfrttnltcZs/ObcGP/OiPbX/PL//H0996G3Pv+wnf/HOFgPPttlGfxJ1/2md/+FOZN7lHSkRtQspNB5W65KVKXfEKfHsJP+pAECPqU/j+ynMyRs4LLkpQHhCCLM9LAaekpKSkpKSkpKSk5G8sz6mA0+t20VqTmxzvLUorwiAgMX2ENIRBhCAgyfoYkxALSJMMKQVKerwzWGvAerx3nFro8B2vvIGFlR6//6EvMDa9GeUBYUlHCUIU7hQhoFUf4/a77+Gyy+b4vu96I2vJACMgCEO0DbA2REiFMQZnLVEcYTy43BNqjVSCJB0xSlK8cyilAIWUmtzmSKWQUoDwOGcRuUFrjU0SvChOGKWUxT4hQAiEluTGFBFNz55Ie1BK4bzAWkOajHjDq27ks5/4BL/wq7/LL/+Xn6TRmCHJir4bpMBbsxE255FaIZXCeoMVHrERtpZnGcJDv9cHCQcvvYSrrqtCGHH0iSP88L/6eVbbA6IYhLEMO91CyAogrAaMRpYglMSxxpii8yfUAbMz0wRSMhoOWR+NaGcZdv0C6SjBeYl3Dq0Dcm+JqxXisRYBgrExTxDqDaeTJIoijLPkeQ7CU6uNkyUZSZLiXb14D6gIaz06UPg8J81HaBkSxw2Onz3HY4fez+9/5FPsP7ibq67ex+tf/Vq27dnPp/7o/bzn1z9Ap5uzZWoarR2vfMNt/J23vYO
o2QAz4uThR/jl9/wXHr73Ua7cuo99l1xOLtaZnGhhspCzZ1bBa+bmxlhcOsPyygpKSgwBOpIMshVGacb0Fk0ji1ldFEhlqVQDDl5yA/v2HeDkiZO0O22c88RRlS1bN5OMRlR1SC0YY2VthU1bxtGijjM5UU1zemGB/TvmqDUaGKN5/KEniWpVas06p88tMVaNuHw3nOsb0rTKtqkaU9MHWDt3gc7yOldfcyWNZpXBoMfWbZP0u0NqUZXLLtvOWG2S9eUeO3dsZ8fOHQg8lUpMa7yJyVOkFOhAYbKccHYTn/noJ3jfBz9Jc3KGzVs3ccOV11GrNxgmOUEQ8M53vpNkNMI6h5DgbNFplacGCJAiRgeS8ZkWSkn6/Q533fU4t9/xII88/EQhMmkYG6tSa1QQeKQWDPodojjg4I6dnDy3iCdhfZCwfccm/v1P/QT7Duyj3V8HHGmW4HNJtd4oBNwkJcsslVrMWLPBcDCkEkdIr6jGVaamJ4nCiFqtgfACqYrfA7U4xuMRCBwRuCoIkEJinUWpEGchNylRpY6Uxe8N5xyjJCXQEfW4gfeQp5aJqXE+8LEv8keffpiJiQaNWkBv1OPa/dvKo05JSUlJyZ+b4vuQAgRTU9NYZ59NHKZWqzAa5Xz4wx9NwQ7nNm0OVldXfZ6l6rrnXV+75Zab5b69e7DGsLa+RuH+Ft/mMSQL5xa45urLou9869u3/NGH3r/wjdfnd78XfeP3IGoThQjzrQQRJAPhVs8MgDaAX18Qbv4JZR74wyh48Q+Oh6/6J6FzOViDqLXwaR/y5P/4eAVyI2OZ4rt2scBEl2+kkpKSkpKSkpKSkpK/sTynZzRROE6lEpDlCcYkOBxeOXSoMCZnmKaEuoqQgqhSIworRYSTdDiKngkpHNa6It7IClbXFnnz627k8ccPc+h4m7FWHes8zhcntHrjJE0LGBuf4Qtfe5CX3nIN9WpE0u+BUTgfkuUBSoaAxDhHPjQIAUoqhAQlBbnNyW1WRJ5tRLR54fBSggLrLNYWnTJSSKQAKQS5tUj1bMyZxziLQKClIncW610h/lDEVwTSEQcxgdAMBh0O7t3M27/nTfz8f34Pd9/3MG9801vI1zp4BCbPMNYghURrjdQai8fKouIn9w5rbBHpJgCtSLOMrNehkgwZa1Q4efoJOu020lew1lCpOF794hcQhhGPHDrM6fklBv0hZigZGEfmHIQhtWqVRlRBKoESIfVaTq1eodFo0l5vkxqL8xIhApzzaKVQG26b4bDPcJBBoXkhRiNGSVK4PeoNMptgTYbciHGTQBwKdBjS7QzodAYEKkSEEUJVmZ6Nmd40xeK5Pg88/Ay3330v991/iGuuuooPfPAPCVydbXNbcConrIe0Bxm/8J7/xvriIoPRiHPLS3TSLrfe/GJ+9sd/kmjnHuhfwFiDHpsFmhx54uv83u/+DhfOnWLL9s3kWR0vFqk3q5w+tUK/b+mvaZCOWqS4av9eXvqSF3HDDTcRTk7j2m0G/R4AcaWGCoIivi9QIDV+kCLGKqBCSEeAYPXCEiRt5s/Pc26xx/lz59i1czOBytk+4zm7kPD1xwe87btu49YX3YpJ+lhV4+ihw/zKL/8yN12znx/6Rz/KaL1NNQ7IRwnOC6LxRlEYZXwRei8FOIfNM0aDHtbZwh2VFhEs4sIKDz/4BCqQIDMmpyaoxA2SUYp1HmsdSZLgnCUMY4JAM7IJUmqkdFSimKA+Rt4f8OTTh7n73vu55/5HOHz6PF4YWhM1xqYmyDNDMuqCTWi2xtm7dw/bt23hza/6Dj7xmS/x6JFT9Podtkw3+Y//5l+wb8cM86ePEcfVwkmHgEq16LECjLVY66nWqgSB5vSZM6RZzqV79xHIlNZ4gzDS1GoNtIqwzpKbHK0Kd5incOJJFNY4nDVIoYvrpSAQAiVBEIJyOGkRgUagUAQ4kzM7McUjx87yBx/7ClNjU7SqIUmSsnvTHD/09reVR52SkpKSkj83WZYy6FqsdfzOf/zZNEsSb11RVRNqga40dRBqkWd27TWvfpX6B//gH2z6rd/67+vvec+vXnjowftrE5PTE9///d8XHjywn+XlCwyHow1B6FsRzJ89y+tec1vt6JHDE489+tDaxaucxR29C/2iH8QP14u+m2/EWVAawGz8PEsOJPnXfrMrgnhz8MofrwF4kyPqk0W02jduxhdfU4ZGkDuB9RcrL5ECQumpak8o/+TeWw/dXGC/scXHGYQO6eWGRMqLi6ea2qLENz9u6gQjA7kX+G8IkdPCEyqoa48U5fuxpKSkpKSkpKSkpOQvn+dUwGk0Wghhsc6gpCK3FqdygjhCuwqj0RAnU7RWhFEFJQKEz3GkOEaEOkR6hXABAo0IBOnIMTYF3/uOV/Cv//376A1TqhWFFhL/bHGqECA99VqV80tr/I9PfYkf/b7XMhz18CJCSIHNPDiBCAKs9FgMEoFQEiMdxjlEJInDSlHOisC5QiDSYREV5a0HK4ruHqWQoUYpjUlTnAAZR0ghsGlCnuVY77FS4p3FCX9xBSUIlBJIqUhdzsgYXvayF/CBP3ofh54+yhvfJPEmxSuFsRneg1QCqQQi0DjvCJTCek+e5WS26BUKVLGvgQSJAB1BrcGTx4/QHwxoVBsM0x5Tsw3e+bfeyJbtu1heXWN+YZ6lxQsY4+l1+5xePMcjh5/i1JkFllZXSUeSarVGtSYZDNq01wZkucF5T1Rv4nD019vYPCXSijgoIuJqYUhciZFSopVCTFQw1pCMMkw2AjzDNGd9vcPM1BQy0qRJj4mpKo2xgKSX0+0MObe+Rn5mSKtVY27TZqpxRJ61OHbsJPNnnuKagzvotS1anGLv/h08+NQiD9zdQYqIZJjTSzPSQBCImJXOgCfPnOa6LTNQb6KdwQ3W+cqXP87v/d77GKYJ9eo46ShAB4rBesL8iXmMsVxy4ACXHLiSAwf2sW/fZvbt2IrUATbLYX0FGVdohBOQZWSjESbNSEYDFpbP0B0MOHZ4Hmc1U9NNVlYvcG51ldQ4Ru0Vzs6fY35phZ27d7L6+ArDfh+lYGZ2Mz/0wz/OC667ijxbxccSnTpuuOpKfufXfpnl1UV8+wJVrXFphtIKhWS01sdjN1ajepxzaC2x1mMBiUJ4gTeeOKzRHyacXVyiXqlSCxQH9u+BKCIdJYUCRxHHIqUkz3OEgDiOCeIKSFhbXuGeO+/hrru+zjNHj3N+8QJBXGPXgb0IZel0V5i/cIZKFHHd5Qe48Ybns2fvXvbu2k1jdjN/+Pvv4/c+9HGSUc7le7fzH376x9m3c5rlpfM0W+OAwjuJFg6pc6AQlbTWiIomz3JEvcKll17Kl+5+AmMtc3Mtxpo1hJDEUZVAVcnMCOtypBYouTGhZSzSCxBgnQPvkd5ufFIt2OI6IQTCCbSQWGtwztIan2B+cZVf+43fJxvmjE9Ui1+02ZDvetVLmJ0cL486JSUlf61IkvTbXTwqR+avBrvRb0gQg1Q47+gXhS48aA54IuURsvgGqSPoryRk8x2A3/qt37JZlg1+//d/f8
u73/2L67/2a7+29m//7b/t/tIvvrtx8y0vnP5bb/0uXa1WWFlZ+xMijhACax3Ly0u843vePnXq1Mlhp7120SJjnvgc+gXvAKmLPptvj/xTLvfZF3/lvH7+d+0tunIsQkf4IP4mF47xgvMjcfG78bdsgpEVdHJBLVRM6vyb6nhyJ+jm8uJtgUJY0hEjWWWUW8CBCnCjHpPN6sX79o1kPePbPm4KDCx0MkEzEoxpW75JS0pKSkpKSkpKSkr+UnmOMwVSkmyAcaPCVSMdXhSlqjqsglBYa7FuhPMZSkR4kZPbIVJ4hAzxHpyzeCzShcQ6ZmV1nSuv2Ms7/9Yrec9vfZoonqA4tfX4jdM2C2gJ1Uqdz37xIQ7uneSFNxyg3zfkpBg/IhQS4RVCOsSz9xUCV2SgEYQVBA5rDcZ6cBsxcGHRBwMOpSJUseqQIIwuTmg7Z8nSFK0U3jqMyVAKnPc47xBWIIW4ePLpnMV7j9SaXpZTrQt27ZjjS195kCsv+wQvuuV6jDMI4dBaFW4fBUoIJBKlFRiDlRS9Mt6jlC5iqZREC4UOYkaZ4eTCMt4qoijA555tm2aJgpjVxVVQnsv27eTKS3ZijUEpCYFiYXWV02fPc+TIaU6cOMexY6dpd7soUWHz5s3gi0mgJw4fIcszbnj+NVxyYDcTzRpz003mJiaI4phqtVo8TymI4whrLe1OG2sN1gm6Sc6nP/Nlvnr7wzjX4rJLNvPjP/YDVKKI1dUevc6I4ydO88h9h1hYWkIIQafTJo7rzExvZqJuqAWe6uYWrXpIoy7YvnkGZESt3iJPHav9Hou9ddxI8Mzx0/yjn/lXvOrFtzC1aYZOu8vRIyc5efIU460JtmzfShhVWVlep94YceutL2R6eprduxvs27mF2U17QVbAJoDDDIekWYoZZKTLhjOLqxx65ggnTxzH5B5jc9aTDqudLjYLECImSfoEwmJNSqef0KxWUDKiUm8yGg0ZJQZEgE0DxonZtrlOf7TGaq9DmmXENqDPOuPNGrOzUwzaXVQQ4KIYKxXC+6I4CZA4nLCoMMR4icMg8AhfRP056wiDkJVOlzPLS6gwYro1ya7tO8m9AymRGxVOzji88FTikGC8BQiWzi7w1a/dydfufZCjZ84RhEUPz7Z9O/HOsb5yBh1EzExPcPmlO3nZS1/ITddej7JFxJ4OAn7nV3+VX/mND9IfZly6d46f++c/yv59W1hdWoKwgpUahEbIAK09CImxBu+K3h8VSXJjkVKyaW6aqVadUb9DeyUlCIoMfGMdAgOIop9Lig1BSuCdxZvis6ZUIQyBRIginlEqYMPl5oTESo8xOdWoRn/Y5efe8x4eeeowO3bsJG5WGLXXeOtrXsh1V+xiZXWRZnncKSkp+WuEsebbXZyVI/OXg/OQWEHqIN34G0A0pjac37LoPURQ/dHfif9YtRCI1ibskbvy0S+9YvCsqPIHf/AH661WS//Kr/zK+E/+5E+O/9iP/djSz/z0T3d+8Zd+qXfy1KlN/+gf/cPG9NRUEQ37bUScTqfLrl27uO22V8198H+879Sz19kT9+NOP4zYdMlfqL9G7bgGfev34/PE+VF39I32FhFU8N8g4ATSI4XCSVUsnrooFAmQEqwB7xkYgUkNc3X9TfvOs/dz9o/v533xIyWgQCqGac648xfd8LHyIHRxG+e4KAA96zRyBuehbTQ+G9Gqln06JSUlJSUlJSUlJSV/eTynAk6a9cjsCKE8QocIHxUr/W2AQBLIkADNyNqN5YYeBTivkCLA5ao4ifIpUgi0DnE2R8mIQS/jDa96CQ/c+zB3H16kNTGBskOEkuRCo32AdY4gjFE5/P4ffo0D+w4w2xpjvnsOLaEWNnDegTQILMZYvMvxtkIcNQl1ldGoXyRDoPFeojf+eMB7j1IhYaAxJkcTgjNYk6O1xOUpWQ5aSZRw5FmfNO+D1wSqRhAEeGlBxOQmA68QQuPIiRsBW3Zt51O338Uzx47xylfcRNazhGEFIQTG5MX5pbfgPc46cA4tBVIUJ+Tee6wzG108imqkOXryOIePnqYaN7Degcu4fO8u6q06nU4P4SS9nkMHGucNaZKgtWIsrHL13v3cePkVaK1ZXVtjrb2OlIKJ8RYegVYhn/v8F+j3erzxTa+n1apjbU6e52ipEVKRpikmN4BH64AgCNg0N128Dg6iWpUXvuAK3vF3f5qv3vUIr3vNTew9eBmjM2eY3bYVvTvg5msv5V3f99189dNf4A8+8Xla05sQeOIoptPt02xuoVarc25theVhiI8UZxfmyc8vUavUaDYbzDWr9AmQRqG158t33kMYtEiGObkdsWXzDLNz46hQUGsExJUpAq24+dYb2bV7J4HqU6uEdDpr5KlB4FheusDy4iJnzp7h1PGjrPeHnBvkXGh30FLiLGR5RnOsThCGiMAxPd2gVpuloUPsYMjU1BSXX3EFcbVCGIUEWmGt58jhZ3jqmadZWlrg3/38z/P3fvDvsmv3Trq5RYXghGU9WUehCJXEGIsTOV478IWTRElFHAWkaYpUAfiYwKcYN8IYQEqMz0ntiJNnT7Ha6SCimDBqMBbXyUcjpPNUoxoqioooNjz9UY/H7rmPL33lDu57+AlWuyNmZ7ewbcsuhkmPwaBLOhww2xrj5ptv5vnXXculVxygORYhhKffTag3xumstfnV3/wDPvbZr6Izz47xCj/9kz/Cvr27Obe0ipIVpFKkziExaAVKBkC40X1lyNKMMAoQeLIsQ0lLq65JOyOCvFZ8NkJN3kvweLSQRaSfK1IShQeLxGGRWhLqiCRJya0jCAKEBB1KEjfAeY9QAdYpKtUGUsX8xm//Fo8eOc741s1QCcmzPm942fN451tfQ7vdxaRpedQpKSkpKSF10MsEIyv4U/0sAgRuQ1QAt3J644pCYBAmxS0eDvBuHFh/9m7vec97ll/+8ldUXve618Zpms+++xd/sfX617/hzK0vvPX8v/u5f5f8q3/1r6YnxlustztI+c2mGaUU5xYWeOGtN4d33/318bNnTl3crj39CMGeF/y5BRy59Qriv/9BqIzhh+v49rkh3m/oKr448H7z02U8dKx0ugiTInEIwCOwKkJUW8Vzd5ZUVekOOjRrFQAC4Wi6Hp1BgqhNbGxQIk2CGA1wrnhc5wW4fGPZViHghNJTETnDXg/p8mJhC4V45oIaxHVwBpyj4yKqeUYYBOWbuKSkpKSkpKSkpKTkL4XnVMAJdB2lQ7wEoTTOSxCyKAT3FrB4b5AqKCZihUUqkE4jhMeTIlB4D1qFhGGEzXPwAcPhkCiCN7zpZp74z5/CJqACkMIhvCj6PhDgoDU2xtL5Hp/54gP84N9+NbWkipMaC/h8hNfFCZ0DvPBI7XAY+snqxkp9BRiEDIjiYhIX4RACnDNY63De4FyOx+C9RQiFlM9OpBcT6KNkQGbaCCRhPcA6ibcGKQxSOJQKinNyCVrVWFtZZ/vecV79mtcgfOE2UEohN1ZfSiGL4nUhLv7AxkkxvvjbA1IW8VDKs7qyyqCXEUVNrMsJQsX27
VuQqhABlACE2pg0KLbpnSdNUqxxDPp9Aq3QWrF1yywmzxkOh8UEgLe84XWvJgg0q6srrKyMiMMQ5yCXDhCkaYIQYK0j7w8IwwCtNd47BJp2u8/kXIOf+Ykf4MSRf8oHP/wJXnHLC9i/YyudXpsgCBF4GlMVTq8cRYY5m+fGWFk4y8wUzG3ZxOqy5eSps7Q7QwZZykYaFkmWssaQSqVLq9nAOUkQhExMRCSZoL3q2L5rK0GQYvKMs2eW6PcHSCG47tpr6XVz3v++j9Fq1lheWaRShX17akxNCJYWRzx1ssPYuGb9gmD+1Boi6BFGMVfv28+WTZuYnZ5lfLxFvR4w1qqSJhnVuMLmTZuJ4xj1bK9RpQbGYPIc54suoeuuOsDa+q2cP3ee9XabWiWCLCPSxfvFI/FeFPnxEiSSKA4J4qiYKxECD/TW1mg2Gui4AkaSeUMcRxgjsc4hcoGuBxw+cYS0nVKbqLJtZoZ927eiYocJAtbX1zl/bJH5M2e5sLzMo0dP8cyJ06xcWGGsOcb+fbvJ85wLi2eIA8WlO7bx/Guu5IbnXcOO7VtBCdJem8H6OvVazNhUk8cePsR/+JXf5YFHjiPRXL13M//kJ36A/Qd3s7S0RKMxBgjyPEMLUQh+yuOFxRpTOLiswTqDtRv9VCZDIFhvdxmt52yemkQpVThthCgW41KoN1J4wlCTpOmG6Flc92zUnDGGLBuhpMB5TRCEeDzOGiIvqcUx7/3IJ/ngF+5nYmYz9dYY/bVVbrv5Kt759u+m1+8xGGXoIC6POiUlJSX/D5NYQTcXjP6Xkrj8t/zXPesS8d96yx/5kR8599KXvmSXc05cuLAc3XLrLbt/7dd+feFHfuSH13/pl37J/4t//jMzlTQlSdKL3x+fJctzgkDx0pe8dPL3fu+3289u351/Bv50uenbnGVEeCHwC4cubuKbE8r+ZKlMXTtc6CAKqcbxxiKkwsHeHXbpymbhqBGSdiZpVP3GcR3GaxHC5nSkKlw4UuFtTjOUhKHeMON4IEB8y2NPx5525omjKmEQXBS2jLGsJgPSoF44gFRAe9BjplUKOCUlJSUlJSUlJSUlfzk8pwKO1gonNM4XBelKSJSOkFIUfTjOogKBdBpQeJ8jhSSuVJEUJaLWjXDWoHQTLet42Ss6Z4Rlrb3I5Vdu5eUvvoSPfvRB5rZuJfcDhCgcHh6JR5BmGWPNOT7+qfvYt38rL3vxQRYW+1jvkDJHyypCRORYhPA4ZxiZCxiboESMdyHGZlSjFtZZnHcXJ4wFCk/RKyIoRBznLIjCGeN9TpYbjElAWLSqolXRoWNtDsIjpbw4sQwwXq/yuS/fzVe//iBvefMr2L5tM6PMbMQ7OZwQaB2gldzYl2Kfnj0Z9hv/NsaAF4ThH0dRnDp5imRkqTcUXjts7tFKIqVHSot3diNmAqxNMbaYBEcIdKAJAo2UReTKYGBwzuE8SKGx1rHeXkMpXUS6+SLmSkmPkArnLFoXz1OpDYeQdTzbd+t8hvABnbWEK666iu//7jfymTvuYq3bRlb2Ug01lXodai2+9PH/wYc+/HHC5hh7du3g4PY9pFnK/NkVli+0SUeS9fURIoiI4pBmaxxjDOvr63gHSdohzyWjgcWamNZ4DamGrCyvIB30ej1CHTA50eCKqy9hcWGRbreDDDTnFoaMUke73+XwWUe9GdBfd0Q+RvoGg07O1q2zvOIVr+Paq5/PeL1FFGniKMbjcC5FCIcUCpvnGFs4SqQUZOmIrN9GSY1zHmsNSimU1tTjkP17dmJsTpKM8DYjVpIkS/F4Ah3QGGsSRiH9fp8z55Y5dXqRwaCHEIJjx47x9XsfZe/2OW69fJxQZzx5bkR9ahfXXXaAarWCUoqdO7ezurjCWBgy0ZLs3Fnh6MlHePjxpziz0OXI0VMMhwOyPKPX7UAc02hNsHPPDpQKWO91iYE3vugWbn3RLezas40oUoySAYPBOjioRjGD3oB7n3ycux55ko9+4S4urCZIa7npmr38u3/zE2zaMsWFpRWiMMYYgxAWKYsVsyZLGSUJkQ4RKIRwWJeCz3HWg7c4O8RawzBxjLKMVqtJJQ4ZDDrk+QgpQGl9scvHWIPzDuee/TwJcpOhpEJIi8lTlI6wxqN0UIjMecqW8Qk++dU7+P/90WcJW3O0xpoM+x2uPLCbv/3WN5LnOSPjUTpEaf1X8ov++971j/8nU4KCXGhyqbFC/YW3H4YxtVoFtzGF2Btl3zzfKIq4R09RUfasOKzkRv+XKJuhS0pK/u/GeVjLBAPznPy++xMCztmzZ8w//sf/39pv/MavTw4GA9bX2+qHf/jvb//oRz965ktf+mL7Ax/8w/Cd3/Pdrfn5+T+xMSklKysrXHbZpbI5Nt7qdtbXAdzSEfygXayYsvmfb8+ypIg2c+bPs9sANBv1i2NmLiapCSpRSC81eCHBO7zUJGlGJY4u3rfRqNMZbQQqe49HEASaSvxnL6AQQjA+VoScWg+5Lx5fSkUlcKTOXhSOEqdw3xDBVlJSUlJSUlJSUlJS8lzynM4mepHhyfHCgN/IJ0JgncP6nGLlncT5ER6HkgGBrhYR1k7ircS4Ed5liI2Wm2I7ptiuB5NnvOFVV/Pg/YdY7w+IGh4ohBG/UXQuCAirILIqH/jI17jy6i1MtpqsLS8ThA4lQEmBCBTGJ2RmgMD+8eo7m+KdxLqM0aiHlAJrLcbaDTFp4+TceYzNyLIeiGDj/wbnCnEnCBSCaKMcPUeKEKUKUQQgz7LChZPn3H3fIwwyxTWX7EMIS2c4QhdSEUIK4jjGo3DGFr06tiioV1pddN5cjPz2ILVk0Fnj0Ucfx1mJVgKnBN56smyIdRlQbIuNsntrDLlJkVIX8VE4EK54JZTCO7shPsU45woRSzicy/BeI4RGmGKxpZIW5woxQiLxG64k7xzOeby3WGdReLSIGa62+cEfeCc/9BM/DDZnZaXN6soKFy4sMcozvvTZT7F362b6MmR5bY1qOMHpk4uMho69e/fQ6wyw7jwqqNEbjhgOO9QbNbZsmcG6FClTkpHH5ZY8gwtLKyBTBqmjGraYmZyk223zjne9hde++dWcOnyYbruHtTkra+usrfdJrWNkEtqDIRP1MXZPTjHWqCClZHZumk2bp5FRnWG3T2oShumANE3wHmqVKt4XQoHWGuk9PnMbkR4ghEfpYuLAe493htRkZFlOnid4bwjCkDgMqUhJtdHAEnDk2GnuuOt+Tp85x6GnjtLpDGi1xsiylCDQxJVJVtf7PPTw0+zaWWVh0fPEHU/ypc98kfHWGJVqETF3YWWV3Xs3oWuKhx59ki994XbOr3TJbECzOUm1VkXFEdtmNpOkA7J8hPAW6RU33Xg9r33pKziwfTu5SxhmHXrJiDCucHbhAk89dZT19R6HnnyaZ46cZmE9I7eGMO/zihc9j3/90/+Q2phm4dwJwmAMhMQ6c1GYFULgfE6WmUI+FRtirzF4Z7G5LVbeVqDdaZNbgdeasfEmeTakP2yTpgnOWvzG6l7nHekowzmHNTnWFFFnxhQR
hM4ZnDdUdAjWYr0kHwyZHmtybP4s//X9HyP3VeYmxhgmQ6bGavzo97+NSqjodTuEUYXMSQqL218+j7Yu+1Nm/Ir9aZo+Y7ZPwwwYs32MUAxVBYfY+N37ZyOlRCt58ZbfdJ+Np2w34mvEN1yVmz++v5KSMJClmFNSUvJ/HUMjWE3FX8S78r9MFEVcf/0NWGtYWFhYX1lZGa9Wq7Lf71Ov1/jUpz617bLLLj11x9e+fOGmm25qbpmbkiura3/ChZOmGdPTM7zgBS9off5znykEnNUz+O4FRG0c/+cQcNTsPkSlge9dePaii+UxQoB3f3JEci/o5pLEa4zb+DKL/+M+GmH+uNdGFL2T3yjgWC+K7wDf8HSc+5+P/MgK+qZ4XHexf0c8u6Mb2ywC3TyC3BiisHThlJSUlJSUlJSUlJQ89zynAo5AbKy+9oDDYzFmhPEW5yFQDbLUkNkBQeBRqoWzkixNKcrFY6wNcS7A2gwnEqTQCOk3BCDNsGPYsXmc73nby/mlX/8IwjYJhUJIVzyuCDZEgoTmZI0jJy/wnl//OP/iJ96BCnv08z6x66JkDRkE5G5AbkYoX3T24AXWCAJdL0QKCuEmzw3eO6zL8SQI6XBUMdYyShNyq/EbAo5SCq0l0jk8ObnLkAREQYySqnDQuITcCEIhGCUpo35GqMcROiTL23TaS4SqWnTgKIlzHiWLCDWtdTHx7zzOsTHhX0RvCQlZlhDpJqcXznP85Fl0WMVgMVmfF970PK6++ir6/S55nm9EsVkEAutN0V8TeAJUMZFuDd5txLbp4uzYmKyIjdtI8TDW4owlCIqTXG8sgQIhC79SGEpymyN8ihRQrVbwvnAl1Ss1vIT1Uc7yhT7z9z3E0yeOcuLMCdbW2iwtroJJuPLgTmqNGjINOLfW4477TxOSMjddY/7cIoEO2XNgG2lmGUua9Pp9hLJkJi0cIPUa3iTY2NGo1RimGc3xFqESBEpz+NAh5ia2ctVlB1g5e5JGNWbzzCTeeYIwQsYSsj7eeIQs4roQVYzNcUbiUfSGQ8QoKTLUhcOSgyqmzJ0s+mNwDi8sxoG1vhBulMD9/9l773BLrvLK+7dDhZNvvp3UauUsIRRAAZBAgIyECYIBY4yxPTh8n9MEz8znMIzD2GN77DEeD+OAccLGNthkZBBgkADlnKWW1Ln7dt9w7olVtdP3R5171S21AgYxZnzW89yn+8Tap8Kuqne9ay1ZFglcKJ/zrsAUA6I0ZnJmBiE1hXH0u0MKY7n/nse49vNf5sZb7mb/gRVqjWnmZqucuGmGVnOC4TDHGMuG6Xkq0pMNPLv6XY4/aROVScWOXat0vceHiMX9y2gdEDpnZVUx7IIxDarTLTZONkhUBMGzd98e+tkKGzfO04omOe/M03nt5ZcwO1XDCcHB4QJppUJSr7Lz0Z18/gtf4UvX30V7NWfYL+h2M7SMiWXMy84/g9ddcR4vf/kFRFXFgcV9KJlQFDlJLEtXPxlKdRslKxlJ8STh4sAaB8JgnUFYjw+a5fYqQxOIRgROqbBzJGmEc57CFGW2lrcURY5zvszB8gZjSvKmtPgDP7K10yoF56nGMd2VVX7p/X/GEysFm6anwA4Q3vB9b72GYzdOsnhomTSJKHyBF/7/2ER/V+OMZ39DcCQ+p+EGzBXLbMkOsCVbIAjoqdpzEjkhhFEu2NrjZ3rj0futQ/BY5zG2JIdjXRJCY4wxxhjf6egYwUrx7SOmvff89m//NmeccToPP/yIX1xcWpmbm5tWSrGwcJAtWzaLj37ko8ec++JzH//wh/9q38/825/eorQqsxQPv4YXgsGgz6mnnBx97h8+mwIZeR+/517Ui14PWfe5BxNXnyReSlTXzgWlzfGRPnJ9p1jM1h6Z0tJ3LaOn9IMtP3fYmcQ9hZxZ++pvBMu5oGvXxmlLxZBUTy5GSMAf9liMrkfGBM4YY4wxxhhjjDHGGGOM8cLjBfbzkXgr8Ei8sARhiFQVLWKUqhJHLfKiSxiRLtJFBBxKaEYmXmihgBrOOgZmmUQ1SNIEY3IIFqUCq+2Myy99CV/86r18/fbtbJmfx4d8XU0ihMaZhNxY6pNz3PD1HfzBzN/xvd9zAXrV4zMg8uRukaHJEKJCyIZEyuK9RISISDmczbFCEIIjBDeygutgXR+pPM728E7hjMA7i1CgpESUoTngM4JTSBQgCMJjsXgMSimQCQGNFAKNJOt7rr/xbk47fQ4t++R5FZGcgFYx1nmCsUgtQAoIAWcN+AghVEkS2FL14h0EHXhi136WehkhbpIFS1MbXnXReUxNTrJ/z4FRSLvE4pBClnVyGfCh/B60xPsyv0ZJhQwB53NcyAlBgJM46wFJHCckcUIUpSRRTJzEWDNktb1Cv9cmrWoqKYAiz/sQFHGScuvdN3PbXXezfW+fJ/Z0ObiwQEN7jj+2Ru4EK+3AK84+g6nN27jtgQdppJAIwXBg6RkPwdHvL5ImCbPzkijSDAtLnlmiSKBkgncZsSyY21qlvaLodR2gcUXKwXYHXIdTTzqFH/r+H2B6qsy7iVVCzw8JQeL9KGsIjxipnHzwuFCUzZpSkcQJWkdIKbDWYrIMoQSx1ghRrlohy/2izFDy692vAoWQmpL0tAjhqVRiJiYU+XDAgw9v5/qv3oHwKe225457HiAzAwaDDtPTUxy7ZQvOORr1CEQg664y7BcolbDjiccY9NvUahEqcjR7yxijmJ+bJzMDpNNEooqODMY4hAKEQxWl7WH70DJxFDM1Uees009g67Z5LjjrLE7Yso1WvUVwBb1OB5kqcuPZ/tgOvnbTnVx/443sP7QCoQIBBplhbm6Os04/lpeeeyLf9ZpXMDc/x+LBQ/RXhyR6AkLAS/C2IFYagiI4j5ACGQTeeoKyRErhTEmUBgTGWUQk0WlKp5vjvQYVqNdraK1GyrSSjCyzmwKFGWKMIYpj8AqTG0IIxHFEJa2AqFMUBRJwBcRKMDk7z+/83h9x5/2PMT1/DJVGQj5Y5cfe9VauvOwidux8giSO0VFEMAGsJdjwf2Sir9jes74eACcUi9EEi/EcD9aO57jhHk4e7GRTdpBCRgxl8rzUOP8UHE74eO8Y5g6tFEmkxhY1Y4zxAiPPj6qkKMZr5pvHSlHm3fzTcBhZEcLzJiWMMezZs5vzzz+Pk08+mX6/38mybHqtIeHAgYO86NwX6R9+zw/N/uEf/fGhheW+aaY66g+fvsn7/QHz8/NMTE412ivLGUBY3IGIkud1NpAbTwEzPPyeI1r7aSEEwkjtCpB7yWI2YvmFAKkRJivf4yyCUFqnpY0jSKHwTZ6W2oWga8VhsnWBKAYEW4C3SCFwMkIkddYZnDDet8cYY4wxxhhjjDHGGGOMbx9eUALH2gLny5yUIhgCFhKoVBpUkhgpJEEIrJMIoVFKlMYEQo5UJEWZPZHleHKECAgKzFBijS1dDQgMioyasvzUj76VlV/+3+zYs8LkzCxgRvZmtrQRk4FYB5KJGT71yfuZnWjypjeeT3elByLFFIYiK9Bao6VGqwSpYkLQeCexDEe
WZKXCx9g+Wb6McxZIURK8cxhXoJOCKBJINVEqefB4AlJpYhXjQ2l3plSEUiCUwKOwFkQl5c1XX8B9d9/NZz5zF3kx4D3vupS56SZCx3gRY1VWqm+sxFJal1lTWpFpXd6ArmXhxJEkuD53PfgQy72Mem2K3Dk2bJxj25ZNrKysrgfdEwLWWJTSZQE7iYmUHtm7CSQaVd55443HOkdROJQq/cXjekDHCq1rRFGT3TsXeHz7ThYOHKDbXWJlZT8LCzuwoo7Tc5jC0OmukiYxBNi1ZyfGW5SaxLkKcVRnGPq084hOb0hcqyOiJtsf3UFeWIpc8OhDOzn55I0cu3UDvU7GynKH5eU2h/YvkGeeLHPEacTkRJPWTJ2hDNy1fTfVaoz0mjRSzM9X6LQ7HH/8Vt759n/FheeeiSn6LK8cQIayauK8oyRVRuG3obSMcq4kbcomUUGaJOU+n+elPZqSaKXwI7uv9cKGkHjpCe7ISoAQpWrJGcPEZAMixa4du/jSdV/n+q/exhMH99Prdtgyv4GTjjuBRHWJKwkbj5mlGlfJ24Y0cizuX8FZQbUeI0JOJYVmQ9PWij179iNEwqAfs7B/P9MzOY4MKSIajSZxTdLv51TSlKlGRL06wca5eer1Ops3z3H8cZuZn5un1ZyjyDO8HaDTgn5/wKF2j1vvfJRrr/tH9izsIeiIYaHJBgn1KHDKqTO85PxX8tILLuH4Y4/FFksMh31279mDQBHHKSIIwmGZVEJohFDYYBFBIFA4b4g0RHGMKUrlV1kVssRJDU9g794DhCCQQrK0tERe5CAkSkUMBhlKeeTIdlBrXarsvEdKTbWaEMdxqfJBIuOENPTxvqAxtYHf//BH+JvP38j0zDzViqLfWeF73nA5V19xCUtLB0tySMpRiLJDOEX8z9RuRQA6OLRzCIZYoXi8ehyPV7ZywmAn53fuZ8qusqobuDKh7AUby1oxznlHP3MkkSaO1D/tuygzDPxhNbfw1N894sDHGONfKgpzVAJnOF4z3xy+IfJmPSBsdN4IlP8PvnzxG5yjHnjgAd74xjeyurqKEMJIKQtG9mXeO4wx/Mqv/MrkH37gQ4u33/XA8usuP3e+Nyh4qoOlMYapqWk2btxQaa8sl0Mdrj6/G4xzrkKd9yb8yt61p2bXf6vSYHMw2ZPry2oIduSurJDDVVTRI0likmpUNjoRWLTgxbdGoekDrK5tIyFKyqy/iA6GSpIQReVyC2NZecYcnzHGGGOMMcYYY4wxxhhjjBcWLyiBY2yBNQVSSWIZI2RCCDlF0UEJgcSSmy7ODxAuQqoYIcCHgCAuH3tPFGsCEjmyl8qLLtY5pNQgDEIrFlcPsml2np/7d9/PL/3WX7Fzf5vp6RmMGSCVR0iHFCA8xIkiaW3io39zJ63WBC+/7HiK3KBDRK1aRUsJXiADaCUIHpwv8AGCj/DeY4ohPnQwrkCEGEkV7zQ+DHF+iLMDgvRAhELgvCNSESpKcbgyPyYChEVoQcBjCofAk+eWCy48nV/42R/gv/7W3/LZzz/IheefxVuvOo1uN0eICK8kzhuwoczoIOC9R/qS/Fq3rHOWOE7oDJZ56LEnEDIlFYHeIOOkE85kZmaW5XYbKcuS7Fr+hDEG5z1RFBFHMdZYnAtIFfAhoKUuySYpmJysE4LCGE1n1bL9iUd5ZPt2tm9/gr1799NeWWRudpI0TUkSTbXeYs+uHgcW9xPFEucz6g2LkBFOzzE3U8f0M4pOn+mpBjuWOmzfs4TygW2bj2Ffu4vNOyhjOdTr8sorLuEn3vNWpifrrHZ6mMLQ7fXYs28v2VCwf0+Xv//Y37Ow73GCmcHYwNb5Y9m4cZ7hsMsTjz9Ed9VSr1Z51/ddw4WXvJL+/kfo91cRoVR8QJmHIoRESo2UslQ6iVGtQesy0yeU24MRubjWQCukJLgnb/xDKLfX2uvBB0IIKFVa6imlaM3Os/Phx7juC1/hU//wRR7avpu01uTF55/GxrlJBoM21ZZnW32Khx5eQHSh0pT0V3ucfvZWrnztZczObqDRrKB1IARLpZaQZTmPPboLawXzG7ayf99+ctsnrSbcctvDfPmG24lrGlN0ecN3Xcnb3/QGKAbUqhFRpIiTiEE2IM+GFNkKOqqwOhTcfud93HTz7dx+x308vuMQKmmSNlpkgx7DrM3xW7bwrre9gQsuOIapVhXpUobdRVYHHaRWVJKozJSizFAJa4UeGOVIOYQo17tzBVIppFQIIalUKphCkxVFqQYDur0u/UGPNE2oxJJYaySCSEUlBeF8ub+HgHOOOEkwxuBdIFKaOE5KVU/w5NbgjSPGMTNd4/M33cAfffjjFK5OM66Qd5e44uIX8c43X8XK4iK5sWhdTq/WlYo9GwqE++df/AkIVPBUbQcrNI/VT+BAMsu53Qc5pf84uYwZyBT5Arcgr7nlFNZiPVSTb5zEMR72D5+90CeBSEKiAjUdiMfObWOMMcY3iZ55nuTNGnGjNYSAsDkMhyibE9r7IXhEpQmV5jdEHjzwwAPl/CbXrgnpADNrzy0tLbNhwwauvuqK6S9+8XPd17/25UjRXs9FO/xaJYo08/Mb4wcfeEADNgxWyvybdYLpab8IdeplJN/7PkK/DbYAqSaAxtqbhFSE/EhlqHFr10wKrEEVPVrNOrVq5cglDBV4/1RrtqPjWTbBw4/toAiSJVM2VmzctJE0FCQ4JiZaR+TbyCiGzDOW3owxxhhjjDHGGGOMMcYY/yfwghI4hRmUNz7EaBGhtcbiGA76BKNIkxhbFPiQYzDYbEgcR2VznhBEsooUAa0FUgZKcynouzar7TbOBuKqIakkNOtNVtpLbNqS8vP/8Xv42V/5EIurq0w0K7hQIIJEjogN6x1pGmPCDH/8p9djRJ9LLz0ZU7SJiNZbwI01SBmQMkaN8l+sH+JsICt6CJUTxxFa1lCigrWutG1TAuMl1jhiDTqSOBfwUuKAQZYBnjg1eFswMBmEgKRCpOsYG9PuRbzk0gt4b03z0//fB/jrj3yFF59zKsduqjHoroIShGBKBZMvC8Qej/UWGTwBT3CePM+IFOzat5/FxR6RriGko5YEzjj9ZOI0HeXpjOy7RgROGJE3IQTyIkepsus+TiIajTqq3gIvOHBgP/c8eD833XQLd93zENkwpdMdkFYlzVbE5OwEc8fUaNQqxKrB8lKPlW6gWptgsy473yvVFlGiaUzMsjoILC/uxvkeG7fUOXbbDOqgZu/+FaaiCsfNTtDJHP2upLc65OUXn82//fc/iA6S9kqbWHvSWFGv1diy5WTq9Wmi6gZOP3Mju3bu4eKXXsLC/oMct+1YNm3dgvV9Htn+OO/7nQ9wz313c8ftd3PmqafQHwxHeR1luPpaIUPIUs0hhCSKopEHepnbIdbeR0nErP0561BaoSONkmUR2o9Cedc850MAqSRSSHSkqU00+cTffZL3v/8v0VGD3sBz8iknMDPTZKpe4dCBBZZWlnnw8ZyJyRbzM/O06hUuuuDFnHPm6WzaOIkJfQbZcCQ7EPjgCcGTNiMuvvhFCB
EobM4pp55AFEGlMcnXbr2T1WHGxtYcx25o8r3XXM389BT7dvdYaa8gtKc1MU21OknuV3h014PcecdD3HTjw9x/9y6sjanVa1SrExhX4AeGM07cwitf9SJedvF5zM/Ms7S0j5WFVWKRobUkiTUOSaQ1kYxYs+FXqPUivtYKYwxKSZRSZM4TRxFKyfI3Vaokkcb7PlkuGeYZE5UWrdYERXGQVq3Opk2bSKKYPMsJPpDEMWkS4UaZTkpJvJdIBFoppJKjdQaVOIKgmWlt5qs3f5n/9cEPI1SLiUYL7yznnLyNH/2+a8gGA4xzJYk3Op6sLa0eERnD/Dune7ckchxVs0pfVfjq1Es4GE9z4erdtFyPVVX/tpA45b+OfuapJHqdZP4n/KBRMU8eFqEQ8ARyD7kvC67NSDAZu2/LOs49DGw5t07E48LgGGP834Dcw9JzZd6sHe6qvAwXeR+RdalEZVdIWouR3pTXCt7yjUpwHn300fVrE2stQHbk3FoO4Id+4N1Tn772H7v7DhzKK5FKCvP0rDbvHFu2HCMpFTw29FdKUkaIJ3+HVOVz4NVJl5K+5y8IvUOE/pJG6klgsvwyi9ARwWSEwzJ0ygw1f8QKSpL4aeTNwMmy+WVtPhc8TTV0xGkpBFAxQmRP+11v/MF/D1GKqE8Dgvf/xs9z3GyDZqN2BHkD0LG6zOtZW9ZYtTnGGGOMMcYYY4wxxhhjfBvxAhM4Q9KkipQKFzwqCELQCBkhZZnVgojwTmDsEGsMzWYTKQSBAkKOsRnOF0hZdssb7+j3Ouzft4gWKZWGp9VSiLRg6IYsdZc4ftMJvOfd382v/8+/ojCCOE7xFoTIQRQEpxnYnDRNMMMmf/mnt1CppFx8/qmsHOgghcJjS1JFVNFRSvACa7tYlyGkJ4oFCI2SHqEKhMiJlcb7BOkDofAIkZaZP7JFJCVD38cUOc47AhbnLVIm2EFph1ZJEgiaoTEooTl4sMtJpx3D91xzAb/+2x/ij//ys/z8v3snUuTkA4vSEUSKoemXygUpkQGks2Xoug+lckhUeOSJ/bSX+0SqSe4LZqdSTjr+GPJhXua4IBBrKeSiVOIoWWbIaFWSb3leMMwydj3wEI/v3MlCJ+fe+/byyAO7CGSkNUeQOXPHzJAkgmajQmEKsoFg12MLCFbITSA3ljStjHJIPPsP5HQ6XZoTU7RmZugPlvHGYpVm18EDBG+ZqDRoisCpxza5+f49HFhY4orLXsp/+Pc/hmDIwuoBhBIEJ3BOMMxsmaXR6xBpy8suvhj16ip4x4knb2Iw6NDvrRApyennnM4v/Ief5D/9l1/kM5/+DMdv3cwlF19Ab9WMCsihVISE8o/gEZTEgR8V/9dfg/XHQoiykK9KpZKSCqVLhY0Yhe4G7yGEkpiQkmargUiqfORDf84ff+jvqM0dw4aZSeppQqRjdu/Zy2133E+/3+ecs87i6ktfyhmnn8ixWzdQq8U06k1wlv7qEosrK4CgUq2gZalmcy7H+wKTZzjnCHgIlqmZSe647V6uv+EmGpObMEXO+Scez2xVsXhgN7Vmg8naPJ1skQd3PMTjj+3loQd3cvtt91AUFu8ClUaEzWGyOYXUgeNPmOSKV13AycdPMdMSFDawtDjEuwStyz7fzBZYqUirFaI4xhVlhlLwniD8qEainlZ00pFGK72ePxSCJ8iAjCRKa5Quizx5PsSHQJIkRJHGWoO1piRv0gpSKnRczkelPYskSI+WkjiOGWZDBr0B9WqN6elpHnjiMf77//4oOxc8jZmEWqrZOlnlh9/1dlAJvUGGUqosMCEQI0tI7xxaJVj7nVekDwiqbojzOY/UT2Ihnuby5ZuYMyss6+bz/p5y/T65Hb33z/uz3pfHSH9oqFeiI6wIvzGs5UmsPVyzKxoVSUMZOJ45xYbUfhPLeXYYD4cyiQmAimCwSlqTpZXkGGOM8R2NQ5l8rkl11JkQI4oBDNpUNNSa5XVRrd7EFGs5jpSKlPCNkcqrq6usrnaI43iNwDmie0AIQZ4XvPi8C5iaqCWPPHx/8dILzk0K0336fGUNtVp1/Z4hDLvgjiSVQm8Z0dpAfM2vTKhTXlEJvUMidBc1UqdAKWUXErnpjPL8vrpwRHiNoLzudCGUv1VFWB+vN3EESrJ7KT/sAwQQ8mkZOHpkZ7s+r+uIrsmIXUBJydCBC4JjN8+zc98hUHEZbucsRBVyN2CNNvKhzMfpF/7Jn7u2/cZqnDHGGGOMMcYYY4wxxhjj24QXlMBRRCRRSqQTBKVFGtai4pIU8d4i0aR6iooIDEIPbIRUEVonBG/xpihD1AkUZkBv0KHbHhKrGlKBEhGJalEMCvrDnEpliqUDPV525iztf3Upv/enX2JiZjNBu1JVQkxpmgN5NgCZstRN+MAfX8+GqVM4/fizaR/cixMdIqUJooqTmsz2RzknZdExjWsI4cjMAHyEivTo/lgSqRoehZQBISNCUEiREhkoTAd8jg8Wl+VU65PINClLvUHi3RDHsCSagqazNOC7r7yYHTsW+dQnv8KZJ2/hTW94OS7rgQt4GQjCQFBlRosvcHmEVzVEnCISRUbOfY/vpV94Wg3NoL/K9PGzzM3MkHW7CD/AeUGsWtSSBB0HpFCApd1us2vHIfYcWOGeB7fz6M5d9IuM3fv2UVhBJaowP70BERI2zDfpdPskcY32apv7732QWGumJirUalX6A8dyu01zqoqSkv6ipd1e4OJLz+GYrbPcef/97Dmwk8xahrljoT3EiyHHbJliw8wsC7t3sqt9CEeX00+d5Sd//J1IkdHtrFKJI7yHIErVRFqJQGiSKCFRVfL+EJkNCd5TOIP1BVoKgoHBrjZbtm3gt37zZ/nRH/n3fPrT13LRJWcRV2JsUfrueQ8i6HJfFgJPAXhkFJHEKQhwzpbkjJBIKRAClAClJUVeZjbhRsSaMziX4bzDh7KttVqrIZTiN37t9/niV67j2ONOREd1rOmyvNxmZbFDtVbhNa98CSedchKvuvxypjduIAx6DIddvHP0VlYwxhCCJ4lqCCHRUqO0AGcJ1o2K0hqCwI6yD3Rc5XPX38JyN+PUDQ1a3nLFpZejqhVq0vHE7j3ced2DPPTw/Wx/bDvdgSczETt3t2k1WmyYbjA7nfKyiy8i1RM4a3n91a8gijzt9gIHljqklRZxqjEmQoQy20apFBHKXCUpBDZIhPAICdYNGAxWEcTUatNoFSFkSZRJrRFSImRphVY4SyQlkY6o1WpokZJlA7JejgiCYTbAmCFSgccSfEQlreKCBA9xHOOsJHhblvi9QARDrCSZd0xOVnni0A5+4Xd+j+0LA+qNGSJt2FAd8GPvegubN8zQ7nWpVCpYa8lNjlQK5wTee/RI/SfEd2bBJyCQIVA1bVajFp+evZzLl2/muOGe5/7wyKYuyzL8KIxGqTIPSMiSrHs+ZI51jkhresOCRjX5xn+EVAhXoLOVUR1O4BF4qfG6Akm1DLQKjoKIAwPLxtoLsz59ECV5c1gxsDBmTOCMMcZ3OFYKgQvPOpmWxLHSiGEbnXVoN
GrEUUQUJ6SVKsPhkE7nsJwZpY9mVfasKIoCYwxpmq5PoYBfvwAGhsMhs7NzTLRa8f59e4nilxx9vvIBXRLwcn08TyW3iz7Up9AXvi0JnUNJ6BwaqYsE6AjRmEMkNdyjX/OhvfeodnAN5WgTlUoX78jiFnvzgAzlOcKFch4vvZbtSF0TYZ9iVadEIFGQ+7XMGk8eT7CvCAhvCVKDybjmqlfx23/4V6VySEfs2bef47ZuZlU26WcB4R0ulCut3AahHFtwoBOc6493+DHGGGOMMcYYY4wxxhjj24IXlMCppHWSqI5SEVJEBFl2pktdFs6iKEKrCKgQRRohD2FthhIJSkQ4DCqK8IXAeZA0iJVierLB3LTAeUOSVqikLXqDFRCSWm0DUmiGnUXedtUVPL4z4xP/eDsbNs8STEQYNQ0KIcobYmloTDXodAO/+7sf4T/+9Hs47aStrK48Qi/rlfZHuoqSgiiqoYXE+gJnR914ISXSLbRMMK6LpyB4i5IlaaOoEoLHuCHBBWRpykUIAmMKrM1JogpKRRhblGojHM71sB6ED9TqKe/5gTdx771P8IE/+SInnLSR8844iZXlPjYUI7IlIfgMFxwuBELQJDKhVkkZdofs3rOME3Fp5eQNxx67ifqkoLPcplIpSOI6Ng/sX15k5969LOzv8Nj2PTz6yOMsriyViqVqiheQVpo0mrNIoaikMb12l1oSs23LVpaXA3fedQ/HH38CV3zvWzjx+OPYsKnFpi2b+PVf/336d93P/IYNLC0sUq8Ivv/7fojvfuOrqNQEK+1FHnzwUbbveIKgqhgr2b5rJzffcR9Fto80ilhod5meFbzhdVdTa1Y5sG8XSksiIqQMBGURodzHtNLEShIpyv1HgJce6TwaSaT8KGNe0e0uMb9tM9/z9rfzq7/2W9x9111ccOHFDHtDlCyt8UKIIJSEjo4UxlicD0CC956ARSqBkuCDw1qHHBVyrDWEIDEmw6NQOhAIFEWOkjHN2UnQMb/4c+/jo5/4R0464wTq9YRGTXDXnfvYOD/HT/7kuznrrDNoNCul3Yh1dBZ2MxgMiFREmsRljpFUCKFRMikL1SKgFGU2jHDlb0ACHmMcjXqNnbsOccON91NrTmOGK7z26ldx8vnncMftt/GVG2/hplvuYKUzpNd31KoNiuGAWqp4+1WvQYqIzVumeMObX0EURfzthz/H7PQ0ue3R6xVIXaPSqKGUQquA95pEl/Z81jqELbtirTGEIAhYlErAKnr9DgFPmlaJdIyUAjxEkcY5j3EWNyKm4lqFOI0IIqUqa+ACprAorRhkGcsr7dLqTpTrwYWM3Fisy/HO4l20nrMTR1WskzjrmJytcbB7kP/6vt/n7gf3MjdzLLVUUQ0DrrnyCk49eRuLi8soHZNlQwQQaUVubVk7CgGkxdp8VD/7zkVAUDMd+rrGdTOXcsnKbQBIKUYKm1J9Jg9T20ghCAH6/R6VJCmPJW8Z9HN8gCRNiaNopBp85i5zQZlDpaOITm9Is175xgYvBMF7KrGmWS+ZmRDAeU9hcjpZgU0mylRrZymiGku9Fabrybd8PUZy3c+tHFfwRFqPr0jGGOM7GH6k4HuWCXQ970b0lkh8TmuyRRRFpNV6qYopCvbv30ejdhh7LORhFpDPc46JSuV0eFKe4gHHYQSO9544SUiSSGfDoRJCPdv0yfpnVVT+SSXExEaEPPJzojUPExtLdYwzMFzF77kn2Nv/PtibPvyMk3wrDgy7ffK4MfJU8zgfcIhSJSMkIu8TbAHV1noOzjCewHmzbncLMBsZ9gwVROmTpAsQhCrHbguuuuJS/u4zX2Tn3n2IiQ382d9fx4knncjGmSlsCICC0blMZh08oswicgbiKt3cUB/v9mN8i7F7z0Jp4zzCoJ+xcHD5iPfcctt9Rzx+7PE93Hzrgzx8z0fGK3C8/ccYY7wfjTHG+Fgb4/9SvKAVoySuo2Q8KrxKjPPEURWpFb1+FxkbfCjwviAQ4WwPa3OkzPFWIVAIV9rdWCOo1VrUqtMMsjZSlURJnE6g40kGoU8ce6JqEyVjci9wLubHf/htPLxzB7t2LzM1OY0Tw3WLK0SpktDK0pqoceDgIr/4a+/j7W99FVdecTo+Miws7EeJaSrRFFLW8EEgkdhQEEJpixTrJoKYIBTGtTG2O1IdpYgQg7C4YJGRLEkFOTmyaRtifA+cx3hwNsczJIoSvLdY10cJSZZJNm6Y4Cd//O38/Hv/mF/7lT/lV97705x9zvEsrS6QZ5LgFYQIRAUrJUootAzUo5h9B/dxcN8iWmpMkaEiwcUXX0J9tsVKbzcLewfcf9+97N1zkAcf3cVjuw9SrTZYOuioJDWmZqfZ1BLoUV5tuz2knlbo9HrsW1pl2LVMNwUPPdxm09wsP/OT7+aSl7+MRqtFb2WZ+swEd977IPfedy+1WotD+9vMTFb5j7/4w5x26mmstFcYrhgSHXHJi0/hkgtORacJNDfw+U9/gbvufpxOe0AuJdnUFFs3Npida0JsSKua1XafOBKo2CG0QUmNkhFKJAihcS7HOotAIBVo7fEhoOOyIuGtByzdAwu84Q1XcdMtt7B33z4urSU4k0MQKAIhlF2tAkeSpgghyIYGY7OSHCGgI1VaBhqDLSwqKova1lqcF+V2tx5EQKqAcwUb5jfSG2T82q/+Dp/81M1s3XYsRRbo9y2PPXoPZ59+Hr/w3p8nThWDTofV1Q5SSKQQ5FkOiHWLEa0VelQIdrbM2REy8CRxIA4ryAi0VjQnpviHj3+WfbsWOenkc5Cuz77lg/z6+/47199wF72uIIkrHDi0ihMRW7edyEXnnsFlLzmX00/ahgc6gyXSiufWO+5levYEzj//XGAFHWniuFQtlcSJQHmBUhHOOkwxKO0DVYPCFHjrkFJgCoPHEScpcaRJk1JNFLwHCUpqrMsxxiAobe2sK8iKId5aosgglSROYiAwOTHJcFgwzAwIGOYdhrkrSTVnGQ57SClJIkXwAmccRZTSmEhYHS7zq//jj7jx1j1smNhCmkpi3+P7Xn8ll1x0EXv3LxBHCSDKzCmt0XFEyE05x5QGfBiboeR3fpHeC0nVDchkwtemyo7tosjpdxzOeZz3DLurTytsArSa9XLulRHW5EgpGA4zVldXqVZrxHGEcyVh+ky1SmsMcZLQGwypV79xEkdJdYSdm0aRxBH1EDjU6zCMmmANOEOPlHpRHJGH4APYAAMrML7stl+rrUoBsQxUdSA+iouSGxV43eE8nrcIHdM1lkzK9UppUzuUOHId5l4wtGCCOMI2SItArKCuA3KczTDGNwhrj1pTt+M1841hOX+Og09Qkge9JSoUtFoNoiQlSSvrc97CgQNUKxVazcOoAe9AR9/QWPbv38/S0hLHHHMM/f66UiQcOR0KiiLH+SBr9Zry3j7TtPmknRtA1kVUGiCVtTf/dUbWTZ/2AWcJgzZhdX/wBx/z/sAjljKH51llK/N1zcHVRbIQgY5L+zgCFMNSKVP0aNYqWNNhIKtgHRRD2kXO1ERj3fZSKcl8lHOo
P8DrBKQuvycEsDkhH5DECX/033+B//EHf8m1X76RPcMOP/+Lv867v+fNnHX6KUw0aoiiDyZDuoxmtUq/6GJkCvmAPM/oK/W0nJ4x/mXjoUd2rP//iSf2HXlcHjjE7j0L649X2l2u/dxN45U23v5jjDHej8YYY3ysjTHGc+IFrSZGWuN9URImBJzJqNabJElEpBKkFgQkPhiUElQrG8jznLxoIzVomWKFR0SSxkSVRr1BEA7ZzXFBUhQZLhiCHZBGMV5BcAUERaw1q71VJucm+ZkffiP/3y/+Ef1uj0otRoiA9w4hy5D0YC1IS9qqsNSx/O4ffJw7772Lt771VWzefBoShxl6rPEgBFJJKlET53sIqRESRIAkbqIUSFXm+gB4CrSMSXSEDxapY5K4AUhya8rCuwBnDcb2iBIFQQMSJR34gHeaTs/xysvP56cP7OW//eYn+U8//wF+4ifeyCsuexG1CgyHXTq9DsiUOJ4iEjBRrVGvt3j08a/T6fdIKwlSOY6Z38IddzzAzj2P8tCDj3P7rftYXu4xt7FOpAUT9UmmZlIm6gFBTJxovLccOjhg4UCbICFOe3gE85um2LxhI4kG3EFOOfE0rvzuN2DabRZ37ybSmj27D/Cb7/sD8sLRqAnssMd7fvDHOeNFZ3Jg126EUkQ6Bh8YFBHWBaZaTT759x/l137tQ1Sb82w9ZjOdlWUOHezQW1mkvfTHXHLJi3nR2Rexdevx9Do5hRsAslRxOI2jDAcXHrwX5PmAMNovdRQRXGl7NswzrDNY1yOtOn7u5/4t1g3pdQ6WG8fLdeWG1nKkrilK8i8q1RxC6HWisiweKKIoQgpJEKXKxxQFcSSpVKoYN0QqyebjtvLEg9v57+/7M266/XGOOWEL3jv6q5adj+/kxBOq/L8/+W7iSLK0r0O1lpAkDucMlaRCtVIhEAjWoaRCSIHWGu89/WIIQhBJgTVlnoySEmMtUpQZMTPT0/QGQz7/hS8z2WhQDPoE4/jil+8mK1bJs4AdBibm67z1qis57dQtnH/huWzZtAVjLN3hkBAcNjhWlh2b50/hjJNPxhSrZe6VEAQ81lq880RxRAiW3BYE5zE+R4SAdQlxJCm8IXiBsQUyyqlWFbd8/WG83cMVr74MoUoiTQVQSqCUHOUYB5xz9HpDNIIEX2b8hDCyJ3QIFRDKY4PBj5RuUtaIZAUfSRAZKIH0guAKahOT9FyP3/iff8LXvraDmemtxMqT9Q7yiotfxBuufg3LnRWkLMcg5YigC5BnGUJAHKUYU9q4SaEQQv5fceIICJ36XNvgVZHU8MHTM6OOZSRmTf01KuZJAq2KXi8aZtmQ5sQk1hRIIanXqvQHQ3q9HtVaDUXAOf+MJI5zDqSmKAxx/I0VNoMPz1CkFMzUYnYPbZmL40ubnPagz/xhy7BBsH/4TEnWgaETrBpBLVZMa3OE05Dxgo6RR9ZSR8sZyipD4wAPKsIPu0w3q+uf7VnJSsFRl5sDfQerhaCZCFraja9uxnjeGOb50Z4e+0N9A/AB+u451Dc6QmRdEp/TbNaJ0ypxkmCsJdKadrsNhCPJm8Pnim8AzWaTWq22ln9z1ImjUqmwtLTI4uIiZ599jnT26AROqZC16wPx+x4ke98bAGHdjtv2A1OU6p7wlOX40Z8FzGiqelYIIZifqNPrD8jyVdyI7S6FS5qo0qBR5vGw3F4hLwxSCPRR7CfTJGajcnT7PYps7XqgPFcjYfe+g1TShB951zUAXPvlG9m7/SH+6y//6kj19AyK2ae89uBXPjo+AMZYxxve8jPjlTDe/mOMMd6PxhhjfKyNMca3HC8ogdNsVlFaIIRDR5BlBYPBgKWlJQ4cOEhRGPLMsNQb0rOCzophZalPr9cjqmjiWoXc5HgcjUaNVi2hJR3NepVKI2bLsTPMzyY0KoJITmMKi0CVmdTVhDwbsLTvEc4+43i+/11X8b73f5hKZStBSoSSo/xTR5AZIWi8T0kbMdZYrv3yIzz0RJuXX3I2571oK9u2TjDVqmJyT6ebldSABxccQg2QGCI1gdYtIl3BuBxjslGRWSNkGNlrlcHpAY+SpeVVHNVwwuD8oCS2RIxWCcRT+KLA+hxEFWtqvP0tb2VqYp5f+e0P87P/+QO85fUv4UXnnsBZL9rC1FSLTscjrGOyNcGga/iLP/8bPvOFm4jShFozIdEREPGZT34ZHzlqE1OQRExvmgEFIgr4PGN1SVAYx7Dfod0f0rcZczMTXHzJOZx0/GaO3TLPxo1zbNgwRT1tUY2r1OqCbNijt3iAwhqiRNHaspk/ed8H2bmrzYbpeQ4d2ME73/46Xnrhi1jYdQitqghZhsOEoPAuYmpW8aXrPsfv/c4fMzW9mVpjmsX2Ht76tst42UvP5s//4CPc/LV7ufv2g5x4/IO85tWv4qyzz2B2wwxKB/r9AUIEQnAYYxHI0b22gBGZsPZIiICSCusdSqZ4J5BSEak6w/4ARSCKklFXZ6ki8c7S7Q6IdIxSKVJBCB4hJVJIrLMYY0p7MCWQUo7IBLDe4cyQWr1KPDnH1z73Rf7Hb76froNjTziBbJDjbEFRZExMxfybn/kR5jZuZGHhAGklIammVCpTEGBl7z4WFg4xMz3NZKuFSisQa1AK6QONtAZRBMJDkYHLyIdxqSISZTdqfXaGT33wb9mxs8/s7AQ2X6YoPO12hreK47fOccmFZ/Ldr7uC4089FURguLTIysIiyIigNV4J0sYmkqJKK63y2COPktZyNh2TYp3DWUtRWKwvMK47UkN5hF+r8AgKUxDpBB0FTC7wHvASZw2VmuaGf7yV1c4yV333q2lNTuAKhxQKrTTWGoQCYwxaq/WcoSjSxHFp1ZZnQwIDPD28zxBCl+QxEi1jYuUYmg5IgXeemckG7d4Kv/m7H+b6m3YyP3cMsbKsDA5yyUvO4J3f80YWOwfprPap1etorRkOMoyzaK0Isiw2rbtkyUCSqtJ68TsTCqgc9pcGBDoYTHWSICRCStbUYEV1+slPSo22QyqpP6ywFrHaXqE1MYmOYmyRUatWqFUrLLU7JGkFpVSZKXWUwTjviXTEIM++YQLn2SClIJGe3D9JtOVB4rxft+eJZJkR5ssD/7BCniiJH1dmM/StwOaWDXV9RHGStc+t28WNwrZDKD+PAqkY5IZJH8piI5CqAEKPyCX/ZK10jRT0Fh+gbTWhGDJRHefpjDHGtws9+xzkjVTgLDpr05xokFSeJG+UlIQAKysrTDTrR1iB/VNRrVZJkqQku9cnmift00IIpGnCvffcE5aXljjxxBNUlh2dX9FRzOLSUnm6Hv0gt+P2tZcNsHDU+VRpqtUGKopI0yqtiRnmNhzDV7/0seccf71WpV57ksAOIayra9YwNdHE+4BYs0U+2ti1YrLVPOprL3/ze55lm/l/2mtjjDHGGGOMMcYYY4wxxhgvAF5QAufvv3APQnhWO22iSLPabvPEE3s4tLRKr9MByoJm5gKGCG8FAo2SmiADXoEPI9vu4AnekuKIowghHVNTDeY3TrN5fiOtZp1aLWb
LMfOkVYXwgrlmiySq0S8KfugHrmGl2+XPP3QttcY8lWqjzKvBQYgIQY8K/Z441mzcdBwrKx0+8akbufbzNzA/V+e1l7yC17zyUmbn5hj0ewwGQAgEHwjCYZ0hiapEqYSsILiI4CVSRiglEDopi7ohKjM3TEYQBhkp0jQlUrYMtbcarWOSpIrXjkCOCwqbWXwwfPfVl7Fp2zH88R/+OZ/660/wdx9rcsnl5/AD77ya0048jma9waOPHOS//cYHuPfRvUQTDZrNCOdysuDoZgN0rQl4Br2C+bkZrHEc2p+z2O8xGCySxCmTk1XiiucV557JZZddxrZjN7B16yxxLMBaUqWRIdDr5eByXKGpVBuQRITC0u1kfORPP8Hff+w65jcfR0UITj3hBF7/mldjh0NssIBCBihsRj1u0Ki2+Ow/fJrf/r0/QsUNGk2NzRZ51zWv4S3XvJZ6JeG//Px/4Nab7udz//AVbr7tdr7ytd/h+JNO4eQT5/mB77+G4447FusKnM+x3hIwSK2ItEQIRQilAisIAVIgtEAFTZSm6EiR5TmmsMRJRKR1mWFkDHJUZHEux/shhbEo71AuRqkYYwRWKLxzGJuXnf5a4nygyAuUkuhYklYrxEmLj/3lJ/jff/hn6DilPt1iaAwBQa+7zOxMzH/8Tz/NRS99Mflqh/mNc6wcWuIfrvsKvY7l0Ue28+CDD7G0tMzMzAytyRYTrUmarSat1iT1RoMYTyVSKK3RsSKKNGnkqEaaPCh6ps+Bxa/yt5/+MkpNUWSK5cU9zMw2eN1rLuSCcy/kjNNP4pgt02gk/eUlrAhY75FxQqwjBIKBT1hZgQM7HqK7sEJ1cgMTGyfR0hC8w7hS8eJ9n8wsl8RWiEjiBGcdsa4hPfRW2oiQE5ImtWqLgKXvV3npxeczv+kEPvwnf8lss8llV34XcSwIMpQZRMFDUAggjWKMMBgylJAQHN468qGn2+1SmD4+ZCjRgBAjiFEyAmWRhSeYHKVg/9IS7/v9z3HzHU+wYX4z1VjS63U59cQN/MS//l4mYsXywf0okRK8J/iAsTmOAlc46vVZ4qTCcLCMFBIXLEKAUul32jkiBSaB2uHFvyfrkqLMFMOPSIURreDMYW8KWFPQ6WY0G092lleSiH6/T61WQ+oEpWNMPmB6okm3N0DoGCVL8uSppbm1PJw4jun2hzRq3zoLm4qC3IVRxlkgICmKgsooDFwAk7FncbWDsDmyTEYgIHAqQVQnyrXgHbmq0umv0hyNLxKepu+y2s8QtanRj5FImyGGfbwvl+uDAG8IhwVfxDJQEYZBt4v0pXWgAIKQ+KgGab0M7PaeVZ9QNQVxFI2vcsYY49uAnnkO+zSlEZ0F6rUKUZwSJynW2pEiRLLSbhNHmmrlW3OOiON4XY27NoLR3xH4wz/6w0JHsTpu2za5f/++o36XEII9e3ZTXow/N6SKeMnLr+a4E88gTWIhpCKOE+KktJ796pc+vn4GeSZ4X57fR5dRKHV0Ukt+g56R1pWEjxp7TY4xxhhjjDHGGGOMMcYY30F4QQmc3//QZwm+tMJhlF+hdYTWEVE0j5ISURFU17rUAYR/spMurJWuyvJYEBIbyk48AhzsGnYvLnHT7QdAWFTkaTQjtPYoI5msTdKarNGcbHDqKSdyxZVXENWa/M2Hr8O7gFeS4AVKJAQCIgQkjuAsWgtarTrGpBjjOLQo+euPfoUvf+VOXv6KC3n1lZey+fgpBu1V+p0u3hfoqACvcMYjfCBWFbyHONKl0kdIBBpJBa0kOVlpHydlWcQWjmHWoSgMvoDgPUKKUk3iwJhVHAW+Xefs0zfzq+/9Ub7+ilfxiS/czvXXf40H732Yn3rP93PFqy7hD//wb7j93j3MbzsGqwbljXAeCKEYrU+B8jHSKIarBd12G5cXHHfsDMefeDIvOvdMtmzZQK2q2DQ7xWytRbfXp7O8QEcKIhXRs9CqTzA5NVMqPYBOp8PNX7mVr91wM7t2HuLx7QsYrRkOeuSFQ1c1JoPgBFk+hLy0v2q2mrRmpvn43/0Dv/s//4y4OkelUaE37PK2172a7/u+axh2BiwvG2p1yStecz4vuegM7rj7ET533c187JOf57Htd/OO77mKOI0RRQDrRllHDh1pvLV475FSIrUehYeXO1ppq+fx3iHFEKk8IuhSceMDRVGsZ8tY65E6geAxJsOEjEhXSeO0tDPzAeklcRShpQYRSCdbRGmENY77Ht7NJz/zQW666TY2HDMPMtBezXGF5+CeJ9i2ZYaf/Y8/ynnnnoX0CpMFrvvI5/nCDbdx932PMuzmdDodglTEUcRje58obdpESRYpHRPFGlEMSaRA65hIRzg8tcgRS7Ayweqc1WyIoEnkU5YX27z85S/nHW+/irPPOoEoiRn2Mrq9Ad4LlI5x0lBgEcFR0ZCohIMHCx59bC/7H7sblxte89JLmJhOEPlOTJERvEUr8MFhjMG5AEGCsIQgiVVKXJ8l7/QIRY80aaGjKtbmJBiMKVABJuMqy3sO8tB9j3H8CbM0JtPSCpEwMsuTeDxSQJCOdq9DlmWkscYXObV0mkhXcW6AEgrQCFF29SpZJZJ1olQzDILf+8DHufXeBWbmJ2nWJJ3lJSabMT/6/f+KqXrE8oGDxLHEuBxJinelXWKZNxRIohiJwrkCrRKsCwQUaVT7Tjk3VHmSuPkWQMBROqRlcOsFRhcCUVojH/Zo1Kv0+kOkjpEiHB7EvY5Aue2s80ftzv4nj1RwpNHQaBmHo649PvaQxFTTtLTRG+VEdAYdOrJZKmqEpF1IGtVyfFLAZC1BOMOqVKUKRyqCMzRjSRzrkRgnACVBejhm00C7CKRJlTiK1i0brXUsZX3yqF4qgFREu99lbmJM4IwxxgsN68E8m8uZVGAyKtKhdUy1Vl9XxggpCSHQWV1loln7ls1j9XqdRqPO4uLS2lPrkjzvPTMzM3Q7XT72sY/bS1/2iujJnLynDl0yHA45dOiQ43nkItVqDd7xAz/Blm2nkXUXJqtRMW1dCCFkEAaEAJOTk087PfRzixZhPW8sM54QHEppVnt9Wo3aN61MGhYe7y1SaTBjm8kxvnOwbes8F1909tOeP/3U46nXq+MVNN7+Y4wx3o/GGGN8rI3xLwAvKIFTr9XWPaeFlGVBi6dWxxjZEZS9xoc1HLP2cL1LL7j1F6WSVHRKJY3xzlJa9ygKY8mHDhEUK52csD/Hs8S1X3yQjR//By6//EI2bZlhz55lKtW0DI59SmC2oEyEL4v9ZeZNpDUq1mzf3eaBP/kHrr3hNl7xyjN5/eWXs2V+A6vdPQS3ii96GKuJ0gZCRhiT4cIAm/dxFEhRJY03oGWKcwbnMoqhIYbS2k0IdBTw3pAXazeYChFKK6wgBgyGBmtS6o0NvP5tZ3DFd13KJz5/Eh//7Ff5649dzyc+fRM7dy+zYdM2nHdlATKUpUDhLb1uG28Mmph6pYGRhlNP2cRV330OJ542y8zUPElSo9MegtfYvOCJfY+U2SO1OpOzc9SrTYIXDPsD7r//IR599BEe37
GTux7dwYMP7yUbSFqNGSoTU8SxQQrI8oL79+7io5+9jh//iR9hSqcIpQhBsOOJx3j/B/6Km2+6h0Zjlkalzr6De3jZFRdy1dWvo+gOiKOUUHEYWzA8tEScxFz0krM577wzecXLTuPgwiInnXQy1jiyLMO6HDeyMwpejKzVBLVagySqlrlMoxwi74uy3orHWIO1nuANSIl3dr3Y4r3HO42SKQGDCDneZ1ijULFCjtRjcaqRaQ3iCku7drD9sfvZu2cvd9z1ILc+tI9Ko8r8limEHZL3cvJDPbq9Vb73La/ie99+DRuPOwEKy5e+8I/83gc/wgOPLDAcWuqViDTynHTyVqZmZ4EyGL29vMKg36MwlhBKS7IilGHn/WyAtRYdKTpaQVBI4alUNbGuo4ImkHP+hSfzX37hp6gkipXFZRyGOE5H+55AIIiEwAeDFwYHOBWRuYJKrKlFNSZP2ES1qXD5EoQCISVRrEuVSqgTa48TFpCIEFOtTbJvYQVZa7Bh6+kcfPQBssVVQhXqjQrNpEpOzs6HHmCyOsHGLZsYFhmD4ZC0pp4s7Atw3mGtIQRLrCQL3YzV/pAk1sRSMje7gShqIOiXFnLCjgjjBIcm6AoiqfBXf/E5brh5N61WhTQt6GdD6g3Pe975So7fPMnSykGGwiKCJQqGmp7Ej9ZLpBoI6cmKAcFnSBGVblm2nJ9kov65nxMkMAc0v9VffLTCpFKSXmeV5sTkiGANpNUG/e4q9VqFbn9YWqwdJZtBUGbhRCMVTvMFuyA6OoG0pibygbKxYGTTWEliurkljHISgtRkeUElTdY/22jUWR2WZF+p8hFEkV5X+TzbOlyzA3KhLBr7AFIqKpEn926dOMq8wh9mwTbGGGO8MMj9cxxjUiH6y0SxptGcILBG0gqULAkSISBNkm/ZmCYmJsrZ68m5K31y3lXEccSP/diPGsBdeeVrq+2VlaN+T5okLK+02btnj+U5CJxqtc4P/djPMDs3z/49j1KvSqVjpA+OSqVCpVKH0Vz31LlcSUm73WZ2ehIpxXrzTNlAYzHGopJvzhbSOkekFVJKuoMBv/FzP/G0sYwxxguBl1xwGpe9/Dw2biiv2487btMzvvfYYzZQqaTjlTbe/mOMMd6PxhhjfKyNMcYReEHvXCSBsK6uCSOr/+f2jl674TxawU+IUU9yCGX+DAEpJOBBGOIIkqh8LEMMQiKkIIQag16Pj330BpqtKarVFBHsuk3OU0klH8JIASOJkzJHo5cPmZybQKiYXXsO8ad/+kXuuvlBXn3ZOVz2shcxO9Nk5eA+nJfEqo5SYJ3HugJPxtCsIClQskHuugzyRbwpiHVOHnUxrov3OdYXZei7V0S6ikBhbYGOLEJ4nAtYK2ivrrK8ukKi4J1vey0nnrqVn/pP72fQjpmcnMNJh8cRfJmx4L1DuoLjNs4xM9Xk5FM2cdLJx1FvVjjltA3gM9qrPXrLGVkcyAY5aVSlXm0wUW1SqdYpvGJxucPn//HL3HbvQxw4sMD+/fvprK4SxTFeWCrNBlMzkwgdYe0ApT0hSIxx1CdnufZLX2ex2+fk007CmcBqp8sXvvgF2ksrnHHaWQxyx4FD+5mbb3DlpRcwNdnkrtvvZteuXbz6isuIo4SQKWzmafc6FEXGi884k4mXT5EXBucsUkrM0GKtQakIOyjIM0uSxAx6hq/dcSMXXfxSGs0aWdZBa00IgiBCmWUjBSEEnDG4UCClKMPohSCJU6q1GloLjBmiIk8SNyCeASHIlpfZs28vnf6Q2+7fwS233s3ePfvwviQMaq0Jmo0aSg4wZsig12dmKuG97/2PXPzaK4FlFndv588//Fn+7MOfw1hFmkaceswsZ518DCedfhwXvfQlzMzOIoUmACtLyzhjKJxh0M8oCkN7tUOvl5GZnMWDB9m9azeP7NpLz3hqaRVrBcPhkMlGDS1yrnjlhaSRZ+XgQZSKUEpTSntAesnXr/8axx2/jRNP2kxvuIiIY5a7Bb1+j1gJGo0Wk3ObkDJHhj7W5kjpESJGaoEPEdU0xvlhGfcRKkw1N5NG8wQqpLUZXEjYcf99xLU6QsScfNoWBhisUMxsmKdSqXDMsVuZ29BgMFgmzzMkYJ0ZKTkkJi+IpGDvwSVWBzlp3CCWjmoDhmYVoUpbuSA80ggCDqEtrekWH/qrz/C5a29ltjELFchdl7mq5D3f8yZOPWWOXTt3ENXrkEjwAeUV3mpQEZGskCZ1jOvQHyyiVEqkYwbZMlIkaFVhOOz+cz4f1IF5jmKz80IiiXWZdSPliOgTVOpNsn6HRq1CZ5CjR3k4RztXaKXI7beum9r4p1QWvX/aucgEQcdIsqCxfpRfU4aqjT5nn8y1ERJjzBEEjguiVN8c9rWHWR09I4ZO0LPlcv16/s5I3RT86DtHilUExtr1jvYxxhjjhYF51kO3PDZjCrRKUVpj7VozUnmtOxgOqSTJt0x9A/DqV7+mnGuenDeba483b97EP/7jP/LBD35weNHFL4vn52bFnj17nrb8EALNRoOHHt5OUeQ5z0LgNJsTvPuH/y1T0zMc2L8HhMJ7gnXQbE1xx513uZtvvtkLIWS73X7a51dWVrDWju4R1BHmamuEzjeLRiViebWLMZYsy3jFRecdYes5xhgvFG6+9UFuvvVBXnLBabz+dS9n0M948bmnjlfMePuPMcZ4PxpjjPGxNsYYzxsvbOuZGIUzrz9eC15e+1uzSpNP/v8pWpjnuC1mLZiekWpGiAB4hLAELKAJThOCJkkbxGkNqQLSWwICGUpFkH+GBQgh8NYhhCLRKcN+D6kV89OTWAuP717lf/3J57n28/fw5te9hMsvPYF+vsjK6j5q1UmUitDEeJGiXQMZmkhSXBiAKFUR1rXp9ttYnyNFjHM5zltEiEsCR0hc6BFcTiRSkqhFkef4YJBCUJ2q8tCjj/KBP/xrMBXqU3WszECCQBG8JIgC5wZs2zTDf3vvf2Lj9AR5WEUljjwf0Ouu4q1gojLH5MQMlTRFqhhUzMriEu1+wf07nuDa627g5lvu5cChZQwFaSVFANXGJLFWeCfwwTPMym5OrROk1AzzAU5ICucpgGu/dAN33vcw/d6QwaBPQHDSSafQ90N27H2Cy19+ET/wva/n+K2T7N25nf/x27/LpZdehPMF3U5GmlbLDAgfqJCgC0l3sYOLJTpSZRh5UlqgERTGWdKkTlpJiXWNr15/K8Oe4c3vvoaqNFRFjDMBj8EHhRASQZlvYkxZIK1VS3uTrCjoddujDI4KB57Yx8FDPRbbnn1797F79y56vR5Z4dh/qE2UVJmYmC3VAlrgbJdgO8ikRnfQ42Av58ff8xYuftUrufuWW7j9vrt58LEHufXmh0l1g4Ye8q53vJY3velKJpt1dKVC1uuTZxlSSOI4ob5lAq0joCQWSJPSrd56UKWCxg367D24F6EsuxcO8Wu/+Se4oAhywMknbODCc8+mGOajrKACT4x1Hhk81WqNQ4cWGaw6Tj39DJJgcEJycGmF4BQhGKpT00zPbsLZVbToIqTDu4AS5
oeh/T0KmUJ1m+skK5VKTb7bKwuECcRJQqU2RzPZSgTWt1mbGRnURmmfzoKMrmWc58/iGmDtxE3GvjChU7l6XdaLK2to5hWJR1i6xeBEVDMSzyIyN0vQ725AyHrrmRtZVzrM0/SXzheQ4Uq0SaTiO2Kc7tJwzrjORMjGCWldVVMsT0rpxGc3u0/Q0C08IoTjIxvQ/f1IiyVXKUSEWCUS6ilMbByH35+WR96XuSn6F35gvol2JKs3N01WMErRaz4wUuP/842YyF31qhff4ZSrM7UYRGECX03Q66UkKVJt3NNa5cOsM73r6Dt7w1w6XTPhvrTQxTUC6XKRaLTE+M0tqSzJ9fRjNiWrUNJstzXHfN6yiXJjg/fwFdS0mslG7QxRGS6sQEpq6hGIIolvT6LeIwZSQaIZQZTFun37SY6LjUFk6y+OSfItICaapRHN/GdWMVwuYizVqNze4G19x2D8sbDT70J39I2qoxXS1TMkwsTaFaKTDodijnTEZKOXpxi42VS8TqBNWZXYyNT9FpNLEL44zuPsj0ja/jykP3I/t1LNOA0hh51aKx/gy1lRPsLI8QGXkSpYBiZq7mUNKrNGQdw+2B1qNU2UaxMEYUm+iZPqmUFPPTaIqg3d7A0HN0vS0Grodl5skX8xiWoL3iE4caQuqvtjGin66c6qcrp0wgKwrjtjJ10FBnjmrkRq6uzAk9pEy/bQbNW64/yuLq199iy/X818Sx/txP/iBv/8F//nXLvJJXsbza6//t9H4J+AngKPCbwP8FdIan5ZtLCPEV7SClfE23w/n5BX7qX/3KSwblh4b+rm68/hBvftMtwxMxbP+hoeF1NDQ0fK4NDf29vLxbqNVijt14kHajSW48Ydf0NIguUezT6ZlEqoORy6MyTjEzTa1+CZI+qpYjihNUwyAVGXylgVVQeeLFBQw5yXfee4i779rJX93/ImfPraCqHS6cfob1+REavToIh/1zM5z47DynVk4wNQs/9ENvxOv3yOUrqEbAwOvT7zXoe0uMT5oUslOsb8yj2pJeJ8SL1hjEPoPAJIpTRgoVinIv65c/hSREUXWEBCkFSBVNs0gNhz/98F8QpYIfft8PsN7y+atPPUy33mfb9iq33DODT5tuYw05MiDuePSDLoXSGAkaUejjB3Vss0o5P43QYpJuiK5ZCG1Ao7PIZmMerSmolOfQ9BKFgk0axbiDiIWlFY7tfj0CjTSskjXniMI2Iu3gBqv0RZNypYTKGGsrknY/Zqt3knr3RYrWKFl9H45TpN6+zMqFk+halkIhi+kEhIlKr1+gWC6wtHaZQc/l2JHjFMYMCMHr9Fi5sMmNx+dYXDnOh//scSojo8RpQEBKEkdkfRjN5xjNZugFLdZWzjBVneW5J+fphC7vet91aLl1tsI+uDn8rQvkKhqmlOSdURRhEYUxlpYSBCFJamDbFoaeZe8Bm5WlLn4/ASUhlR7NVpPl5SWcTIFITZHju+lkCozN7MQiIfUGjOw+isjnUOwM5UwOxdKpFMu8+OIpLlyaZ6+moStgFYuAQpqAtPN4dh57fDsHtu1mc/E0/XYbRYWKBaJaZG7fNZz/5IdY+eQD7HjdrRg7rsfQHIrTk8ReF3/QoViuYlemwI8xbBVh54ndCEUTKIYJYUrq9UjjiDQJ0RUNkS9SvfZ2dL9BsHwJQzVx3QZyZBJGdrLt2ixBa4PR43eyceIZVp97gplb70b12gwKFqldQLoKumLi9j2OHN7FroP7WFsO6XX7jFS2gSoRosdIYZzL822W6wuYhSrU2zz79AkmJursO7wPp5yn73tEpAT9HnGSUsipKJqOmnUI/RS37xJFMYlMKMsq+UKZ0lQBLQqpXzhNZ2Ge2BhHy0+ycnGDJOqxd3oE12syNjLClUtneeCTf4XfaXBgaoTJvM1Wz2MQhqTSY6RgUjYkonmBgZegGw79zSVyY9u4fOoMHlkOveFe8pUJED7JRoNP/dEHaCR/zK3veDeZ0Qm27d5Dom7R9Gqg99G1gFxuHKGYBDJk/84i+3aM47c3qRYL6JkOphrR39IQQqIAfbeHYulEUUImYxAlgqyVJePYrHfXCFKD5pbH/MIG3/2ub/2k3N9hcjQAAtnZEElnQ0/OfEYH9Jd7DHklKuRfOo/RlaXV18Sqjr07t71kTphX8iqWV3v9v1ni5KsGYONvYhV+Anjdl27/iy99/TTw68OX6d9UX9EOQoifllK+Jtuh0eoMgzdDQ0NDQ0NDQ0NDQ68KL+vkm6mOsbrRYXNjid3GGEVbpeZuUCxX0DVBFEeEiY0kwQ1qdL0FhAwwNIlhFRgZmUOoEc2eT7s7YO/2G/j4H5zm1ON/wHe841ZGJveSnItATRktluhsxfQGGnMHpzl8ZC+PffoLHJg+xl1v245MW2yu93GKFpurCyTJFqowIYVGfYXYNwmCCClUSA2UpEzVKJBRQhJVZWbsNuYfG9Bvu+imiiISkAKJCkJBCoV6zyUBTjz3Ao239ZmcmWX3vh3EvZhrr9/JgeMWl5YfQ1fLJJ7J5eXLCLWHUCI0UaWYr9DtblDO7UUVKl6wimkIwiDB89q4QZ0o6uNky5h6HssokJqCrdomO/aMY+gKT554jGv23sRUaRrTz6EpWTq+xDIj9GyfVruFY4xj2yYos5xZvIA1kqc6NoXsp3Q7W2y4Kwy8Jnk7T6fTQzMsvJ6H169QLZbYNXkQlRxF3WTQHpCkklp9nc3mCtWywlvedYDnTz/HpXN18sUcsZSkAqIkwbIssrZNIn2avRYlo8t4Kcv5k/P8zvuXOXxzgXo3Jn9wL/1+Qj9eZKa6k8nKdvq9lOlJE7QtXC9ESgOZGMhUpzoWc9Nt03zmgYsEfopl5BCKQrfbptfrkuTy5CZ3oTsObhyipCEyP0P5sEm3VyNTLJHaGfwkQAiF8YlJnj3xHKapY2oaAomTyRInKYrq4FS3Eyg6iaGjlz1GqxGViTKDzRrlyUmEk2Fk1xFyfZVsdRp7bh9KahIIgVYMyQNBGBCmCj4pUS9C80Li0IcoxtZ0EndA6gUkUcjVaGGMVRrF2XaE3NG7ifUcraUXULw2hYk5wEQpltm4fIZCZoTR2b2ESYhmZugtPk+ceAjdQKYKlqWj2zGCLL1OTLe7SSE/TipTQq9P6LcQUUpuLGZ81iFrVikUe7QyXfwk5NyleRRDI1MuIeIYt18nZ5v4QZ+R0hS50TFUL2KrVqfb7uBFAT3XZ9s2A2U6QhBQmd5Jd/MQX/jMIzx/ZoWeG1PJGajNBtP79jM+O8dv/uffoNlss3vHHIbwafoB9SDG63fIjufYMzvL1tISURKwfXoKwzBotzusLy2w9+DtHLz5FhQrQ9jpYpQLzBy/m2t6ESeffIbPP/RZytunSFXBxmaLQnUErdBGxh5R4OKHPmkiUJSYMIyZGNtN7AtWNs9TLOYROIRhj43aBrphghHjuT1yRhbDdmj2W/iDHg23jZ2poBgRavgtWanyVSdHpZS//nfIQSGB8EtfnP/8R9h72zu/rQbNvTvnXrLM+fnF18y2Ov/4fe/+ug
EQeGWvYnm11/+bwfvqK8YG38QqzH6Vx/4jcAdXg86fGr5c/9a1gxDiDuA3pZSvqXb4+IMPD4M3Q0NDQ0NDQ0NDQ0OvCi9rAKeUGWO1vs5qZ4vG6Q43HzjGxVXIttrcsKuAGiSE2MTKgM3NK9iOThAYRDJltJSh3eqQiIBYaqysrnPk2FG4L+Xzj5o8+dgG996zi707Jnn++ec5vbLK7MQUY1Pj7N6+C6dUoN3bYP7hdXbsnmLyoIE0Wqyst3H9Bttmd9Fu6Pi9hNjSqLsdDCtLnARohk0pP0PZnqPVWqYyNsnpFyx+7w//kDgNsWyLMI2RAgQJQmqkgBuGxEHI9PgYhqHyiU9+kocfeYS7X3c3xYJO0F1gx+gspjaLqhq0sjXCJCVJ2mRzBTQlRyE3h2FkCIIOgoA49SkXdyOTLEY5TzE/jVBMdCNLGCYgbEqF3bSafXbtncUQAcsXT1FfnaeUz7DnwEHQbJJ4HEdINLmKTAUDt0bsVzi04zsoznisrj/LoLmBoeaAHPt3vAUIWVldIR5MUlsckNUsqsYOUtmjtnGGMy8+TmswYLPlkjg9RnYMuLwWUion/OBP7uO//MqzrC7VyeXKSEKiWLLeaJKUC1iazUw1y3i5QKNRZ0e+gJmMc+XzUJ5U0fw8GbNKrX+RnhNytvsMvVZEuVKhNCIp5HbSbPbZqi8wO5MlpsX4jOTa49M8+XCfem0d09I5cOAgpmUTBj4qeWzDIOg26bpt7NwoljNKTlcJwz7dwKPV62GbBjOzU6i64PkTJxn0exw8eJCxsTGiOMJSbTQjw2aYUhAhlUqZxuo89eULZMuz5Gb2E0uf1M5jF0fR9SKGZZKqCu7aBq3mBk7GIvADFEXBcmyEECjo2Kok8AKII2QUQBoBKWmaEkch/uoikRdS2HsYo1qlvZlldOYQmuYw2DjDZm0TAx9/dZ5BdYrc5G4Sf0AYuFgK6JrKwsYqmuZx9Jo5/Dgl9LfQVItqcRet3jyu36Scn6DVu8x4wSFvw+bqBpZjoxdzKLpJP/Bpb3VRazVMO4OGQpDEtNpdKpPbcHI51JzBfgTzC/NECRiZDNXxCpNZAyvt01JUwtwYB47dwJUrG+w9sANbiWhvrfD02QvsEaCJhEIxy+79e6mtLWKXK8yZGUaXlhkfnWB+bZWM7XDzbe/GbC9z+uzTrNYHzOw4xr573oAzM0W/5dPv9/HbqwhbY99b7mXX3ffSa7ikSQvXaxNJhTgJiAcmhtWkVWsgTDBsaLb6WHqFMMiyVZsnk7NJkixSdpAyJBQmGip6GmLZoKc5VAGJuoUrXTQlwVYDMpbP9u3fkrwoX3dSbjg5+rczWq28ZJnf+sBH+NHveye2Zb3qj7dSKvBrv/Az/Mwv/NrXLPM3V7Hs37N9WP+hv63fBH75qzz+ji99/dqXfl4bnqpvXTsIIX4N+GUp5WuiHR5+/Nn/6f7c1Bg//SPv5YZjh6iUCsOrYWhoaGhoaGhoaGjoFUN5Of/4/h0HGR2xqIylmAWPAZtk8zly2SKkNtXiQWw7Q6mcRzOyqGqRNC6QyoSeu0irs4jbV1Cj3YwVj9FPNtl9h8n+a2aprbdYWN9kct9+jt9wE8VCgQsrC8SKJPF8Lpw8xcHD1xFIjY//2Z9z4cICRlagWwMq1TFGRnZi2yXmF5dpeR79tMfl1TUWtxpYRRWh1llqPUGgp/TaWR748CdpdJoESkqSJAghkICCRMiUGJVUaGga3HTjdbzw4mkuXbiAbZqsLK6xVVtEGCso9ga98CS13lPkCxqjI9OY2hg5ZwrHHqVcnMGP1lGUBEffTbWwn0pxF7Zdxjar5O0dmHqZdqdDEAVMjG0na0+RxhbNZp+9B2a44Y4JqodKLCmbfPrFB3jx/EkuX1pj8VIPEY8h4jE0pcrW+hq6kiATgaZbZPLj2NlR8rki/X6XdqfNaOEwUWOKqGVQyjhs1Zc5s3CGJ+ef5sXOaerOOfSpDhPbx0hJqbeXcKM+uw+M8GM/fRPZvEanHYHUiVOVVt/n0voaW70WkhS/H2FLB/yIHClHZncxre/gyotrdFouoStYWrrMRvsMnlxBMQS1Rpc4FmTsIsX8CAJJ5Nn4ruTAge0cPrKHTM4CAb4fYOgGnVaLrfUNhEyoViv0g4ie1yZOIhTdZKtZY2V1hV6nRxTGJEnCxMQkhw4eYHV1mSee+CIXL52n1arTam/QaK4TuzFBJ8AojpGf24OCAb4k6YUI4WCqBioRUWeL+sJFZDwgZygoUUS/1ydJEmSaIoRAN3SkhDSBJJb4QUwqVdBNVNNBNTMY2QJWJkNYW2LjuYeJpML4NW8iGd1L1ynR7gUY0qI8tovijl3IUoVYMfAGfVAULKsMSUizvUir08c2x+i5LXruGqVyDjdepedv0e41qHc2cAOXTtPFEAbrqyuYuTyTu7ZjFLJECghNIY0T1jdrdN0A08lRHp2k0/fx3IB+v0upUuLo0Ws5cPAARw8dZfv4LHJjhStPfIq1E4+SNNvMHLsBe6xEu32J8e0jvOPH/xH5co4H/vJ+4jRiZnqE/be8nsNv+B5uvuM72bt7Lze86e3svPkN1GKDyrYDZAsltmrreAOXVNWZ3H8EZ2qWSDXRLYdURnQvPM+Hf+NX+b3f+k88f/oJnGLMzM5Zduw+yPU3H2ff4QM45lEGvSJCTbEtCym7GJpFNlum720wNVNkcmKCRLbwk2VyuSxONiWIN0hTUFULxezT788TRwMK5QrZvMlkZZTt4+PYZvNbNSnH15iUewj4D8DIcDj8Bse13du+oXLPPH/2NXPMd9x8HTceO/B1y/zpl1a55DLOsP5Df1v/N3AL8LWyyP8MsAX88+GpevlIKb+hdhBCvCba4amTZ/6n+7/+Sz/LvXfdOgzeDA0NDQ0NDQ0NDQ294rysK3AmxhSK9gjX7NvGqlvHC9Y5NJZBEwrt3iqqZxCKgEHQIKLJ1mpI4o9jZLpYjkW1sBsYw7YqrKyepmCNMjE+gv3mRXp9hV67jojgultuYm56krUrS3hRSK3TRg27TI2V+d4ffheXl1Y59eQKpr6d3KjFwF+n2VwH0SdVIx5/oUcoazSba+zYPUHkBIxnpmn355kZLfHQJ5/lqRNn0Eo5YiVBkRIFQSJApCmKUElUnWazzu6xIuVikWdPncMwTcYnJhmtjpIIuLy5gtB9ZJBhdHQcaST0+zq93gDLSNGzAj9cRzMihNRwvQEFc4w46ZOIHiIt4JhT1DaeI4jbVMqzDLwNNhuX0NUSqiyxeKWGqgqCdIsduzQMNUuntsDq2gVOn4tw7CLFwiST47PEfkJjq4Y9UkDVUzriIqV8Fhlm6HfyjJdugKTApc1TmKUi9XCTfvc5ZrdNcOOuQ6xuncN124yOTCGjmL5b4JrD12EaRZRIY9tOjX/4j/fx+//lRfrNPPmCQaxI/CRhtdXDUHX0MpRyFu2tiPOnLpBVNfbu2septS2efnGR7TumUSyFqZ0W+w7solqZodNp0O35NFpLSNElTGx838M2i5SmC9x0m0MQr3P6zAKXr1zBy
ZbI54o0Wy3OnjnLNUeOMLVzD1HYwEoCYlS6cUwUJ0gJyyvLGIZBFAWUi0WOXXOUF089z4MPfoK9e/cyM7GN0fIMekmltzWPXFokNzqGPrGPxPNRsxpCT0mikLA9QCgKhdlZwkTDzI1SmlLwQ5fADwjDgHAQYkoVTdeQcYpqOAihkMQpYRQjEEiZImWCYpjoJYOw16a9UaM8mQNDJQlVUpESazpaoYqiaaimTRL69DttVMNhQA6Z1FDMZaJgO/VaF7+nUijPogiFzcYVQj+hlJ9CM1QsZxJHrzI5kuOx7iP0B9swi0UcTcPIZAl6XVpbDbxIIUxSYinYNrMNoZk0O31U26C+1iRJJalQIUnYY8T0l57hhdPPMTq+m9mRvTz44Ke5dP5Fdk1WWVxYYKMT8Pz8Mh4pB3ZNE0tB3+ux89ABnvrsJ3jmMw9w0023oOs5du24jiM338HlJz5Kq7mJj0GhWKa8czeJsEiCmFRJ8USIbpuM5EqsLa1xIfgcaycf5dht9zCz6yiGbTFZ2Ee5vJtnnu7QaK+iGj1UHXTLZ+CtoBsaiYxY3TjLyFgWR9WI4wG6YpGaCX7UxxRjtN0NLAzKdpFmWKdUqCIUsMo6PS/3LZmUE0I8wtWt1L7/a0zK/Qzws8CvDofFr8+2LN5858088PDjX7fc7/7JR7nt+DWvmWP+4fe+4ysmPP+mD37sIf7x+949rP/Q39XjX/p6DPjtr1HmV4C7uBqU/svhKXtZxovHgceFEF+3HYQQd3F1W7XXRDu8+c6bv63zYA0NDQ0NDQ0NDQ0NvbK9rCtwaq3LCEViqBp5PYuW5OnUG7hunTBOuLK0TrfZwvf6NFs92i0XKQZEUUgcmihqDk3TqbUucPHyU9Q2unS3DEQm4Yd/9o3c8x0HWL58gYuXFkmkhqFZ1BsNTp0/yzPzy3z00Wd46tRZMoUKo844Jx+9xKBnYzoltmoDlteatHsBF+fX6XdyTI0fQkY2aysBFxbqhGEZ2Slz/uQCqa6gqAIBpFJCCkYqkaS0+k3WVy4zPVrlbW/5TjY2t5i/fJlOu4dtWxw8vJtcwcCLQrbaLRY3XOK0imHZxNKnXJ6l3Rrg+k3yhQpCqARxD0WPaHU2qDVX6fTqrKwusr61TKOzjKYpqIqF6wdgxPihQhwrSC0hVGw0NYMSWQhpYo9mGN83hjYacLl1kucXnmCldolOt0l7q4MmRzl/KaKx1CfciLHDCUacXShhjkce/izLzRcZOIushSfIjQ6Ymxth2/hhNBSSyCWOauhmj0LBoFQsYNs6Ekm31+f6m6f4gR++BbQOra6PULOgmkSKwWrLZWGrRt1rUZ3KkysoXJ4/zcOPfY6w36EqcoQ1mzMnO3z8f5zhwT8/y+cfOsXCfJdEmmBoeJrHpncZV5lH2DF9F8bGRtm7czuOZdHvdrl8aR5NVShX8rTbbRaXN8hmi+SdAqptYRXLaHoeTdPpD3rUtjY5++JJTj7zLI9/8XGeP/k87WaHOEpZWFhifbPGVqdFLwmIbIv1bpf51XX6qkF5336ELvCbDYzKCNbkNN00IQg8VFSE6WCXyuh2Bt3JomdyqIZFnEAQSVKhgWkRKyqBApGq4CEJFEGgCPppQmBYGCMzYGVZWV/i0ulnWH/+88Qb8yhhn9bWGn5tA93rIHs1LBmRswvIFFreJTC2QCogdSbGKpimYHFlgyBI0A0HKW22zRyg4IyjazZzs1WKeYVWs82g4yJTkAJM26EyPsbU7Ay7du/GcLIsrK6jZrLM7NyOQGDoBtXqKOVSFYGOTAVG4mFpkrPnznPi5DOM5jSuP7QTRdd49MRZ/ugjf8mpEyeZmtjGm+57L9OzO3j80x/ji3/1x5x5/DNkQ5eFFx7n5DNPcPjQceJQJ/VCbCuLGwkUYeD3QtIgZtDrEIQDCtU8VqnMTXe+ge/7vh/gDbe8jvaVy3z+oU/RanVo9TxavT6GJrjm4M3ks4dYuOxDOkY+W8X3G+Ry0KjXaDa2iCOBKkt0e238gYtpZklEgBABAouRyd2olk2nt0UQDGj3OtR6Heod91s2KSel/AfAj32dYr8CfAJ463Bo/PreeOdNL1nmqZNn+PyTJ14zx3zb8Wv4nvve+HXLfPzBh4f1H/r7+h2ufrjol7/Gz98C/AVXt4GcHp6ul23M+IbaQQjxqm2Hv9kfFAu5YaMPDQ0NDQ0NDQ0NDb1ivawrcDTVpp8kdJsrVKoFCtEka+sutp2j4EySFWXidIVe06e1mSVNIkTWo1yexu2r1GuXscwcYbLB3PZRiHIMUo2+rKJ5Hs3OMhsbPSzFpqepnL1wina7jqLCuqfhxgI3OE/SGVCx4NTzz7DUvsR3fNfrSYSg2/VwjAoFdZkDE3uY2e6w1ThHsTxCvdVg98QbWX8xYnVxDduSiCQGrgZxVKEQRQGNdpNtczP88Pe9iVtuOE4lX+QTD3ySzXodwzAY9DtsbC5y6IbdFEqzbNUtdoxfSyZTotldBTUin59GVxNC2ccy9rG8uoptq/ixAOEj0jy1epcgTNhqvsAgWMbO7MT1EjQ9Rzm/F2kVaLTqNHurZHI7mKpsp7nS5typCxiZlLG5nUxO7WV5cYlMzkU4TQZJn9b5Htv3HOLauTtonjNZ/MxZDh3NIJyY+x/5HQJF5drXHaDlXiDxfda7LgvPPMhYdQfOTA7dtDEtBdfrIAUEYcLYRJHl9Xmy2SK6kuH2u6dQdIP/9B8eol5PyBezqJokSFIWa33CWLJDE1SqKiPZaepdSep32TM6QqCq1AYpS5cb/MW5ZQxbEil9bnzdDVx/8zFirUCxnEFVu3huRBosgwzQHIFqwNbKJutrNWr1NQw7pteN2LNjH/1aDcNSwSjQ7g548otPs7m1QGdQp1GrMWh3GXghYRQxMjLC0aMHueOuazh3bpnF5cvEMmF0bBRUg76TR1UUaDaIJYyVK6iJRpqpoDo5CrGLiHyiTg21rKDqGtlcjpQuYRyRpgmaaiASlTCKiYT8UqAQEpkSJRFhGKJoCqYQdDptFrfqLK9tYqgwZqnMOBI1dEn7OoE7ADXFSl2UVGCRYigCw45ZrF+mRwvNX6bbmqKoK7SCDopwMLQ8mirZ3NxgfHQckwxbzWUMQ2VubpKNlkpjbZNc3iGVEqkI7HKBjFTxBy5pKsnmCjT7A1yxyfrqCppuMj27ncnpaXJ2kSrQ76+TWW5giBZdt01WM9ClT7PbpoVJatjccdPNXHPwIN1OQC6bp7v+Rb5w/gz57Ai333oXlfECK1EBu1qmfvEkpcoYOBZL623WG236tSZCpgRphOy55B0NO1MgTLqolgapzU233s2p5U0e+9RnOHz0RvKVHFa2jG3ZHNh/K7ZdZuBvEQQeYb+Ln+thmjZhWKa2EVAplcnny5AmJElM4MeIpM34yDbWGnVWalcoFMuoSoRuOzS7EbqdfMsn5YQQ/w34JeBffo1JubcAv87V1Tgrw2HyK91w7NA3VO7f/vrv8tHf+/eviVw4AO97z9v44Je2Gvtq/t1vfoA//o1/M6z/0N9XAvwc8GGu
rhz84a9S5qeAnxJC/Msvbf019L9+vEiAnxNCvCbb4cZrDn65P5hfHA51Q0NDQ0NDQ0NDQ0OvXC9rAKfZcpmamiJJB3TaLbKZUYr5cSJPJaNWyRZGaXSa2HaeUtHEcQSSEEuv0m/7WJqNY1u4tRy9lkuxZNCNevQSD5YNnv/MGouX1qhYefYfOIC2bzvnznkgJDu3VaiMVBkplog9lyRyOXboOB035OznVpg+6nBo9y76Y9Ps3T7H1NQMq1vPUii00NSEHVP7KRvb+O+f/gPW3SbFrIOUAiFBAH4U0E8i3vOu9/CWO++m2+/yxONPcWl+npWNTboDl+pYHlWVPPPk8zhlwbG7pymao8yMTNH2l+l4TfJ2BTfo0+6FuP0uQimg6kV6gwFhFBEnIeWiimkVKVWLuIM+ttdDNxJWN68wNraDqunQ7PU5f+4CiqFSMmI2lp/HlEXKIsPKiYuc+/RFDh65kTvn3sSJFz7FqcaT7Nt7O/mJMZbOPEHSXebyhXMsXTjHxpUXUXOjjEzu4vjd99EedMj4OkuXN3jg4QewDI3XHbfYObUHgzmiIEJRII0NLDuHrpTIGBH9ro/nDhDaJjfeXuUfizv4098/y9Jig7FyFV1TiQ2TWgcU6aKM2OQJGC2VCaKQZq+FnpPsmxun1WrRaOv4Xpvp8ihr52o8svoclqJSLBW47pbjVHYVWe88y8mLD7Fj543sOrSTRt2lXq9z9vzz6FbIjTfcimmlhJGLVHVsM0e312QQLHDdTTtQGaNWW6cXSMr5MqPVMlOzUxRLBkG8hWqXePKxRcLQxbGyxEmKYwt0Q0NoGisry2wtXKFUGWNibgc500DYKoooMGi2iPwBkVDo9XrESYRtmfTjmChKUBSFNFVIkgiQKEIhTiGJBXEsIYyI0pR2q4PrtlhfO8+VC1eYGp3ltltvZv/4fmxDI+k2EGGHyB2gGxkiKfHSGE9PaYUxA7YoGqeIpUZ9K49Tuo5Svkm9vomqx0gZsbR2lmp2H62mgqVKdMcmm2TprnVZXW0zNT1DmMTUak0s1SCOQ9Kwz9TcTorTc1xYvExldARLdxj0+3h+RE4dELpdrNEdCGeeNN4i0RQajQ56kjI+MsPW4iL7x8v8b9/7g8yfeoq/+G+/SqobqJrK+PbDvPF/+wmKGZPG+UcZHd1BIiL6tRfI5C0S/eqTs9ttIQ2BYulkcPAHHmHLQ88U0Hp95KBH6IcUp3czoTk0tlqc/cIjaIbG67/zbbjo5LI5rjl6E+1uj2azSym3g77/LD3/InGoYRkqYdKHNMbJ2LSbPaJQYXpiJ1KBhYUFypUKxYyNKvrU6qugJNiW+qqalONqkGc4Ofr/UykV+Bc/8f38u9/8wNctt7i6ya/8lz/k5//pj74mjntmcvwlj/svP/XosP6vMhL51R5OX+r3hBAvd9WeA34E+ALw+1+jzC8LIV7P1e28PjLsnb5xf4v2e022w1/nxnrq5BmeOnmGv/rcF7j3rluHF8bQ38vHPvwr/9P9kUppeFKG7T/0rRu/vtZ7oeF1NDQ0NHyuDb3qvKwBHEVziaOYQmGM9Y2I1WabKO7TbHR4sbVIIV8gW/LZsWOG8WkDoUi8gcbAHYAaoFsGrtcgjD1MwyBXyNMLA0TYYul0wtIZn0p1nMGgxWDQYmJ8FEPTqNc2UWXEiCEw4gHtbpO+67Nr226mxzOsd9dYeKbGiH6E8W27iLyT1KJ5VrqXSUmZGznMsZ338ce//ae8cPEEiuMg0ZFCEilXB3zfC7n+wCF2zszxex/4Y0688DxhGhOS4pgO5VwW3w3I57O4ruTzD55ndOJ2nNE2jd4pKsUsZjKGUGza7iYbtYAgULBbNZKkT5QEyDTFGwyIUhfPU6lInWq1gOiU8Pw2mqiQupLTtWfZWockylPJ2ATdBr7ikp8aZf8d+5ndvp2Fp8+zfvkM22dvYcfu1/HCc8+Q3Zdl+7YSz37ucT75wKfphQo6Ic1wCTPj8dPf/UNMT86w/vQ6bsdl0KizK1vg2K03M3PjMVb65wjjPkEI0zOzxIGJO6izGQV0OzZbjVVsu4w38NEtg927tvOzP7+bP/3jz/H4o/NUsuOYikRNYaMREYQQeC2KbsggCAhJyRc6jIyOsH/3FGcvp3TqHuOlKpri0NhoUcgViOIun/zTj7PvyF7m9swyXZmimK2iVVNuuDFDt9NgfWMV3Yx4/V13kngSNwxQ4ohcMcPMtiz3jh6nVBynlC3jBhe4srHB3MhR4sTFDer0+y6247Bte4blpTphP0DTTfL5PM1mjd6ggRASJ1+lUqrSrdVYunSe6liVcrWKZefIVEdI45CN9XXa3S75bBHVUDA0jUHoMRh4qKqCaRmEYUiapviehx8EJGlM4PsUi0UK5RLLyxeRvk8SJzxz8iTNboPS1CTHjt2Am8vR21zCrzUx0g5OLouh6yhpQioMer7K7pkxgs4Kmp1BOCpry6epNzrksyVyhQJhmnDq0ll6WwHFokMj6KMbVWw7w1ZtE722RTaXgySh0+/g+S62ljI6Okp1fBw/iVDTmDCIKZcr6IpJ7Pfpb50lrTdQewOII0wL3CjGsvNURke5NH+RIgM2znwOv7lFLmsTJxLL1ClXKxw4dpj6yjJPP/IIe++sUKlWEImP0MtohsXslEutdZ5Wewuhp5iYGJk8freLoqnYtkVvdRUzVyLKlMiWJeXyBA995IMU9YhO9zq04gTFrEOaRDiOTiZTQcoS6xsGy0vrGDpEUcLi5VUymSyzcxNowgbZpdHZoNsKyGcrbJuZpNNtMQhMNjdjZrdXWN+89Ep6A/MNTcoBr+dqzonh5Ojf8Ibbj79kAAeu5laZmRzjfe+57zVx3O95+5v4048/xOLq5tc83mH9X136A++rPdx7BVXxD4A/BP418PNf7ekIvEEI8X7gP0opLw97qGE7vBTbsvg3/8c/4kf++S+xuLrJz/zCrwFXAzuvlVWTQ998+/ZsG56Ev9/r2h/j6oeLRr/02vPfyFf6bPuw/YeG19HQsO9+Vfbdw+fa0KvNyxrAyTpZVpa3WFluomgRvX6XVmeVjfU6k+O7cAoJqdLm3PwahdwEhcw0kVckSUJa3SXa3YRKaQbVSKiWx0hkQLd3gW1WlSdOPIe0DZxcFtWxCaQgwmD7nsNMTO9k5cp53Cgm1SSqpjHod3n+xVMkQhAoAdOT+/joR59GLdhcc8cU2At02122z97JrtF38ek/fYGHPv00IRqGZpBIASIGQKYSx7FptZv81u/+N3r9AMdxsCwHUxFoIkHXwR1IWltd0sRhaWWDE889zXf9gxtoey5xINlcS4ibaxhWStfXMJUZmp0WqqLQandptNYhVZiZqSClxsBr4K1uEYR9hBIxXjYIwx6a6rB/3xEGDRVBk8XNM2SzNv4gxHBcKHlsv3mc7tNNPvSJT3LfO76Xd7/7B3nu5GNsn8sxPjlCRtXxooByycGWA0ZKRbK2Rhx0abVrPPncswRba7zlrjdS2DvGQvQinf4aivARhkOjaRIG4HktFNEmDjUMrUwY+sh
UZ/nKgDj2ue6GQ/zDH7qTrJPw2KevkMpxsFUSPWR94NFLJCPuACE0bNNGN3yWNi+Rz48yomsI3aDX61PJCmwjodbbpFSukncMLp44T9iM2b5/J4EUFEdH0aaLVMtjjI1Nkc0JZKQjEoNUTSARBFGNfjJPLl/E85p0u4vUW2s4lsNmfYF+r49lK+iGIEUSJwbb5mZw2zbLV5aZ23GAqYlZBmGeRrtNu+uSzwl27T5Au76GQgxCQeo6Qqgkgz5BECBQ6HZ7DAYD4ihCqile4JEkEXqgEccxSZIQhxGe59FqNQGBH3ao1+p0N/sYUZ4kVBB6Qm19ngunTzA1O0c+n0PJ5AgGfYJ2l1hISrZ1deuztkWtJegVJInXI9U2OTe/jN8aIGOFOLXwPYsgSal1LlPNTlEsl7BLGbx6jGHq5PN5stksFy5exDB0MlaWjc11ts+OMzY+jpl1KBRKPP7ow+iaiaKYCMWkmLdg0Gf98inWG30GUULRyJOkOm2/zcapNY7t30N5pMBzzz7CNQf2kT/+OpzJIySdBpqqcOXk4/iBi5UpYuQrdLttNCkwx3biTO+neDjEqzzM+uoSy/MXqUzuQtUNEkOHyMWxLHpaHsUpEyk62ZEp3NoKZs5ix/79XFh9knG5n5x5FCklYeASpwFg4VizTI4fYWXri3gDj2x2BsNKaLV69LsBumGyvraF76dcc+gwbqvPwvwqmA6KaiOkjm29Ij/V8Q1NygHvB/4jMJwc5RtbzfHX/t1vfoBuf8CPft87X/UTg7Zl8dM/8t4vT3YO6/9t70e+9IbtmldAXf4J8E+EED8ppXz/t9kb52E7/C2dn19g4Pq8+21v/HI//nfpF84+8uFhLzD0v9y5Cwt88fHnefjRZ3ny6bMAvPlNN3Hk0C5uufnoa3nS6SeAo1+6/YvALwohvkdK+T+G7f9t0f7fyHj3FRPFgBz2GsPraGjYdw+fa0OvdS9rAOfUhU1UcgRdl/HRDDnVws5tx4knGKvOIlMPXVj4SQ8jKeEPTNJEo+c2iIKI8eoYXjdE11K2NldJZcq22TyLjzW4crFBZWYS01IwdZNECMJEstVsMTI2wc5rbmZzdRW/1cAwVKamderNHguLKyRScvDAGGHk88E/+xBoN3Lt3RnG7YNcM/VmPvSBh/jcxz6JXilgmzoi9pBCIU0FqkxBJqiGwXqrQSqhVCkikaQJkAo0TdAbuHh+D8c2KZQy7Ltpmh1HdS4vnyFXEDR7m6xu+XhJBzsv0SjTjzosrK/i2EVM0wIyLC5uILEYncjTC1r8v+z9d7Rt2XmXCT8r5533Pvncc/Otqls5l7Jdsi1jnIQwsqG7oY2NPxpsk92EprsZJjTBHzRDYHfD122wYXQj2WAsJAfFUqlyuDmdnHZOK8fvjysbSypJpVCqW1X7OWOPcfZac68w3znfuffvXfOdQpyCoLC+2cdbcXD0gLxQ0JIY09QIJQHRLeGYZVTBIHA93LDDMBxinLF4ZPUh9vvXub73NEdKVYQk5+wDj5L0fDa2djiytoCQeJRbK5BmaLrO/NEjVPrrNO6oMG2mHPiXSJQe5VKTJErwownZKGdhfgVZqnLQ3UNSFEy9gVxE5IHGxJ2CFLK10yaJBjz+vlMYhsynP96j70pUSzKSlDPOcopQIEtCdClF1wxib4TbjyjrLfIEDr0+g/EQUZSpteZRbYeqaqFmA3ZuXCBPx4T9I6w8cBfYGqbpIIk6shriBwmOUkURcnTT5mDwIr3gGhIVZCFHMUzy3ELIZCbhId3hIfPyHGmmkWcqcRawtLxIUtH49O6LXLkacedd9yIZJuW6hiAccuHlzyHf/SAnT5xCTKcgiiRJgqLK5EVO5AdImkySBLQ7PQbDEUke4ccuqqRTZClBGFOkUBQZ3e4BYeJy59nTJFFIv9vl2IkVNq5cIk1c5poL1ByZK5fPUcgK3//9P0SpXCNWFMyVgnA6JcwKHMVEyEQEQuJsQDQNyfIEf5iwWj/OcDiEOEM1dXqjkEICp2WRihF3nD3JS58bEMkSx0+eRNM1nnr6aaqVCkdXVgjGA6REhFykXGnQm/pcu3adNClot/vImsMPv/PtrEgmQ2/K4aSDrtrUVA0N8ETINId3vP1dpGLElavn2dzY5Mjpe1k5fS8TL2RtvsJTT3yc+arDo+98HHH5KMPOJqpRItcdivoCaqXOva0lKs8/Q+gGSIIAuoxcmKRBSFSIqLUWkmmhCDEoCoEocd+Db0csq2x1f4NMPqRklqmWFsmynKmb4gc9BDFGNnSSQsb3czTNJA19kiIhE2TK9hx+KFGpZ6RFzuHhhDgtkI2ApdUaiiKgadU3kgj3iqIc8Oe5Gcx5y/O1ZnP8Qf7FL3+Y589d5qd//Ee5784zb+j7/p73vI1f/bWP8fSLF2fXP+NW9Fc/9Rb0UTM7fJ384J/6S7PeO+M14aMf+xyu6/PIw3eysjz3dX02CEL+5f/xYT70Sx95heN+no9+7PPAv+Gn/vQP8ZM//sMYxptuttgrCfG/KgjCY7xBZljO7P/tF4qBDwL/buZHZu1oxsx3z/rajDczr2kA58ZOhKMJLFlVkoOc7fUtqtUV/FDn6tVdwjCkVHYwdIu0qUDFpzCndEd7nFk7iaOUOdw8x8KKTblWQs4tDi90eeLJfTRrDi2TqGo2lVIJQxJwygaD0YQXLryAWW0gFzKSaBN4KX4mINsOx0/dRhQluKMxjXKN73zvKeZOeCRanbtP/gj/v3/2EZ548bM0VxfxvBApyUEsKMgRBYlCzBHISNMYCQ1ZlUiJyMgAlTRNKdIUu1Lw8NsWuf/hNcplmzDxWd+/wvWrE0pNkXF3gpYfQSjKJIFBkOb0elvICkymPeq1FiImzdYx+r2IvnfIyeNz6JHAYDBkeydiPJ5SNyccdHdZndvg7nvOMvSmLJRaVKwWL7z8NLZjcurkHaSKQG9yFb1cwpuYdLc3OeEskQQK/c6Ie9/zHhb3ukiSjmbp2CUbOVd49vnLPLV1Dme5oO1tMAm2KXyRkt2geWSF0Uin29vCqGgY+s3ZLh4Zh8N1tnd3Wag3USWLOB9QSBFu2kfTFAb9Hvc8tkJreZmPfeQy/b0Ix9bQhZgiF1BVBS/02B2Ag42fCuznHdwoIJcFJtMpmq6RaWM0USJ3J9Rth0mk8pkXXuCBzhStWmL+/tOoikgS6gTeCN3KqdcrpJOATJAYRQVBYSIVEr3OBvOtE0iCytTrIIkiI2+XkbfJvWf/ELaxQKd3AT/oIMslVk/EXDj/Ar/zmR3mWkvYhokkJAyn+zz5zCdZWlzAlhXyPEOQACQiSWLzYIfpuMtguM9kMmU0CsnFgv64R+AHFGmKKIiosgZCgWEWvPNd93HnfXV2rx1wfKFK+diU+24rMSkcBnsG5VoLRc3oHR7wO//lk9x59l5WV5dACDBNgzwtkJQy+BmdnYtckieUM4u16h00V+4izzzKS8dwRwNUOaRSNzCiNSZFiq2OaTpz1Ksl3HGX1nwLz/XQFJWFhUVEKefUsSWMRGPY6bN8+wPUqvM89P
BDfOJ3Ps3VK9dRtAqdO+5koWqiV6vMRRElxWB1qYGslig7Dp5ep7Z6hvZz/4mGLjKd+ox2d9h+7reRl0+j1FZRZQFh1EUvNUBMyYoC0a6TJSFiMEJyyjhzK5x8ewl3PCaOIgpE0iIjymXiJEI0JdLUBwHEPEPTLKrHb6c3vooq5fQH17mSPcHZ098BSIReynDSwUuu4AYTokil0wkZjg+Ym1ugJKpUqzUETIbjbSoNkevbO3h+hJ+7lEUNp7LMcDBi93DwRhThZuLoV8DQdX7+5/4cP/Y//I1XVf7pFy/yY//D3+B9736UH/re7+CBu2/7umfkBGHIsy9d4iO/+bv84//5L75u9/4//vk/9YYWQN/o138Lce+sCmZ2mDFjxn/lF/7Zr7C5ffPBjrXVOX78T/4gH3j/469KCPq5v/XPvyD4fHU+9EsfYXP7gL/7v/zZN5sg9CHgX77C9t+b2fc3i6L4OzP7v2nt/w0LxcBjvIkyBcza0YyZ7571tRkzvpTXNIBT92oMtnpc9IcQJpQMk+moy3jqYhoqaV6QJwKhWLB9cZ3m8gLmvEV1ZRG7WWNve5vFo8fQtYhaqcFzn9rg2c9eZuSrlComNVPBMGwMy6FRbyDJMmtHT9J96UU+8dlfw3ZEVurLSKlJd+gxCjxERcC0NSZRzrVzn+HkPTrllQrz8nE+/p+e5LPPfJojJ1Yg1Zm6PkVRICFBUSBJGYJUUOQKcS5AId1Mj4UEGWRphqwXHDmjcccDVY4eP0qj0eLqlXX29naIyQiLMuODMcFExiHFtkSIYgb9IYPBhEZjHk3VGfVjoijDMsuMhhkVqUQRWWRpRlm3mGupxInI4sIRxNxDK8Mnn/wMxxdOkgc77LLHxO9jmznd4SGS4iCLNUbTPcIwxHB0PvyJJ1gUrvK2O1oMaya+m7NQOY6CyTToIErw3AvnGao+C4spWlpG13S67X0kYcr61iVss44sVen2hnSGnwGrznjiYeoShVzwxLlzHF88Rp5H1EoOmibhu2PiUKVUO0K51eHh9y1w/qUB1y8MsXPQhJBFu4KnlrnU9rELBUODTIpIRZE0FUF2QIDB2MXtD7BllWqpRKlc49jKGlvtQ8SrVzj60FnyQsAwVTJfwbHLqLpM7HpM03160x0ELWd5roGsxeztrJNEOY5jUmQxiuZALjEJRkzDAyzTJsskNEukUivzju9o8Vu/fY6nnryMiUAhpCRJCoVC58G3IzbnkRTQVA0Q+Oxnn+Azn/kEFdvEtAxKzgLjQY+L517kyPEGtRUV5AS7IlCtadimTq1c5ehRi+7kPN1Bh5JQQXZUuvGQ5u0y6D2KsYZjHsUqlwi8Kc9+9uOYj7+XpeNrZLEEBUiSzsrcGbynSnzu4jXuPN6gJa5S0zQOO9dwpynL5ROMp22mqofvySgi5EVI6sdMhgpBEJJTIMsSrVYDocjxgwglD1D1hIk3Jg4T5ufmWFs7gm4aZEWCZeuUG1W0uk7j7APUVqdUrSba3DJOZY1qtcZU0UizCellhdMnjzMaJ0jGHIVeJW1fZve5LnFvhyCLUVUTU9EQdZtSawFZM6AAoYA0zdFNiyhKieIIUbg5AyrJUvIiIw5CZFlBQCD0U4pCQBALfG8M8VFajRpZbDAehciSgh+O8IIBYR6yd9DG90NUo4TjVBkOYoLII8Oj3w1xpwW51KVAQJMbrKycYDKOuXjugCyPEGnMRLg3GffdeYZ/8rd/9utKvfPRTz7JRz/5JAAf/IHv4qF778AyTVqNL5+h5fkhnV6fg3aPc5eu//7nAP76cEy9Wn5d7vv08TX+zJ/4Yf7FL78xl0Z6o1//LcQLt6Df+tDMDjM7zJjxevD8C5d/XwgC2NxuY9vmq/rsP/jH//erEoJ+/7vExz5PteLwP/31n3jT1F9RFL8oCMJHgJ8DfvYVivyvgiC8C/hQURQfntn/zWX/b4VQDPxNbqZVm/mRWTuaMfPds742403FaxrAkbou6iRCMwwy06AQIS8yVA1UXcEfTShEm1Q2mPg5lUnEyPXpHyiEByFlRyI2Cva6Uz6xe5XtG31yXwdToSDEsS0QRTwv4ohZQdJNJqOQ1cVjLK5VKNQO3VGb8SDFMlWUVCRIPGQ1Yv5Yi9aJ03T7W+xdFbi8+QKfeuIc1XqDeBgSZjGKplJQkGU5iqKQJD6KIJMjIQCKKuFHkMVQsaC+WMZqqKj1EUHi8+zz16iVfeIkxvWgyHUE0aDb8dApE2QS3shH0hKyPMQ0NXzPp991kUQFUVCZjiakacGZtSqD7T3GscL9Z85wX91gc3/IeDzl2OoK13pXOXn6DI5QYXP3OosLLTRFZ+SnVPIIz+9T5CBkDpKUY1tV3vnoGXpXDnCjhMHFy+RxRO2khpHn+HmOXSvzvj/8brbzbS5vfI5GZQVBmhD4KnmikWYuB22fwAVVLfCCCd14jEwNW9VwKgu4kylP7GxzankRR5QJY5lQqNMZ7ZGLh2zv7pOIEe/4obuItefpXjokiSRGQYwrTFg8oiPlKf32lCxUkGUTWVYhjynihFDMSAA/8uhFEY7r0TLLYKgIioRaSGSSAoqAZdnYpkOSZuSiihf32etdZTjt8vKFFwmnCfWajigXhIlBpaoiihqO02RjZxPLgoqTEPoduhd71Ko15uZL3HnmOII7Ydo7xHVHjCZ9Tp08QkGfOC5RpAKiqqFIFvsHB0iCwfu+64dYO3aUUn2VZ574L7hc5T3fdzfHjp7k8uWniaOExYUmUdwmjlyubwYcPX6c5pxK+9IOoy2fz18+jxhXcIwySdDHi5uU8wr1ig6FxTPPPQuayNLaCUhjAt/j9ImHuf3Y4/zGhX/F3sCjabY5ZJtpdoU8kRA6Dla5zIVrnwPBolXWKEs6nm9y42qApOtIioypGYiiQJokyIrBOO1z/PQSfuwSTH0arTppklEUKdVylSNHF1HsnLTaoj73OJHnIgkahepQlMt4uk6qGghRgdQ8yvzqA7QSkcPDMfXbHsMbrrN+6XcRvBHqyl3oC2coV6r4wRTLXiIWFaJUQIhD8jynyDIEUUAURYq8IM9z0iQl9ANUWUJTVPIiJ00iJDSiKGRn7xqSLGHr8wi6ged6iJKEpMR0B7ukBJQrLcbjbfJcJE0KDjsHHD1Ro9vxGPYTEFKUSOD4sVP4XoSm6aRJzEF7j0rVRCqMN6IINxPlvgbf8563cdDuvar1cL6UX/31j/Orv/7xb+i8W7sHr1sAB+CPv/8P8dHffeJVpZC7FXmjX/8t5A9ulZmDvwv86TdCip2ZHV5//snf/tlZ753xLafd+fKZ1g89cMfX/NxnPvsCv/Lvf+vrPt+v/Pvf4jve9SDvePub5/mfoii6wF8QBOF3v+DXvvdLijwOPC4Iwj8F/mFRFDsz+791nv96NUIx8K4vjIsfnvmRWTuaMfPds742483CaxrAycSbM1bSNMKybUbTCZqmIckKSZph6Cau71KQopZKRDIIuUhvY0DYD9jIQ7xgnTQv8KMJlWqFVJYZhxE5IOoimiqjyBqFIKNrNpIiUZNUL
GuNSN1gmnQ47LscOX4M1x0znMQg5dy4cRnLMtna6FIszHP96haSqiIUGkkYk4o5oqqQ5ykIICsKRaaRpxJ5LiAKOXE6JRVzVpabtCoSpmkQyzHrNzqY+mnm546wvrFBmkw4eXKJCy9fIyxCtNTCHY7wEx/RMBH0kJU1B9uqsLM5pFKpMhoNyQBZlKGA7sEASQjxIh8typBFmUq5ztbmFVw3oNpscmLtKId7PSRZQRYyWs06USqSph6F1CMTCsrlKlNXpt8RaKghd55YxBZzYjISKcWaL2GVq6iFjm7ojMQJhwcXUAyPLHMp8jFRMiaNoFKukuUhpVKZghRFtTExmExE4iCniAyWK4u4soetOxzsewyNBL1sEnoeAyFGFkwq1gp7lw9YnLOomSuM9yY0FhY4vmBz910n8LyAG1c67F4fcf3yHkUaQq4iyyqZkJIKICgFhVCQJSHFKIMs45RwO5KkEuc5kiwQhAWyrFEUIrpZZePaJtc3r6DYEu44x+8JIIAge1RLVcqiQUZKmA1p9w+pJi0sU+DSjSvsbe/z2MMPsrW+x2L5Hbz3Xe8mCEf4gcfU7RCmHdxoxLJlEAQJQRigaCWa9ToXXnyW7e0twjDkyHGftaMtvuMP38Uw2eCFi2O88QFSXmbzxoD5ZRtZ0YhjickUNg72GPpt6lEVyXUQ8gqpXic24GBygG6o6FqBoVdJxIznzr2AZlao1arkYkGRw3c+8l1cPPwMehxxuN9lHPQpdJ9WtYGfJMihgCaVyYqIUbCHFq3RvhEwcSN0QBJF8qIgyzJAZDqJmDvdwF6WGV0aE4cBpqbj+R6+75NKOsPBIRkToizG82D3+hWqmk6t0iQeZFRrFbTKArJuoC3cRjCeIGcB9bKDPx6jNo+zVIyZbl9HXb6LvL5KFMeoYkGKSZjkhEFI4rqopkWRgSRJFHlBGASkWUqaZWRZRi4UhL5HWhTISoGqSPSGI9a3rlKuaURxSL2yiiAotJor6BYcdDeJopxKpcqwP0VTdVy3hyiGpHFC+yAiT0Xq8wVJZJJEEqoqMBkH5EVMAciyQbfdfSOJcF/KPwX+v7xJUhN8q/mTf+wHWJhrfEOLYH+jdHr91/We69UyP/Onf/Tbes+z67/l+KUvvF5L/jvgX3+V/b/NLfo09rfxR/PXbQdBEN7Sdvie97xt1ntnfMt5+tnzX/T+4Qdvo17/2g9a/NK//sgrbl9bneNHPvBeFuabXLm6+Yp59n/pX3/kTSkGFUXxG8BvCILwV4C//wpF/jzw5wVB+LmiKP7ezP5vKbrAX+DmAwNfUSj+wm+XfwjszPzIrB3NmPnuWV+b8UbnNQ3gyE6JwPVI/QBNN1loLjAajVAlhSgO8AOfHBHHyIiThJ0emIYFck4qpBxOPEJfxbYKGg0bUcoJkjG6quEYNuVKiapmopYa6GaJrFBA1nD9Mb1el432k7jKiHKpReAFxElAkPrEXkGcKFTVJvWmQBB7RGGII2uQFcSyDEVGmqcUFCAAFOhGiUHfJUo8lldqlCs2zgLYtkzQiQiCgMJQqJdXmHYTTAa0TJNSpYGcC1StGqpaZv1GGyNVOHpsgV4cgCJTpALDQ4+KvUSpohF4U1TF5MjRNa5cvsFhP6NWKbM6p5MFAzr+lF5UoJkyTm2Bs7efZOPGVdI8ZW2tztDtIGQ6smBxZX0bzZChSND0mE7XR0kajJQxJWFCOmzTXFlCmz+O3lqmvLhMEmVMej2SLCROXfrTdTQpIAkVup0JtaZAdxKRRTKiAqIERZBS1h0CwSUkYX+3Q8VWmF+oMQ0D2r5L4QksiwFpGLPRDVhdWkJBQY0llkpz7KcDlt62gCzmJGnC9c4+parF0l0VtEaGs1BHzjQOtyL2t4d4QYxpGoiSSJaHxHmGJ8jknsd06hEnGXGWYNsSoiiTZwKyDkGccuPGIbubfeZWmwy7KdNRgmap6Bb4yZDd7hBdK8jyiGqlyeb5y2zt9slSCc1cYOjF5ElAVYtYPdKkP5Wxa3VqWZNPfHoTuzzl7rMWspqzvbeLH+S40ynVusL19Rex7EeYjEaUKzqjwxA3C+kPriOqGaPRgDBIeaR8N1ubG0hSgXm4S2+8h9qKaA8U7j75GEGUMyhA0xyev/Ek653rPHrH21lIIxrNFkEY89K5czz22NtQdZN4MuXE6km+754f5qOf+L/oWxGFVUIs5ri+3efOeZG6rmBoDdxpBz8RcNs2Fz/Xp90PWDHnyJOcLE8pl8vEcU6rNcexUza7/aexxFOQpwhizmg8RjMsTM0hk0KGo31aRot+7wAOLmBXq8hKgq07GJ5EmhZElXn0xhrC5HmC3jVk0SCMPCaTGvVyGWPpLKXFIxSaxXDaRhIEBEkmCn2IY4ooBEXFVC1EUWKaTMnynDRN6bUPcScjVEWhVm2g6DpRGpMVAp29beRYpKZUGfb7kKX0u0Pa27vUmg5uv89gmrFz0CPLUqqSyrDfptEw2bqxTxjILCw0UQSVg/aUIr9Oo2XgGCtYhoxUlHFHCopsfktFuG9AgPuGRTneoE+xfbsFwVajzv/4d//Zt2VWx9MvXHjdRch3PXo/D91zO0+/ePENabM3+vV/K3A9/5U2T26BS7vvC8LMj3+VMreMcPcmZmaHGTO+QY4fW/6aZT7z2Rd46plLX7b9fd/9yBfly3/fdz/G+3/oO/nxP/O/flHKl6eeucTObvvrXnz59eTr/P76D4BfAf4S8NOvsP/vCoLwTm4GkP/TzP5vKX7jC6+vKhRzc7bO35v5kTdHO/pmf/8WRfGWHpe+RfrBm9J3z/rajFsd8bU8eKfdIfQCWvUWmqqRxymmrCOmEUU64vTJBb7z3W/j7Y89yP13nebs0RWOtio4GliyyKnFBVplCYqYFInRNETMZSxbY6FaoiprVCoWtfkWil5CVkpkOTx35Uk+9tKH+dyl5znspiiaSZhHhOmIkp2xtFBmbbmJo2eszlUREpkwTEkziTjLKNSMXMopihBBzEEQEUVIspxI8CkvwMpJm1NnF2jOi0hKQJoLjN2Aq5e3EEUd3TZxJ2NMxcB2Wshqk5qzDKnEYqvJwlyTwXhMEPuQyfTWPfxORjwJmHam5K5GMEwZDdqoWoYqR/iBSy+WebrtkoshizZU7BJz9Tp77R2GXo+p36fX6+COYtb3Ouz7Abv7CddeCtm+LHH+vMvW9oR0MGTYPaC5tEQIXNzeQhYE/OGQfq9D6sdIiYrr+Uwjj8k0YTic0ulMKIoyw3HEZvcGfu5y4LXZnmwgyym9wYSJO6WgT3NewWnIpLKEn3k0Fwrm5iqM/RBBNnD0KiWzSSZKSLaI4ciUHYfF+jzJRGWwH+P2OxxuXMMbtCk5OafvKHH/2+t87x89yR0PGziNEEgp0gjSiCIFP4BJknM47hKlAaIoImsypuWQZjm94YDB2GexdgfvPPt+KskiNclhrlXFcEoIWoX9YcSVvT5DX8fzy4ynBm6ocOH6Hu2+RySp/M5L5/jo51/koDukVipjqCVs04E8Z9DrU2QSSCKFUFCpVAl8l6uXLlK1
54j8BHcyQRRUAl9i53pIOrWwNYvpJCOIZHKhxJVrI6aujqg0kNUmrfnTGOYSw2FCezKh4/UYjnfp97ZAyCmMmKe3PsuWu093v0M+DpkMB7QP9lHQMSSLdBDw7ge/l4Xlu3CTDCHP6HV9Ot2Ug66LHwv4gUa7p5L2jtN+Xmfv8phcEFE1BU1SmKu3KJVKFEXOfffdS8Wq8/xTlxiM+ihyThKH5AhYVglZVVB1iYk7QiajJGmUm8eQqkcQ9RKGU0LWTaIoJhoOsGQd58SjyMu3M00DCl1AzH36/SkoJXRdQC9rFGqKFw3xozGymCFTkMUxQRAShTG+6yKKAoqmEgQubneXUWePLE0RREjSiFSUUBSdy08+S3TtkPzGiMnVLoPdKf7hhM1nXyDa7nC6cZSSYrO5d8A4TumOujRqFoJoMh7FWLaJ7ZSYujlZkpPFOtubLt1uD9fzSGKB/b0erhffaqLcL/HVgzc/B7yXWfDm1VfqnWf4yL/6R/yVn/oTr/m5PvfsS6/7/Rq6zk//+I++Ye31Rr/+bwVf4Xds8bU/V3xTr6+CBPw88BxfOWjwC8DKLGjwzQkYX8M+MzvMmPF18qUpVR564OzX/Mx/+PXf/bJta6tz/M2/9uNfttjxyvIcf+Ov/fdfVv78hRtv9qrdBX4G+CHglfLOvg/4j4Ig/BNBEBZm9r/1BeRv5vUVhOIVbmYKeCX+LvCbwB+e+ZG3tB+ZMfPds7424w3NaxrAqVdMzpxc4+ixZUQyNjav0jra4nvf/yP8kR/4MY5WKsRBn+G4jSkVnDmyyO0njnPnPfeyurLGSq3F8YV5aoZF06mxWGlwz+pRTs43qZdNVEdlKESY9SaG0SQXcnY7LxFzQLlhs7RyDG8Sc/ncJcIwoFZ3aDZK3HXb7cw36+TZFF0rmAynSJKMIIqkX0iZJggSWSGQ5xLkMkEcM0y6/JEPPMLbH55HsXv4eYfxcMx45DH2PXb2xzhOk2a1gqNbKEaL9X2PMJJoVuc4POyxu7NHrV7Bcix6nSF+NyV2LaLCYeDFHLbb7O62iaKAqiNjpAn3rJzk3uUzfPep+3jX3FG+87Z7OKFWWE1sjmV1tJc7tDZiHq7ex20LDxN5ELsCUuEwGkXIlBgOQg5HAanmoMgWdqHQvbrLxWtbtG6/G0WuMk01jJWTCJUVXEy02iKH3R4bmzt4nk4cV/B8hd7Iww8S4olOdyDRG2e4XsbeZEJ3eoBpSziORcGYznCHTMioN+ZRFAVZyZAUDcMssXJkjmnYw50MEASB4bSHY0NehBw9eoyzp+/CElvMN44gItNr99BNnc6gz43dA6pLFd75h29n7T4LqilhkqIIKpKUkyIxcLsM3AMkzaDTj9jY22Xj8IBcczi3cZ1nL1zhYDeiJh9l1TnJ3Uv3Yvg2vWsDRD/HFERKpoXjVNne3iVOI8pli5yES1cv4MU+IzejO/IRpBzHMZFFnfFkQLe7j2lWSJOcPC+oVmsoskC33SGOclZXjuK7HmEYU2stceToKQY9j8k4JQlkLK1E2a7geQmabjMcBmxvHyJkBlI0hygv8fLGHr3JFDXPyHKfUl3HkASSaZ+nb3ycy71zHHT3iN0x6xcvMuqMwCiRFzlKIfKnfuRvIWRrbG0P6A/GjIc+G7tt1ncO6A5drnUndDoSm9sDjIaJIks0K3VKpTJzC4uounZznRgxZ293j8koJ0tz8iwhz3KCKGVnb5+JP6JSL6MYBp4XkU1T0CokdgVJM0lFhViQkVQVSSiY9NrEuYDeOopz7EFKpx5Cqa8SiDqeJDMMAvI8Q8rAVnU0RcbUVeQip0hTsjjG8zw8z2M6mRKHPv50RK/bJo0jRKEgjm/OmJNlHRKXuLfBsVqDwW6H4V6bSbuDahqohonfi1izFjhr1/mh2x9gTi8xcKcEQs5hf8Q0ihgHE7woJIwKckEgiqHINPwgpH3YxXU9wjBiZ/tbOyvjGxRNX7Uoxxv4abXXE0PX+ZN/7Af4tX/1D/kzf+KHX7PzbO216Q/Hr/v93nfnGT74A9/1hrXXG/3632T8JJByM3j8SvyeAPOzRVHszqrrNRPXXpUdiqKY2WHGjG+Cnd32Ky6C/DN/7ke/YhqXd7z9XtZWv/jJ3StXN28V3/FnBUE4LwhC8dVe3HxI4Bt5fQT4agP2zwD7giD8xMz+M6H4lYRi4HUN8s3a0TfmM75F/qMA/uCxzguC8Gdfi/uSJKkqSdIHJEn6CUmSjn3Jvj/4+qJyX7LvFct8E3X9JwRB+Nw3W39vZd8989kzXm9e0xRqqmYxGrvs7xxgKDqLS0dorhxh7dRpnvnPv87++nWOPPAwJ+55lOH+ATuXX2I8HNBYXGRhaZHMy3DHHrcdO4FiGgShy9GVZSaRh6ypLK2tcmV3l/5oSvNMmcvPnufjH/vPCFKKWauj2Sq1is5o6KGKKnmisLO7y7R/DUGQWFo8QedgSJzkiFJBnobIsgyFSFEUCIKIJMvkmUyWB9x1p019KWJ/N4cJZEmEKhvUqiUMLcR0HCqleRplhyTxWT/YwrAtuu09RvsHtDuHxHFMnuckScxCa5GdjTGpKiEbNlPX47blZe49ukrijllcnKfIc2qVOt2tLfzrN3A9H2m+ghsE6KlKS3cpKzl7NzZYCg1iy2LeWIIkIkfkWneXYeih1hUEXWPOttEzhRoGh2GHzZ0DltfWOHX3I+z6GeEwp6nZtPs7PHRmjfZgxPr1fZqLRxh4EUUBiqYhyAliUmY6KMjkHKdWY3sQoCOiSD6SCqGXEcU5sd/HtBz8MEMSXSRBh0yg2zsgyiPUREXVDZIY3DRC1XL8eMR07NOcK7PUaLLbPiCJ+wy6Kd1BTJ5rJFmGaObYLZ2TlWXGGwF75zsomo7hyEyDIVc2NlkPFrl8ZZNuf49HHnmITe+Qf/uRj+IHHo7lsLy0SBbkWO2IOadOgxhBLLDKBmXJJifHmre/ENgTkASdY63bGHiH9KM+x46cwgvHpGmGopaJopBatUal3CSOE0ShAFEgyzI0Tacocubm5ugPhgiiQH/YJ8piNvd2QRSpVmrEcYSoZKR5xmZvD9NUabaqXF/fQBIUVK1KzVrA1mUkJeegf5mSlVO1ck4dvYNJnnD56jXkBYNW2uDw4ICrN65zb7WGZJUI/CH1xlHuOvYe/u2HX0BfMNEUFdUwafcH+F7E0dIKlUThat4nVFSyJEKTBIoCEhEUTUXIC/I0Jk4y7rn7bdhZnfFojK6Z1BtNsiKhUrUpigI/SEhqEnkmEKU5CiDKBlEhkgEFBUnkEacpWpGh5AmGbZMIEmmhkikSmCpunGG6U0qFSJzL+GlKVhQoiomi6Yi2TZYWDHsj3MmI0B+zceMygeeiawbj0ZAoSREVHauxAGLK4+/9LmqCxDQY0ok9NiYR7fGUeLxPJ4yoN20cIed7vvP7ObZ1kd96IWZrr4OgyOiORZLDYDQGBMrVGkEQEAUhge/h+WMkSSYIUpJYeF0d/hdEuX/xVYr8JjfTpf3GbHj85jl9fI3Tx9f44+//Qzzz4nk+/snP89F
PPvlNH/ehe27n3Y/ez2MP3k29Wr4l7vVP/rHv51d//eNvWFu90a//TcCj3EzT9dWmrv1lbuayn/HajRGvyg5FUbwh7BCEIf/u1z7GP/jQL/O+dz/Kz/7kj7GyOA/Abe/6I9+Sc1z61P87azgzvmE+/9S5V9z+7nfe91U/99ijd7G5/V+fHB5P3Fvlln4KuOMWuY5fnNn/lhlb/uwt1DZ+BvgZQRB+siiKX5z5kde9Hb2e7eKOL5z/n38rDypJ0v3AbwHVP7DtJ7Ms+9L29mXluPkg0y9+tTJf4Vivtq4fvcWb8y3tu98qPnvGrctrGsDp9zzuvetuYi+g7JS46+67+OyLT/Gx//RvGdy4yPziCq3mcaoLt9NYPI3gDwjMnKwI6O5vcvL0AzyyMMezL73AeDygtbRA0aohpiaNUotybZ4z1jw7/RHXds7zmSc+ycblHvVag+k0QimNuOveJSaDkP3dXSqVRSKvRqQIeMGQYc8lmILvJciySJTGGKqFIAikhcv8QpUsh9E4ZGG+zgN319ne32LgKpjUby6gLkzZ3d/AMiqcOTtPe3+IqDlkaU6rrmE7Gp29XYREJfADdENHEATyPEcQc8pVjSQcsVhucvvR2/j+x9/LUSXic//lPxMOfAbDkMvBgGbdQJVlCtskCXNuv+0sJQHOffqzDCyNvf4hnc/sIysq1ZPHyYYBSZrwyINnudbeZT/PUK0SwsEYYZKxU3joa/NgKVy5cY3Hv/cH2fnsOX7+H/0FkES+//3v57HHHmXsBahSmUkvQ5IzSiWDKJaIghxF1xGDFE1QiKcCCk2ysCBUYoQ4QSqaJFFMGIRk6RRBzImzFF3TiOIUQTIwdYNCivECnzwziBOBQRpDEZPlAZ6fsrUZMxmPqTvHSYKAUbdHGLc5e9cZBFFis32VZtOiftrAVnQuX9xDEGJEtYqk2iiqxo0bV6nVK8w15nnis08yGvSp1qogCux1O0RhhBILdMYOQmRhZGPkJYdm8ySWIdP1u6yv3yCKIk6cXGVt6QQv7H2aSqqwWl9DN1XGbh8plwiCCEUxydICTVWJ4xiQ2dzcQAAW5+YpioKFxSUmUcr63pDNUYqfZsiojAchRR4hqhCTYlg6nu9RzUpUmxXG7oQs9qgZNrKqczjZpznXwJBGkIdkWobsmdSaVXrJIYN8gZq2yLUr1zl+/Az1xTk0rQTA0bkjrDaPE1YzhCRBVk3GQcC83iDvChhaTLmis7ftIksZoe/jOA5BHLPX3sfSLSqOiRmXee78i2wP27zrvu8jSwRU1aTRqjKdDrh+I8DQJBoPLpLqZQ4nA7xpxCDzsUoWcpqhSQJJHpLGAeQJOSlFEuB3D8lSgWlnH7mY0jhxFvdgCzNxKWSJHJUiB0kC27CIBBh7E1x3xGTQYWf9BoPOAadOnyJMUtzpFFFSmas2qZbLSJgcu/9dyJMh04vPo2oympVw1JmjWnI4HPYYx2OEYMTvfP4JBtMBP/jod3Jxb5ffeu7TyCWNyAs5dA9urjvkxsiySiLm+N4YRTFJ85xKtUWWZLe0KDcTR18b6tUy3/Oet/E973kbf304Zmv3gE6vz9MvXAD4ioGDh+65neNHlimXbE4fP4Jlmtx+6tirDtp8O4XFlcX5b/n53ujXP+NV83PcnBX4lfjH3JwN2J1V1Ws6TrwqOxRF8Yaxw6eefI5/8KFfBuCjn3ySStnhb/2Fn5gZe8Zrxo/+yHu/KCXL08+e533f/dgrlg2CkP/jX//al23/q3/xj39ZGpYv+37wwNkvOs/nnnz5VqmCO2b2f0vb/6uJsnfcgtf0i7N29Lq3o9e7XbwW5/+rfHFQhi98j/3Fb6Dcqz3Wq+FWD97M+tqMGV+D1zSAc9ddJ7BtiYvrmxj6EYajA/A77F35PA88+AiLd34XaVAgF6DIBdWKg+TbJLJJrItEUs7Cygr2wS6DvV3qqys4iwuMOh0SxSLVy6iawJFqnd/4z/8v5y4/Tb1Wx9AUBE1AMyzaewGJr3PkTBlZlnEnCUWuEAYRWeKTxAqCJFCq2kwnAXGcIkkihi1iOAKeF1Oua7RWm7QHIl7fJhh55GaKrGk4lRKCrOGNNUZDE8Mp6Hv7tPd9SrJBGBS4HshKjlMpceL4CSzbJs1SkqyDUsp47PTt0PN5933v4FRzkf/0K/8CsjGD3oQjS3dzp7kKUkHzztsZqgoXP/Up/M1zWM0SD7z7XeRKjUelgs65J8i8A3x5ylY+Ymvk0nRPcoe9inr9CqPikOv7HUyjTD+JOHLvKbzEQ09FDruHrK0ss1p3ODjY4thykzwNmfoBpuYgyTZTb0CRZciFwqATkGkTKtUaZVFAyWQCClIVwkhAUWzGYw/Xi1hdWaAQPTRTw/MiKCR8PybPFUqpgVISUXKBcJgiKwpB6GPoCo2mjaqKjAZTJpMQUTDw3AJNLjHXMAknAZ3OiDl7Ea/bY5oNOHJ6DTSJyxd2KLA4snSKvTin2z/g/nvvpVptsrW3hShLZHlKnkES+aiyRJZJTN2ARJQQBYN07yqu30ZXFKbTKaIk0et12dnvoTz5JMgelfIiulFH0nTssk0YRDz55Od56YWX+MPf8wEkRUcrBEBmNB3S7XXIshRJs9mKLaZ7Hnq1QXc4pT+YMFerkCRgmzqyKaGKECUBOQJhEJEmIb4Xosgig8gn7weYpoCpaUzCMplgEPYmRNMRkiixsbdF6sE77q5TTEOuXb6IUyujCCpIIAop3WEH06lgSzqj/pRypUT7oMfOhX1ue/A2zIUa87FItzemUW9Srze5MThE1jXmFuoMBkPOX7jE+rU9zi6dxHIcwiRh7ehRao0qvjel0+sxHi+gaQa5DTVFwp36TMMItVJCklVyUgRRQFYVdFUjjGWCAiQxIZscUgwPSSRgxUUI++ztbzG3vIam6yQFpEnMZDwiKASiJAYyuof7jPsd5moNxCInTWIEQaTebFCv1xAmQwQJYkMiFh1WH/l+xJ1rXHv+vzC/NM+R295Bfu4Cbm+PLIdnXvwdjt37TlLRganHncdOcvHadQLXY355AVVR6HX7gIgoAYJAFqeIkkqORJxFt6woNxNHvz3Uq+XfD8B8z3veBjATFGe81fmpr7D9I9ycEfhbsyp6/e1QFMUbzg6/FyT/PX711z8+87czXlPKJfuL3v/Kv/8t/spf+G9eUdz55Kef/6KFjX+Pe+46/XWfd/NbnKL3m+ACt4ZQ/6GZ/W8p7ph5h1k7ukV9xoXX4JgfeIVtVUmSHs+y7Le/VjngceC3v85jvRqe5NYP4nxo1tdmzPjKvKYBHNn3OH/xPCWnxPXL53jmyU8zV3d48M77WVw4Rnl+FbO+hFKETNo3CGUZeeEUlulgxgmKrNCdjhnFEapTI0sFouGQplnDSxL2hm3qdpXh5jbnPv8CeVGwuFxlPPYIYx9LsplOfGRZx7GXOWzv43lDdKuMplkERYqKQqWiEkYRug0EMVCg6ApuEJJmGQvzLQI3JI4LJsMYSdJIi5jrN/qsLC2RpRJxJDLJEtIswnIsZLlg6EYYpkmaCe
RSgdOqI0sSCiLj0Qh3GnDHsdtYqdjsddoEQZff+S+fwR3tIVCwdse9vOO7f4TtT36Y3u4O1150yZYaPP7BH0PcPYeqqzQe/EFcL8KSC46cvoPpwS5RxeZY5LG9s4fX7jAZ7bF6bJHb7RKOeYPndveQRRFxFFC2bMpqmZ3r2zSWj/GDH/wBqvU6Zx94B+1pn0niougGWZQThzHTsUYuSBSCQk2zKMk2iT/GKZdJg4goS0kiAS/LyJIYx9IgF1HkEr47xp3E9LpDRFGlVKsynvpYgozrZqRxjijGuNMAVSmhqibD4RhJ0KlVbfb2DhAFAdOycQyDsTtmNJ5iLpSRfIksttnaHaGYMq3FEvOVOrqi0tlpIwkJ9dY87Z0dRpc+w5wiM9oZoxsmZALOygmKZpVRr0eexiiWxbSoIw27uIjEpFhlm2azwnSUMp5OmV+yOXH8CJWKQ+gGyOiE0YTD7i55EROFPqQ5omKSpRFXrl6hUETawwGl+RA31wjCkIoyT61aQ5EVDFMkTzLc0EOVNAyrhFAIJEFBvxeRxT5yLCBUDQRRQUdk3mxwdW+LPAJDruJHGZGbUSXnjz/wAU7cdjtL1RK6kjDpXsO/oaHP34EulVldPcKaXUUZ5VSaZQZxiD5WCdwpx5dttCIjxiYrBsSpz9ziMnmR8tLLTzMZT1hdXcHLU0behMXFFrZlstO9Rm2tAoKP6w2pVGwUfY5KrUpv0EWIoMhB1zQk3SBKJBBkElFENVqYpk6W5MReAHKGH3jkJZEja2ewSja+O8EqBFSlzqDrUaoLyKZFmPoEAw/RblJ1ymRTl8H+IeQZVkkj6h1gGBba/AooAp47RFY1tDxi79xlaq0G2uqd9Nw+tbJJvWKhzS0RXbvM4cEOlumwsDiHH/bY3lXR9RrLSp3miQqXrl9kGqUUJfDjEBBRFZmiSDEMnSwT8YYefuDPxNEZM2bM+GK2ubnm1h/kZ/jKixHP+DbaoSiKN6wdHrr3ji+a4Thb62rGa83pU2uvKPp86RO9O7ttfuGf/cqXlX34wdu4794zb+Qq+BCv72yLTwA/WhTF4cz+txS3SmDvS9vqzI+8tX3GhdeoHawDx77C9q+33Ks91tfT5m/FIM4B8PPf7rSGM589443GaxrAWVo+SpQU3H7H7SRZysbmFmkqsHD2XubXVkkkGVPPuPbC04z3NrEqDerLZzCdMv50QLezx8b+LlGekiYRvb1tdjenLB29j/rqURATDg63+OzHfhNHdbCtEp4/wssLNEslCD1AoMjg6tUtxpMhsiyS5zlhmBJHBWkc31yfpEiQFLBVFUVVydKcrBDQDJ2p5+J5Uyq1EktHVrl+/RqGALpWZmtziCIryLLIxPXIsgI/KFA1GT+L8ScJmqohqiL9wYhSoTIajfHCiPtPnMVUDXa291hdXeTq9RcYHF6jourcdttjHL37HXz+medYPnUfZx95J9Zcg432Hvu769x1z3uRtCpyZRlL7hGFI6SVUzSW7yNQJErRBgunVhl0Buxdv0FhaMTtA8pyxmLJopcqxGFGuVaiWWpilCr4oYtasWgdOUohyex3timkDEmW8EYehmHiejGCplKv1WhYLXZ3tknTiDRLqJfKiEXGNA2IMsgFGUkTycSUYd9nPO1g2QZxEhMnAag5WZKRZzVUxUBSE/I0pWTbSKLC7k4PWZRxbJVBf4AoSqRpiiCIrF/fBTkjKTI29taZc6osLK1xY2efkTukEELm5+pkWUKne0icZMiqTJoGVGwdQQBZdEAQqMsqZcljY/+AyupZvPGALE0IrBbxoMdCw6SIfYS0IBFyGkvzSL0JQTim3WkThiFpkqMpOpvb63T7B5iOhijkkKYga0wmQy5fvUSYpIxDj9Gow27g4vsBC8dPYOomC81lNEUDFSh08lxk3A0wTI2q1WA0mpAEIjVLRxF0kjAkSXN2NzokUQFxgerojP2AsmrzZ//oD3P2yDJSq0n3+iXkQuLobfciqDZu9xzhuke5iPkrH/wxHFVip9dDb7QY9aaUdAPTlAmyDNvWOez0ePl6j6PVOvmoQ02IsKUMoQgZjnskSYRTsplGEZ2dHYLjq6xvXSTNIlZWTqDqy6RxxuWrF7n3zN3kCYRJgWGbjMdTgjBAlECWZPLxFD/wyfOMNInYPzjg8LCNl8QsrSzz6JlTxG5IuTGP73uMR32qlophKIipiF5ysCpVBru7iAiEUcj+/g52nDB3rM7C6iq5IGEUKVapTOGOUOKI/vplnn/hORJJ5cj8PHmQMTlcp+QYNJcW8fpjuv0pzVKDuWqZEysnubi7ixcU3HnsDB959nfJDAVZl8ninKzIiJOEXBCxLYfECymK12UNnK8oygmCMBsFZ8x4ixOE4Stt/nYmZ/69H5R3fOH/vw+MZ5Z53cSs37dDURRvaDu869H7+TN/4of5F7/8Yd737kf5k3/s+39/3z/52z87s/iMbzkPPfDlGuTP/OV/AtzMkW8YOs+/cJmf+5v/+ys+gfvBP/o9r+o8Tz97/ovev++7H7kl7r8oin/Oq1hP4hv4/rkM/CXgp7/C/t/k5kzB35jZ/5YdX26VNGq/8IXx7XDmR17/dvRqfcY36T++9Jyv9W39IjezW/xBfjvLsvVXU44vDs682mO9mvv+ZeCXv82//29p3z3z2TPeaLymARyzWuaOyj00Gg0EWaTUbGFoJXBMqNWp5yKHVz5L2Gtja/NUKssgmaSqTaCm3Dg8T3/iYjk2qqYRTQcEeYxmmtiWzd7BBp/81KfYaR9SMcuIgsDYTykMhShP0CUFXVeJQ5Fut4skSSiyShikhFFCnip4XogoglN2SOKEOInRDRNZkugPBhSyBNxcsH00HLG2tka9Vqbba7O81CJLBxSA67vIioKmqownY7IsIUpyVEmm5NTJ84S6UyYtCrYO9nngjrs4Vm1yY3+LldVjSEnAxvXL1Csmy3Mt6iWD9sEW7/6+96HX57n4/KfY3bjMSqWOqolIVhXdmaO/8TJq6OJlBbk9z8LqAkLqc/n8efq9dSS1wdzRO7BtE6HSJBiOOWU2wEsIkgDBlCnVq1i1Bt0oZDCecuHiJe42HHZ21+m226g6CKKMaevERYioqJQrVQ4Ouoy8gDxJyZlClDMc9VBVmSJXGI7GxJFEj10CL8epVWnMNRFlibLdZOKPkGVo73fod8bMz89z5MgyCCmD/oQsy5ifLxF4IaORi6Zq+F5IlubM1+YYTHoMB0PsSolSs8F4NGY8HtKcqzNKIhRFRtU0wjREMy1US2JvYx0vCjEMHVlX8TyP1dUVDg4OiMcd9L6FUl3DyGLiPMGtzBOJY4QC3PEYc6GCs1gnUzTWDzLILF7uJpRLJTRZ4NntHp1cYclQyUhBufkFYb+9R2fYJoxFurmGlcu4oYukKOwfbCKbJbzYoR+AbBiYmobs9kgCkSwcU61WyHzoDUCrzTH0YDhOMXSdsm0zjkYIgotGyH33rPH4XQ9w9wMPkeUZ3d11sGtUVu5AcKrkhAijXYLOBaLxiFySGUoKVc1gqV5jL84wDRtT0
bCbcwTddaqmwMP/zQcpBi7dwQ7vf8djfPdD99MfBYymPkF3ifM7W4yKEEU2MUSLnZ09VFXF9wN297v4bsDR95yiNldn3PcoVIEwT4jTCQgZkRdweHCA53uIkoRj28iywnR6SJqNyfyQyy8OkKOU2+dLWLU6tlUmmabkmXSz35oWRrXK2HVx/ZDTZ2/n5ZeewR2MOHnHXSydPotRqaPrGloW4QcFkzBFbi5hKDn3Ghbt0RhVVJGjgM0LL3L6vkdZbi3z7Cd/kzzwmVs6jqEaXL98DqXe4mAy5Lve9Xaujva51NtFkCUUSUWSVKRcIYoTsnyKIArEafR6+Pk3nSg3Y8aMbx1Z9oprc307F+z6t194zXh9hdc3nR0MXeenf/xH+ekf/9Ev2/d7KSxnzPhWUq+Xvyyn/h8UhL4aa6tzX3Mh5N/jxvruF72vVpw3c7X+1VcQMP8gP1cUxd+b2f+WHl++SKR/jQTkVyUUA7/xbRDwZ+3orf29+u9LklQFfoKbKdH+H+AnX6Ho3//C/q9W7tWUmfnuWV+b8RbhNQ3gnHvpKZrNBgd714mSBM/3aJYqZGqBWxQ0nSammRIFMnfefjdXd7aI+gOW9RNcv3GFsTsCVaLcqtLQTZ791A61RpPpdMjVJz7OSy+9zP5mmyOrC2ThlDDJkJ0yhZiSpyFhEAE5imxTckqMRlOyLKZUsRDFlCiBwI+R5IK5uRZu4ZJmGbIkoesGuq4znUxvrpUBWJaNrBQkaUQS50RxTJonkAtkWYaqql8QrH2yDPIsIxdFgjDE86YcXzuCpRuUu33mJYMbV68zzkNOSzK72/sojo1IRkXVEaZtFF1h0t3k/H/4RSZBF8mQ8FeOEugt4s2XGIZTdqdj7r3tQXRnGclYIAW2ti/jjUO0osnZUw8hpilu7KLVFrj/e/8I+3vbtHoDrmyv0xsNMfUOty0us9BqYNYiilxmOOrw4rlnyIHATxAllbQATTOoNVo41SpuHGPJAoGXoperbLYPKUSJ2M1RCoFaa540ixEQMcs5kHG43yEIXYo05/57HmFrb4ONYB2zIpEScfXaOgsLTSgEyERcN0IRdQQUJlMfUZRxpz5zx8+wsrjMuesXMGslupMhwTBgrrnAcDzGtDTuv+9eatUqQexTqdukecLe4S5eOKG5UKHX66FbClbNpHeti2xKuP1riEaZQtOIwxC5XKK9t81ySaF+fIW0SLixdZWpskhsV/Ekgb3hkEIvM+q0eeHaOnJjmYEg8dR2n4XVbZrNVc5dehHFEClMk9yucXWviyxJ1BsNdndvIMgq+5lBGgVUNAWUEtF4HUPOyZMIRIFEryDOL9KPCyLfQ9MckkIgVAymYkicwZ23v41HH7yTM45BGpWQWzXycYRh6wi2xtjfIy8EyquPIspzhBufIxrs4thVslRg1J8wGAXojs3+4SV0QcZQFvHTCcn2BYpcw/c9hElItVym2ppDKee858yd/NpTT/K//fL/SWlpGblSY9DvUS6VGA5HeJ5LmkEqFVjVMqORTyaIaLqNHo+4eukCkiwjyiBKYBgqeZEiSRJHjiyRpi3iXMT1cybDCftTmXx7i+OrS1TnF8lSgSwpUBwbo1zhsD8mjmPmlxc5OGgydkPKtQUq80dQaxUcUyHsHXBj6xKjwIciZey7LK2usdcf0XIq3H/mbqZyglprYOkmwtUafq/N5euXad3zAFd2t1l0HBTbIE5TTiyucn53Hdk28H2XPLfI0owwihCQMAwDRVZmotyMGTNmzJgxY8ablB/5wHd9mRj0avgbf+2//5oLIQP0+2OeeubSF2176IGzb6g6ejUCuiAI38fNWRvf+xWK/FPgfyuKYndm/7Nv9W73NYXir7F/5kfeQO3oVg/AAWRZ9teAv/YV9v3Bt1+x3NdZ5papvzea7571tRlvJF7TAM7eYIRgWLiuh2roDKc+L17ZwpBTyo7ChmawfGQJSTEJrj3N3t4Qw5EJ0wJ/OKLmyHT9gJ2DNn6lgVytE6URg/6Ap555joKMVqsGOUiSSKYrTHLQC5AFmSh0kWRIooAkSwDxZmq0HHw/Jg5BUTSWV1qIkog3dW/OpnFdRFGiyAvyPCfPc7K8QBMFkjS6KWAXMoIgIEsiuSBh2zZpmuD7HpIkYlgmw/GUIodBv08qZfQDF8YRdiaThQmOXWIwTZl0Duge7pNqJmI2ZZyInLjj7fR6PQ6f+gzmwjHmtFOUF1qIR4/RfvoJ4jRmtzdAMCyUyhHE2hLD3W2mgwOqqoi+eowbe/so1TolQaa7vkEQeHR6m1RbFe5pnMULY56/egFBkSk3alw/bCPIFseOHOXi1efp9g+wDAvPCygEBb3cQsLmMBAY9DUEfQU/GOHGEd2hgGIfZ3G+SqfdYzAOybUSpbIOssxwZx3d2yefpghigiAVvHj+eeaaC9x5xwMc9vbw3DGqruF5MUJRMNdo0OuNSBIfWdKIoymlkoMoCKiSQrNSZ2n+KLmR4wYetmVSRAVhmHLq5BKlksPUD0jyBEnLuPlXoJhw8o5V9G2RwA/ojfdx6gbt9iGGqlO2UwrNplqzCDORLGtiVHO0soISZmiyywgfwdAI85CkSFBUhb47pOtO0CydaOqyMZhwteNilye8dPklNFtH08qkIkyjgDTw6A97LCzOEQVTZCXHKZcxDBNBEUnFGEHLyJKcvndzLSJVLZMVBZZYgjgFVaY/niJkIt//jse5/dQdhPoiXQUWiAluXKUIE0ajLlraxx/3SWIRa/4McbWCF52iubhA1t4EN8IdxhSkZFKdSn2J4eAQpT5H6/QDCGaJiTtkOhrCxEPJFARFBkKKLOX93/V9SJKKIii8dPkK/UEfWRGpVUv4gYwiCOwfHvDC+RcpaTWc+hyCIuKNNM6/fAkv8LnzrrOIgkpRSMiKRhDnTL0plmVSGCaCGCJOM6aFxrKg4PW7OLU6uqGThjfXnJIkg3K1jKKp7HW7xHGEJskgGQiKjaToFOT4rsvz555hkAaIUsFg0OHa1nMQx8T1RXpbV+mkEw4Hh4R+wPlhm9SEybTNhe0beKpGc2mRF557hmvXTZab89TsEqM0BEFAkkTSJLvpP9KMUqmErMizUWfGjBkz3iA/wGfM+FI+89QLeP5/Xc9uNpNnxpdy5tQaf+d/+kn+xv/8L1/1Z/7qX/zjvOPt976qsk8/++VrbtuW8aapP0EQmtwU3L9SnsPf5mbKnQ/P7P/GtP83O/7/gRk8r0ooBnZnfuSt5UdmzHz3rK/NeDPymqqJtcY87f6I7c4BdrNGUuS0Q5+WY3J0+ShJJtAfpWSqy7WD5yhymbpYoTsac2R1kZ3tfVBsJlOfzRvPU3ccsiBiunuZNEy49767KNkGn3/yabI8pFpr4Q+npFGMZZmEqUuapojCzUBMGPk4pTJ5kSMgYDkaRVLQsm3G0ylFllGpV4mSBD8MyPIcURSJ4+Sm8CoKTKchSQqFCF4wxbIdRgMXBAjDhCAIOXFyjTANiXsxiiAjyxKVVgXbtmjmFvudIefOXeDMsaMsL88ziXsYFYsikzm+
fBtzp89QuvcdlHITKQ8YeFMss4ZZXwBDo3U8x49TyivLHD1yDC+QGF+/gSJFWIaAZmoMQhG92mLiuWgVh5Vjpxj1umwc7FLVyiSySSKmaIaIYmj4YQSZwNz8Gnt7PZ55/imGoy6FYtPzLSSzQhRWiJKc/mBC3hnSWighSCJBEiDmMO841HSDQ39K4o1x8xhTqSHYKoWUIUgZuqFjmCbj8QAEAcuyqNWazDXuxh+PORzu02oucu3qVfK8Qp4l6IZBAdhlHVUv0AWVUIy5trOBpVlIukRjrkLip4x6E9aOVkhij3MXX2JhLSdMJ0z8Ppu715HUAquis9vZor5QptsNmUwmLK2u4dRKOJaOXquxP8oohAxJEVAqJUbBJnauIsgFhSpSZsLBoUtFNrCne3jXRmw99ynE8Q7W0lGKvMD3RmiNU1y5fp4rN67jxkOamkSy9yxCroDZIityBt6Uwe4Gpj/A74ZkqkSBROaPSTQdVdcoioIiHpHHAoIbo4sCheARBQlmrPCuh9/HvbffjxsGTPsDXu6kRGoPoYipVVvkeU7voIdhKJQth2jcoe97aE6VytxJru2PieiyvnsJq96Ank4jhfbWNdLsFImokkwCRMuhbJqkwz3izLi5xtG4T+fiDc6YDt/32HcwzlWeWX+RaeQjpzrjcMBtR1doVSvsdTpMekPqR+YJgwhV0CgEEVnV2L14mapuYDVrRElMrVbDKTnkRY7nhYReREGOqWskcYQxt4quiURBhqKBppfRzSoFIrXyHIJa8PTTT9LfO+Rdtz2M3VpAtixUSaWIpuy093nx6stoFYs8S0jigCgUUCWRKAwZjz20Spmx18MduwSShlTcXM+qPeozv7jG7v4OgqaQmhKbgw5JnJLnGaJsEPgFRSFiWyUEsUBRRARuLcHym/0BdfWzH5mNojNmzJgx4y3DJ554hl/99Y///vtZAGfGK/GB9z/O1PX4+//o37wqIehP/bc/8KqP/bHffvLLtt1+27E3iwD4E8BXU9H+ZlEUf2dm/zen/b8OXpVQDHx45kdm7WjGzHfP+tqMNwuvaQBnGvoM/CmpBIfDHrpl0pivMhlPOb+5h64arB1dI5dkNvevMj8/xziYYlsOu7099gYTyF0W5haI3BH94YTuwYg099EUDd9zOTzcwgt9nLIFWYqtiaCVGI8miLKAoirkmYAoS0i5AGKOO3WJ4xhVUW6mUysKRv0+uq4iigKKIhFGEaZpkqUJAHmek2QZUZwhSTq6kZHmGd5wRBLdnKVDAYqsIMsySZSSFTmKAJZtoxYiK3YVYRgymk6xNJ3GQoOB6LG710GJQswMqiUH0zRoH2wi6S28LEcUAwpS8qmLFngIkkU/6HDi2AmiJCcnIQp6FKqAVS0hSBKxbJDmCWKSEcQRjVqLw04PSavTabvkDZEjJ47RG+zj+wOGgy4lvcHKwlHc4DLXdvfwlAax4CDaMqWSTn/QJ05yZFlHkjSGPY9qo4JlmBhywdJ8ld39Dlmes7AwRxyB5weIQo6qS8zVKkQxaJpIIZZwjDJuOMZISpR0g6XmHIqm0BsdML9Uw52OWFyZZ+q7dDrtm+vMaNrNAFUwZWdrl7m5JkEcMR2Oae/1UGWBWqOG6wYcjA9J+xZBPEVSFPYONlCyiLn5Fogw8XxEVSEtXDIiyq0KEgVR5JLkGkkWYjgV2nsdjtc0EEWu3rhOuVxGTicYcZ/Fxt2cWTuObKR8+vN95OiAbG/CVGii6SaIBlfbU+I8IstiomhAuSxQMQwujSNsXWHihhzubrPiCCzULDzPo9fpUC5VWF1eJY4Duu0RYRxgiSmjfoznmDgtgSW7xAcf/yAPn7mL83sjQqtEOOyhGzYb3S5zLYcinqKrDmEk4XtgZ1AyFcqyzLUblxCzEzSO38eTT38MzzRpLMwhCgoXX3gZqShwByO84DJqpcpS5T6GozGR7+Ks3EEoWjSwcA4zrj71u1RaK1RXTuNkMUfmF+lPQ8I4oVVrcHiwg6A5XN/fRpAsVpdPgpIjCgXVSoX77nmA937H40R5zI3tdZ569imyLKVWq2HoBmmSIYkKmqaRpIeYlsp3PvIooh+R55DlKlmYI9kgKTKFBJNgjKjrCHYFQVXQNRVDVtBki+3DPTqDDk29hSiIxElBmgpMiwQaJQIlwJuOmQYhsqSgxCn94c1ZYgtmCaSEdu+AoT/hueuXEUSwKxXyICCIUrKsIAgDVK2EqggYuoyAMBt1ZsyYMWPGjBkz3uT8qf/2B7jnrtP8wv/+K1+WPgVu5s//mT/3o7zvux971cfc2W3z0Y99/ou2/eiPvJd6vfxmqbaf+grb/xnwC0VRrM/s/6a2/9fk1QjFwN+Z+ZFZO5ox892zvjbjzcZrGsC5NjjA0HVETUGNEtzDHqKlo2nqzYXk55vsD7tEfoTvBwwGA5IwZSgPECWFNM3ICo+9wwlbG/sokkOUZjSbFSRZYhpMiYuIKAuxRJ2YEMVQ0bQSXhhQqVp48YQkiTEtC0kVMXSdPCtQFIU8z4mSmGu727iBR7lSJgwjJEXCNnWiOEQUC1RVBKFAkgUm4xGe52FZBqIgkKYRFCKCAJ7vIckiaZaTJjlJnjBXq6PpKpYkUs4kru0dopgWUeCzs7/HxErxxhEVWUUr5bT3XmRuziFOI/zsIqmkUW7Nk+YRE/aYTsZM3CFx7vP8czeQ5RKra0fIUg8/zgmDKYtHbqc/HnPx8otUrEdA03jp4ksMRz0iPJrVBk6txuGgh204zC06eP4+8TRmJR7zxKUrtFOTWslBkzQUVcYwRXzfBwGCMERVFQzTwfNiskxkfnGJubkVzl/aBFlANQwQC0RFoJAEshDiIERRdfIkplKp4rkhk/GYgeux9OC7iKMU1/dZWT7Jja0r+JFH2N6lVC1j2yol02Iw9AjiHHfaxbYsEHLyNGE0niKLItVyibJTozMYcWlng8PAB9HBVCqkSYw77tIUFfKiYNgPiOOEWrWKbia0JyOkQkFWNVJJZxJF+NMRRZFjGBa7u/sIgkS5VKZ7uI+qSZw6fZKzZ+9mt7/Fbm+IpNlU61WyzMawTAJ3RJAU5GmCZRuUrDKqpGHYFuXcpt85RNZtqvUFNHnAiVNHuHFlC3Vpnlq9RpqmOI7NdBpSa1TxJlPiskgk52SewF0n7sDRHD75wtNMwoJ24lCxLIxmHS90WK7U6Y18SrLM/Ooa4TQk8lxyRSNNJ5xbf45+1qVhL7DX6aGoBmGUc3XrZRRRomXO0Rl7zGsqVXOByeE2buwhN1ZpnnqA/uEmLz5/iftPnKXbG7Bx7reQ8HjotgfZ2LmN//iJ3+HE/BLxdEpvPCbXUy7cuEb1ffPcXa4SxyGkBRW7xMj3udo74FhznlNHz3D6xO1cv3GNT3ziE3S7XdI0Q7dMdF1nMp1w6fJllhpN7jp+CiQV07YJwymKK6HaNebmllFkk0pdJU0i+odtVk6eIs9zgsRjv9NGkCREQUSWZYIgJCdHkRUmY5fhcIJpmIgIRFFAmqUUSo4fTEk7BYIiE/ghbhwwmI5QFQlRlCl
yEVmRULSIcrVMGII79Sg7JmmczEadGTNmzJgxY8aMtwD33XuG//v//F/Y2W2zubmP6wUAHD26yJlTa1/38VaW53jxqV/m0uVNbqzv8sTnX+I73vXgm6nKXmlq+AeLovh3M/u/Jez/aviqQjGwPvMjs3Y0Y+a7Z31txpuR1zSA4wvpzQBHmqKJAoauE+Y5uqMjIRH5IW4QocgyiqIwGo7RVZ3JZIoggCQLWLZEuVKiVDJJUxEEyIUY266QklCq2lT8CqIKsiaQFxluMCaloJBldNkgCH3COMEyDBRVwfcDNE1DEAQOB33yokDRFaI8o1oq4blTRE1GVUUMo4Su67jeFEVTyOICWZJQNY2CCF3TUC2TLEtBEMnTFFGQkGWFZqNGs1HloNslEyQ6owFdf0zJLrO6vIznTZgUBaKkM3JdREthnOfcmEzYXj9krrXIPXefQTNLuH5IJuSMIp+YlChJiOKUhWqd9RtXOX3yDPu7hyAleFzlxu41UjGhUHL6ow4Xrlyg1qogqDnNhRbr1zewTYu1288w8vdJioS9ScH655/hwtY6RV4w9QIMXSBJIoIgI8tkgmlMjkQh52h6SpzdTG+VxiLtwyk5IpKikBY5qq5jmCq5JOBHAzRFvbmYu65hOTbd4Ri95OBUyqSkZEmIoinIqsyxI6c5f/klEDOOH1lhNKpyY2sHWRXxRhMajSqGaXB42McyTVqtOaJ0hGXZjN2YzsDDrFpIusyCM8e1q5s4uQ1RhJe6IEvESUqappTKDn44RElyZFmkH8a4YoCoyFiWQzxS2dvfw7RNFubnSdMUL4HUXGRaPsnvXB9w6fI1DpIKkqGjOU202COOI4osJYhCEGKq1TJZCKgCe4dt1FIJy7TQ7BL1hkVwGOC5CSdOrLKzs4c3DfDDLkdPrFEuV5BEAd+RyGQZXXY41liiNHeM337xKmKuo5k2qeyyPu2xNe6z1ioxmk4ZjMZkhUy51sQsl5HsEkHYZRz5xHLO7nCHze1tNnb2aLZaFN0eqSRy9PhJFLlMp7uHVquhyyb70xGioVOp1Ll87gnCvStoIozNJnNve5hxHLPV6zPZuI4chNw+30QjQRiNqDkVzm9t8J33vo216hzdQR/DspAFDadkE48C/sNn/w2iH/CeO9/Lz/5//jJvf/gRllrz3NjcZGt7i43tLfzAxzJtKuUaYzfAy0HTVXIxoygSAn8IisTiwhqioDIYdlEWbyP2Q/IoRS05HPRHrO9tohk309NFYYQkyvi+j2KrxGGKYdiIokQYRCABikLk+6iqiqprXL5+lTwDyzbJSCAvEPOcQigQpQzb0onDiDzOUCUJ0pwsmQVwZsyYcWsRRvErfoWb1cyMGTNmfGtYWZ5jZXnuW3Isw9C5794z3HfvGT7w/sffbFX1IW4K9K0v/P/zxZtgwbSZ/b+lvKJQDPy7mR+ZtaMZM98962sz3sy8pgEcSZRIshRZhCjLsSsl1CyjKhuYhs35nS0aVokiT+n126iyhm3aiKJAQUy1VsZ1XQaDKbffdZqd3TYHewM020YxJUaTMWKqoZgycRYQ5QW2U8LteQwmIyQ9Z36xQpRAEuckWQpxThSH6HoJURRRDIVMEkmTlDAJiVOdOI2wFZ1mq8HhYZv9wy7VSoVWs0GvO8axJMIgQNEEbNvGnYQkaYKAQKVSZTrxcAOPY6ePMGx3iOOYWn2OOM9BUygEqB5ZRC5qVEUVIYoYj7vs7G7h+Qq77HD6tvt57D0/yNQP6cUTBE1ia3uT6bhHQYymSsimwPX98xSFiHwoEIYpgiTR37rOxs5lHKeKaMDLLz9DSMiN3TZxVCCqBjmwvXuDckVmp9vmMFI5mMak25/BHU2QNRlZUvB8H0nJkGWBMBDICwlJktA1sCwV4hC7pDKZDOjt7yFIEggZSBJhkiAmkEY5Qpaj6zJxmuO6U4yqjVEysS0H3XG4vHmFtdoyqmEwmk5xHIeV1eNMwgHrOzvUnBa1ShXfd6nWLerNOq4bkOUJpmUgaTLkOakiUUQqx4+fQTMgDBKmgUu9WUJSTKIoQtMVBBnyPKMoIkbDMe7I59TcIpWlFr+7HZDFGUYhookyoiKzurCKIIl0u12cuTWslVV8L8Q0DRzDZuL65JJCtVIiSXPSOCdLcwRuziRzSja2Y9Pp97GdEu32ATX7ONValUSQmYw7qECepySZxGAwIAhCllYWqdUqXDx/g16kkzo1UKBl2zQXlkgMk2nqUS87NJo1ur1D/NAjIEXpuEjDgCwXWJxvEQd9NEPBi1OiyOfixnUOh0OGOwOCcUCeCkwOAp671OPsmRMcXYaJe4Cq29RX7uLy1WsEmcuZhUV2drf5/PWXefT2+zhy7E7OXbkG23vcfuZeoixjd38PIYt55OgKVy5dxFLnONpc4dFT92BbDtcvXuGBxiq5KDIJQzIE7lhcpSnl9IYd5pslnn/mM+iaQRJMec+7HmFu+QP8+n/8z1y/dg2nUuXMqdtotBbxM4GKJDOajDEsFUEo6BzuMDd3iocffoyP/sa/pVGrUq/ME7oe0kKdzd1tupM+tmOhazpRlBAELoIgkueQJCmiJDP1PLIkpVyvkmc5RZpjmDpJFJMkKbphkBc5SZYTpxHNWg27KhCEPpIgERYJuiWgZCIiAqqqzUadGTNm3FIkSfqKm2c1M2PGjBkzvp0URfFLwC/NamLGV+HLhGKgmFXLjBkz3z1jxpud1zaAkwtEQYCkG+i2xWjqkScJy6U6aZogKRJ5npEGEbqioNkGo+kQTVYw7TKyImMYDgftNrImU6uViaKcKAlJJhGCKKIYOtl4iq6b6JZOu9dF13TqLQunZJGkN2dAZFlOlgmUbZtatUIcx+iahaZpxOSokkQahOi6jqJKCErBxPVJ0wxVU5A0jYnnEfoBumERFyGqqiKkKXkaYzoGjqnT6Q3RZJMja4v0em0moyGO08CpVMhJyVWJMM/whJSAnHzq0mxWsXSQx1PaboxRFARJRCpLTKKA3c4Ga0daGJbIxE047LYZeyMQU7woQFV0Lu7c4OSJ2xAzkc3dDeI0ZDQZ8PSLzzAYtlF1la3dHerVJpe3LiLLGi+9/CzlqkFRWuFwCuNJFwlI04hWax5JUvC9AE2XiSKPVC3QDe1mKrLARRgJiIKIJhtADnJBkWYoqkaWpRQFyKpBNPWJvYhASTCNElqh4bs+QiGwf7CPMTZpmTalsknPnYAgkJHSnGuwf36P8WSMptgoooAkSthVizhLSfOURquGrAokSUyS5oikCAIEXoA3dLFKFrIM5Wqd9iAgijPMusHRo6sMhz3a7Q4FMqdP3E5TUxmEKWEOSRwjJSkBMlrmoekNhqMhXioSBhqaIqFIMnrZZnf3POODi+juPmnSIUPBXDiGbtikWYImySxUmui2zr48YuwF1Bt1ROlmOrpUVBh3+1jJkFrTYjDyqc3NEYU+9WoFTVEJlTKiWkaVwCnXsO0yGQlROCSa7LI33cUdzyPb89jVEgeDHrIfYOkx5VoLSZNIhYIoj5lGLqIMz7x0jq3DA6qtErnkk5
MjyxmyLHI4HHJjawsxjplrLHJt6zqibWDkOYPxhPNXb3Bu4zorR+9kUbOwSxpu5NHzQyyzgqzWqdSPs1I1qC8dZWt3DwQZW1a5cPkCVnkORYE8TYmzBFlRuP3YPUwa81zdu8Gdd91BEqY8f+UFPNcju54wjcYsLtXw/WV03URSFCS5oNPbpWIYyIVIHkeU7RpC4dNrH/COR9/J+tXzjP2AYyfryIpAHHjcuH6NXrtDtVlCLan0ekNEEVaXV9nZ26UoCgI3pMgLVF3FG00QCwFVFYmjiCiOMHQN27FuphYEapUqVslhOukjIzEcukiCiKyKhGFEQzXxomA26syYMWPGjBkzZsyYMWPG10lRFL8kCMJMKJ4xY8aMGW85XtMAjiJJ6KpGGqW4iQeCAKJEL/RvCuSqAJrI6VO30R0OOeh3MCydwPOJhyGCVCNKkpufK2REWUNSIfRvzqSRFYU8yylXbURJQ1UN+mkXqyajyhJFkaFIGkkUIggihu0gAOPphCIvmJtfYDgdIQOKomKWytglBz/y6A07zFWXue3UMu3eHp2xT5wWyJrExBuSUmCiUjUMSppBZosgp5QKB1mWmF8ss/f8JrZTpmyX6Ay7WLaK06rSP+gwmo5YWVniqfVn2epuc/ttt7FwZJn9/V0mqc8zF18gFBUsU2c43Mad1DBNmyCNQNPY3x0z8iZYtkVRRCiKgjFqMx67uJ5PIUi44z6f/MwWtu1QrpbRdYtMSLl84xztgx6rR9bohiZRUpAnGUUETrWEVlIIowSlkJhOfQShjCyVEMQxvj9GVFTs6hyIAtPhEFWKwbIQDIN0PCaZ+pRLJWRZIvJ9SuUSYe4xDfuMk5wjzQZTf4RmapRNE7nIKBsKuZhCkVOu1EmTjNFojC4bZCWfQomZTj2iJEaLNQwjI4xcJFEgLBJUXcfrjFEMkXJNxDY0+u6QtEiwKiaG6qDLOUGeURSQpRmD/hQBGdvW8YIBslzixliCNKVaKqEqGv6gS1VOsByT7d1NAnWFecviwsXLNOfmyUfXmaYTksl1hLhPpdpCEDPa4wGiqhAnLpE3Rcphf7+NYJkEcUaj2WDHdcminEpzgYksEAYhfhQyHvnUF1tIasbm+jpWySFSyqiyRJ6DVkB2eJlBX6N++hRzcyZ7u+sUcUI7djA1GwFwfZdL7U2OKxLXDjZpNZcQ3QkA27sbtIc9TMOmSCDJEsQCGo5Do1HlYDiie9DnzNHjoMT4yQ6r83fx0oVrJJMph96AcrXOZNJmc7egPWjzrse/i1Z9mQvPX8TUqqw9dB+tlSo7neus82nanTbpcB+tabN6dJkw89ATGUWUkTUToWRxYf2Qy4fbnI3vJYwjNkYHrCwuU2pW2T+8zubWAY61hmlY6JqF7Wj81u98jL2dXb7jHd+B5/qoRoauG3Q7HY7Vj/PH3v/f8dwzzzBJApadeZI44dq1q8g5kBcMBgOGgwH3PXA/kiQzGA6oVesAiLKEIIiomoSpavihh/CFh7xkWUbMIU9SbNPCcRy2traRBYGFuQWCIGU6nWKKOnEqsHPQJozC2agzY8aMGTNmzHjNEQRhVglfgWvn/8Mb+vovX91kY2P/i7Z9PQsrz2Bm/xmzdjRrRzNmzPrajDcUr2kAJ4wiREFEUSRkRUVRZIQCHKeE77q4XkipUuZad5fpZEpZVECSiIoCQYAourk+jm1bRFnKaDwiz3OKrECQRCRJpqBgfqFG4Mf0uxMkQYNUQxZykjTGMqosLSzgui6QE8YhU8+lXC5RrpfoD03a7UPqjTq2bTLxJ0BBHuUsluucXjvO1vo1bElBNQ36vQ6CIJBlKXEck8k6mqJwdX2LzIJWY5GqXcF1AyqlCsPhFMOMMUyNJImJPR8UGAUuYqdNJkKSprieh+tNQRaJ8gJZhCs3LiEK4NgqbhjgBj6KriErMkUhYFtlDNNkPJngeRM0tUdeCIiyTJbcFI/L1TIU0O92MXSHYS+g15lSKVdpLJ9g0kmZHLZRFAVV0Qi9lFEQ0O+PMFQLUdSI/Am6JWBXa8T5gCJ0yTp9tNylTIGhSHhU8PUKiiaRRDkiYOga7nRKpVnDkcqEh+DpMUgxopCSywrNmok/GDOZBAjiFMuo0T2YIkkyrhsgFhrTQUDkb1M25ygbGlAwmYzJi4RqpYaqaQRBgGE4tBbqxMkUydQ5ceoUHXfEYBKxXJNZrjepqRFl5YAnPv85Sk6VarWKJCvsHewyCGVcqY4qg6WqTJKCZNzBWnRI8gTd0HCsOl7oo5o69arD4fqLxEnIyO1iljQU42Y6viQXyBOfdneTQTDk2sEeuV5QKdVJkgRRlBFRifMERLBtnVbzCJbhEOkiYehCnlIql3DDjChKUEUNzSrhD/doOAVBNGVr7xqF4NJsNAgFhb1RByucIks6SRj9/9n701jb9jW/6/v+u9GP2a251trd2ae7Vbeqrm+V7XKDKYjBL2KsCEgQ2MEIJUGIKAGbOJ2QiASRSCREYpJAwpsgRZEDxDiNEiUCERncYBuXy1XX1dzmdPvsfjWzH/34N3kxT7lcVbfcJD5V99rjI21t7TnmGmvu/xxzjTXHbz7PgxtaXr3+mHrY8PpmxYOrd7h5u+WTTz9CqgHnRsZREawHJensSBQbYqXYV3tutvfMQoqJIv7cf/Fn+c7L5zy6vmZ2uUSeNGWS8md+5i9RJjOef+cFP9P+FdZXT/DScBQVza7lJ3/hZ3h29xbrBmZZyXa34bKpubQWjyNRGoTgZ77zTX7yG3+ZWZ7y09/4aUyacmobtsc9v/XrP8pHn3yDYRxIk4w4yZBCMzrBi9ev+Kmf/gY/9pu+zvX6imO1x+gMCHz26Sd89Ud/nB/pel5+/pwf+k1foxt6Xt284cn77xBUoO07ZouSqqm5efuWvCgwcUwQIIQCIEpidqcDbddxkcQopVFCYYcRay1SCt6+eU1X1bzz9CnDOOKsw1p7nrkV5VgXEDKazjqTyWQymUwmk/+fffbZa/57/6N/c7oYND3/f1c+/38HjEWajqPJZHqtTQsz+Vv2pQY449gTpzFpmiOlRApJ37Tc3r7FKE0UxWzutyitWCQZ0npuXr9FLXLK1ZLhUKOEoshzDsc9cZwQhGAYembzGcZE9MOAUoKmqen7HiUVQ29Js4SyLPEetNE8fPCA4+nE8XiuWlkuV7x9+5a2bynLkrbtGIaR2WzGfD7H9ZbLeXFurWYFq3XB3WlHpA1xFOGHjiIvyLKYw/0BI2NMFmHducLj009fIwTkRY40gTSOOB5aejsQZSmVG7l78TmBQBxHfPvTj6hOFfP5gqEfefioJC4Nh90B21iqTc2hPvHgwRV90+G8xRjNYbuj6VrAY0fP1fU1m80dSpyrn4osw1nL559+RrAxSiZorZjNrtm0GZFuiaIIKQXOufMF9cRwuZ7TdSMCRzc6Qi+It29Jd69pD3cki4If+9Ef5ObuNVppjruKxhvKxFDMCjwBpTVCKaq6wtuG5WyBSkf2hy0PLpb0iWC3adgfBi7yESO3XK2e0DUdbVMRgqMsSx74x
3zrs59nLCOePH7EMPR454njjOOxZrWKSZOcLIXtZgeq57jfMZ/PmKUZabpmGAZebe5J2YPYYIPHERjsSO88YvUBLr8iGUeKPKNThvrNd0j7HacOLsUFT955l4+2nrqpubxaE8WSqjmwrzakWUQca+JE08YziiHmfvsKJR2vdq+pvePh6gptHXEhmC8T3vSgHSSxQ6+vmKsjIvTkM8Hm0KGDYJEkVE1LVswYhpE4SQip5u3mBR+89x6H0x1Jojh28FEt2Q872rbmYrEmjiWzeIbUjrrZAB37wx3Pnr0kBMF8Nmd3aCiKgsePH7Ddbnh5f8uHsw95eHlJYWLsqPn42WuevXrO0AX86Hn+/Dmr5ZIXb1/zarfh/n7HxWLBy7dv+JEf+BpjGvH2fseD9h1+5ud+hrfb16hUcfPmDWM9ILwDIj5878dQ0hBcIEsz/uSf/s94/fo1djnjz795w8On73KyA1F15D/9s3+aX/jGT7NePuJ67VnGKWmaczo1HLuKt7vn/Cd/5j/in/hH/wDz5Zy2spg4oetagmt5+u57QEDEOd73pEXO7WmLw1HOFyx0zP5wwDqHjvT5Z1dsyLMMb2FXHSguliyNoT1WjHak7QLWWpIkRQhJ3w48evSItmk4nSrGcSBYhw8BnUcIJxjsNBd8MplMJpPJr6v/BfAU+LeAPzMtx2QymUwmk8lk8v3lSw1wsjwjig3OW7puJE1SEAFjNFoqdtUJZy3vPX6MkJK73QZTZMzmC5qqhuBxIRDsOawoipKbzS1RkrBcLqnrCucE+33DbndkHARFmVGUEcYk9N34xdD6iK7viZMY3Wq8dWilefvqNUpr0izF+/PjyrKMY3ViW1dcXC2puooqBC4izWhHlrMFkdactj3eOyKtqJqGqhu4jhaURc7tZkPXO1arOdfXa6pmT9dWSAEPHj1idzxRDQO9CMyLjEgZOtezuFyDhyIpGINn326JZwn39xWDH1F5TDUMSEArjZSai1XJSsHLV8/ph466rqnrEz4IlBKM/cDd7YaxEWgt6bsTj9//gJf7jDzsyGYJ5azEOUffdwgRWMxyhJd8+tkLrIDQ7cjaE1JZ7NgxX+R87es/RmtblIzJ84zLuCCRM/zQ44NDK8Vmu8MHSxYKfNCcGofJNN0Y6HrB7nhA6ZQ0ifnab/oK29sdm+0dxWxON7Qcdxtm8xmPHr3HzWaHtwE3SrrGM4ye1bLk1Ox5+2bH6mKFD5L9/sCDxwsqObI/HFlezLCMPH/5ln1Vk2oL+ZLiyWN2VctASdAZq4tL9rs9uMDy4SP2py1Ff0e5npPmMYf9niRKyPMlcRRxrE5UnSI1KWkaE9xI27X0KmWIFqxWOfVQcdvdMChLrAxd0xJFmqvFgiwOZIXB2p6uOaFMyu3eU0aOqmsJ8QKhcpalQ2QpN4cc124YmiMyy6jeDhyaPWmpkXHOp3u4OVmUCJgkxouR3nfYcJ41dNj1sC7o+gOj7YlMxjA48nRJmpRkccomBO6OBy6bhvX6imFsSOIZo5NsjluCc7TVwLEfyFZzEIKf/vlvEEULMBnz99e8bvd8+y89w/Y9n37rmzx//YoRj9eCZujYH45EAgY3YoxBouidRwqB7EaKOCFfzLF9x2evXxAVOfe7Dc+lISLw6vUrvvqBQAhFHBm2+zuClqyvr/nGt/8isz9V8k/+1/4blHOF221ZpAvq0wkpBBdXVzg30g49whiqtibKU7pxRGuFEyCMOv9MiAxKaYQW9PUJYwTGGJz3WAIkhm60KCnJ85y2aUm/CHKapsG5kRACEkESJ3gsXfCIqQBnMplMJpNfF//Rf/qf/23f5yefv/x+WwYJ/FPAA+CfAH4K+KPAHwfsdJRMJpPJZDKZTCbf+77UAEcAVVWTZjllOUMJQaQUkVQMg0UphdGKeugp8ow0z7DCM08yxv0JmcaM1qKERCnFaEeUUKwWc5I4oapOtM25EkaGFGdbpAoslgXHY8tutyOOYrwLKK047k5IG7i8umD0FhPFXK3XWDx1VVOWM7q2ZV8dObYNLzd33NxtsCKA0SwXM4xU3G12lEWGCBIpNUELqq7msdJIpdjtdygRIaXCWss4uvOw+iEQpx6EIMkS6CGKItIk4c2bN3SyI89LlFIIpYCRrCwoBs3t7o7IKNqxo4gz6rplOJ14lKZoZYhNQtv2CBR97xFCIdFs7rZ0TUeWltjRMSsLbltDi0dGPdKAkgqtNFprqtMJgsQGDyYmb3eMu2f0RpHOc/K04MH1E4bOsT3sGNsO7yWifMA8LfFpTtvWDEOH0YZyvsQHgYlyNq8rUhvQVrKtHJXXPJmXvPdoTds37NseGU403YhznuyLVnSyMNjWMw6W074jBIm3kmFwDGNge79l7Dz1qcLSczhU9MPI9ZNHzNYrDlvH1WqB7S0mLRHLBd6PJLqn6QdcVTFbLvFKEZCEaov97KeJE8/soiRRhrbrcMNI0zTIi/cJAvI8pZIzNieB9gNOO/AxRkiSYk51V/H8+edUjWfmFUMzYN2I4oq+7slMRpPkHE4V5dzA/BHb+wojZrg0palPiMhQ5DFlH/DzJcEPpFlO//S38ll3YhUn3L+uCXHB4jIn8hbfnuj7Dq0y6soyWyyZZzPcGDNb5IDAe4GQhlcvb8iSBJmB8prlYsV2u2cmND4MaK3Z1weEkTgpqXyH1ZLd4cDDqwe8unlLO3gIAVUUfPvtJ1R3G1ZfzNZJYk2ZZgzWoZQBram2W46nBqMNY99T9wPWWxZlwcglaZKjooi8KNl0Pc14Io41UZygvEBKyWgdJkoZnKWzPavLS+zY8f/+//w/+Mr7X+N3/87fw4E93ltevXpNkqSMrudue49XAhccQQpGO5JIgQueACxnC3aHHS42XK4v6LuOLowIKWlORzySIMF6GJxlmZXUXUvTtuRphg2OdhzBB6IoQkQSJ8FoSSSgHfvprDOZTL6nDOP43W6eBnZNvu/9kX/135wWAf5+zuHNL/px4P8E/CvA/xT4DwA/LdNkMplMJpPJZPK9S36ZO1cIJAITGSJjSKOE5XKFylJCZhDCs1jMydOEPE1Q3lFEMUF63vnKuyRJwulUYa1DCIkdHavVEqUEIXhGaxFS0nU9UhqybEZwks1mT12fW0ON1lLXNdZahlNHGjTzMmXUFhHHdO3IZrejG0earsURsMNALBQfvXzNtz77lNhIvLfEMfhgaUeHcwEtDduqxRnF1TvXeCk4nE7EqaF3Daf6yHa3Q6IoizltN3Bzf0+WpJQmxjjoTy3VoaZre+I4QwhJ1Rzpu57qYDmcerQWBGdpquY8WyPAsWuox4798cDhcGDoA+Pg6bqR+iQY2hgRcvbbmjyNmc0ShNI0g+bV3rPZ3dENLd5BcNDXPZvbDX4QFMUCKwVF+4Z8vOHh4yuuLh7zw1/9cb7y/g9x3LZ846d/jtksZnExw46S475nd7Olb3usDyAVq9WCLE2o2waH4/GTR6gQ0VvH4ByDg+2pRmc53/7sFa/fHL84YiRFnjP2PX3XEZyFcaQ/nei6Bm8tzjqa5ogx
sFyUzJOMRGoSnVFXgabvacaG+jCyLh9jkpJQ5IQkp/eByjk6pdF5QTabY7uOmJaw+Yj2+V9G07JcJggBJolxdkAIz6MyoE5vSOcrBmF4c/MGpQ3Z/JKovIQ4J1OB6uYZ1sf4JmYcNKfZU7zzFDrherEiUgnNm58H6ZGrNZ3UtE1FbATzxSU9mto5eusRCormY5rtLTJdEGdL8izFRClez0jLNUZGzMc9/niPKtcIYjo/53lVcnusCaNjs33D4AYWiyVaS5wfSEsBYcAPgWbforpAGiWcRkvfS7abW47NDa1teH2/AxOTqIhMGHSaMFuvSWJF1+847O/IQ0yRzHEywpmIYCJOTU1VVVSHI8p5lA/EOiOKU2wY0EbiFAwaumGkuj3w9Xc/5J3lA9p9T5kXRKmh6z15enFuXZgmJHEC3uG9JcoShsGxrbf82Z/+z0AIHlw/QApB0zYIFdGcKvbbDcYoTKTpu5F1uaQwMeM4EoKnO1TYbqDtLX0Pp8PAqR1BaAyaoW3xo0c5iJ2k7weOXYczklY4Gu8ZnUcrgxISr+AURoRS2FPN9WI5nXUmk8n3lH741QFOCGFKmyeTvzP8/l/j9h/kHOT8JeAfmJZpMplMJpPJZDL53vWlVuCYKAIlaJsWPzji+RJjDFXXoo2mnM0QUiKUwItAEIIsy5BKoyJDNw4IKTBakyY5u8MBacRfnYNj7UiUJlg7ghBYZwm9x3mBtQNFYRiGgSg2tG1LViSUWcGL5zeI0pAnCe1poOs6sjzFB48xEVobZrOMuqtRRlHmOW3TUR12CC2J85i2G8nLCKQkzXK0FiijyXSKGxzeB+LE0Hc9iYlxwZMVOUVZMPY9fhjp2pZiNiPPMrIswzmHc44kT/HO0dYNIokQX7R3G0eLl54gIC1ywphgrafMU7Tu2Ww2JPEBKeJzG7PIk2Ypt7cHhNCslg/49C4gIsn15RVdXbH3ex4/fky5uibSMZvtBp2kXDafkZaWq8v3iMyMsdX0rT3PLdpsefruY3aHO+QY8fDRD/D843s6J1npCzCKpmmxY0eWJWitSfOctV6RxIrnL15g7UhsItwoqU4Dp32N6yOMULh+wCIQAew40jctcZoSRwmzLKUbRrTWJHFGEANanAcZDkMgziPq7ZZW9tzd3nBdXnC8+3k+ffmSNEuRNkCkmC9ymrZGupFUwHhXcTrtWC4y1g+WlPMrtqeO+/2BzX7LPMtYzRfkSYrebHj26q9w0nP6wZHMFri+Qw0thWuoX77E+pFOpIjyfT64WvPm5ce0o2MtJNJ4qsMB5yx594y3zweS4oLLiwd0Y82hGjC2w1W3fD6MtOOcLMt5VG+4f3FAtY/AJFSnA35ouZzn7N48YxgrFrOUoZFk6/c4bjZEUnM87FgtEkY/cH93T5FkHPZHqrZjMZ+z21dk8Zyrxw/Jmobej/TWsTseSaXihz/4Cseh5ZNn36QVlosyY3VZEEQAFEk5Q0hPoQ3lYs63Ts+oh55VUSKA9tiz3eyZFwkLbbi4uOR3/OYfp0iXjK1nVs7Y3tyx3W8Zxo6L1RVZXnB6+ZYkikhSTVc3dIeKH/ja15FKIZWmmJUgJCEEDoc9vR2ZrRZ8+uxj3rx5xZOHj4lMzMXFmigyHHdbPIEf+/Hfyu/47b+Tn/v2T/OoWPCV9z7gT/3sT/Gi3pFkGZHJyKKUuu7orKXMCvw4IrQijRMCgqZtQUASRxgkw9gzdD34wHq+5HI25/buLVJJciMhBFbrNVqp6awzmUwmk8nk18vf9zfY/luAPwn8MeCPAJtpySaTyWQymUwmk+8tX2qAUxQ5h6ZmlqZIBHEaU51O1E1FOS8RStMPA6fqwOV6TRCCru+ptltW9jxrJE1zojhGaUVVN+hE0w49i7KkKAqOTU1kIpQQzBcZQiiq6oS1Aec84EniGO8DD59eMHaO6nZEdhWDNEidUc5KqtORyETEUUKaZeRFTl03RFFM17XkeU6UFByqHYO3iGDQsQA8IQSElORZjhtGqqpCG0PXdkQmJkjJcX8kK3KiOGJ7t8G7wGhHQGDihMV8Sd3U9F1PnOagBEjJMAxoIVFSo3VElmRIqc9VEu2BamjxzlNVLVFkWK1WyNCSl4pxqJBC4kZNCJbD4YiLHpJE0bm1nLeMneXF5694cO3Ikpx72bH//BnvrwTR6oquMeSrgs3pBkHEixc7VKTRJvDmzZ5EFjT1JzRtQlIuz4FZlCIENG1L27VcPXpCdTrRHZ4TxZI4SVDa0HUjp27gm5uP6aqOpq5o1zXaROzrmuvra4w22NHRtAOL2Yz33n2XfXXg1Zs3jMNIlESkZUq9qTnt92y2Lfk8JzQ9IZZ8+tl3wEsuU8Nx95bLcsHCFFR9Q6o8XliUkohEkS6WlHFC31vs7sTqwSW9t+z3R1aXa8JgefHqFfXphPADw+YtF8Wc6LAhOE+R55Szgt6s6caWm7dvkYeWsU3ImorrB9eMWD5++ZK6rpkt5hzrmlLuUdt7tLuDvqMfBy4XK3zicDLQdSMhSGbzGYE91e3P07YDqY5I4gjBjHlscbHh8dMHNE1Lc/dzJFVFFhnqXuBUIElinA8oExElKbr3NM2I95aPX37OfDFnuVhg90d8GInLBLqRy/k14bhFCYdwjsVyxrHbkSQlTx8/wQ4DIz2DOFeGFVnGeDoQQqDtesbBslgu+eEP3iMNHRcXV1zMV3StwzmJD5a6PtI2FUoK1lcXbJqazenAxaIgqBGHRBU5148fkGc5wX/xmisL2rYDAUJItIo4Ho78ub/w5/iHf98/QlIsMInF2p63b9+yXK8xMuX9d98liiO+89lnFNmc6+U1r27vkdpQmIjISw7ViX4YmOUJcRJzrBo8AaMjyqKga1twAa0FQkXoWGO04gefvMf1xQVt13G7vcfhkSajmM149erldNaZTCaTyWTy6+W3Af8I53DmJ36N+wjgnwb+IeAPAf/nadkmk8lkMplMJpPvHV9qgJMkKUErunFguVri+5GAwweHMZok0+xe73HB0ruRIKFpOtqh41RXKBkRAhBg6HviJCaKI9q+4/Lqit472nHAKE1exORFSlN7lIgJErTW5EWO9wEEDMHyenvLk3ef8kNP3uEv/MU/S3mRMwbY7/eM1hI4V3McjyeyLGOz2TAMA0VZAJJ2gM5a5vMIJx1hDAgBhEDbd9huwFlLEIKmalmsInzw+ODZ3N0xW8wJgPMe5wLeBw77I+NoMSZGSk3bdVw/eoIfz63GgjoPbU/iFDc6dKpxMiCCgAA+eOI4xlrL/f0NiUmpqoY3L1/jR0UIkjiO2Y8KEUm0MUAgTTOapqVtW7a7PXFekMcJ1bNvsBUXGFKEVxy2FXlW0LYjm/sb5osZn332ApNGgOHZp6+wi3cxKfTdgEoNSZxS5Dl1e74IXh8PjG/fkpUxZVmSJuf2V2NlseNI6KFMc7IkQ0lJmWXkScLQWzb3e7p2pHiYU9cVx+OW0Q6sL6/x3rHf7VmVc+ZFzqs3W+I8ojAJsTA4F7hcL5kXJZ+Fkdm84Kvvvc/et7zd3YEWNHY
AEZinKTSQRCmb3R0y2xKnEeEoGMeRRCoWRcnles2uq6ifPyNfphQmpdofyBcpWZGRi5gHD77Kixcv2B+PnKqar/zAD5HFKR+/ekFnLWiN1Ib5fIYUlkRKjBFEpCTJmlhIvvrhV3h9c8dH3/mMmdQsVzO0ETjhkRrKWcl8NgcBWbpiv9txOp2QwGBPJPOIsshJ6oBKFHmSYq1nGBusH0BYoiSitZLWOvrjnpv7e6QNrC5XBA3N0PGz3/wOJjH85h/+KjiDE44heIS1xEZzUSy4OW44tg2ZMBitiY1Ba03wDUkSU9cdp7ahdh1WHXjStwg50HUVXd9R1Uc8FonF+p7bzZb7/Y6n7zxisVrhWs+II89zHlw/YnQeay1aa9brS9qugxCYL5YMx4Gf/pmf4id+1+9mtb5EGYF3A1V15OGTR4Blu7nDuZETgm988h0+ePoeDxdr7k9HwszQdA1YMEEinMeKQOdGhHP40RPHCWma4q0D68jihOViyXw+4wc+/JBXr19zVx2JZiWRdQzDyOubNyDFdNaZTCaTyeTXwTf/1J+YFgFG4P/yxZ/fAfzLwD/MObT5lS45z8T5vcAfBqpp+SaTyWQymUwmk994X+oMnDiOkVLS9x2n/ZEQAkprhmFABME4DvRDS5TEeAT7qsJ6izKGpmkIIZDlGT44uq5lMSvRApIkYbPbMfYDTVOD9swXJU3XsDvsaXuHVJq8zJnPFwTvcf2I95J27JmvSn733/t7+PrXfhTrzq3HdGRoh55+HHDB4wn040DV1kit2Gy3vLm7pR88QmmyPGJwlv2+wrlACIKh68+zOZKM/e6IVBI7nity0jRlGEeOhwNJmiKNwuHx3jMOA1VT4Z2jrivGvkcIONUVbdfx9uaGpqnRUmGMZhhbrLVYa4njDC0j8rxAKsPd5p4ok9RVy37XI2WMMQprHY1PabuGKI2Is4x2GGmHAReg6Tqq0RL2L8jKhFlR8PD6IYtlSVHkeAvLecFP/MRvY7nI8SPM8wXBeuIkYbVaoSNDOZtBgKbt6IeB5WKJUorD8UDfd0RxyqmqsW5EG4XzDq0T0rQgTwtwnrZuyZKUpq548/Y13/74I/qq4XQ4crfbkCYJsUrouobd8ZZhGDkcazyCdx494uFixXw558MPPuS9q0fYqud+t2U5n5GXJYehpR8GYhPjbGC/P9K0AzjFxWrO1fUcoQUvnr2hOXYsZ3NuNxu8ECzmc/7+3/UTPL56SFcP9PXI0AfSNKcfB7qxp2paqrrl+tE7xFnO7/j6j/G1Dz/kUO/o+yNSBuazkrLIuVyuWZZXSGVQkWY1K8mMIUoibja31E0NQiLVOWuVWlKmGXmenyuv6ppxHLm9vyOOI8ZhpO46gggkaYS1HVJ4tBHk84SsiIgixehbynnE5SpjMUtJ0gxpDJGJkEJSZDllkpOVMwYJTd0yTzPWq4K2rRkGsA6c69gcN7jgKZOEq+WSrEwJMqAUaKMYhg4loR56dtbzyZvXHNoTozuHOEI63ty+xAnLfDXjsN/z6vYtnR14eH3JLC847A9459hu7zicdgQf0Fqz3++ou4bV+oKsKFCRQceG0XU0TUOkItI0pWpPWDugjQZiQNH3PfPlHKkUzz97hhtHojjCCUGapCzynFVZEpuIqqno2haFIjYGLyAYRWQMcRRhtGZ9seaw2/Gn/vSf5sXNG6zwVH3DoW8YvANAGTOddSaTyWQymfxG+IvAPwr8TuBP/XXu99/iPBvnR6Ylm0wmk8lkMplMfuN9qRU4fd+zPx7QQtFXLfvDgYuLC9bzBeui5PWxPrcIGzzVoWO7OfHwwYI80/RVT2kSumEgzVJ8ltIPA0hHFGvu7zcIGciKCKtHqr5m6B2D9RyajkQqSm8xIZDpiIvFnMGB0TEvXj7n2fYF8weXvPipP4eMY8rZjGEYqOuaPM9J4pjXb95gtGaxWHC32eGDQBmF1gLpoDn1tCeHzxSLMmeWapCOH/jqB8RpSt/3tFWLCJIsL7i8vGKwFhPFdNsdh/0BpRTX19dEJqLve0AgfDhfsJYCLyQqywhKU1U1RZGjteZ42lMUOVjD0AwM7sTy4TUqydhvOu7v9mRFzrE5opAIG+G1IaieB9dzbu4PtEOPUAIhJK1z2P0d4vSSi4drHqyfsNvuKYqU3f2eosw4HG7Y708YHXG9XmAQRLOEpEgwj+bcHgcG12NHh1ISbx3D4DGpJo41Ms0wOqXtztVUzjpM3GGd4th2LExCmuUkmSPJM4yIWGqL+Lzngw/e4dGjB2ghGPuB0O6o3QGvBauraz759kccbm75B3/X76QeTpzamjxOeX53i7MeJSKEgsik3Nxv6FxDXs5xXUzKmsgbdGdIsoxnz264v7M8vnyPVZxzd9jT1p77aODj15/wAw+e8mF6we96/8fJZil3xxO3mxu8HDnsT2RZgbO3mCjmtBlZPl6zzDKKNOL9xw/px5Sx9Qjlqao9EYYoLTicbsjNinmeE8UF3/n0O1TtgdVqTZQmHJqKWRTzMF9ySFo6bxm7kb4f8M5hhKIjcHA9cSQRUoJ1REpRJBmDHVFCIGNFMJKrixXzKCEOPWK0DPSIVKByQZkKCCmdGUiMQQ+OV/d70iTmYn1J2w2M/YDFE8bAiCeM8PZYs7i84OJiydiPpGmCEI6+H2jqE0VZMkjPdndL0w7EUcbd7Te4efucNMlJ4wLRW45NRRRHFGnJm9u3jEPPo8tLXrz4lK51/J6f+K8Q64i3b95Sn+pztVfd4k+evmpZPlgitUMpgdIJVddwrCvSOAYkP/De15knBZFQxEXOzes3OO+RkSaPYtq+ITIGpQyH+kiZ5rx38YRnr29Ik5TO9pxODdfzFaPtOfQtP/2db1Lt9+TSULYdsZBUXYfU6hzyeo9mqsCZTCaTyWTyG+ongX8Q+K8DfxR48F3u81XgzwF/APiPpyWbTCaTyWQymUx+43ypFTj323uklLjRIpUmzVI22y11VbMoSy6v1phI0w8jTd0Sq8Dd5ohQBiEkdddSdxW3mztObYtXgAKhFFVzroxZrJZERnNqGvrBIkdBKTR+7Dm1Nc3Qc319zVe+8iEhePIsI8sz/sJf+vPc77ZcXz0kTlPK+YwHDx8yXywwcYzQiihJ0FFE27VoozHJeT5OkeaARASB0AYdR2ilsKMleI+SkjRPqeqatm1IkpTD/khTN8xmM9IspVyUJGlyrg6yFgIEApExpGmCNIooien7nixJiKOIcRxpmhqlNUWZMp/NUEIyBofXCh88RZoBEqUV2khCsDhrEdGCosj44L13OZ5q7u5u0RIWi5LryzXz5ZKwecl6teDpkw9omx5tDLe3dxSzEmMUH330HV68ekFVVXRdRZamPHn6hPkq5+3NKw6nPff3G5q2RUpBCIG6bjkeK4wxFEWM8wNFOmNsPWlk6PuKuj2S5SmPHj5CxRFJUdD6gV174tvPP6NYl5TrGcV8RlaWjN6xvrrAqAgRFF3VkH8xt2h0Lda3dL3l9Zu3nJqK2XLGwwdXWGfZn/bcbTeMo8doRdseUT6wvz
8ifGC5XP7VmTMfPHmXr3/1a8xnJcZotFRk6QzvPMfdkTyKyJOcJE5omp4gJEJJfJAcTzWv37xGiYASnvubDZHIycwc4TTzckESJWw3W559/ow4zlktHpHoEk3E4X7HcByRg0Q5T7/fMzcpX33nfR4/eECQGiEU8zzDSEWSZ7TjQB8ch67h/nik7rvzSzxovA3c3m355PMX1E3PB4/f5531Y1QwmDjh0fWaRArGriWONXFiSJOEse8ZugajDVGS0XvHaM/HbFWdaNuePM8osoyA51Qd2O03zOczjFE8fvyQDz74gPlyxtDVpHHMrChQShElKUFIXr16ztgPGKFRGBarNevFih9+9z2ezNdIKxBGYf3IdnPHs5ef4P25WiYyETjP7n7D0LQMTUvfd+xOe7q+5ebmNaPtGAPc3N0gEEAgjRN+7Df9KOvVxXmGjlZIrfjw/Q9I4oSAInjJEDzGJLzz4CmPHj4mKEk99AjrKKXBOcfpVMHoMUazenBNUJLN3R2uH9FBEAmJMRqJxFs/nXUmk8n3DOfcd7t5mFZmMvk7XgD+feCHgX/317jPHPh/Af+dabkmk8lkMplMJpPfOF9qBY5zgYvljM12z6mpWK4W7Lodp6ahG3qkEuRFjhXqHOBojYoSRusZCYRxwEvonScyCmkM9emEkRFSSZx3pCLBiIR4UbB5u6GINF//od/Ez958jDcKhWLbVKjtLSqJicaIru959vw5qTY8uLrCbm9pu5ZoHiODpO97xlEgpMA6R9f35PmMIEAKgYk1szJHeE/T3NOOJ7JRcewb8jxj//aObhgY+oE0LxjHge12Szk7z9EZhpHr6wdIqbl5/ZaqbiBAmmQ451BxhAUQEi0VbrRUbQ9ftHbSShFFObZ1uGBReczl5Rrl4Hi/ZbWcEYJlf9hzsV4x9pY2xDx4+ohx6Hj98p53Hj8kTmJub2+5vLpmlkYcq7do3ieJ4nM7sK6jH3uyLOXTTz7BC0mSJjR9jQiWxcUCbTT7/Z7DUdCEc+CVRDEEgZQKISXWQxbFlDqmak9ELOlbj8ShMKzmOevlBY+u14zuHAyIxFA1FYemIZKazelA2w+4YUQKgZKKYrlAhkBd12Rxip2VWGFJCsX25ZFCBi6uLxix3O+3nLqKIYwgwUSawQ0Mw4hzECUxdW/55rc/o64GNnf37DcHLlYrbu/v2dzdk5qUspjx0asX7O7v2R56yu6EEwopFAGI4gTnRrwPdG3LPDMc+w3VvuF0aHm92bK6vOZ0OuJcS5KmIDTWGpwXoAL3mwPH3YG2algsczIT03YVP3T9hEKnvKlrxuA5HA6oNEf4wOl0YlYW53lJWuC6wGZ7IF5dEktDCAoTZWhjiUTMOp4R6YTP6je0Y88iB+dGtFK89/QH2e133N++RAtFoiKQgkRH9IMnBE9kzu0Rm7rCzz3L1ZLejphZgYkjlBIURcp+v+Xq6ooH1w/YbXYMQ0esI/J8QZqmtM2RdFYyny9xoyVPMwKCeZTz1Xc+OM+Iqk6URYntB7qu5VR77u7uee/dAe89Rhgu55eMowUCnelRUjGODu/Ps6mqU41zjqbpaaqKrq+I04TQnRj6HqU0Dx8/YLVacX93B0DdtUBgtVzx6uYtP7v7FiMgvCFTMZLA7v6erCxRUiE4V9rRDgjvMVFEGAaCAynBRBHW2umsM5lMvmc0Xf/dbm6nlZlM/q6xB/5ZzkHN/x5Y/YrtCvjfAUvgfz4t12QymUwmk8lk8uvvSw1wTJKAg1if58VYa1mtlux94HQ8glIMw8DscsHqYsHp5sSH7z3mOy/fIOOUDMGpPhKZiHkxYxhbvPO0fQtCEacZCImJUoTT54qYCG6rWy7Wa+ow4seRxo58fn9DEsdExoBUKCGRzkOAcjZjfzhgtMY5h9aGuq7RSp2/pixxFg6nirZreO+dRyghccEjY0XbVcTJJV0w1N1AZDQKxWK+RCpJ23Uoo8mLEmtHxnEEUmazGad9RRKnAGRpxny54G6/4+3dHTIIyqwAoBs7lFRkcYobehyefrAEA0YKVrOM25dvMUqQ5TEv3tQgAtcPLtnvDwg5Z3CeF8/fcLGc85u++lVu7u94PVpGa2m2W8rZjDjOkEoxTzPiOCJJInbHDW/vbsjLJU17wKSKSBU451hfrnjaPeJm3AIp+WxGkeV478nyFKUUQ/C4puOwP5EUOQpJ03YcTyNCxcRJQpRpsixiGAbiOKcdPb/wC9/CCVheroiTGO882/2OJI4xkeZivSKPE5LI8NEnH4G01HbEjjU6MQxYytWccRx48fwFymhEJJgXJYtFwe3tCe8jCJCkhsF7yrTEhnuePn7Ab/v619FlhEhgucxwoaUeWr7z/I75ck5xeYEb7Dk4cB6CJo4SELDbHcnygjROuN8dGRsPXmNMyt3dPadqy9X1ksW8JIlhGKHrPUN3Ijea5bxkNk9w0tIPPa4fGU89u9bxycvn7ENDXXekNmI5WzB0FnpPqjTzKCcIR3Oo8AGiOKXrRxaLFZerS5ZRBlbwneefs2sq1osl3vV41zGfr3j+/DUff/Yps9mMB6sLHqwu2VUHuq4jTmJOhyNGx6xXK4a+I9IxQpyL+fK8IGiJ854sy+m6nu1uy9APLBZzmvaENwWXF48wOuLNzStevX2Dt4FYxwQX2J+OLLIck+b8lWef8snrlzxdP4C+x4eAkIL9fkfbtsRRxGq+IolSIn0Okj7/7HOctZyqAw/Wj6mriuPhwDgMDL1HBMmzZ5/wyafP6N2IkIqrxQV13fJXvvGzaCUwOuAFaCQCSzCCpMy5ThPadmAcRpZpznuPn/Jqc8d9fYQhUMQJSRIzjANCKGb5HG00p6oi0hol5XTWmUwmk8lk8r3m/wb8FPB/BX78u2z/nwHii78nv4av/ug/MS3C9PxPJtNxNJlMr7XJ5G+7L/VqYt00EASjHQkCgoA4SoiN4Xg6fXEBNkYqTWwUF/OM29sDd/c7UpNwMV8grCXTmkhI2mNNFGkWiznee+w40rYdbTuwu9+QRjEqS3jTHejoiZRgHHpc8AQp6caBOE3Ii4I0iRFCsN1vMZFhPp/jvMdaS5LEKCVRWvP+Bx/w7rvv4oIDAUpIcIFxGHAhUCwK1lcXWDfSDB2b/Y5x9PTWoiNDHMes12vW6zXDeJ5VEkcR1p7bllxerSnLGVEcMZvPiCJDP/Z4EQjBUxQ5D9ZrEhNBCNhhoK4auqZlcCNWOa4ulmRKECmYLQqarkYIjzGKtqtBOKI8w46BNC3IkohYGwhQzErSYkZka1bzGe+88y5aK5q2oR965vM5SZrS9C1pUVDMC9JZRpynHA8nDvsDWZ6RpRmRMUQmwodA33dExqCjhOXlmixWHI4DcbRgeTEnLWJGwAqo+gYVJfR+pEgTGC1/5ad+knp/ZGZKhPP4YSQ2hkgbuqomiIATDiWh72oOhy3lLEEZiYlneHWevfPm9hZpJMksJejA/OLcii2gkCoQJ4I8TxFSgxBEecrgPLNZwaPrx3zy2aeU85QsiTjutnjhsUrglaK3IwiIIoOzjkhGR
NrQVDV911NGM1Ixo+9gfXFNmWeE0eJHz+X6CbNyTj8MtF3H6AN5mlFmc/Ik472nj0nzhMEorITBW471SD84nNMYkVNEC+b5nMxkPJhdIhqIRihCROoUs6wgMjHjaHn54hWnY83Di2vmSYkNnlPwWCdx3Uiex1xcrhAIPv74U/rB4oUg0gYjBPebe0bncd5TVzVKaeqmJktSpBC0XUccRzjr6JoOO1rSNCWOE7qmo+t6jDHM5yVpkpKYhLHrud/csjvuaZqasevomo6m77mp9/z85x/z+v4tWZbhnacoS37nb/vt/PBXv8putyNJEx5eXpNEMVJAsI7Ddsd+f6DvRrSOEELSDwPncUAjddNQ1zWfffYJdVXhnCdLM9I0oe9aEAKlDeMYiFVCUSypTg1N1RLFEYkQJEIzeA+R4gc/+JC6adi3NS4EGBwCgXUWEQKLsuRqdcHjhw/RSqPENANnMplMJpPJ96TnwH8J+BO/xvZ/DfhnpmWaTCaTyWQymUx+fX2pFThBBLow4GQA51FCYL1FxBqZxtRNh5SSVMf4zjNaS20tVw9WPHy0ItQVURLR2wGlFMZo4iThYnXJYnnB4bSn6zu8UKRJSp4mNG3DOA50dYsUgqauGccRATx49AihNM5a6rZFIGn6jgezJeMw8OrVa0xk0MpQZAUhQN/2vN3tCD4wn81J1hdcXV7SVR3745Zg4N13nhDageb1G7RWIAKRib+o4onwPgAeozS97VFa4r1FCHEesq5gfXFBmmZUdUWqI1yaIYXCusDhVCOVpszS8wV8G1BSYVVAGUERGcosoZyXvL3fgfekaUrfD/R9jx1HTk2N0jBbpbTjwDc//Yy6bxBK0fY9vm/QqSbJSg67ilO1wxhFnqZorejHjlN3z2JZMo4dq8WCD68e8ez2NVUY6UdHEhekkeFuc08URQSd0FRHFmJL7PYUX3kPpEfQkc8j+pAhnSUtUpKoxDvPaaj42e/8HPlVwsNkRZEkiOAps+w8Y2RVUNegI0PdtBRpzKE9YeIIGUtmq5K71ydMiDAI2rrifi9RWUYsFLu7HfHDiMXlis1xi3MD83zG1cUDkkjx2c1L8lnJo8tH/MKLjzmcTlzm17jGUtcN22pPmuSMradrR4SOSOKYvhvo3t4SvEMTWCWaMo7BKbrOsZctzanBWYfRmvVixXa75VQfKVYzikwx1g22daTzOXaE29sjR2ree/QEkRR4IQgIUh1h8DQGbjY7+rlnXmS0bUOmEuZZSlIsGZ3DOgdaoLSmOx4Y6obNZo9TgkIldH1NLx19LJkX17w8vgUg0hrpAolJGJxjHEbQinoMzOYXZGlKddyTZylGa8ZxZGx73DCwWC5I4wzXWRKV0NHjvMcNltgYnlxcUUaOXfWSl7cvOGx3+NGxbQ7oJMY7S4QmsZAExWUxhwBf/eGv8YNf+Qr/8Z/8T9gcPqc6bsjTHO8cwSmstdxv7om0JokztDoHpfPljHaouanf8nrzBqE0N7s9+9OR5awkz0uGtiPSmqIsaI8VOsS0zQjS4UfYb7eUqwU6jfEOCHCoG/7sT/0k/TBSmgJhLdY7OjegtSFPMqIoQkrB1Xp9bln49mY660wmk8lkMvle1QC/H/i3gH/+u2z/d4BnwJ+clmoymUwmk8lkMvn18aUGOFmRUQ0NQcI4DJyahiQ1RFmC0Jo0S+iHjoeXl9jBsz/uiXAIo0kSgbWK2XJOPzpGN6KNIUkyZos5L1+8QAhIs4y2Owc8Xd9T1zVN0xGbiNF72romMoY8zxnants3d4gAs7zECWj7kebUIKQkTRK0NlTHCgIorRjagXGwjONAVbesFjNiE7NtjjR1xygs9rJnkZesF0uCDHRDy6y84FQ1tH1P5BVlWRIQjGFESUXbtCitSKKEoiwYR8vxeGS1WhEnCd/+6COUkTRNgx0sSRIhxDlQ6rqBLM1ZLdZ0w5Hj4cjl8gJHYHs4EGuDDAGBIABKa9qmI0rgYr0mhJxDXTEOPWWZo4ymEx6ZxnjCOVxCYEzExWrJ4XhACo+3PVm8RpUR15dLhJK0Q0dQAhlHZOsF82JG2/fMLq6Ix5ru9lsMNkNqSxoJpFKYKMIPltgIsrRgff0Q7wLPn72krvc8fO8abTSxjwk9OAJ1W+O8RyiI0hgvBOjAsTtybA78yI9+lcdPLnn95i2744k0LbiYZxwGgReKZVpikhlNVZFFKb7teHj9gHvZ4HYD711ckhcptR/I6hPr1Zzt/pb1co5z8Lx7hhOQzwtikTB0LWlUIqyjbRvSJEFEnixJyGODUR4peqrjSJIWVPWJ0Y60VctsfYEdLQ/Xj8nSjGwZM19mdBr2t0ciFTOOAYKka3tOp4onq0tyPcP2HTIIxt5yf3fH/X5L0J7EQJbFZEVGVs7J8pKhbbjb3HGsa5Ik4cFqibCOKDGMziPqjiKOCUFyOHQU+bkFnFSS+lTx/qMnRCaiHy0X6zWNd9RVj+0HhuYWvKPIMqSU2MHSnRoWizlJkkAIGKEIJmLse/zoyZOCWBlSk3B3/4ph/5q396+oqyPCB7z3HE8nsjTlh9/7kIvlkq5q6O3Asa2JopS27dntN6SZ5dWbFyRxTAAOpyNJkiKUJI1TVqsV3juUEiSJ4fMXn3Gzv8fE55DGpDk6ToiTmKY9V9dcX66pqoZ6dGijyWcl1jtG63HWYocBX+QMY4f3Dh1FHIeWJIuJXCCOcwYcddswK3IuL69Iooimrrm9u2OwI1GSTGedyWQymUwm38sC8IeADvgf/IptEec2a78D+M60VL+xoigmyzICgVM1fNf7SHmu/hZCIAS/9AFCKf7qtl/LH/hv/+t8/uw7pPmMrjn98v2++1tJ/tn/A6Hegf//f8ajSOeE/Wvaf+cPwPDLR7HFf+DfQP2Wf5SwffE33I9cv0f/7//3Cd2J5J/5d/F3n36Xey2mg2cymUwmk8n3lS81wGnqhhDCuUVRktGPPQFHmmXsd1uuL9dkeUrfd1xdXlE1B7TS1GPP6eiIvCRNUsp5yv50JIrO1Sxt0zCMI9ZZ5sslzp5bmllrcQ6SJGUcLcMwopXh6uoBzjo++/QF4+hIkwzhFQjQ0rDb7ZFK47z9ouVSj5SSKI4oioIojnh7e0NgxGjD0PdIKbi+fkAz1hwOB9azOQ8fPcTjuN9s0ZEhyVP6piXLc9K8oKpOSCGIIkPwgWEcGfqB3vTnYfZB4JyjbVu8tdR9T6RipJI457BWslgsqeuG+/sNF+sFRZYRBBzqnnH0xFFMmecMbUfnOkLw57k+RrFYLCmLkrZtWVxfEmnF7n6LD4FiXhBMy+A7vB8wRrJanQfNf/LZp0QmYjGbYYeBxWLFsappw8D8YsUQHGmQHPsWdMKDR09Qp9e8/fY3iIuYrFzy5vUt3jmSOMVaj4oMKjLM85yx7fnWN7+F9IJikROrmCIqef38huvLS0QEh2qL9AbvFVJqfBhQKrDb3fPk6SUffPgYZz3eQbGegZCsLlboTlO7kTRoqromKwreuXqIa3uS1QyGt+wP9yRaUbcVXdviHLx4c0tkNGmS
cHt3z8cfPWf1eEmmc9bLJdJ7hk5hRECpiGFoWT9YoiR0TYuQkuOpQquS4AJVVROZiPXlmmxWorVGhoAfz+HAcGyYpymLJ3P6tkepmNV8wWk80NU16uohURJTtyeC0QzNifXlAiJHnkd8+PQhRZ7y+Zu33J4OVM6hxgE8NE1DGseEAF3fUpQFh/2RumkYXaBtK8oyJ00TykXB4EY++vgjlNZkaUZdn5BK4u1I07RoqRm7lkiBt5auOx9nZVngvaeqahbzBQiBkBBFEUKO5EXGPMpBw89/55v0zqO8I40T2rpBaY33gfXqgvfe/4C2bXDOMVoHQiKE4/7+lijJiaKCbujRiQGlGJ2jMAYTx4ggyYqU0Q5IFbC25f7+jjjKOB1P7PZ35EVKZGIGD0YHtJG8efGGBxeXqPUFb/c7TFAoKYnnBQ/nGXmSYSLFKTg8nvV6ydB3dG1L07RkRcwwjlyuliRRRBJphv4cXHfjwND3xFOAM5lMJpPJ5HtfAP6HQAr8d3/Ftjnwx4CfAMZpqX7jSCkxWhLCLz5lv9q5E8QvN46/9PVKSaJomtE4mUwmk8lk8r3sSw1wpDZkcYoxmkgbMgq67kQIAakMQkgiY3jz+jUQmM1m7A477GgJxiBEQIhAnCaM2w1SSrTS3N3dMY4DKo6o63P1TGQMQgjGYcQoSd+NtP3Ak8ePUSrm2acfA4Isz7DjyKE6sVjMuVgsGe2Ac5au7zmdTiRJQp5kxCaiPp0QQrKazdmfaowxaHW+sO9VYDW7YGwrrLU0dUuURCRxiveePEsZhxGpFdaf21kpLdHa0IuBw37H1cU1i8WMKE7Z3G/Z7Xb0diROE+pjxeAGojjCe88wjDx88Bhc4G0/YMceDxyqhiA0q9WKNMuoq4pea9I85/Xr1/jgWV7klGVO19U0bYPSguurxxgleL070HUtWnXMZhmZEhh9nt/jvD3PholjtInIiwLvoB860iLh2DfcHXZE6bswGprbl1wVsN+8Jco1aR6zu98i0DjrEImm72vuX7/lnadPWL0z4+PnL9BKEIJkaAJpVHK/3XNX19xWR9aLAiM1o3UoqRFBkGYl+/2Wrj3w7ns/yHa3Y7c9EuUZF1nG6XTizd0t83lBbEfuNnfc3N/xwZN3WZZLPtu/IFQ9718/4UXl2LV7NvsD9WnPEAK9g9RE1E3F/e6e2WJBEuX4HhSa5XrGaXduGSYQRJFGCYdzjiwvuL/d4/qU64cPUFLiRkue5Qgl2Z/2JDqhyHKeP/+cuCiZXSzJtGSzazlVR5yAwVquFivi2JyPr66l6jpOTcUYHO8/fUyxy9ACpAjUTUvTdux3O8Kx4nqxIM1SLqNzwNE0DetyThIlvO3vQErGtiWSEqM03jqsCJRlyfJihVCK7ou5MN7BMA7EcUxsEnSeEAuB95AkMTJNaZqGw/GICedZUs2pRinFrMhp+o7lvGSdzdnXJ262t5goJo8TVosVfVrSdA3r1Zqn77yHMRHBBxaLFf32nkgq9qcj3juabqQfKwZrCTKglUEJjVaG5XzFMPT0fc3ptGUYWt683bLd3CKFoqoqquZA3zXstkdmZU5Rauracjq1XF54Lh9c8nq/IwwWoQ1GK+bzFVpqnO2Zz2YkXc8wjue5VM4hlUIIiRQSLSSuG6hUTduN1Nai5Tkw7vt+OutMJpPvGcH773azm1ZmMpl84V8AHgH/1V9x+28H/hXgfzIt0S/5X/0bf+TX9fs5LxBCEAJfhDh/84QA7/35PeYIeV7Q9R3O2umJ/D55/ifTcTSZTKbX2uTvHl9qgKOkpChLwFMdB7QyCKEI3jPiaPqeKEqQquHF81e88/QpUsUEP2IHj04ikIHB9gQJ1jsM4JwjiMAwDGRZhtYKpRTDMNJ3I0QGqTRpIijKkru7LdWp5fJ6RVkW3N7doJSkKDMIjtjExEnMUi2oqhMiBIwx1E3F0HdIpQj+/H2bqsLO5+dZO37k4fIBflTs91vaIWCDJ3Ce0eK9QmtNCP78B7DO0nUt4xePfX15ibWOu9uXHE8VSZKcq0CsJc8zhs7inSdOzm3gXPDoyLBcXyCALInwOibKC/q+Z6xr6qrGB4mSEIIlSzMcFucGlD7P3LF24G5zh4kikjSikimRcmhpSecpQz/SdS1xnKKlQSlDkZUUecFXfuB9UPDps+cc7yp+y3s/BLP3+fN/8c+Tuj1JdEFWJhiTk2cZL1+9Jk0LrD6HTfPZjNliiXSBPElJlEGjKMoFRIa7mwNJpFhkKYd6wI2Q5gnOd0gJRkkOhxPPnr/k0cMlm82BsQnINIbgET5QpDGvDhswksSc56M8vXzIKi14dfOGj58/x6vAxWJNJAVN31DXFfP5DJNFDP3IPM6oqppyWWCyiNOpRqtAW3UYpQnO4MbA6CyL+QIROkIIaJ2giIgSyTCM3N68pWka+nJgu70nSiN+89d+lCya8eLzZ4TRnStXup7NYY8xmmrssMFxMV8w2J6mqTkOHYMd8QIOpwNtu+Tpkyf0dc3tfosHsrygrBrquiZPUk71iTiJSZQk8p4oilBKIbVGSMnV5SXvPn2HFy9fcXd3w2q9wjrLu0/fRXlBN/TnYNRagg9EcUwaJawXJTI4bN+TJgnOB5I05Xg64d35gmDf9yRJQpqlKKNYzmbEIqLre5wIXC7nxDKiOtUoo9BGsrpYYt3Aq5evyPMMoRRIgbeOFy9fYr1lX53IMs3z1y9Js4jL9Rof/BdzrWBezvjBr3zA/njHz/7cTzP4gbaryNKUODZsNre8efvqPNcqL8kSj8eTzxaMUvL81Ut811OUKcF58ll5rvzrO+zYEUcRUZRSVQ2xjnBW4EZBXZ2ffwtI7+npae2IE57gA2VW4Px0XXQymXzvqJruu948rcxkMvlCAP4p4CeBH/kV2/4l4P8J/BfTMp39vt/79/66fr+ud4yj/VsOb+CXBz5CQJrGJElM2/W0bfNrBfy//IuEmJ7038DnfzIdR5PJZHqtTf7u8aUGOGPV0qgjURxD4HxxM0A3jIxDhdSC1XzJMI4EJLd3W+IkIc0ysJZmdMRpTBLHCCEwkUGKgNKaPI7xIWAiQ9f1ZFGCN44oNqyWS4Z2IC9y+q5ntD2PnjxACjjsd0hxHipuh4GsLNHa8PL2Le+/9y4rs6Q+VXhncXakKAqkVOx2u/MvslKx2WwZLZhEY0fHannBcbOlbo6kItDULcpELBYXXK0u2B/2DH1PcJ4k0tSHiiSKefLuB0ilefX2DfvDkSzJ0MYghUJIx2BHBjuyWi25fnBNHEVYN5LmMddyRZpHlGVK7ALKCIQH24EfPZv9jqxMeefpO8zSnHtb0AnFMA74EAjA9nBgGEaKcka6vCIhcNjuuL56BMIhFQzjSGQMWZGxrXYU85Q0TXhz95Y3r19zuVjz9NETvvkL38Kc7vnwa19h8B1Juma0HilASs3d5g6lFG074GzgR37kRxhOPYeTY1aumJdL3OjZnI60TUORLEhNRCdHQJHEGQRBwJNkKd/6+BMCnrhIabqewTpEH5BuQDhLGkcUWUwcK1K
TcJCCq+tLyjTn2cvneBydc7y4ecsPPHqH5WrJ6dRwbPas1g/wiUY6j44kSMnpdKBrOy4vL2mamuBGlMgI48j6Yo1zI1GeMgbPce9RSrBartls7rg73HD96AHLec7jhzNMOsMFy+50T5JHdPbE3W5DnMYUeYEM4JXE6pFm6NjXR0LwKJ+SGMMiL7FjT5FmPL66YrPb8uzVgSKOKaKIq/mCbjZjsV5ybE7sthseXj2gyDKSNENISXCOx5eXPHnnKQGIjWGMIuIoZhgH3OjIi5JFmtPWDQdRYcK5rSB+BGe5XK0I1mIHi9ERURzzqntNkJL61GCkQUiFBZK8IEhJNfSEWJMUOSiJsxZtNEYqLlYXEAL39/csZ3NGN2CMYrlcsDsdGdzAMFriyLCY53z07Of4/Pkv0NuOwQ/UXcvl/IJ3Hj/BGM2nzz+lab6Fk4HWjdjguTm85dOXH9G1FbNZTp4lFEWEVwGUw0SaFy/u0NKQZSnH05G6bUCdK3ziKGW9WjG0I673jIMjeIlzlv3uSJZlxEXB0DTsjieChCE4Iq1RcYYfpwBnMplMJpPJ95UG+MeBn+LcUu0XKeDfBn4n4Kdl+v71ixU8AsjTmDSJOZ4q6h5GF757TiM1mAS++4eTxBfHh/xr/v3X7iVwbr83HTeTyWQymUwmfxO+1AAnQjLWDRrJYj5nfzpyrBt0fK4CqE8VeZzStA3GxOeAZxxIopRkNuN4OHA6HnlwlaGNohs7utFhLfjgKcoCa8+l364fmBcFu9MBKQWXyyXWOnZtxWA7yixHBDBqzmw+QwpB09Q8ffSE2/tbBkast7THmqppyLKE2WLGYXekbTuWqxXueEJHkuPxRDGbkyQx1f7EUJ+HoCdRgvABjQDrqU5HrhYXpFFEOZ8xzwu29zeM7UAaJczSgs46+mFEaEOcZhR5gQ/iixL2EeeORGlEXha0TY33lnHoCVg666k2JyKjMcYQJzGL9ZowGpqhZ319xcU8x7UtmYRmDNR1R993xEmGFDD0FWM8YNKCfvOSw6nhwXWMMQJhIE0TilmGUp75ekUcR7x8+Yrtfss8S5jlMR+9eManbz5j9WCJMLB5tWGxXBDFMXVVMV8uaYeeiJj56gKlBH3VsVjOyYqEqj4y9D34gBYek8SYJEZ0PUYJ4sRQtw1KCMpyxvOXL0kSxaP3HqOMRBmNMV/MCRotAYf0UC5muGFgHHvSLKV1A5qIKEvOj10Kdpsjth8ZB8toRwgWFSxlUdA0NaPvmZc53i45nI6s1gvKNMMOAziDawSzKKLuLEmScTpsCMKgNaSpQhgorwrimaIfK56+8z6ClPvNHXES48TI5fUaq8AjCNYhfGCWL6m2e+q+QWrFqe6Y6Yx5mrNcXqC1pIhT/DByaio6a7leLFlnJTIIjjiavkXHGtUKGB06UeDPAaMPgavViqFt+OjTz8jSlIvVObwUX7xRW68veG/9kM1uy+1xxyxOWa5XtG2DkVBmGW4c2Wx2pHnCsapom544TQg2sL644NC3HKuaNLaEwZGmGemsxB323G+2xEGhlGQ2nxNCoKkbpBAILai6mihLiPGkncFJSMqS+VIy9D2v796A98RJwuAtRVFwcXFBdTpxOGwZnOf57WtaOyARBARv7t/Q/5WBEBzL5YzBDhxqj1dQzgoYLU8ePsJZT+cGiA2t68GOzPKYKEooizn5RYbrBjbHPUIoxsEipCTNMgKCAVCRIVaS0HWUWYHWBjdObSkmk8lkMpl83/km8C8Df/RX3P7bgD/IeSbO5Ptc4IsgR8BiVvDVx/LBi0/Ht23d/ur73nwHti8RxYpwuv+V1TgR8O7f4Nt5YABa4AhMfYYnk8lkMplMfg1f7gwcKVFSsVjMSIqcm9tbnHUUuSFPM6S1dE1L8J40TcjyjLqucXVD+KKf735fMfSv0VohkPTjgNKKRMdYa4mSmDRNcb1lGAaEEHjvubhYsTvsYQgkcUwgIKWknGWsVgv6vsfEmrzIEFtIjCGLDMnVJZvvfIvFomC1mFOdKoQIREZRljnD0KNjg9Ya7xxt23A6jqwuluRFfn5MOqbrB7p+AGC5XKDMeanTtGDhBUmSIIU4t1Mbx/O8meCJ4xhOR/quZ7VcYiJDW3e8efMavGOxXLG+XtD356/rhhatzfkCsvAk2lOUOddc4b1HKY3QMa4+4XyM0YrgNISAkII4ThjHAREUkoIoi+hdRxKdH5/WmnJWkGUlWZKitGK323O/2fDgekHQI3f391w9WpGmBTf3N9R9Q9THRELQjQNCSh49fozvgSDQWhHFMdEXreuSNKY6HUGcB96nacZ8PsdZh5SQZhmntsFJyfZwxEjJ43euEbHAmIjIxDjvCc7RtS2npqdpHSqNGb07zzBaLPCjo+06TByRJzFBCrSMiUPM0I/n0Z9BIYLBmIR3nlwReMXxVPHOO4/I99n5GCpzlCxpjg6hDVFsiJILOjHinCcoy8XVFVIIZrMZIpJoaUgSQ5os6YcepRVd1yGExGjD3eaWrCzI0pzhVNN1Hd45LpcL8kXJ7XaDGmPWyyVpnsEm0DQt292Bpmp5dPWAq4sL6EdG5xjGjn1VMVrLcnHBrCjRxhB8OAeUeY5JEu7v77HB0o09ox/pxh5rLXGaYkdLlmdUbU2R52TrNcfjkaHtMGnC3eaeoR9QOqIbB05VxYOHDxBKkmpDFMfE3nKsTgwB3GCRytA3NUPd4+zIEGBWnNssnqoTSghMmtLZgbbrSAkEZymjlNoN6CgieM/95kSaJgip2O22RFHELCs4nk4kUURT1wQp8MHRVzXee4yOOdU1/TCgvphJo7WmG3pMGlPXNQbJ5eUaieB2e08YA4KI0QZGO6ClousHri7WeAJKKxyBLD9/IDWEcH49AbGJiI06v46Uph96bJg+aDiZTCaTyeT70v8G+KeB3/Irbv9Xgf8AmD6l8neIEM5hztPVOPu9f98Pmm//5dWrrrr9Zb/EhnrH+Cf/t0T/5L95DnD+ejsTEqIEISQhBHAjuFFCSEAkwBIhd9j+jqH92/MfMAlI9UuJ1C+XiGwJwRPaw/SETyaTyWQy+Z4nv8yd50nO+uIS7wXH/YnYxCQ6wgjF04fv8OTBIy4WCx4/fIxWGqU1gvNAxePhyGFf4UbP8VAxDp5IxxijybOc1WpFURTExpzH7QpF1/a0dQtekCYJUkriKCZNMwiBLEt4+vQRZZnhvCWKNLvjFlSgyBKuLlbMFgUmMaRxwmG3Z+haVqsVQgiyJMGoc/jUdT2HqgIBRXn+5L/REaP1hMC5NZSSeO/o+4HT8cRhd0AIgXOB/aFiGBx5nhHHETJAsI66qlBKojSM9hzivPfee5R5SUASgiDWCUZGZHHB5fohsU7RwoCTjJ3DBej6gb7rSeIMbSJSLb64iH2epYMIKCXJ84xIK1QYiB+9j3eeVzefk6YpaZrRNB2RiZkVczabHQKJVgbnLaO3CK3xErJZgYo1zjusc7TDQDeOIAVBgDGGxWJBGie0TcPQ93Rdx+Gwx1tHmqXM53O00kRRRFM3dO25skmp8y
yhZ58/4/WbVywuZiRpgrWetunAB4yWaCUI3hFpQ/CCzz59xv3uQF13YD2RVPRdx839HePoWC5XRFFCwFPM5lxdP2Q2W5ClBUZGGJUwn80Aj1KQ5wnD0DLaARBY32FMhIliVhcrgj23ixPKMrqREKAbRy7mFzxarimimO1uw2a3x46WNI5ZLS548/otfdvSNQ29c7RfhCiLcsGD68dcXz1kMVsSmZhhGDmcKtq+I0lSIPDw4UN+8IMfJARJ1Q7M5hdEMv6lWUpZio4MaRSTFznlrKSczRgGS121XwQbDa/f3iKUJE5TpFJYZ88zc+S5fVjf9zRVjbCe4ATHpmV3qujcQDW06Ngwm5XMyhJjDIjA2A8wWrIsJcpS2nGgrRuKNEErTdt3VE2DC4HFbEacpiRZBlJ+0WxBMCtKsiw9vwHzgbZtz0HwbIaKDEopijTHDyPV6UjdNeyrE904kOcFdujBebSUxNoQaYMAhq4nhECSJPRD91cD4N12z/5wIk1y8i8q4uI4QUmN0QY7jtzd3TNYew4ZkcRxTJamaKUQAozRSCkQShOEoulaBjuijZnOOpPJZDKZTL4fOeBf/C63fwj8Y9PyfG86v486z4uVUiL+FubWbI89HzxZpP/iH/rn3pXK/KovHH/yT+Bf/wJidsVfdxCPlOAtYWjBDog4R67fRyyfnLcHD7ZfisWjd0Va/uoHGDy/vAPb3+gKh4Tj7Tko+tWVQe8BT8kX54BnMplMJpPJ5PvAl1qBc7Fek2Ypm+2Wqmux1nF9cUGWZ6QmwQOzrGAIjk9fPOd0PJ5bEInAZrOjrr+Y1yLOIUjXDhgtKMs5Sp0DiSzNaOstrh+RWmGUQSCQUtL3PcF7RmfxUhLHEUJ6tFF4fx7QKJQnyiPW0Zwi0hybinJV0I0tzakiKwqQEh1HjMOIVgoVJXTDkSgyZGlKmedoranqmrZtGYVES4UV/vwpo+BRQpClGR73RasuQTErkEqxmq84NjW2t3RtR1CB2Xx+/jVVCBbLOff3jrbpGcd7cIFhGPHBIyR0fU+SxIAHK8mSFGMMXnh2+x2a8wXluO9ovUEbg4k01o1AQBtFZDTFvOTN7Y5VdOCz5x/x4Xs/hPcBax0Prh7Q+XOYoxEsFkvu9yeCTBAmY1/1hNBRD4566Gm3G2Zzf76Q7zzCQ5pnOOcYhwE7WuIkoW1rxnE4BwVKEUURUv7irB5PlufsjkdevHjB0DY8eucaEZ2rV9IkPa8vUJ1OeAJOCUJQ2K4nMznBBQ7Vgch5Hl6eq2KqtsFaS5rlqOjA5u0G37pzVVAU07YdXdszdAOd7amr6hxMeQfCY52lHzqCG3i727Hb7DDGsDmd6HSPTjWPHl7x4tkrPnr+kkVeskgy+r7D4siyDIQk1ilNXXO/2ZLNY8Zx5H64Yfd6y4PVyJOHj/BK8Onrl9xu7hmOHbe3EqTCY0nThK6qSExEYlKebZ6x2+54/OQdJJo4SvB42roj0RGL6wfMixk2wOawZd/s0FqTR4ZgFJ0b6b1jMS+ZFSVpnHJ7f8ftL1bOlRlRkBAC+WJG7yxohY80EBjaEd31xJEh0hEiCPIkRUp5DvS8JUsKHj96TN+0vLm7pRs6nLOkScJyPqfpWkZnCc7z5MFDmq6jHQZscKAkQgnGYUTIc1VLFEMxn3Mxm1MdKpx1HNuGQ1eTy0AWpZTlnCKKCSiSLCNOYuqqRmtNCAEpBN6dw81A4HA64V2gXM5RUYR1FoEmihOiKKI+nmhPFV3XoY0hzTKGYUBKSWQMp/r8OKTSpGnGOFqaL16j0kxvFCeTyWQy+Wt99e+frv1/H/kzwH8E/EO/4vY/DPzxaXl+4/zqXEYQQqDrur/6fkkphTHnDz+FcG5D/tfLXYQQvH57x4fvvWP++X/hDz/9t/7X/8vPf+V97F/494j+sX+NcLz97vsoLvBvvhmG//BfGoIdA1Ii8pWUD39IqR/+PUp9/b9M2N8Q2gPi4mkc/9P/ztPu3/1vfo6zv+I/F/7m1yJbYv/y/x1/8zEiXxC66hc3Kc4hznl3fwth1mQymUwmk8lvpC81wMmz/FxFEcUkAjprSZKIxWzG/faOYC1lWiAjTWRivHNEJuJwPGIHT3AS60Z0pKnqFrxjVuZEJibNEk6nA30/oLRGoxBa8vjJExZ5yamqEJwv9o7eEpsID5yqijhOcN5DEFR1jRwlZZLy8tUrTJZR5jOq3QGTRiRpwXZ3oB86YpOgtaGclSR5xvBFpYT3npu3NzR1c24LZS1lMSNNsnPYpAU+eFSQdJ0lAMYoBjswdBbnRi5WS8ahx/WWQKAfBubzBVmWcn+/4Xg4kWUFzg00TYu1lnEY6McOqdT5ArKQjPFIksQYYxjsucpFAYPzmOFEEy5JtAEhCTi0UYQQUFqj/MiYXrC7f0uR9bx+/ZKymOO9Yz6fw6vzL7lN3+MRDDagdcoqjvj5b36LqmkIwaEAi6NvGqKyYLQjWkgibZC5RArIs5REaZxUjB7yNEUohR0tRivWqwu2WrPfH2iaivXFnIcPVyRZgok1idd4a1FK4ZylaVrqrkWbiMvVJR8+ep8yLXhx84I396/Ji5w0ybjfnMizjKvlud1YhODd994hJcJ76PqO3f6AUgqkoBtb+q4jyzISE+NdYOh6PAPjMDJf5qhwrlARiWTfHNAmokwLHjxYMyiPbQbq05EkTv5qq60kSRh7S5Qo3nn68Nz6T0uavuWDH3iXRxdXXCwWvL57y3ZzjwZ0mRCj6ZoOoTQ4AWjqumO33XFze8fxVGGSHG89RV7SjwPeB9I4Yz5b4p2naTqcDaRxQhLDsW9xnSeOE5I4JYtThIO+6fj8+IKmbjBaI8I5uDNG07kB4Ti3QjSaw+FA3/WkJsZZD/ocLMZRhDaG4+aOQCAzEUWWM3bnNmZpkmGERCtFXpYEBO1uSxollFnJ27d3vLx9S1YWSCExI8yyGd1gzy3oEMRRdG5FJzXzMqbyA0M/IIIgkZrLxRKjDdvDEWsdUQhYaymyEoVgGHpkUIggaeru3HUhTXAhUJ2OhABSCqQQKCGwgLUWKQRFkiKEoGo7VKyJkwgtFUKJc7Wd93R9R3AQ6YTgwnTWmUwm3+umH1STyeSv51/nVwc4PwF8Ffj2tDy/nj+tw/k9yxfVKeGvSWOkFHgPTVORJjHOB4K3tE2P8+f3IuaLynDn3K/5LaSUvHr9it/ym78W/+O//w8+/hN//N979dduH//cH0P/Pf8UIl9993ZkJoauFn7zvAb2AGH3SviXP6vsT/6HsfkH/rll9Pv+x5H3I2H7Ev1b/pFY/z1/8Nr+5//Hm1/chcgvwA5/8+vyRWVN8BbxyxuO/NJOhPiismcymUwmk8nke9+XOwPHC7wXaBVB15/bmgmFIEAQDL3lpt2iI0MWZyip2d5vub2/A28IwdD3I1oovHVEscJog/OOcRwJXnA6nTBRRFlmdEOPUBIXPNvdHgTEUUyWlIx2RKAZRnB+JElLoiji7u6Gseq5eDzn9eaWtB+Y5yUUc05VxemLoeqxic4BUxKxXl/QDD2fP
39OGiV0TcvxcCRNU4xSeB1Q2hBFMeM4MAyeuu2QgFaKpmvIs/QcMilBmiZ4N5AlBh9pvD3/PikBnP+lWSpC0bY1drQoBFGaIRTnypE4/uITVOdZHUPfnSts/Hkmj9aKVHs6+cWHjrqBOI5QJqJtW6qmptpXBKXpkvcZuj0+H/nk04/58IOvYPuOLPr/svfnMbeth3kf9nvHNe3p+74z3HtJShw0y47sJE7sxJbtFI3dxImTFmldO0DjBEkTFP0jRRO0QND2jwIFGgRomqJtiiKI0dR2JnVCnASJIye0JVIRSZmUJVGkKJJ3POcb97Cmd+wf7z7ncrikKJPXkaX1Aw7u/fa49rvX3nut93mf5ykukmHoScAHn36ITbPi4fYOA1gh6MeJbrNFK4X3juk0oJUiywA5U1WGqr6gqxtEjIhYOk6sVtS2wWqNMZqH21uOd3es6orHrz1FVxKXIlkrfIq0TUdwjjkGIlBXDdMcSFPgw48+wO/+gZ9gmI+EPDFMJ7bbS2xT063WbK3BJBiv76h8ZrtdoaViHhNzDky95/GrO9a7Le627LfkEo9W25rsysnA5XZH3XRYY6jqhsPhRAqBzWaDTLDtNvxQ0/Glr3yZ6D27R5esVx1DP5BzJgjH01eu+MhHPsLzZ9e8/c4zqs7SNh1109CPA/PQ8+rFI9qm5e7+rgiWSlFrQ6cr2l3L68/f4ou/9kW8d0glmMYRozUpClIWxASV6bBmxZwnhmEkZcFqvaIfBkKIzNOEkYqL1YaN6Rj6Hieg1RYhFTF4coK6W+HcREqJ9XoNMTEOM0cHF+sdm82WFBNKgpLlRFIqQ9eukICNgqnvCSGilCmRfk1LXbf0p5G7+4civOiK62e3XF/f4kIkTjNtVfNoc0VbVVwfDxxPJy66NXXd4EImCUGKkUornu4uGOeZVdXw6sUVb15fM4ZAcr58B2WBSJKcMipqVIg475m8Y71ZUdWWeXb4KSCVJIWETxOTVGhrSDlhtMbkXD6LSjC5mclbktRIWU4M+7GcJCs0aY6EsMTDLyws/Nah74f3uviwjMzCwsK34a8AfwP48W+4/H9I6cNZ+FuEMZL+2DM7/0LCeanACyCd/9huV0XcEYYQZqQUjMPM4bCnbTusNQghvk4A+noEb7z+Ov/IP/zHuy/86ucv//ovfOru5VUpkr7wV9F/+J8jD/el7+ZrSRGUhtKR9LUHwh6Y/F/5vx2EqV8zf+xf7NKzL5Kuv4T5/X96G37uL+7xbhLdJeLRh2E+ScAAa4qLRp9fZqbE+83AEZhfCDgiJ6haGPcaIS6+bu4jBdAWUa1AGRBwPys2JqK+5iWkDHMUjAF8Ei/HVFDOdSqVWZmMXMw8CwsLCwsLC+8j76uAcxp6Us7EmJiGAS8C1hik1FhbcToMOO85HE5cXV0hZeL+9oHoIlqWWCYRoN8fMUZRr3as6hYJPNzekTNUTV06TygdFzFTJrCtZvaOtmup26Y4bcpxLQqJMRrvPG4O7DYXPH30Ks9v99xc39E/HFnvdtS243Dc09QrjFb0femnCS5wPJw49T1udMgEISSGcUapF5PC4Tw50hNiZJgGNJLNdk2OcNwPZJ6zWncIrYpzQmseXz0iGfA+EmLgfj/x2gdfYxgm+tPAbneBGye0UhijuT/eM80OW1VIBCnF4pYQAiEkVVVTaVPEjyTo/Nt4+wMIUSbeQz9irCVGOByOXFxcYJuGt6NBH+5oVpqvfvXXuLx8wiuvvsLrb34JY2yJ5Oo2nPqe6/0DT155CtHz/PoaW1lyEri95+HU0zY1opXc3d6TckZrjes6jDUM88TsPWm/p7UzKXqcKwX3F5sNrzy+5P7wwM3tPZvdllXTMI4TPgeMtdR2hdCSxAPzHJnmIxaJSAJiQpEhZ7TVjH5mmkbqnLm/u6MxFU3bklIgS0nV1AgpSTmxatfUuqY2NdY21FWDQCAkBJHQyqKk4e72lrqq2Owy+8OefhzoVh3zPFE1Dcl7XnvyhHT1mGkcaZuGtmnp+56mrthsd2htSLmIHZv1FjdNHA8HlBK0Xcd6s0GfRZQ9R3JMbFc7VqstUSRO/ZHgHKvVhtoHyAEpFW1riONMDhkpK7ToiKqspJtOR4ahpu8HUghcrDdorUgxsd/vz2Nm0KZ0Etm6QmmFDw5/FnO6poWUub25J6fI5cUVq+2W/f09KUTqqiWliDIGIxU5RNrtirpqiAmGaWKaJmxVXDo3N9d473ny+DFWVdzevUPICSklWgguNlsuL3YYbVh1XRFW64YsBNYqcggcT0eUMNRVTRaCuq6xVc1uuyMYS0qRShlSCATvEUJgrMGSmfzMerUq7qjzviLIKBQiJUKOBDJhnvCjw0hFVTdoq6mjQ+ZMSJEYMkoWITmEiLUVxMQ8TedVkgsLCwu/NciL2WZhYeFvjn8b+Fe/4bJ/lEXAeV+JGUKiLNDKApFhzpopJpQu/TYvdQQhkGS2rTn/KZimkfXmghgdspOsVg19P3I6nZBSAuW8/RspHa6J6+tn/JN/5k8/+vKXf33YP9xNL64Pn/tP0H/gnwSpv52r5Vt172b3n/0f39a/75/4AVFvyMcb5Ad+HP1j/+2r8Nf/ozfJCXImC2mA7y+PpECos0qVihgDHXCJVMcsxDtA9n/tz1H9qX+NDBYhL8qznbcvRdAVYtOSYwQSh6BJ05GrbfvufIaX3E/wsoNHvPwBhZgZvOBhlmwq2Nm47KQLCwsLCwsL7wvvq4Bz+/yGy0eXWGsY3YyLASVHhn5mmgL7w0AInpQS9/cHchbEADkophSwVmOM5XTsqW2H0golFdMwMQwD290F7XpNW1VM48yQerQQiPPk6eQznVFEH4jOY4yltWXSWSGY5xEjDI1psbKiMQ19GtjfHJn6wNNXXqGSDWEKTGHEGsVmvWN/OPD8+hmZIrQID0M/MTuHNZYPfPBV1rs1fg4kIjklLrdXCGB2M9ZUhJh5uD9wc3uLNhYQtG2LX2eG/sT94Q6rLUoo7q5vz0JYpG4sIidqXWGM5tSfOIwnRtnT1DV1U6O0gCxQwiCyYQ4JRLHKVz5wDAO71z7Cw/Uzrm9vEUIxjTMpZpqmxSpBt3tKrF7hg90J7z1f+vwX+fCHf5Af+uiPcX3/DsYYQozc7e9BS4QUWGn5yPd9Hwm4uSm9MPMUSFGwXe2oVEs/9CilsaqCBNPkOB4HHm721JVByMTFbsOrrz3BGo3LkYxAZolMgk43NI3lgCpxZ3cHkoBxnlg3a17dXqGyIsyOHBK+n9l2G+YQ2e/33Nzc8eTyivW6pW43IOD27p4njx+zajq0bNhuHmGrGmMNYicJczi7UAIqRyKe4zjQjyN931NVFaqyKKVxLvBwOPLRq0cgBHN/ZNduccGTJo8RmtV2g5YSHwJWKe7v73DTxJMnT1FaQYxFCMyw3uzYbrcoBEZZJudRSvLk8Sts1peMYaCqSpzca6++xnaz5fXXv0o/9Ni6oqHGjxGpDEpVJD/QDz3zOOJmj9KGtbXUtmIaR/pxJORE9BHGGRmgXXX4ELi9viFfXnJ5
ecnD/Z7gMlLKs/C0Ybfdlfi/eWaz2lK3K/p+YBwdbdWxe7Jl061x3vHm2885Hk+0TYuSBu8jxliMscQE9+MRnyK2skQHq7qh1hofI6Mv4onVhhgiSghee/SU0G6x6jlBCkY/gdUkIJGpu5Z8OlHXNSpCTInaVqRzR5a2hjQOrLYtbdvx8BAglIlNa6vSNyUE0Yfzd1ZESMEpTMz9jAsBaTUuBNxYYhWbpkYphQAqUxGcW35xFhYWFhYWFn478B/yzQLO7wWeAM+X4fnekDJMUTCH8l/3NfpABhAa7A5h4ZtkE6XRYaRp3r2mrg2Hwz2bzQVKWWKY6Trouobb21suLy/P8dTfLEQIIdjvD3zkIx/hj//x/84r/+5f/H9++cV18Us/R/rKpxGv/ij5dPMdvz71/b8X/Yf+abKfUh4PI82mZE3njHztxxr++n+k8vAQ01c+g/67/7supZjk9qnMbgQ3FjHG1IiqJT28XS7z01q98oM6wOvhv/4PMH/wzyI/9BM5ux78TD48o6gwoghDJSsZUCAVw+S5WGfk2VJTqVyEKVF6QIsAJECKEsMWPTln9t6AH9l1dtlxFxYWFhYWFr7nvK8CTvAOKQRXu0tu7h64PzwwTZ6hnwkpEnzC+YibHd5ntNL4KJh9wjnP7DNKC5quRdeWnOE0jjQyoYymbmqauin3dx6BIKaIUoKIAK3QpqZrDIRAa1taUxcRxTaYnWW93pGcZ3/zgM6aTXeBSZpDf+LwcODpk6dM88zbb7/FTObx04QLpdDehZk8KnSQRXhKAqMNF7srdpdb7u7ukBL6fsQ7jzaaeZ4JKZdJ4QRuTIzOMc8zOUliEEgl6dqGGBIpROZ5JsZIPwy0bcN2VSb0p6m4cqRUNKaishYlFeM4EEIgx8wpBCKZ1arFKEWIiaf2SDYabQ2b7QqlNeMAm+2GGANeZHQOqOqCL7uOenyDpx94hSH03H/lGcIoUs7004DIUEmFm2d65+nqhnW34mK3pWkb5kuHSgqjK4ZhYOgnTqcRP06IDHenA1VT8cFXXkMSmIVjtV3hvSf5SNM0CKnYbS9Yr1a0skIa8LNnxkOCcRppVh2Prp7wfY+f0GSLHxw5JdZVh8uZd/b39NPEer2ls2059s4gpEIbS0yJcdyTs8KYVenskYYxSbbdFikl8fxeuNlxigMxRqJIKJFQ2tJqhdYaP3v6U4/zHlJGS0U/nKirhlWzIodIDBFSxs0zPkZyzqzbFlUbjFbM01x6cWKmPw1oqTgNI0IK6rZit9vSdSvynOhWLSIrmqbh4vKCw+EASKTVzD4gjELKxOyOnI4P9H3PulvTdh2H0xEjNNEHTseewY3YuiZLwCdMVdE0Lceba4Zhom1n5tmfV58JlNJsNhtiigzjiJvnEiktJKdhxvvEfJp58vQRVxdXzOPEPHuOxyN9P9J2HV3TlR4bQAiJd57D4UCW0NQNxlokAjdN3NzdcjgeGE8DvRuRQlIJhTsNIOXZ+RJJKREpOYSzm3kYBuZ5whiDn2ZS8JiuY3COfhi42l0UEWq9wRjDAYFCoZTBKsvsJrwvoiCilMBmMvvhxNAPSKOpZSaFRMqJaR7RRlHVVXH9GY3RiuPhtPzqLCwsLCz8jiXnLAB+9a/+v35Hj8MP/cF//G/3l/DrwC8BP/YNl/8hiriz8F0wR8HBldiub5VqJqCICd/Ur5hf/tc7x/4wsd2sXl7b1IZh6GnbDqksUhmCL+c1Nzc3PHr0GCklKX2zk0YpxVtvvslP/qG/3/7Mz/y1i9e/+uX7F9fFr3wG87E/8B0LOPKDv5v6X/h3odmSh3vyw1sDfmoQkhxm6K4kUAN9+vJ/jfyH/uc5H2+u3V/63z9Nb3wu5/07Mfsxi2ol1I/8EW3+W/8TstSkh7fRP/EnGv9X/9xFev2z9+GTfwH75GPO/Qf/y7vsx8vqT/wr5JyKi8cN5OlUhBgg79+B5M/u1HcFnEZ4huMRkQKS9PJcJ5quRLCd88/3saJ1DmvNshMvLCwsLCwsfE95XwWcLGWJMWpXtE3H/f5AVVVkEof9gRASOQqmySGQ6MYwO0/TdQg9k8hUtWXV1SgjqdsKRGKaPVIInA+YeWYeJ5IPPLq6IKWE0pqZxPjsGcfDASM3XGwvqU3FPE1M0wxIdpstKWfkSjCdekiwblcYqZDaoJRCCslmtWHaTAgp8C7w/PoWKSy1SozJEVIihIBWht12xzzPHI49KWWEUFRVRd/3jGOi61ZYU7E/HPHBUdkGoUBKhUDgnGO92SBEZp4dLpaxSckzDiPjMHK1uaC1DSKDFpJ2u2G72YDk3A8UygIhMkZpZM7InM+RxAmjM/7u8+znFau64urqiqqyHI8nTscj1ljuhMR5T9euuAkbVNSs/DOEgRwDN2+8QxSZbbfGTzPOe0JMnE4j0SXaVcu6tVR2JHtJjrC92CCVZJ5nqsby9MlTPqgEt7c3WKup65YGT3V2VgQy0zQz+0DXtmhlmOYZhUAqTUqJ1WqDrRuSBCUl+/2JpCt0ZXk4Hbg79hynkbGfAUXbrZnnQE7QeketalbtmpQy++manDL3e8F2fUVT1xwOe0xV0dkaoxJhjoDCmIqUHTFHjsPEs5sbKl2cGvjEs3feQSjF5cUFdw/33D88cLm7wAfPMA/MbkIJxZwiPkakLGLIo8tXiJvAPPec+p7T8cjDfYnhM9YwuZkwOVIUkCUiC7q6RQnL6XSibVuMsVRVjdAGqxJJO6RKHE833N/fse4uWa0alFREH7m9P2K1ZppmUspMs2d2M+uqpdus8amIrFVd4X3g7v6eFAMIwePHj9juLni4v+f+/oGcMrauOJ5ODCePtRatJJv1lhgywzgxu4kQAy449scjm/WGlBJaGzbrNTc3NwzTxGrVlS+SINBas11vyWSm2RFigJhJKpMU7PsTgUwfHOM0MQSH1JpkBcdx5NQPRB857k+kFCFJ9g8HApGUMlnAZrulWxXxsJwYlxiKcZxApOKCC571ao01ln4aGWeHshXaGLIQKCEwRqBVh/cOsqCuy/dO9IGYlmiFhYWFhYWF3wYCxgL8V3yzgPP7WAScv2mm8K5w85s44/42FwuE+OZyFikiOadyvpgzxnZlkZP33N7e8PjxY/LZnfKN93feY4ziH/ij/8DVn/tz/9bDi2dKb/8K7+ED+jazEBVZCPKbf+PlQ3xdf46UL+cq/Kf/38R/6WOkmy/vyUme79BT5Kuc3vlVmd75/EX1Z/6NyxxmsB369/7JK/f6Zx/8z/w72f/cvx8J8w22zfypqytON0XASQH8+PIpH68NYPiaILpyeZt5cJm6brFWn+PmIITI7dAz6xXE0qnzcDry5HIRcBYWFhYWFha+t7yvAs6UIlFK7o9HDn2PNRWXm0uGoef4cAQEUoBEoKWisRV9OtK1NUJlnB/YbmrW3YYQPV3TEoFpGNFVDUEw9gONbdC2YbfecbXeIYDrwz0HeYeLHm1qVN3x1vVz9vs7UohcbLa0tSUjqNuOrCQZMEazWj0i5sQwjBwOJ0DStCs++OqrvPn8bd5
+8xnb9YbXHj2mzyNvvPUWd3e3XF5doI3Gh0AaBhQlltdaw3q95nA40XVrUozc3z8ggN3FmqqqOJ2O5BAxStLYltP+hBIGaxRtvSpRT7Pn7p0bHrVbXv3gE3bNmvvnNzjvUUKSREYiqGxNDEVUkkKiMlghabuGUSpcTKB64uFIePxhXAggNMPoiDFiLYzThDYWqwNpnjn2MzfiESr1hPsvM/cHLi83pJCRSHLM5NmDscQs8HMg5ITzjpRgvVlT1YaYKpCJq6eXfORj38fsHcfjPcPcs922rOyKEAJeRnwMhJiKhV0KRG2IOZOAMApSEqVwPkXu7/eEceaQHblteeXpqxxOE7/yxut0pkLEjMqgasF6vUUSOZ6OkDNt1TBOM56yvdFHPAk9WYJP+NOBrqq52l7RdCtW84RI0KgWZzzXD3e8/fYzmqpCS4WuLHXX0bUt0zjx5rO3MZXFpcDt/h5yLO4aEt55vPcYa1BCs7aPyBUM4pq+PzBOPafjCSElOVuGYSQEcD6TsyRHiYoWLTLH055cvGdIDQZNIxtOwXF4OHLc98SYWXXrsj95yFGRAgQBVdWiAWUMwT0Qcubm/g5yJuaE1AppFEIJTseBnAWPrh7jXWIcPMGBMRXZK1KKeO/w80TbtPiQGIcTN3d3OO+RStM0Dd577vYP7NYbKmvxKTAHX9xRKOZ5ZA6By80GoQ3Je5TSKKkRIiJyEbLGFKnqGpMyU3aIIBBZME+eMSXcHNBJEUPC5wQZUsworVi3BjfNHCNIbZjniXFyRECQiSmVKLeUUTnz6qMnGGP40htfhShYb7eQM855VpsNfppKzOPUYzoDKXE49ijU0jexsLCwsLCw8NuFn3+Py37XMiy/eVKGu0nS++/9Y7+XgKOU5Hjcs9lcEGNZzPSRj3yEL37xi6SUuLu/5/Li4j0fT0rJzc0NP/7jPyY324vdYX9/D5Ce/Sq5fwBlIH6HL8RN7wop30jpqikbHxykgHz6gyDEvfrhP4z6wb9PimZnMJXM+3eIX/7UkG++fEF3IfJ8Qlx8UAINMBBmxOqK6n/0f90TpiuCK9Ym8fW1PF3XfMsxvLjYABDLWj1SLgswG5uYQ6ScvAimpEjp3Qi2hYWFhYWFhYXvBe+rgCO14fbuHqSkqWuULNFQwzkKLCVw3rHbbdlsNmhtMIcDWmm0yNSrjouLLVppUlKQIPiIVoau7VBKM7sBJRXSVtw8v+Z090COiSjh6aPHTN6DUDzsj7z97DnH0wPrpqWqKhLQ9z2nfqSUNgb2+wO/9+/6O0k586Wv/DoJGPqeV58+oVl1xLcSGkVykf3DEWU02+2WfihdKADeO8ZpPHf2SMapuEOsNYTgiTGy2+6Y3ETbtmy3WyAz9gMAh8MB58rthmHAGsvp1NP3J1w/cHr0BGM0Vhs2qxVZCrrNmofTgRAjmbJiSmrJdBxIIbJePUYrjfMBHyO2qfm+XWCfHSdX4dyM1oq6WmGMoa4q2qZ5aZ9XStFWBh/XPJNPmaoLsmyYI8TosWGkrmra9QZrLW4c0VKjrMIFT7daUVlDypHJjQiRzy6jke1uU9wKAipriSEghEBpjY8zQimaVQdS4J3HOUffT2QySmmSACUFycfi8AmeMc9ECVKrIgg5x2a1Zrfd8ujRI/w8cHd/y+l0OqeBSUIopaC6qslCMc6u9BKFwOF0Yt1tuFht6OoWgUA3FS4ERud4uL8np8x63WKNxWqN0gqfApvthq7rqEzFw8MDkLHWQi7xaA/7B6w1PLp4DS0kcwgcjkeOhyP9MDH7QDi/bpAYrVBaoAzIWSBQ5JTJCbwrJxA5Q1IzkkzwgbfeenaOWLukritSTLjZoaVms9kilSIDc3Aoa7G2ph+OGC2x2jLNE1pr2qbBWMM4lH3XO8/+YX8WKitiLJ8jrTVtXXN/f0eMidv7e6Zx4O7mhiyhamq6zYrjODBNE3NVEWMo/VUpYpTGjROH/ZEkYX88cri7p5KaEANaaqR84ZQpJ1HeB5ybibGMl9KZoBR1U2OEZkozWURyiEgBdVWjjUTIxDjNnE4D4zAwO4ePESUVISWkEiijSfPEbnfJR77/+5mmia++/Ra2shitGfqB4AMiF+dOiJGcQSuFDwGRwBq1/OIsLCwsLCws/HbhF9/jsh9chuU3xxAEt6Mg/S1e41NXmhjPi8pSIufMRz/6UX7t134N7xzOeR5dXXJze/dNItA8Ox4/fsIf+AN/YPef/id/qQg4t18lH54jugvydyDgqKc/iGjW5OPLyqSX5TFCKpiOcLbmqI/9fpp/6T8j3b9pmY4XYvWoy37SuAFyQlx+CPnRv4f88Db4GcKMqDuAChjkB38X9Z/9v0O70/n2K2Wl22+CMQhOQb0UZ1724Ahx7sU5O+xzJiHxPlBViwtnYWFhYWFh4XvH+yvgCME0jpyOJ7pmBXmgHwa8D0gpOZ1OxBDpuobdbss0zWgjWXUNXVejrKCqanLOaF0is7Jz1HWDUppxHEk5kTI89Ed0znS6JpPZPbqiaTtOz97h4e6e01iyfS82F1xsdmhdczyNuMkx++I8iSkzjRPTNHJ1dcnrb72BD57Nbos0hmfPnzMNI5umI+XI7f0tbddRNw0fePU11tsVxhpOp57j6UTXdazXK4Z+BCFYr1d4F5idQwhFTpwjuuzLfOOcEpHMMA6ks4PGO8/UD7hpZn1xWSKq+hM5JKyteO1DHyJLuH24ZxonbFWhtWEcR7wP+Hlmmh1ZCB7uD6jKkIVk3XWsuOcL9xPohu1mi7WGFBN1VZGBYSjvUUoJQYUUgke7NadTj3eRSdVk05Hu3+bqlRV1UyMQHGeHtBV1Y5FKQErkFIjeo0WJirt/eMCHQE4JkGVs6kgQAucjKEnTdtRVhcgwHgbGucRheR9Lt8zhSFXVdLYhel+i14Ti+e0d43FgpSvWmxXyPKl+d3vPYb9nu17T1qvS0yQViYybimtIKY+bB1arDVe7R4TpOeM04CbHQ9jjfEAqhcwCLTVd15ESXKw6amvp+wHwCKUwxmLqMibDMCGExnuPcxPkzPHQM/QOLS05ZfanB2bvubm54eH+RHQZEVXpX6E4QaRUpeclJ7RWaKuZT5Fp8lS6pV115JQwpsa5gFEaUqYylu1mw8Vux8PDnuPDgUxGKkVlK5TR5BFcCKQY0bpExYkEbvYYY1HakCJcXV5hTcXD4UBKma5ryQi8j+f3MxOzwdiKnDPDOBJCQGlFSJFxnllZcxZJPG72iErgXcS7hNKKeXLknLG25ng8kmfPxXqNNoaYEhXlsVOIhBAJITCOJQYhI0pUX1XRtR3eex72D4yjJwqorT27mnLZt0UJTEs5I6XihdQSUxmfEDISxWa7pW46nI9orTEpgRDMrgjTIXjqqkJrSds2IATRe5q6opaGaZqWX52FhYXfMvTDe34nHZaRWVhY+A749fe47IPLsHznHJzgfvpvxqlRujYdSlcIIYixHNt+4LUP8Mabb/DwcI8QEqUVKaZvuu8w9PzID/+Q+U//k79UAxNzT3rjc6jf84+8EF++Pbb9Rg
dMWw7iMyhDPlzDOZPN/JF/nnx4vskPb7+ClOTrLyHWjxHdJeREjgGhDFkq3nXXKOB8SJ9i6do5PP9Nizd3k+DoXqS2BVDqxWO/yzeIb+/VH7SwsLCwsLCw8N3wvgo4CuhWK9q6YX3uljidjmTOq9+1RgiBNholwRjFdrNmvVljK8voTmilzquCwFY1IWXqumaeJ46nI1VjOYwDx8MD665lt90hyLTrNbWpIUJwDq0VNlkutluudpekmMg5EVPifn8o3TlKUVeWN996C2stSilm57i6vCLEyJe+/OtkF+iampRhdA6kxGgDNiOlPOcJJ3Is0WJSaEJIKC2JMZOzIMYSb+acx4iSN1yK0TW2rri6vERrw+3NDaSEkhIlJE3T8PjRIy4uLkrPzsMBHzx317c0XU30ickHqqqhMhWHhwMg0MYyjZ55jogMZMHp1LOqV1xeXfDo7kvc8wG6qw+S5gElc4kvm0rvTo4JbTS3d3f44KibhlXb4YLHVhVCWcasuLm+xZgD67NI4KYZKRWrVc00jjzc9cQYadsWcubm5gYhBOPoEELgAe9ui/MmRUKMdE3DdrUip0RGICKIDLWtsLZGCoEWit3lE0SMHPZ7kIrr+3v2dwdMlGy6DU1T8/z5c479iNISqzS1rVl1G4xRTJPDzxEXPU1tmYaZyjhELnFdXd0yDiOqEWy3O/b7B955dk2z7ui6DSFC122wRnFzc4eNFmUbpNIcT0eOhx4ydG2HVoZ+KmJmnBNNveLq4glaGo7HB+bgSx/RYSDljAC0MmipsaoCJXE+MI4TKSdCzASXEFkTQ0aLiqqtQMD97dsE57G2uKqMVPTHnjffeIP7uzu6rqNbrbDWUrUNISf2N9fMbj6LeZkYS6+UVpbD/khOiSdXjxGU7dhsNmxWK0KIHPZHvPeEHIk5UTU10zQzTCOrrmW1eso7z97BzzOVbVDaMI+OcZxZrzZYWxP8HiUVTVuDFihrCMcDylqaukFXtkS6Occ8T2cBx708WaqbhtFFxnEkxrIiLpxdXVoryBkJzMNI3dWlvwa4vHiEkILrmxuijyglUVJDlnjvUdqcRapAjOkcjyCJMeB8cUillFFSFiE2RmQISKmo64bG1tze3y2/OgsLC78leOudaz7+yc+811VL1uPCwsJ3wjXg+BrnBGUSfgc8LMPz7bmfSt/Nb57zfV64P76rr+xEyunc+SIIIdJ2LXVdM00Tz+56v6m16Uf3Tffs+4GnT5+yu7hcP9zfTQD55ssIU31HWyRf/eGv7Z/RQLGsSEn2E+nNX4yAVz/0k6gf+6NNevbFV17EnsmLDxK/9HM5ffUzpNuvkOc+i80TYX7/nxaiXpcx+toItvFA3j8DW33n8W7Awyw4OlHGWEgQAuEGcnAQy7lFkgZRr3i5GnNhYWFhYWFh4X3g/RVwtKaua7quo21bnj+/Zp5muq5lt9tRVy2n/kTb1GhjqeoaAcQQkG2FlBqpSvRYPwwEVyKznHNM84TRmspY+nlCKoWQiofjAaMk6v6ey4sr1usVh+GEIJW4rZTIApRWVMbw+MljlDG89dZbODdjrWUcRmIIbDdbQkpoZXjYPxBTZNW21ErjY2Ard1xcXNDZmmfPnxNCJKeyjcYYtNE0bcMwDczTzDROKCWZ54kYIt55qqbGz45+7Mmp9G3UTcOrr7yCD5793T2TmxBS8eorr7Jer+n7nomJcRohw7N3nnHxeIeU5fA7BI+SqriWYiouplREsyePHnOaB+4eHkgxMc+Oi8tLLvLA7fEZo15TCUFOZdK+aWpOx1OJgRKiNEYOA1VVsd1uqYxlfxoZh57VWrJad7SrlkpbJIJ5nmibhrpqeP3119lsNrTtCijOBykldV0hhCR4B0Jgq4qpP/L85jlNVXNxsWG33rx0c6WUaOqKpmupqxojJZUxqARGaSYiz6/fRinBxXaLlBLnPVVVsVqtiCkyTRN932PsDiktSpdYs0rVNG2D1jVSSq5vnjP0A5ebLUZIttstq3bFs+tnXN9ec2Ukjx4/wTvH7By1brnaXjK48vghOnz0hODJMeO0xti6rOHyESX1OcqrYpwmlCkr3ZQyCKEQOVFVFa1SKKNBSVIM5AiTnwne0596xHnbpmnCuch22zIMJ/q+RypJVVUopXn2/Bn3d/ccjkdSjGzO3TOZRIqlt6Y/FaEtRfkyYq4yBilFcbikjJDnlWgZ2qZh1a04HY8lyyxltJAv94GcMlpprLHkFLDW8uSVV7h4dMlX33ijOG20ZpocSpnzfivIZLTJpHOvk8oZYw3G2pIxPc0EH9DaEL1nco7gPSmDmyckpX8qpVRiG5uW9dqyP50QKaMrQdPWGKsIvmJ3scM7T4qRED3GlELX4AMpg60UMUZC8PjoiWfBaB4ncky0dYPSipxTEXNyJs0z3Xp9FrI8YwzLr87CwsJ/I9zcPfCJT3+OT376F/nZT32W1996tgzKwsLCd0OmiDgf+IbLn7AION+W37x4I0pXzIsos5zOAkX+rt9CkfO7j3uO4f7AB17j137tS3zqF37p7h/6o7/36WlwfGOVjveey8srXn31lebhvEApj/vvbALiJ/5h1N/1j5Pu33xx0ePyABmxfUp6/XPEL/6MA5z+3X8MEI/JCaRCrK/w//m/Ed1f/j8FYAQmIADC/L5/4lVsJ7/pCaUCKX9TIkvKsJ/fFcsEIPobdC7OetMalFY4F7iXCpZj/IWFhYWFhYX3Ef3+PrxEqxLJdX1zjXcz292OdtVRNRU+RrTXCC0ZppF5nlFKQZY83D/Qtg2PLh6jlCK6ZxwOB5p1ez6qihhjqY3FGIMAfPD42VOtVgzjiDFH1psN5vaudG+QsUahhMKNM7Nz/O7f9bv40R/9Ufph4PrZc6RUoCRSaeqqQkqF955pnlg1Ky62O8ZTT38qnTerbk1lLevtlnGcIFMmzJualDPH4xEQNHVNSon+eCT4RAgJ7z3H+wNaSpLPhBSZQ0QgMcqwbtdoFMMwYnaWqjY0VUM/TCWKTJ3ju3zkeDhRPCplRZQ6O39CCDR1TacqKmVoq466WyGyQAnJPE2smpZHjx+xvbvni2++TvX0B0CvCWMPEeqqeXHqgDWWqm6w1tJoA0JwnwYygqpb8corj7m82DH0M3c+4F1ERUFlarpuS8yCGGG32SBEOfjPGZqmwXlPiJGma+jHgTAHTvOJaZrI6w2T8xz6gWEY8CERYkDtJNq29KeRnBPWGnLK1CiiiHTrNSEn3Dyz3myQSnFzc4NzI5Ob0baiaTak7JFaYm3Nqt2wbS/wIfD8+h1m53hy+ZhVtwKpmL0jA8oYlFLolNnWLf00otSKD37gVd55/iZv371NPxZxRUhJiIGQIoaMFLIs5JKQReZwfCDnyGrdoZC0VcXQNEhZBB2Atu1QWjJNIzlFgg8MfU8IsfS8GE0IGshorWiajvV6ixBgTMU4TuwPD/jgMNbQ9x6hVYnWe9hT1WdRUJT3WUiJm2cqbUhIYoIsJNYohJCQMs053k5AEX1ywIcZIQS2tpisERLcPHM4FvvUxcUll
4+uuLu/5+76ppwEhsT17R0+xCLSRs80zRhrsNoQZUJJQCgEEqU0jW0Yh4ng4zniTOO9Y79/ILjEdrumrdf0w0jKmdpahmlC5IixBjIYY5BSgvDMk6c/nchJYFWFlooUIcVM3VQYY3Gz52EYOA0DIURAEn1AZ4k2hrqumJ1HqYxIiZQzWmp8jJzGHr3kYS8sLPwt4nDq+bnP/OJLweYLv/76MigLCwvfa97hmwWcp8CvLkPz3pzcb0a8EaB0ER7CjPATKswkMkpCMB25Wv+mXCXfSM6xHNefny/GhNYaay0//Zf/0+kf+WM/iRQP57PMr71fxhjN06ev2l/+pV/SQMjDfem/EfKFA+br7gKgfuSPUP2Zf53cP5S4M6l2wLrcIiF2rxH/438VijCT5OOPkocHi5CIdkd6/iXO4s0dULLaTE31j/2vEbvXyMN9iVb7Dob22+HT19xAavJ0pBKR3W77df02UlsYvhdi2sLCwsLCwsLCt+Z9FXCssbRtR4iBh4cHqrqm6lqqtsF7T9+fzg4byzyXThQBXF5eMg49Sgm6riOGItZYa7HWlJLwqkIqhVKKSmsEMGRoupr1eo2PAWMNbVtcGjknMhkhDZn0cvX8F37t13j86ILHj67ws0cIeXZnDGy3W4wxxBARCKxS5JgJOZOBpqrJITHhqaqaFBMxprI6PyeS93jvMNbSdB3zODLPjhAi8+yY55k6GtqqYbXachhOeB+Y5pndasMrT5/ifeD29o6Hhzu0VKzXa47HI/txoLIVXdcRnKef+pdxvDEGYip2+GGcyQmuXtmSQmQeBjaXF7R1ixCZ3W5L07SsuhUpRj4cAtP0Bgf9mPbiCc4HuL+mMRapS9F907ZFdJodLkTq6oXLqmO3u2K72SLy/pylLGiqmrZp+cGPfoyb+ztSAqUMSmSGYURQJtKttUx+BgFGG4w25JQILnJ7d8Pd3QOHw+EsqM24acY7z8X2iuwDk59ouhqrLY0x7IeBcZ6RSuBDYBgGcspstxuOx8w4TXgfiam8b6fTQExH2mbFar1GuhlrLFJKUioikDKa6OFit6OfJrSQhHmm0pqLzY6LzRWrZsM49PTzgXbVcjydygG+UgilsNZitGUYB7z3JBI+hOIOA9zkmacSYWaqiuA90QeUFLRNi7UaKUs/URkvQc65CCgpIQRM04xSmouLS27vbiGUlXNKa6SWSCnP0WGZeS5Onpgisy9uISEEq3ZNUzekEPDBF0dJyszRA4K2bRjuJ4ZxYNV1TONYosakJKSIzBmjDdEanJvYH/aQBZvVluvrW379y19lHqeyqk2U1x5zJnrHaRiYp4k6VtRVzWa1YRh6jocjbbdiu6vOzzlx2B8xSqG1ZALiHNCiOJvyObO7aRvc4JinCakVKWdyiGehVxNCpD/1zPNZfDIWAee+KqjqIujM48jbz59xOByJIVJVBiEqcioxbeMwwbn7yLkZJQTz7MpZohRYWy2/OgsLC+8L4zTxqc/+Cj/7qc/xiU9/ll/+1V9/eayzsLCw8D5x/R6XPVmG5b2Zo+D2O+28UbpoAm5ATgdqIxASqs5gjMZYze2QGb/LbSpe96+/BODRoyveerbnrXeu58aoyvlv/j1JMfLBD35IUmL0Qu7viygjxLt6xoteGkjqB/8g9T/7/yCfrsn9rUbqC+CiPFhAPv0h0hd/hvDJvxiAXmxfgcsPijyfyiYKCSlAcd4c1ff9HtSP/BHkD/0k6iN/9yo9/6IkRc4rKt99MVKVfzm/+1JzAmW/5bikb9Bjcs5sNt3XiTcAh6Ahx2XnXlhYWFhYWHhfeV8FnK5tubi44HQ6EnzEmozRqkQ2nTtnrLXUdcU0TtSVPU8wK7YXO1ZdyzAMTONEVVVItWNyIymVlUFSSuZ55nQ6UVUVVVVxcXmJAEY3E5xnHmfIsOpWpS/FalbdhlULwQcOhz1vvf2cVddh6xqtNP0wcBr6MgGc8td1acQUqeuaFCM5Z3zwVFqhbJnwdc4jo3zphpFK0jZN6dOZfYlZyxklJFYbLnc71u0KYTSzmxlPJ66fPUfEfI4WoziQYiR9jaC13+/xscRRtVXNOI2M84wQkq5bQRY0MRFD6YXZnw60TYuxumxvhixEEcCMwUhFYyu6rmWz0Vy4if70FSZ7QVxvqeuGyhjcNGKUIuWIS5HTOKCU5tHlFa882dI2a2IQ5FziqYQMpXNESZq6ZrteI1SJACjbX2HPYk0IAe8Cgx9x3tO0LauuxRjD8ThyPJwIIZTtlpKYM+MwY9WJnDL96QRk9KYiJ4GSmnmaQcLxeOTh/oHtdseHPvQhlNQoeUCeLfFaK6ILHI8nhosjV7tLqqrGGEtlDH72RBeo6xqpJEJIHl9e4VwR5LpuxWqzY92ssNJQNTWbzYY5nsUm5yAJhBIIBFVl2W62DMNAVVf40XE8Hjn1PSrLss/lRF3XVF1HfzgxDRPdqqOuGkSG2c+44Er3UsqEs2AYQ2QaR0II9ENPOotsykp89Exuoq5rrLXEGPHe07Utd/s9kfzy8Yw2bLoVfd8XYUiqEoWmFJU5d0TNRaAc55mYMuRMRmCqGoTgNPTknJHS4v2RaXLc3D7gQ2AafenYSaHsh1WFkIppGJiGgb7viSGw2W7p2o4UPSc/l1PNlAkpIDKs6hZriyDW2gqx2aJNRWUrUorUTcUcHKObXvbdeO9QCOZpLpGNUjHGgeBjEbe0JoRQ3E21Rcny9zRNhJtbUkxIpdBGY6Ulp8x0FsLC2QnWNA3Oebz31HVFZS1unpdfnYWFhe8J3gd+4Zd+lU986nN84tOf46//0q/i/XcV4TIBnwD+/DK6CwsL3yHvvMdlT5dheW+uh+9EvBGgLfgBMTzQaOi2DVIq2m5N8DM5n7satYDwXYoH3xQrVhaG1VXN5S5Uv/r5v+F+/+/7vZXzx2/+HQqermtfzink8XiOEvsaneR0h9i+gv3v/W936of/cJNP1yIfbzRS1+XJUum1ee3Hycdr5n/7n0/Zz0dgkE9/EPnkYzk9vBNxo879HfLq+9F/75+ygKj+sf9Npt2J/PD2Or39K09LzJws21CvQBlB9GQ/I3QFSvucM/gJUgRtEFVH9hMAvRfELNjYhJW5CDzRQ4oI2zD7gaYuryvl4qTq528QtoQElsUTCwsLCwsLC99b3lcBZ7feYJWiNhVdU6O1Rp9XE0kpubgs9mZrNTInBC1Xj5/go4eUudhdklMipYi1FshMDpz3Lx0bLyZE53lm1XXM88TxcKAfZqZmRgTBNDrCHEkkWqDWFc4Hbq6fczwdMZXFh4QQgsurFTYk1DRDloDkeOwZx4ntakXbtsSUqKwtE8wx4pzDzXOZOBEgpAAkIXjWbUfddLh5LpFmMeOmqby+iwuuLi8RIaFkoqtqxmEgeM/t7S1CiJf9LVIohtPEw8MRkQVCKvq+p+97Xn3lFdq+4/rZHp8jRhmapiXFyNPHqjgapglhDJuzu+C1x085jEeG4wmTJZXSrLsVddNwe3/HnAKVTjT5lnQcmI4NtDuirJG6IkrFhOPkAraxXG12
PL7YQYzsjwdmN2MrjRSayXlEHsi5rPBKMTLNE23T0jYGIQTOlcnxYRjxzjO7Ga01q/WGnBXT6BkGh0BRNaUfJpOp6pqUMilmUkz0xxGlG2YX8T6WFWBKYE1NpkSFPXnylMvdJV3TkVMmx4hCoIWmqzvCHBmHgaZpaWyFG2d8LO9tRjB5Byny9PIxja45jj2Yc3dLSvjk8T5wOAzsTwdCDEVcA6q6wdjiJnvllaf0w8DD/oEQPSlLjscjImdCDDRNQ9d0pBTP3SuBeZyRWiFlcdFEH6iswftMDBGVBVpKhr7nsD8wDD2bzZZVvSLEgMwK7wNKOQQC712JW2tb7DRy6E8opUikc/KBREmDNTVSSNCGylbM48zheCILSUIwTDMRwRQiPgQsFf1ZhFmv1qQkOPYTQz+Qs0RpQwqZfhiJwSOVoq4aoov0fYkny7xwtQw07XncKkvOGeeK4DVMA1e7R0BmfzqgjWWlK5yPOO8wQqPrIg5mEiXyLCIS6MqglERKhdGaaS5iSxGIXriZMlZbsk+42eN9xLuetmmp2zVK5vNnO7wUcH105JzR2pBzRojyOlLKxLic0C0sLPzNEVPilz7/JT7x6SLYfOqzv8w4fVeicAB+HvgvgJ8G/lrOeVxGemFh4TfBezlwFgHnPbifBPE3StkSEqRGjA+o+cBm02GNwdiKqmqZppHDfk9dVzRNRZb6vaLKfpNkvjH+K+eM0prddmvffutNjP173/OeKWW0UuWEAYpr6BvLclwPq0v03/M/qPLhusqH63I7RBFQ1k8QVUf8wl9L7t/7l3O6f2ME7gHE+jH5eA2HZ3tx9X1X+fic3N9j/+T/qgU+mk63kduvSBBabJ6ArsgPb5PnI/LRR5Cv/LBJb/4i+eEt4hc+jv49fzIQ5pGqa4pwlRDrJ+X1R8/NbMBPVLm47CuVmbMuIo6U7MWGfsiIFAnpPGqyxFefc5cR2hLjsOzwCwsLCwsLC99T3ncHjtYGxMSjR4/QxnAc+uIWmUZSyqUbJiaEUlRVTdd2zMGVrpgYiDGilHp5eykFgow4u0farhSNj9NEzJn9/sDDwwOnfsB3pb9jHAf6/lTcDzmyP+0Zh5m7+zuOxyNZwHq9oWkajofDy9zfmEq81mq1QkrJer2maVvu7++xxnB1dUU/ltdyOos5xhRb9TAOpPPfxeORWW83TMPE8WGPNoYPfOgDvLK75Pr6OfPgUVURAWJOjPNEcB6pZDmsziX27f7ujrqpcfPM8XTg7uGBy92Opm2LI8IHEAJjNPHcxfM4peJakhItZXHTKIOLDu8dzjv6YUBpDVoxzo5n19ekFHl0dcW609j+wHR3U3aZeo0XhnFyuH6gs08hBuLkidHhw4RzjhgsKQtChCACVhmM1NwPR6q64sJa6qri/uGBm+ub4qoyEltZpmlimkakklxcXDLPDmuPCARCCoa+R2lF+7hFInHTyHq3RghZYteCxzlHFtCuOx4/Kt2Yz569w36/Z7teU1c1/dhzf39f9slxKu9xUzMOI0IIjNbMTKSUsNaSc+Jhf888jlx2OzabFYfpxKk/olFU65oUHIf9gcPhxOznl06aui4uMe8DxhgqrfHeE1xxoKxXK3JKOOdoupbHV1dkoD9NVMagjUYqiZtm9ocD6/UKozRKaaSAIGUREIVgnibmecKY0pPUrTq899zc3tAfTwzDwGa1xliLlJqcM13bMs4T8+jO7rJACIEYA86X2LSSXBB52D8glUYpyf54YJwnYog89CfkWTTz/oXDJhFjxrvANDmUGjE6ElMkZ3Au0rYaIcB5h3ceU1mqpirfDVJCKj06p/0RISXVUFxaWQiE1Tw83LM/HUscWxbF7SQFykiGcWRyE1qV/hwpE0Io6qo6f14zILAGhmEkhoDWGudmmnZF0zbM80xMiZwzZM6OIcs0nNgfDsQQiTEgJNS2nBPO84zW6mUsoxASpeTyq7OwsPCdTanlzBd//XV+9tOf42c/9Vl+/hd+icOp/25n6T5LEWv+MvBf5ZwPy0gvLCx8F7yXA+eVZVi+npT5jXtvhABlEP0NVZ7ZXm0x2lA3K0DgnOPtt99mvWppmur8O/Hiq/27/cH55t8fKSVVpfU0jkq8yOn+FpvNSwHHlH9SCbF7FSG//n5i+xR2r4KQpStn3JPe+GwOn/qpHD7xFyIwADdABAif+inCZ/9jhDa39h/6l1v9k/9MQ/DkcQ9uVEJXSmxfgaojfvYvpfT8i9L84X+OPDwgukvqf+rfXI3/hz+hcn8f3b/3v8D/5f8zVN3b1T/xv/uoMDU5RfKL+DOpQRuIjn44UVWGx2bizVmSTVOEslwWYyIUKFUSkqcDEYGoN0XosS1H51ktu/3CwsLCwsLC95D3VcBpmhatNUorNrsd3nv8YY+REl21+ODZdCtmP9P3J2KMzPOMUhKB4Pn1NU3T0DYtOWZm50jnVfQ5gVQaeY51ai30w0CMGU1Z+b4/HLDaEmMgxIiWGqUMD/cHjsfTS/eB94EUAzlG5nFEd2ViXCtFjhGjNa88eULXrbBKl44PmbFWgxNl+lcqkveEs1uosrZMzofINI0vHUPz7DDGsNtu+ZGP/QC79Ybrm2v6oUd4RfCe0+lU3A45U9kKLUppu2mL6+R0PJEz5CTo+4Hbh3uMsQTnuNxsee3pK/SnE97NbK82rLoVzjlqpdm0pevGZ9i0K4QovSOn04kQIsIoQoroyjBOmftxxEiFEYKmqct2aFAqsWHmftxjDj0JycNB0G5bTv3I2E/EeKLtOq62O0gR7zxaStZVU6LTlCb6wMP9nv3xhDWG9WZNU9dMZmYapxJHVpWun+12yzTPnI4nTr2jbRqELhFnh+ORrm3ZbDekmIvDp+1w3qGFpq0bnj55Qn86MvXDeXJd42bPPDumcSLGeH6eC3LKxJAhS5q6Y3/Yk7NHSYHNmjEInHckCUJrpmlm0jNTPTP1M6exJ1Mm+4UQXF1csFlvOBwPuGliTIkwO8ZTT06RpmrZbS5omob9/gFrK1IS+OBRyvDk6QXWGoZxYN8fOO6PkODy8gKlDHPyoCV4wakfEVJhbI3UitV6Q13XOOcYhoFTP2GNxVcZYzXBRU6puGmsqbGhuH0kRYSZ3YyUsiQCZJi8I4QiOimtGL1DaU3XtUgtGceJmCLTMJIzjMNEzon1qjsLI54QQsnv7jqs0cVhc3Yx1U1DiA7Oq/pijEVkG0ceHo6kDF3XYYzGasM0TgzDRMilzydn2K42aK0RUpBCprMtKME4TAgly2cbgQuBEHzppslghCRRohLbtuXJ1WNiTgynAQk0dYOUEmMtp+OBsR/IWSCEJMSM0Zbd1RVSSvYP93gXqKryXDllUhbLr87CwsK35KtvvsPPfuqzfPLTv8gnP/OL3Nw9fLcP+asUh81/Afz05z/+UzcvrvihP/iPLwO+sLDw3fJeDpzHy7B8PbfTb7SAp4g3nG5ohGO7W2NsjbXNSzf3s2fv0LY12+3XSAPfs+6V945RiynLbtWplN47nlMISF8bwTYdEc0apArhk39xYjrW33S
HGMjDA3n/dk7Pfy2ld341UCI8e+D04qYf/tiPoXLk1770ebIfmf/Df+WN8Et/+TX1o3+0k09/GLG6gjARv/Bx0ld/Ifuf+XcyKd7lh3eM/kN/dp0P18TP/5eQ4iPgeQ5zzs++ABCm/8ufep0M2OZdB03OJQLODWRTXpNSklcaz/NDTxAWoc25hydB6BFhQoaJzaql90e8rMGVHs9eK7quWXb+hYWFhYWFhe8J76uAE2OAGURK+BTpx5GcE7Vt0Erjoufq6oqH/QP7wwM5lW4KrWqUUhwOh3MvhkBpjfCOGBJaG7QyKKHKCv7RobSELM6dN4La1vhUotZ8iKRcXDvWVkgh2T/sSSmjzg6GVduVSe7ZMYoBYww5JrQxxBzJMRG8pxKKuqo5honD8UCOGWRZ7Z8AkfPLPo8S7RVJxVpOiJGqslxcXLDtVuSYeTgeyFJQ1RWHvi89HqKjaStkzlRa01YNiFJQPLqZcRiK+6jt8LG4HpqqgpxptOVivWE69SQfGccRZQ1KStbbLVZpxlNPZSzWKIxQTPNwnpDXRDczDj3aaNLsOY0jKoEYHU1Vk3PG1pZXnr7K5eUFXWU5HPeEEAkpMM8BN6fzaUBCKUFbV8zTTO96pBBYpbnYbJEIbu/veP7sOT6VuLOYyiT4xfaCylTklLm7u8V7R9PUhBAgCdpqxXazJfmMF5GmaxGi9FZKrdBa07Xd2Vkz8vBwz3a75eriknEsPUrOudLNZGusLrFmtrbnGDyJoOTLCFmi9Pr+xHq15vGjp3TtBmMt/Tjg5pmUUnEwKUkUiSwS1hh8dKzallW3whqDVpo5ZYZjT9XWtE2L9x4heRl9F8KL7GiBoLg2utUKYw3TNBNTLPFq3Yq2XZVelhDIsrx2oytijiAlWhsSxdkSgkcIiTE1UipyFniXqJTCVLZse0zorSXlhFEaY8w5vs2TM+X/Y8A7D1CEyhAQQryMRzzs9+c0hnSOx/Os1iseP3nM8djzzrO3cbPDmDWrroNVyzQXlxOUviQ/Opyb0cYW90+mdPs4j5BFZCmRZJH+0JNjRopSUKqlIuXE7B0qSipb8fjqMSjBG9MbzHNASEmci9AUYgQEtamx1iCUZHYzjy6veHL1iOc318xTcfBsVx3WGubZMQwjSioMEpccIKiqmrqqaNoSmzjND+eT4BeOpKXkdGFh4V2e39zxs5/6HJ88u2zeenbz3T7k67wr2PwXn//4T72xjPLCwsL7yNvvcdlry7C8S8ow+N/gRsrAdKJmZr1ZYasWYyrC2RW+3z8A+evFGzhbcL4Xi4O+/jGkFMQYebi75+/4O35CxvDeAo6SklC61zJAeuuXmf71PwmIEL/8828DlxQ3Tf6GJ0rnfwHwwAyglObVD36EH/3dfw8//nv+PqSU/PJnf46/9lf+fxwebnL85Z9+M/7yT2+A5mvmMfL5MRyw9x//twif+PMh51wT5nS+7uuHbdiXuNBx/27kW850YQ8S6upd3claw6uXitNpwLm+CFY5n7t1FdV6Q9vWrFLi7u6WEDxKSoRol51/YWFhYWFh4XvG+yrgjNOE1hofI4e+Z3IzVVVcHELwcpLde0dOlIijFz5sKUv0U9uWMvIYscaijEZJRVVXxHDuuZAKiSxF5sNU4qCaliQTTbciCUnnZrQsHSV1W9Otiivl5dGkEBhryZToo3LAZjHWYpTBeU+MifayRgpRXDIZmqY5H6SBMbqs+hcCKQVaa2Y3Y8hIpV5Gaa2alsZabu/uOA0nYkqsN2t8ThyGE0hJ17WImAmzQ0iJlAI3zzzsH/Czo65rNrstIUX6U08GqqZhmice9g9kyvbM88TwcI9SisZWZFOK64PwZBKRzPF4Yp5n2taX7Y+JGHxZIZUyh8MRtz9xsbsoj+lm2qZlmoqIlWJCnUWqaRwR5LOAlaiqCmU0jKXg3XvPer2mthWzd9zf3XM4HKi7lspaivdK0HYtTdtwOh156+23yLm4TVbdihQFTVWzXq8Zp4FI4tHuCmNKmfx+f8c8zefJ9Jq3336b42GPQNA0LVVdI6VkGAamaWS3rbl6+gRrLdM8cdwfiamIDyJnqqpBCuj7gYvNlsvLS9brgHMjp9OJvj/RNh111WCMRmnJ5cUFVVOhe01d1wgBs3eEGHHeE4LHpBITttlsSCGX+DAfaNuOnMvJEwhSioTgAAjnDpWu63j8+DFddxaARJF7QgxMYUIqiRSCuq5QUqJkER4vLi9BGUIs4qbRJZqtaRrqrqGaJg79iZhTcZoYUyIMxwkhEtaa4sY5r8zTWoMUmLML6vr6msPhSF1ZVm2LD8VVt90Vd9E4zqSYGca+CGdVDaJ8joWQPDw84M/RgVJqUnwRW5ZIucQmWmPIKeKhCKvOE3NCKcm6XiNyxk9z+ewZi1Kq9FZVlovdDikEwzgQYizhhjm/FHKmGJAZrnYXbLdbtNHM88w4jlzsdqSU6IeRaZqo6hotJdNQ3FtaKZqmQYjSLeScK9Fp5++zuIg3Cwu/43nYH/nkZ36RT3z6c/zspz7Hr3/1ze/2Ia8pkWgvBJsvLKO8sLDwt5Cvvsdl37cMy7uc/G8gsEgFKaCnezYXG+qvEW9enF/d3d1zsVuh5Dc6eb5Hzu6ve5j8Mo757vaWH/iBj6npW/StaWO5ub2FIsYAmfjlT7242gPP3vsla9p2jTKGum7Z7h7x5JUP8aGP/DAXV0+wtubh/hpy5id+3x+lXj/l//Pn/zVSgqbbHKQfD9E76rblcDqKRP46C1H20zUl1u2bC36+ka+56+PHF+95EyUl283qfPNMzkX3EV/T96Ok5PGjLTGl93ifFhYWFhYWFha+O95XAUcIgakqRu/IlNXy9ixwvDiW6oexOFbOEWOCIpwgBEYbrK3wwXM8FZGhamoqWwElworzI43DgBBFxJFSULctLnhizlhTsWrXeFein7TRPH78GDfPDON4Loe0pbQcqOoaawxNXbNar4gxcTweiSmWSf3zwVrOmeg82lqqujq7NorjIKV4zg4uGcVKSbJPpBh59OQxXdsynHrGeWR7saXrViQhCDESvMdNM1optNHEHPEuM02e07EnxUzXrairitHNHA4PxBho2pacM8fT6RzdJVFCQsrEHAgx0G52ZFshhMS7qcS/CUE4iwqrrmW16uhPJ6JPdLWlzoK7ORBTxGRThJKc6PsT3nvqukJKxTTN2LqiaVuC96SUkFKd3+rSX6SUxlYVCEFOgpxfiAQaKSUpRPrTke1my2q9IjiPmyOnoWe7u+Dy0RXr9ZbkPNZotGipqobdeocxmn7o6U8Dz5/f4Jxnt9uileZ0OjCNE6tuja01Rlnubx8Y+4HtekPXNLTNCpEE9+MD4zRSVxXWWlIsjg2lFCnl0mskFEN/JPhIZS1dV0pGX0zYr9cbpDZIIYg54V1kmiZSzuWfAFNZYk7oylI1BqMtQmTqusb7wDgOxBhJKTFNM+u1xRh9ds+U85EYw9nxVZNyJIbEcOwxtji9NtsNm9WGylS42VNXDV0Xcc6BoHTx1BXKGuqqIkvB/vDAMIxIoQ
kuvYwIE0Bla7z3uFT6ccgCW1XkLJiGiXkOSFlcQFJolBQkERFInAuM48g0T8SYzq9F4FLpuelPJ8ZpKjGFVYVU5TlTzkRf+rCqqqKyxSEUg0dKhbWGRAal0bVm/7DHeUfbdjRNDQhub2+o25qqqmi7lmEcilAki9j68jOdcunmalqMMfT9wN3dPUZrVqsVMZf3QsrijFKiuH1eZIXnnPE+MLzo6EkZ58pnJH/DSeLCwsJvf/ph5L/+67/EJz71OT7x6c/x+V/7Mil9V98DB+C/pHTY/DTwuc9//KeWL5aFhYX/pngvAecpYHgP58PvRE6/UfeN1HB4xmrVYEyFsXVZ9HMWCB4eHqispm3r9+uMnfcSgu4fHtDGqo98+MPy7bff+pbn+m+88TqcO2t+I6Qy/L0/+Sf4yA/8OHVlhZCqxGpXNUIIpnFg7I+cDg/lnBu4u3mLujK5Xl3xgZ/4J/nY9z/SP/6B0Gw2G/uFL71x+ot/7t+Ykx/f6+nSN16w2T3i8PDeTlelNDFmlBK/4fyG+DY3WcSbhYWFhYWFhfeD913ASSmiRFmhr70vIkLdklNEK1kK0t1MU9cvo6vsWUxJMXF3f1cmm3PGzTM+RYwpMVdaFzfONIycTkdiiFRVS9eVro2qLjFmcw4oqYhSUFUllquuah4/ecLdzS1SSVbrFd55Ju+xVSmbr+vi4JjnmbbrkEKUGCop0cYgUybGhIxFrAkhEEI8T66DVhlrK3LOBB/xPuBDQBvLer3FzQ6tDdbWaK3JZ+eKm2dO/QmrDFYbUkpEnzDWst3skFKyPgtLwZeJ9GGc6M4xcC8cFwhAKdbrNUJIpJDFgWQtKUMMpWenbirGacT7wDRObJRhvVoTskAqxQeePOW1V14jx0AIqThNrOR4PBWHzbnMPmeJ0Rp7jmzTxiCVZDyLZE3Tcup7mrNA5lygqivaVUfTdaxX69LP432Jq7OWy8tLhnFmfziwbldcXV4iJaR5JvuAUhVCapQuAuA4jaQX4xIjUimePHlC3Vq01qy6lkxCKU1d1fjGIaViGEbIknl2SCEQWbBeb1FSMM8OpRS73Y71el1i4Kym7TqE1bjQIwQchx4rNEYblBToUESs/WGPD4GmaYpIKV44axKzczRNQ9VUWFNT1w3GWqZpYhiGl/t4zuCcQytJ2zaczoKm1poYQxHLtKJpW6w2CCFZr9esV2uMNcRQ9r8QAsYapFJM41RERmPKSjvnSCnSrdagFH4ugks6O22MNsSUOJ1OjOOIMRYpi6CBFEVgrauXHVBWG2opsbY4nm5vbjkcDmilubjo6FYr+qHn0B9J55g0AG10iSI0mspWeOcYQiDnTFM3CAkSVXp+pOJiuwUpmEIALelPI7qVbHc7fPBM44xzMy56Vi+EZFsRU2KaRkQsMRHSnAWjlHBupu975n4u+3RdYYzBjcW11TRN2e/P7+OLz3zf9+ShP7twikB0XqKHVAqj1PKrs7Dw25jZeT7zi79yFmx+kc/+8he+W/fdAPwM78ai/fznP/5Ti51vYWHhtwoT8Bx48jWXSUqM2ld+pw9OSODTt7mBVGQ/0amI0ZamXZ0jhSl9qMDhsGe3XX2d2+N7fML+NX/k84K1xDAM/ME/9IfVe+gg500v53jX19eREoX2bem6NX/6z/5P+eCHf5Tp+OyiNe4qxJxzniAP5AzrDujEyykKKQWrTcdf+PP/37fn/n64bCd+10fXT75vu19dPNrwV/7zn++CH9/kLCAZY3j6gR/g8HD98qVpU/Phj/0IP/xjfyeyuuC/+s/+fSoj+cIv/fy727ba0DYW5zP4SFPr5ZO9sLCwsLCw8FuK9/XoZJpnJjczzTOjm4khkpUujg0EIUWs0aSkqdsG74ubQAhxFj0c3rkSn1ZViFAivSRQ24ppmjgej8zzzBw8wXmarkMoATFycXGBUJJ3nj9HG4HSNW3TlGL2eaJpKuq2LvFr2pDTORKK0vURUzrfdsaY4lCI3mGtYdt2xFDcBikm5v5I9AHIiAyIYq8OwRWbtQRjDTlGnJvQVrHarJj9xP3DA3cPCR8jSkrarmUcB6ZpYEiUkveqZrVaUTfFaSFkiczq6grECu8D1hpqU9xLh9OJaZ5o2pYEWCNw08yNv+FytysuDoqLKMVUihtFwoXAaehp2pbLq0usMWXSX6/IKZ0nospr07pExjVNfY6ISmilsELhZUYJgc6ClDPSmLK6ahhIuazAqhu4urxEW41paow2+HmCCFVVY21F3UiePn5EV1e0bYsWUNcV6Bo3TiijSzF8kqQEQmi2F5dIrVivO+qmZpACpS6oTI3W1fm1C1arDUpKrDWEGBmnAednqtpS1ZbLyx05J+5u71DaYqyhbisyAomhaVeobDieSjRczj2qXqGlLl2kkuI8CwGtFG3TvHQ8VZUlxlR6XFIiIzC6lGJG78rt2xYfAjEnUo5MzlFZy3q9Lp1DITLPrrwnOSGyLOKdsdiqjKebStzXOE7sH/YcjgeqtsR8SSUQEnz0hH7GVhZbVVxdXrEJW25ubtnv98zzTNd1KKXIIePPYkrXdQgpmOYJEGij6GTDOI5471BCsNlesOo2DMNAPySEFKzWa+q6ZhiK4ybmst9H74vm2HYooQgu4JEvXSwp5fIZyplpKlGJ2RSxsqoqbFX6sZrz+E/TxO3NTYkPrCu8D8yDp6orHl12HE8nrufr4nLyMAsHZOz5velPPSkmVusV2hQhNcdETolpGKlsRcyZEGP53siZfhjK/i4FOZdVeALgLPKkZVXewsJvK2KMfPaXv8gnPv05PvGpz/GZX/wVZvddLTr3wM/xrsPmZz//8Z+al5FeWFj4LcxX+XoBB0qM2u94AWeKv3F8mjzdYWrNal3iu1JOpQVTyuLsF1DX9n3bRsG7x6YvkibeeecdAP74H/9j5uH+/j3vV1cVd/cPvPnGG4HfQMBp2xX/zL/wL/H4yVPefuMLrFqptEWmXHo9m2ZFWcn5DRMVStJ2G95568uPY5hf72+/kJrqx5kmx/E0MgxTDVhgBPiR3/OT/IP/6D/NzTtfIqZMpQUhGy52G0iOwySo6pYPf+zHv07AqaoSuayk5HgaEMJSV3b5ZC8sLCwsLCz8luF9FXBiSrh5YpgnYiql6FIIrLFsNhv2+/258QSQEmX0S8EgOE/25T4xJabgCTFiznZqayzzNOOmEXeeUFbGYGqDUKAVaJFKz0UOtJ1FCIUSCudnZufo2paua8mprKKvq4quaXHBkVNmnMYSnaXLSiTvSvSTItEajagrpNZMoysuiLP7JosMokQxZTK1rbBVRde1NFVD01Ts9/fknKjqirtn9yXTt6mpmxptNSl6XMogoDKGuqkwlUJGmOdzgbw2tE1LE2tO/Ym2qbBa83A8sj8cSsSVEKWjRLXklOjHEWMNTd2cHTWBUz8wjhNNU5cJcikYveNi1WIrw93dLeTMarUqMc0xo5TEGIFSHev1+hzzNUGmvC/OlZOAlKGq6aeZaXZEJOPsGSaH1pq2W6Mry5xKZ0htDV27JguBDwGDxihF1
(figure omitted: this span contained base64-encoded image data that is not recoverable as text)

###Code
from google.colab import drive
drive.mount('/content/drive')

import os
import string
import glob
from tensorflow.keras.applications import MobileNet
import tensorflow.keras.applications.mobilenet
from tensorflow.keras.applications.inception_v3 import InceptionV3
import tensorflow.keras.applications.inception_v3
from tqdm import tqdm
import tensorflow.keras.preprocessing.image
import pickle
from time import time
import numpy as np
from PIL import Image
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (LSTM, Embedding, TimeDistributed,
                                     Dense, RepeatVector, Activation,
                                     Flatten, Reshape, concatenate,
                                     Dropout, BatchNormalization)
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras import Input, layers
from tensorflow.keras import optimizers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import add
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt

# Tokens that mark the beginning and end of every caption
START = "startseq"
STOP = "endseq"
EPOCHS = 10
USE_INCEPTION = True

###Output
_____no_output_____
###Markdown
Data building and cleaning

Load the Flickr8k caption file, normalize each caption, and build the lexicon of unique words.

###Code
root_captioning = '/content/drive/MyDrive/Image captioning data'

null_punct = str.maketrans('', '', string.punctuation)
lookup = dict()

with open(os.path.join(root_captioning, 'Flickr8k_text',
                       'Flickr8k.token.txt'), 'r') as fp:
    max_length = 0
    for line in fp.read().split('\n'):
        tok = line.split()
        if len(line) >= 2:
            id = tok[0].split('.')[0]
            desc = tok[1:]
            # Clean up the description: lowercase it, strip punctuation,
            # and drop single-character and non-alphabetic tokens
            desc = [word.lower() for word in desc]
            desc = [w.translate(null_punct) for w in desc]
            desc = [word for word in desc if len(word) > 1]
            desc = [word for word in desc if word.isalpha()]
            max_length = max(max_length, len(desc))
            if id not in lookup:
                lookup[id] = list()
            lookup[id].append(' '.join(desc))

lex = set()
for key in lookup:
    for d in lookup[key]:
        lex.update(d.split())
print(len(lookup))  # Number of images that have captions
print(len(lex))     # Number of unique words (the vocabulary)
print(max_length)   # Maximum length of a caption (in words)

# Warning: running this too soon after mounting Google Drive can sometimes
# return an empty list. Just re-run if len(img) == 0.
img = glob.glob(os.path.join(root_captioning, 'flicker8k_dataset', '*.jpg'))
len(img)

train_images_path = os.path.join(root_captioning,
                                 'Flickr8k_text', 'Flickr_8k.trainImages.txt')
train_images = set(open(train_images_path, 'r').read().strip().split('\n'))
test_images_path = os.path.join(root_captioning,
                                'Flickr8k_text', 'Flickr_8k.testImages.txt')
test_images = set(open(test_images_path, 'r').read().strip().split('\n'))

train_img = []
test_img = []

for i in img:
    f = os.path.split(i)[-1]
    if f in train_images:
        train_img.append(f)
    elif f in test_images:
        test_img.append(f)

print(len(train_images))
print(len(test_images))

train_descriptions = {k: v for k, v in lookup.items()
                      if f'{k}.jpg' in train_images}
# Wrap every training caption with the startseq/endseq markers
for n, v in train_descriptions.items():
    for d in range(len(v)):
        v[d] = f'{START} {v[d]} {STOP}'
len(train_descriptions)
###Output
_____no_output_____
###Markdown
Choosing a computer vision neural network to transfer
###Code
encode_model = InceptionV3(weights='imagenet')
encode_model = Model(encode_model.input, encode_model.layers[-2].output)
WIDTH = 299
HEIGHT = 299
OUTPUT_DIM = 2048
preprocess_input = \
    tensorflow.keras.applications.inception_v3.preprocess_input
encode_model.summary()
###Output
Model: "model" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 299, 299, 3) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 149, 149, 32) 864 input_1[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 149, 149, 32) 96 conv2d[0][0] __________________________________________________________________________________________________ activation (Activation) (None, 149, 149, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 147, 147, 32) 9216 activation[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 147, 147, 32) 96 conv2d_1[0][0] __________________________________________________________________________________________________ activation_1 (Activation) (None, 147, 147, 32) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 147, 147, 64) 18432 activation_1[0][0] __________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 147, 147, 64) 192 conv2d_2[0][0] __________________________________________________________________________________________________ activation_2 (Activation) (None, 147, 147, 64) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 73, 73, 64) 0 activation_2[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 73,
73, 80) 5120 max_pooling2d[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 73, 73, 80) 240 conv2d_3[0][0] __________________________________________________________________________________________________ activation_3 (Activation) (None, 73, 73, 80) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 71, 71, 192) 138240 activation_3[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 71, 71, 192) 576 conv2d_4[0][0] __________________________________________________________________________________________________ activation_4 (Activation) (None, 71, 71, 192) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 35, 35, 192) 0 activation_4[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 35, 35, 64) 12288 max_pooling2d_1[0][0] __________________________________________________________________________________________________ batch_normalization_8 (BatchNor (None, 35, 35, 64) 192 conv2d_8[0][0] __________________________________________________________________________________________________ activation_8 (Activation) (None, 35, 35, 64) 0 batch_normalization_8[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 35, 35, 48) 9216 max_pooling2d_1[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 35, 35, 96) 55296 activation_8[0][0] __________________________________________________________________________________________________ batch_normalization_6 (BatchNor (None, 35, 35, 48) 144 conv2d_6[0][0] __________________________________________________________________________________________________ batch_normalization_9 (BatchNor (None, 35, 35, 96) 288 conv2d_9[0][0] __________________________________________________________________________________________________ activation_6 (Activation) (None, 35, 35, 48) 0 batch_normalization_6[0][0] __________________________________________________________________________________________________ activation_9 (Activation) (None, 35, 35, 96) 0 batch_normalization_9[0][0] __________________________________________________________________________________________________ average_pooling2d (AveragePooli (None, 35, 35, 192) 0 max_pooling2d_1[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 35, 35, 64) 12288 max_pooling2d_1[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 35, 35, 64) 76800 activation_6[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 35, 35, 96) 82944 activation_9[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 35, 35, 32) 6144 average_pooling2d[0][0] __________________________________________________________________________________________________ batch_normalization_5 (BatchNor (None, 35, 35, 64) 192 
conv2d_5[0][0] __________________________________________________________________________________________________ batch_normalization_7 (BatchNor (None, 35, 35, 64) 192 conv2d_7[0][0] __________________________________________________________________________________________________ batch_normalization_10 (BatchNo (None, 35, 35, 96) 288 conv2d_10[0][0] __________________________________________________________________________________________________ batch_normalization_11 (BatchNo (None, 35, 35, 32) 96 conv2d_11[0][0] __________________________________________________________________________________________________ activation_5 (Activation) (None, 35, 35, 64) 0 batch_normalization_5[0][0] __________________________________________________________________________________________________ activation_7 (Activation) (None, 35, 35, 64) 0 batch_normalization_7[0][0] __________________________________________________________________________________________________ activation_10 (Activation) (None, 35, 35, 96) 0 batch_normalization_10[0][0] __________________________________________________________________________________________________ activation_11 (Activation) (None, 35, 35, 32) 0 batch_normalization_11[0][0] __________________________________________________________________________________________________ mixed0 (Concatenate) (None, 35, 35, 256) 0 activation_5[0][0] activation_7[0][0] activation_10[0][0] activation_11[0][0] __________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 35, 35, 64) 16384 mixed0[0][0] __________________________________________________________________________________________________ batch_normalization_15 (BatchNo (None, 35, 35, 64) 192 conv2d_15[0][0] __________________________________________________________________________________________________ activation_15 (Activation) (None, 35, 35, 64) 0 batch_normalization_15[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 35, 35, 48) 12288 mixed0[0][0] __________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 35, 35, 96) 55296 activation_15[0][0] __________________________________________________________________________________________________ batch_normalization_13 (BatchNo (None, 35, 35, 48) 144 conv2d_13[0][0] __________________________________________________________________________________________________ batch_normalization_16 (BatchNo (None, 35, 35, 96) 288 conv2d_16[0][0] __________________________________________________________________________________________________ activation_13 (Activation) (None, 35, 35, 48) 0 batch_normalization_13[0][0] __________________________________________________________________________________________________ activation_16 (Activation) (None, 35, 35, 96) 0 batch_normalization_16[0][0] __________________________________________________________________________________________________ average_pooling2d_1 (AveragePoo (None, 35, 35, 256) 0 mixed0[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 35, 35, 64) 16384 mixed0[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 35, 35, 64) 76800 activation_13[0][0] __________________________________________________________________________________________________ conv2d_17 
(Conv2D) (None, 35, 35, 96) 82944 activation_16[0][0] __________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 35, 35, 64) 16384 average_pooling2d_1[0][0] __________________________________________________________________________________________________ batch_normalization_12 (BatchNo (None, 35, 35, 64) 192 conv2d_12[0][0] __________________________________________________________________________________________________ batch_normalization_14 (BatchNo (None, 35, 35, 64) 192 conv2d_14[0][0] __________________________________________________________________________________________________ batch_normalization_17 (BatchNo (None, 35, 35, 96) 288 conv2d_17[0][0] __________________________________________________________________________________________________ batch_normalization_18 (BatchNo (None, 35, 35, 64) 192 conv2d_18[0][0] __________________________________________________________________________________________________ activation_12 (Activation) (None, 35, 35, 64) 0 batch_normalization_12[0][0] __________________________________________________________________________________________________ activation_14 (Activation) (None, 35, 35, 64) 0 batch_normalization_14[0][0] __________________________________________________________________________________________________ activation_17 (Activation) (None, 35, 35, 96) 0 batch_normalization_17[0][0] __________________________________________________________________________________________________ activation_18 (Activation) (None, 35, 35, 64) 0 batch_normalization_18[0][0] __________________________________________________________________________________________________ mixed1 (Concatenate) (None, 35, 35, 288) 0 activation_12[0][0] activation_14[0][0] activation_17[0][0] activation_18[0][0] __________________________________________________________________________________________________ conv2d_22 (Conv2D) (None, 35, 35, 64) 18432 mixed1[0][0] __________________________________________________________________________________________________ batch_normalization_22 (BatchNo (None, 35, 35, 64) 192 conv2d_22[0][0] __________________________________________________________________________________________________ activation_22 (Activation) (None, 35, 35, 64) 0 batch_normalization_22[0][0] __________________________________________________________________________________________________ conv2d_20 (Conv2D) (None, 35, 35, 48) 13824 mixed1[0][0] __________________________________________________________________________________________________ conv2d_23 (Conv2D) (None, 35, 35, 96) 55296 activation_22[0][0] __________________________________________________________________________________________________ batch_normalization_20 (BatchNo (None, 35, 35, 48) 144 conv2d_20[0][0] __________________________________________________________________________________________________ batch_normalization_23 (BatchNo (None, 35, 35, 96) 288 conv2d_23[0][0] __________________________________________________________________________________________________ activation_20 (Activation) (None, 35, 35, 48) 0 batch_normalization_20[0][0] __________________________________________________________________________________________________ activation_23 (Activation) (None, 35, 35, 96) 0 batch_normalization_23[0][0] __________________________________________________________________________________________________ average_pooling2d_2 (AveragePoo (None, 35, 35, 288) 0 mixed1[0][0] 
__________________________________________________________________________________________________ conv2d_19 (Conv2D) (None, 35, 35, 64) 18432 mixed1[0][0] __________________________________________________________________________________________________ conv2d_21 (Conv2D) (None, 35, 35, 64) 76800 activation_20[0][0] __________________________________________________________________________________________________ conv2d_24 (Conv2D) (None, 35, 35, 96) 82944 activation_23[0][0] __________________________________________________________________________________________________ conv2d_25 (Conv2D) (None, 35, 35, 64) 18432 average_pooling2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_19 (BatchNo (None, 35, 35, 64) 192 conv2d_19[0][0] __________________________________________________________________________________________________ batch_normalization_21 (BatchNo (None, 35, 35, 64) 192 conv2d_21[0][0] __________________________________________________________________________________________________ batch_normalization_24 (BatchNo (None, 35, 35, 96) 288 conv2d_24[0][0] __________________________________________________________________________________________________ batch_normalization_25 (BatchNo (None, 35, 35, 64) 192 conv2d_25[0][0] __________________________________________________________________________________________________ activation_19 (Activation) (None, 35, 35, 64) 0 batch_normalization_19[0][0] __________________________________________________________________________________________________ activation_21 (Activation) (None, 35, 35, 64) 0 batch_normalization_21[0][0] __________________________________________________________________________________________________ activation_24 (Activation) (None, 35, 35, 96) 0 batch_normalization_24[0][0] __________________________________________________________________________________________________ activation_25 (Activation) (None, 35, 35, 64) 0 batch_normalization_25[0][0] __________________________________________________________________________________________________ mixed2 (Concatenate) (None, 35, 35, 288) 0 activation_19[0][0] activation_21[0][0] activation_24[0][0] activation_25[0][0] __________________________________________________________________________________________________ conv2d_27 (Conv2D) (None, 35, 35, 64) 18432 mixed2[0][0] __________________________________________________________________________________________________ batch_normalization_27 (BatchNo (None, 35, 35, 64) 192 conv2d_27[0][0] __________________________________________________________________________________________________ activation_27 (Activation) (None, 35, 35, 64) 0 batch_normalization_27[0][0] __________________________________________________________________________________________________ conv2d_28 (Conv2D) (None, 35, 35, 96) 55296 activation_27[0][0] __________________________________________________________________________________________________ batch_normalization_28 (BatchNo (None, 35, 35, 96) 288 conv2d_28[0][0] __________________________________________________________________________________________________ activation_28 (Activation) (None, 35, 35, 96) 0 batch_normalization_28[0][0] __________________________________________________________________________________________________ conv2d_26 (Conv2D) (None, 17, 17, 384) 995328 mixed2[0][0] __________________________________________________________________________________________________ conv2d_29 (Conv2D) (None, 17, 
17, 96) 82944 activation_28[0][0] __________________________________________________________________________________________________ batch_normalization_26 (BatchNo (None, 17, 17, 384) 1152 conv2d_26[0][0] __________________________________________________________________________________________________ batch_normalization_29 (BatchNo (None, 17, 17, 96) 288 conv2d_29[0][0] __________________________________________________________________________________________________ activation_26 (Activation) (None, 17, 17, 384) 0 batch_normalization_26[0][0] __________________________________________________________________________________________________ activation_29 (Activation) (None, 17, 17, 96) 0 batch_normalization_29[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 17, 17, 288) 0 mixed2[0][0] __________________________________________________________________________________________________ mixed3 (Concatenate) (None, 17, 17, 768) 0 activation_26[0][0] activation_29[0][0] max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_34 (Conv2D) (None, 17, 17, 128) 98304 mixed3[0][0] __________________________________________________________________________________________________ batch_normalization_34 (BatchNo (None, 17, 17, 128) 384 conv2d_34[0][0] __________________________________________________________________________________________________ activation_34 (Activation) (None, 17, 17, 128) 0 batch_normalization_34[0][0] __________________________________________________________________________________________________ conv2d_35 (Conv2D) (None, 17, 17, 128) 114688 activation_34[0][0] __________________________________________________________________________________________________ batch_normalization_35 (BatchNo (None, 17, 17, 128) 384 conv2d_35[0][0] __________________________________________________________________________________________________ activation_35 (Activation) (None, 17, 17, 128) 0 batch_normalization_35[0][0] __________________________________________________________________________________________________ conv2d_31 (Conv2D) (None, 17, 17, 128) 98304 mixed3[0][0] __________________________________________________________________________________________________ conv2d_36 (Conv2D) (None, 17, 17, 128) 114688 activation_35[0][0] __________________________________________________________________________________________________ batch_normalization_31 (BatchNo (None, 17, 17, 128) 384 conv2d_31[0][0] __________________________________________________________________________________________________ batch_normalization_36 (BatchNo (None, 17, 17, 128) 384 conv2d_36[0][0] __________________________________________________________________________________________________ activation_31 (Activation) (None, 17, 17, 128) 0 batch_normalization_31[0][0] __________________________________________________________________________________________________ activation_36 (Activation) (None, 17, 17, 128) 0 batch_normalization_36[0][0] __________________________________________________________________________________________________ conv2d_32 (Conv2D) (None, 17, 17, 128) 114688 activation_31[0][0] __________________________________________________________________________________________________ conv2d_37 (Conv2D) (None, 17, 17, 128) 114688 activation_36[0][0] 
__________________________________________________________________________________________________ batch_normalization_32 (BatchNo (None, 17, 17, 128) 384 conv2d_32[0][0] __________________________________________________________________________________________________ batch_normalization_37 (BatchNo (None, 17, 17, 128) 384 conv2d_37[0][0] __________________________________________________________________________________________________ activation_32 (Activation) (None, 17, 17, 128) 0 batch_normalization_32[0][0] __________________________________________________________________________________________________ activation_37 (Activation) (None, 17, 17, 128) 0 batch_normalization_37[0][0] __________________________________________________________________________________________________ average_pooling2d_3 (AveragePoo (None, 17, 17, 768) 0 mixed3[0][0] __________________________________________________________________________________________________ conv2d_30 (Conv2D) (None, 17, 17, 192) 147456 mixed3[0][0] __________________________________________________________________________________________________ conv2d_33 (Conv2D) (None, 17, 17, 192) 172032 activation_32[0][0] __________________________________________________________________________________________________ conv2d_38 (Conv2D) (None, 17, 17, 192) 172032 activation_37[0][0] __________________________________________________________________________________________________ conv2d_39 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_3[0][0] __________________________________________________________________________________________________ batch_normalization_30 (BatchNo (None, 17, 17, 192) 576 conv2d_30[0][0] __________________________________________________________________________________________________ batch_normalization_33 (BatchNo (None, 17, 17, 192) 576 conv2d_33[0][0] __________________________________________________________________________________________________ batch_normalization_38 (BatchNo (None, 17, 17, 192) 576 conv2d_38[0][0] __________________________________________________________________________________________________ batch_normalization_39 (BatchNo (None, 17, 17, 192) 576 conv2d_39[0][0] __________________________________________________________________________________________________ activation_30 (Activation) (None, 17, 17, 192) 0 batch_normalization_30[0][0] __________________________________________________________________________________________________ activation_33 (Activation) (None, 17, 17, 192) 0 batch_normalization_33[0][0] __________________________________________________________________________________________________ activation_38 (Activation) (None, 17, 17, 192) 0 batch_normalization_38[0][0] __________________________________________________________________________________________________ activation_39 (Activation) (None, 17, 17, 192) 0 batch_normalization_39[0][0] __________________________________________________________________________________________________ mixed4 (Concatenate) (None, 17, 17, 768) 0 activation_30[0][0] activation_33[0][0] activation_38[0][0] activation_39[0][0] __________________________________________________________________________________________________ conv2d_44 (Conv2D) (None, 17, 17, 160) 122880 mixed4[0][0] __________________________________________________________________________________________________ batch_normalization_44 (BatchNo (None, 17, 17, 160) 480 conv2d_44[0][0] 
__________________________________________________________________________________________________ activation_44 (Activation) (None, 17, 17, 160) 0 batch_normalization_44[0][0] __________________________________________________________________________________________________ conv2d_45 (Conv2D) (None, 17, 17, 160) 179200 activation_44[0][0] __________________________________________________________________________________________________ batch_normalization_45 (BatchNo (None, 17, 17, 160) 480 conv2d_45[0][0] __________________________________________________________________________________________________ activation_45 (Activation) (None, 17, 17, 160) 0 batch_normalization_45[0][0] __________________________________________________________________________________________________ conv2d_41 (Conv2D) (None, 17, 17, 160) 122880 mixed4[0][0] __________________________________________________________________________________________________ conv2d_46 (Conv2D) (None, 17, 17, 160) 179200 activation_45[0][0] __________________________________________________________________________________________________ batch_normalization_41 (BatchNo (None, 17, 17, 160) 480 conv2d_41[0][0] __________________________________________________________________________________________________ batch_normalization_46 (BatchNo (None, 17, 17, 160) 480 conv2d_46[0][0] __________________________________________________________________________________________________ activation_41 (Activation) (None, 17, 17, 160) 0 batch_normalization_41[0][0] __________________________________________________________________________________________________ activation_46 (Activation) (None, 17, 17, 160) 0 batch_normalization_46[0][0] __________________________________________________________________________________________________ conv2d_42 (Conv2D) (None, 17, 17, 160) 179200 activation_41[0][0] __________________________________________________________________________________________________ conv2d_47 (Conv2D) (None, 17, 17, 160) 179200 activation_46[0][0] __________________________________________________________________________________________________ batch_normalization_42 (BatchNo (None, 17, 17, 160) 480 conv2d_42[0][0] __________________________________________________________________________________________________ batch_normalization_47 (BatchNo (None, 17, 17, 160) 480 conv2d_47[0][0] __________________________________________________________________________________________________ activation_42 (Activation) (None, 17, 17, 160) 0 batch_normalization_42[0][0] __________________________________________________________________________________________________ activation_47 (Activation) (None, 17, 17, 160) 0 batch_normalization_47[0][0] __________________________________________________________________________________________________ average_pooling2d_4 (AveragePoo (None, 17, 17, 768) 0 mixed4[0][0] __________________________________________________________________________________________________ conv2d_40 (Conv2D) (None, 17, 17, 192) 147456 mixed4[0][0] __________________________________________________________________________________________________ conv2d_43 (Conv2D) (None, 17, 17, 192) 215040 activation_42[0][0] __________________________________________________________________________________________________ conv2d_48 (Conv2D) (None, 17, 17, 192) 215040 activation_47[0][0] __________________________________________________________________________________________________ conv2d_49 (Conv2D) (None, 17, 17, 192) 147456 
average_pooling2d_4[0][0] __________________________________________________________________________________________________ batch_normalization_40 (BatchNo (None, 17, 17, 192) 576 conv2d_40[0][0] __________________________________________________________________________________________________ batch_normalization_43 (BatchNo (None, 17, 17, 192) 576 conv2d_43[0][0] __________________________________________________________________________________________________ batch_normalization_48 (BatchNo (None, 17, 17, 192) 576 conv2d_48[0][0] __________________________________________________________________________________________________ batch_normalization_49 (BatchNo (None, 17, 17, 192) 576 conv2d_49[0][0] __________________________________________________________________________________________________ activation_40 (Activation) (None, 17, 17, 192) 0 batch_normalization_40[0][0] __________________________________________________________________________________________________ activation_43 (Activation) (None, 17, 17, 192) 0 batch_normalization_43[0][0] __________________________________________________________________________________________________ activation_48 (Activation) (None, 17, 17, 192) 0 batch_normalization_48[0][0] __________________________________________________________________________________________________ activation_49 (Activation) (None, 17, 17, 192) 0 batch_normalization_49[0][0] __________________________________________________________________________________________________ mixed5 (Concatenate) (None, 17, 17, 768) 0 activation_40[0][0] activation_43[0][0] activation_48[0][0] activation_49[0][0] __________________________________________________________________________________________________ conv2d_54 (Conv2D) (None, 17, 17, 160) 122880 mixed5[0][0] __________________________________________________________________________________________________ batch_normalization_54 (BatchNo (None, 17, 17, 160) 480 conv2d_54[0][0] __________________________________________________________________________________________________ activation_54 (Activation) (None, 17, 17, 160) 0 batch_normalization_54[0][0] __________________________________________________________________________________________________ conv2d_55 (Conv2D) (None, 17, 17, 160) 179200 activation_54[0][0] __________________________________________________________________________________________________ batch_normalization_55 (BatchNo (None, 17, 17, 160) 480 conv2d_55[0][0] __________________________________________________________________________________________________ activation_55 (Activation) (None, 17, 17, 160) 0 batch_normalization_55[0][0] __________________________________________________________________________________________________ conv2d_51 (Conv2D) (None, 17, 17, 160) 122880 mixed5[0][0] __________________________________________________________________________________________________ conv2d_56 (Conv2D) (None, 17, 17, 160) 179200 activation_55[0][0] __________________________________________________________________________________________________ batch_normalization_51 (BatchNo (None, 17, 17, 160) 480 conv2d_51[0][0] __________________________________________________________________________________________________ batch_normalization_56 (BatchNo (None, 17, 17, 160) 480 conv2d_56[0][0] __________________________________________________________________________________________________ activation_51 (Activation) (None, 17, 17, 160) 0 batch_normalization_51[0][0] 
__________________________________________________________________________________________________ activation_56 (Activation) (None, 17, 17, 160) 0 batch_normalization_56[0][0] __________________________________________________________________________________________________ conv2d_52 (Conv2D) (None, 17, 17, 160) 179200 activation_51[0][0] __________________________________________________________________________________________________ conv2d_57 (Conv2D) (None, 17, 17, 160) 179200 activation_56[0][0] __________________________________________________________________________________________________ batch_normalization_52 (BatchNo (None, 17, 17, 160) 480 conv2d_52[0][0] __________________________________________________________________________________________________ batch_normalization_57 (BatchNo (None, 17, 17, 160) 480 conv2d_57[0][0] __________________________________________________________________________________________________ activation_52 (Activation) (None, 17, 17, 160) 0 batch_normalization_52[0][0] __________________________________________________________________________________________________ activation_57 (Activation) (None, 17, 17, 160) 0 batch_normalization_57[0][0] __________________________________________________________________________________________________ average_pooling2d_5 (AveragePoo (None, 17, 17, 768) 0 mixed5[0][0] __________________________________________________________________________________________________ conv2d_50 (Conv2D) (None, 17, 17, 192) 147456 mixed5[0][0] __________________________________________________________________________________________________ conv2d_53 (Conv2D) (None, 17, 17, 192) 215040 activation_52[0][0] __________________________________________________________________________________________________ conv2d_58 (Conv2D) (None, 17, 17, 192) 215040 activation_57[0][0] __________________________________________________________________________________________________ conv2d_59 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_5[0][0] __________________________________________________________________________________________________ batch_normalization_50 (BatchNo (None, 17, 17, 192) 576 conv2d_50[0][0] __________________________________________________________________________________________________ batch_normalization_53 (BatchNo (None, 17, 17, 192) 576 conv2d_53[0][0] __________________________________________________________________________________________________ batch_normalization_58 (BatchNo (None, 17, 17, 192) 576 conv2d_58[0][0] __________________________________________________________________________________________________ batch_normalization_59 (BatchNo (None, 17, 17, 192) 576 conv2d_59[0][0] __________________________________________________________________________________________________ activation_50 (Activation) (None, 17, 17, 192) 0 batch_normalization_50[0][0] __________________________________________________________________________________________________ activation_53 (Activation) (None, 17, 17, 192) 0 batch_normalization_53[0][0] __________________________________________________________________________________________________ activation_58 (Activation) (None, 17, 17, 192) 0 batch_normalization_58[0][0] __________________________________________________________________________________________________ activation_59 (Activation) (None, 17, 17, 192) 0 batch_normalization_59[0][0] __________________________________________________________________________________________________ mixed6 (Concatenate) (None, 17, 17, 
768) 0 activation_50[0][0] activation_53[0][0] activation_58[0][0] activation_59[0][0] __________________________________________________________________________________________________ conv2d_64 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0] __________________________________________________________________________________________________ batch_normalization_64 (BatchNo (None, 17, 17, 192) 576 conv2d_64[0][0] __________________________________________________________________________________________________ activation_64 (Activation) (None, 17, 17, 192) 0 batch_normalization_64[0][0] __________________________________________________________________________________________________ conv2d_65 (Conv2D) (None, 17, 17, 192) 258048 activation_64[0][0] __________________________________________________________________________________________________ batch_normalization_65 (BatchNo (None, 17, 17, 192) 576 conv2d_65[0][0] __________________________________________________________________________________________________ activation_65 (Activation) (None, 17, 17, 192) 0 batch_normalization_65[0][0] __________________________________________________________________________________________________ conv2d_61 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0] __________________________________________________________________________________________________ conv2d_66 (Conv2D) (None, 17, 17, 192) 258048 activation_65[0][0] __________________________________________________________________________________________________ batch_normalization_61 (BatchNo (None, 17, 17, 192) 576 conv2d_61[0][0] __________________________________________________________________________________________________ batch_normalization_66 (BatchNo (None, 17, 17, 192) 576 conv2d_66[0][0] __________________________________________________________________________________________________ activation_61 (Activation) (None, 17, 17, 192) 0 batch_normalization_61[0][0] __________________________________________________________________________________________________ activation_66 (Activation) (None, 17, 17, 192) 0 batch_normalization_66[0][0] __________________________________________________________________________________________________ conv2d_62 (Conv2D) (None, 17, 17, 192) 258048 activation_61[0][0] __________________________________________________________________________________________________ conv2d_67 (Conv2D) (None, 17, 17, 192) 258048 activation_66[0][0] __________________________________________________________________________________________________ batch_normalization_62 (BatchNo (None, 17, 17, 192) 576 conv2d_62[0][0] __________________________________________________________________________________________________ batch_normalization_67 (BatchNo (None, 17, 17, 192) 576 conv2d_67[0][0] __________________________________________________________________________________________________ activation_62 (Activation) (None, 17, 17, 192) 0 batch_normalization_62[0][0] __________________________________________________________________________________________________ activation_67 (Activation) (None, 17, 17, 192) 0 batch_normalization_67[0][0] __________________________________________________________________________________________________ average_pooling2d_6 (AveragePoo (None, 17, 17, 768) 0 mixed6[0][0] __________________________________________________________________________________________________ conv2d_60 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0] 
__________________________________________________________________________________________________ conv2d_63 (Conv2D) (None, 17, 17, 192) 258048 activation_62[0][0] __________________________________________________________________________________________________ conv2d_68 (Conv2D) (None, 17, 17, 192) 258048 activation_67[0][0] __________________________________________________________________________________________________ conv2d_69 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_6[0][0] __________________________________________________________________________________________________ batch_normalization_60 (BatchNo (None, 17, 17, 192) 576 conv2d_60[0][0] __________________________________________________________________________________________________ batch_normalization_63 (BatchNo (None, 17, 17, 192) 576 conv2d_63[0][0] __________________________________________________________________________________________________ batch_normalization_68 (BatchNo (None, 17, 17, 192) 576 conv2d_68[0][0] __________________________________________________________________________________________________ batch_normalization_69 (BatchNo (None, 17, 17, 192) 576 conv2d_69[0][0] __________________________________________________________________________________________________ activation_60 (Activation) (None, 17, 17, 192) 0 batch_normalization_60[0][0] __________________________________________________________________________________________________ activation_63 (Activation) (None, 17, 17, 192) 0 batch_normalization_63[0][0] __________________________________________________________________________________________________ activation_68 (Activation) (None, 17, 17, 192) 0 batch_normalization_68[0][0] __________________________________________________________________________________________________ activation_69 (Activation) (None, 17, 17, 192) 0 batch_normalization_69[0][0] __________________________________________________________________________________________________ mixed7 (Concatenate) (None, 17, 17, 768) 0 activation_60[0][0] activation_63[0][0] activation_68[0][0] activation_69[0][0] __________________________________________________________________________________________________ conv2d_72 (Conv2D) (None, 17, 17, 192) 147456 mixed7[0][0] __________________________________________________________________________________________________ batch_normalization_72 (BatchNo (None, 17, 17, 192) 576 conv2d_72[0][0] __________________________________________________________________________________________________ activation_72 (Activation) (None, 17, 17, 192) 0 batch_normalization_72[0][0] __________________________________________________________________________________________________ conv2d_73 (Conv2D) (None, 17, 17, 192) 258048 activation_72[0][0] __________________________________________________________________________________________________ batch_normalization_73 (BatchNo (None, 17, 17, 192) 576 conv2d_73[0][0] __________________________________________________________________________________________________ activation_73 (Activation) (None, 17, 17, 192) 0 batch_normalization_73[0][0] __________________________________________________________________________________________________ conv2d_70 (Conv2D) (None, 17, 17, 192) 147456 mixed7[0][0] __________________________________________________________________________________________________ conv2d_74 (Conv2D) (None, 17, 17, 192) 258048 activation_73[0][0] 
__________________________________________________________________________________________________ batch_normalization_70 (BatchNo (None, 17, 17, 192) 576 conv2d_70[0][0] __________________________________________________________________________________________________ batch_normalization_74 (BatchNo (None, 17, 17, 192) 576 conv2d_74[0][0] __________________________________________________________________________________________________ activation_70 (Activation) (None, 17, 17, 192) 0 batch_normalization_70[0][0] __________________________________________________________________________________________________ activation_74 (Activation) (None, 17, 17, 192) 0 batch_normalization_74[0][0] __________________________________________________________________________________________________ conv2d_71 (Conv2D) (None, 8, 8, 320) 552960 activation_70[0][0] __________________________________________________________________________________________________ conv2d_75 (Conv2D) (None, 8, 8, 192) 331776 activation_74[0][0] __________________________________________________________________________________________________ batch_normalization_71 (BatchNo (None, 8, 8, 320) 960 conv2d_71[0][0] __________________________________________________________________________________________________ batch_normalization_75 (BatchNo (None, 8, 8, 192) 576 conv2d_75[0][0] __________________________________________________________________________________________________ activation_71 (Activation) (None, 8, 8, 320) 0 batch_normalization_71[0][0] __________________________________________________________________________________________________ activation_75 (Activation) (None, 8, 8, 192) 0 batch_normalization_75[0][0] __________________________________________________________________________________________________ max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 768) 0 mixed7[0][0] __________________________________________________________________________________________________ mixed8 (Concatenate) (None, 8, 8, 1280) 0 activation_71[0][0] activation_75[0][0] max_pooling2d_3[0][0] __________________________________________________________________________________________________ conv2d_80 (Conv2D) (None, 8, 8, 448) 573440 mixed8[0][0] __________________________________________________________________________________________________ batch_normalization_80 (BatchNo (None, 8, 8, 448) 1344 conv2d_80[0][0] __________________________________________________________________________________________________ activation_80 (Activation) (None, 8, 8, 448) 0 batch_normalization_80[0][0] __________________________________________________________________________________________________ conv2d_77 (Conv2D) (None, 8, 8, 384) 491520 mixed8[0][0] __________________________________________________________________________________________________ conv2d_81 (Conv2D) (None, 8, 8, 384) 1548288 activation_80[0][0] __________________________________________________________________________________________________ batch_normalization_77 (BatchNo (None, 8, 8, 384) 1152 conv2d_77[0][0] __________________________________________________________________________________________________ batch_normalization_81 (BatchNo (None, 8, 8, 384) 1152 conv2d_81[0][0] __________________________________________________________________________________________________ activation_77 (Activation) (None, 8, 8, 384) 0 batch_normalization_77[0][0] __________________________________________________________________________________________________ activation_81 (Activation) (None, 8, 8, 
384) 0 batch_normalization_81[0][0] __________________________________________________________________________________________________ conv2d_78 (Conv2D) (None, 8, 8, 384) 442368 activation_77[0][0] __________________________________________________________________________________________________ conv2d_79 (Conv2D) (None, 8, 8, 384) 442368 activation_77[0][0] __________________________________________________________________________________________________ conv2d_82 (Conv2D) (None, 8, 8, 384) 442368 activation_81[0][0] __________________________________________________________________________________________________ conv2d_83 (Conv2D) (None, 8, 8, 384) 442368 activation_81[0][0] __________________________________________________________________________________________________ average_pooling2d_7 (AveragePoo (None, 8, 8, 1280) 0 mixed8[0][0] __________________________________________________________________________________________________ conv2d_76 (Conv2D) (None, 8, 8, 320) 409600 mixed8[0][0] __________________________________________________________________________________________________ batch_normalization_78 (BatchNo (None, 8, 8, 384) 1152 conv2d_78[0][0] __________________________________________________________________________________________________ batch_normalization_79 (BatchNo (None, 8, 8, 384) 1152 conv2d_79[0][0] __________________________________________________________________________________________________ batch_normalization_82 (BatchNo (None, 8, 8, 384) 1152 conv2d_82[0][0] __________________________________________________________________________________________________ batch_normalization_83 (BatchNo (None, 8, 8, 384) 1152 conv2d_83[0][0] __________________________________________________________________________________________________ conv2d_84 (Conv2D) (None, 8, 8, 192) 245760 average_pooling2d_7[0][0] __________________________________________________________________________________________________ batch_normalization_76 (BatchNo (None, 8, 8, 320) 960 conv2d_76[0][0] __________________________________________________________________________________________________ activation_78 (Activation) (None, 8, 8, 384) 0 batch_normalization_78[0][0] __________________________________________________________________________________________________ activation_79 (Activation) (None, 8, 8, 384) 0 batch_normalization_79[0][0] __________________________________________________________________________________________________ activation_82 (Activation) (None, 8, 8, 384) 0 batch_normalization_82[0][0] __________________________________________________________________________________________________ activation_83 (Activation) (None, 8, 8, 384) 0 batch_normalization_83[0][0] __________________________________________________________________________________________________ batch_normalization_84 (BatchNo (None, 8, 8, 192) 576 conv2d_84[0][0] __________________________________________________________________________________________________ activation_76 (Activation) (None, 8, 8, 320) 0 batch_normalization_76[0][0] __________________________________________________________________________________________________ mixed9_0 (Concatenate) (None, 8, 8, 768) 0 activation_78[0][0] activation_79[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 8, 8, 768) 0 activation_82[0][0] activation_83[0][0] __________________________________________________________________________________________________ activation_84 
(Activation) (None, 8, 8, 192) 0 batch_normalization_84[0][0] __________________________________________________________________________________________________ mixed9 (Concatenate) (None, 8, 8, 2048) 0 activation_76[0][0] mixed9_0[0][0] concatenate[0][0] activation_84[0][0] __________________________________________________________________________________________________ conv2d_89 (Conv2D) (None, 8, 8, 448) 917504 mixed9[0][0] __________________________________________________________________________________________________ batch_normalization_89 (BatchNo (None, 8, 8, 448) 1344 conv2d_89[0][0] __________________________________________________________________________________________________ activation_89 (Activation) (None, 8, 8, 448) 0 batch_normalization_89[0][0] __________________________________________________________________________________________________ conv2d_86 (Conv2D) (None, 8, 8, 384) 786432 mixed9[0][0] __________________________________________________________________________________________________ conv2d_90 (Conv2D) (None, 8, 8, 384) 1548288 activation_89[0][0] __________________________________________________________________________________________________ batch_normalization_86 (BatchNo (None, 8, 8, 384) 1152 conv2d_86[0][0] __________________________________________________________________________________________________ batch_normalization_90 (BatchNo (None, 8, 8, 384) 1152 conv2d_90[0][0] __________________________________________________________________________________________________ activation_86 (Activation) (None, 8, 8, 384) 0 batch_normalization_86[0][0] __________________________________________________________________________________________________ activation_90 (Activation) (None, 8, 8, 384) 0 batch_normalization_90[0][0] __________________________________________________________________________________________________ conv2d_87 (Conv2D) (None, 8, 8, 384) 442368 activation_86[0][0] __________________________________________________________________________________________________ conv2d_88 (Conv2D) (None, 8, 8, 384) 442368 activation_86[0][0] __________________________________________________________________________________________________ conv2d_91 (Conv2D) (None, 8, 8, 384) 442368 activation_90[0][0] __________________________________________________________________________________________________ conv2d_92 (Conv2D) (None, 8, 8, 384) 442368 activation_90[0][0] __________________________________________________________________________________________________ average_pooling2d_8 (AveragePoo (None, 8, 8, 2048) 0 mixed9[0][0] __________________________________________________________________________________________________ conv2d_85 (Conv2D) (None, 8, 8, 320) 655360 mixed9[0][0] __________________________________________________________________________________________________ batch_normalization_87 (BatchNo (None, 8, 8, 384) 1152 conv2d_87[0][0] __________________________________________________________________________________________________ batch_normalization_88 (BatchNo (None, 8, 8, 384) 1152 conv2d_88[0][0] __________________________________________________________________________________________________ batch_normalization_91 (BatchNo (None, 8, 8, 384) 1152 conv2d_91[0][0] __________________________________________________________________________________________________ batch_normalization_92 (BatchNo (None, 8, 8, 384) 1152 conv2d_92[0][0] __________________________________________________________________________________________________ conv2d_93 
(Conv2D) (None, 8, 8, 192) 393216 average_pooling2d_8[0][0] __________________________________________________________________________________________________ batch_normalization_85 (BatchNo (None, 8, 8, 320) 960 conv2d_85[0][0] __________________________________________________________________________________________________ activation_87 (Activation) (None, 8, 8, 384) 0 batch_normalization_87[0][0] __________________________________________________________________________________________________ activation_88 (Activation) (None, 8, 8, 384) 0 batch_normalization_88[0][0] __________________________________________________________________________________________________ activation_91 (Activation) (None, 8, 8, 384) 0 batch_normalization_91[0][0] __________________________________________________________________________________________________ activation_92 (Activation) (None, 8, 8, 384) 0 batch_normalization_92[0][0] __________________________________________________________________________________________________ batch_normalization_93 (BatchNo (None, 8, 8, 192) 576 conv2d_93[0][0] __________________________________________________________________________________________________ activation_85 (Activation) (None, 8, 8, 320) 0 batch_normalization_85[0][0] __________________________________________________________________________________________________ mixed9_1 (Concatenate) (None, 8, 8, 768) 0 activation_87[0][0] activation_88[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 8, 8, 768) 0 activation_91[0][0] activation_92[0][0] __________________________________________________________________________________________________ activation_93 (Activation) (None, 8, 8, 192) 0 batch_normalization_93[0][0] __________________________________________________________________________________________________ mixed10 (Concatenate) (None, 8, 8, 2048) 0 activation_85[0][0] mixed9_1[0][0] concatenate_1[0][0] activation_93[0][0] __________________________________________________________________________________________________ avg_pool (GlobalAveragePooling2 (None, 2048) 0 mixed10[0][0] ================================================================================================== Total params: 21,802,784 Trainable params: 21,768,352 Non-trainable params: 34,432 __________________________________________________________________________________________________
###Markdown
Creating the training set
###Code
def encodeImage(img):
    # Resize all images to a standard size (specified by the image
    # encoding network)
    img = img.resize((WIDTH, HEIGHT), Image.ANTIALIAS)
    # Convert a PIL image to a numpy array
    x = tensorflow.keras.preprocessing.image.img_to_array(img)
    # Add a batch dimension
    x = np.expand_dims(x, axis=0)
    # Perform any preprocessing needed by InceptionV3 or others
    x = preprocess_input(x)
    # Call InceptionV3 (or other) to extract the smaller feature set
    # (the encoding vector) for the image
    x = encode_model.predict(x)
    # Shape to the correct form to be accepted by the LSTM captioning
    # network
    x = np.reshape(x, OUTPUT_DIM)
    return x
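# Quick sanity check (added for illustration; assumes Drive is mounted and
# train_img is non-empty as above):
# sample_img = tensorflow.keras.preprocessing.image.load_img(
#     os.path.join(root_captioning, 'flicker8k_dataset', train_img[0]),
#     target_size=(HEIGHT, WIDTH))
# print(encodeImage(sample_img).shape)  # expected: (2048,)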
# Nicely formatted time string
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return f"{h}:{m:>02}:{s:>05.2f}"

train_path = os.path.join(root_captioning, "flicker8k_dataset",
                          f'train{OUTPUT_DIM}.pkl')
start = time()
encoding_train = {}
for id in tqdm(train_img):
    image_path = os.path.join(root_captioning, 'flicker8k_dataset', id)
    img = tensorflow.keras.preprocessing.image.load_img(
        image_path, target_size=(HEIGHT, WIDTH))
    encoding_train[id] = encodeImage(img)
with open(train_path, "wb") as fp:
    pickle.dump(encoding_train, fp)
print(f"\nGenerating training set took: {hms_string(time()-start)}")

encoding_train

test_path = os.path.join(root_captioning, "flicker8k_dataset",
                         f'test{OUTPUT_DIM}.pkl')
start = time()
encoding_test = {}
for id in tqdm(test_img):
    image_path = os.path.join(root_captioning, 'flicker8k_dataset', id)
    img = tensorflow.keras.preprocessing.image.load_img(
        image_path, target_size=(HEIGHT, WIDTH))
    encoding_test[id] = encodeImage(img)
with open(test_path, "wb") as fp:
    pickle.dump(encoding_test, fp)
print(f"\nGenerating testing set took: {hms_string(time()-start)}")

all_train_captions = []
for key, val in train_descriptions.items():
    for cap in val:
        all_train_captions.append(cap)
len(all_train_captions)

# Keep only words that occur at least word_count_threshold times
word_count_threshold = 10
word_counts = {}
nsents = 0
for sent in all_train_captions:
    nsents += 1
    for w in sent.split(' '):
        word_counts[w] = word_counts.get(w, 0) + 1
vocab = [w for w in word_counts if word_counts[w] >= word_count_threshold]
print('preprocessed words %d ==> %d' % (len(word_counts), len(vocab)))

idxtoword = {}
wordtoidx = {}
ix = 1
for w in vocab:
    wordtoidx[w] = ix
    idxtoword[ix] = w
    ix += 1
vocab_size = len(idxtoword) + 1
vocab_size

# Allow for the startseq and endseq tokens added to each caption
max_length += 2
print(max_length)
###Output
34
###Markdown
Using a Data Generator
###Code
def data_generator(descriptions, photos, wordtoidx,
                   max_length, num_photos_per_batch):
    # x1 - Training data for photos
    # x2 - The caption that goes with each photo
    # y - The predicted rest of the caption
    x1, x2, y = [], [], []
    n = 0
    while True:
        for key, desc_list in descriptions.items():
            n += 1
            photo = photos[key + '.jpg']
            # Each photo has 5 descriptions
            for desc in desc_list:
                # Convert each word into a list of sequences.
                seq = [wordtoidx[word] for word in desc.split(' ')
                       if word in wordtoidx]
                # Generate a training case for every possible sequence
                # and outcome
                for i in range(1, len(seq)):
                    in_seq, out_seq = seq[:i], seq[i]
                    in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
                    out_seq = to_categorical([out_seq],
                                             num_classes=vocab_size)[0]
                    x1.append(photo)
                    x2.append(in_seq)
                    y.append(out_seq)
            if n == num_photos_per_batch:
                yield ([np.array(x1), np.array(x2)], np.array(y))
                x1, x2, y = [], [], []
                n = 0
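# Illustrative check (added): draw a single batch from the generator and
# inspect its shapes; assumes encoding_train and the vocabulary built above.
# gen = data_generator(train_descriptions, encoding_train, wordtoidx,
#                      max_length, 3)
# (b_photos, b_seqs), b_targets = next(gen)
# print(b_photos.shape, b_seqs.shape, b_targets.shape)
# # roughly (num_cases, 2048) (num_cases, 34) (num_cases, vocab_size)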
        seq = [wordtoidx[word] for word in desc.split(' ') \
               if word in wordtoidx]
        # Generate a training case for every possible sequence and outcome
        for i in range(1, len(seq)):
          in_seq, out_seq = seq[:i], seq[i]
          in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
          out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
          x1.append(photo)
          x2.append(in_seq)
          y.append(out_seq)
      if n==num_photos_per_batch:
        yield ([np.array(x1), np.array(x2)], np.array(y))
        x1, x2, y = [], [], []
        n=0
###Output
_____no_output_____
###Markdown
Loading the Glove Embedding
###Code
embeddings_index = {}
f = open(os.path.join(root_captioning, 'glove.6B.200d.txt'), encoding="utf-8")
for line in tqdm(f):
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
print(f'Found {len(embeddings_index)} word vectors.')
###Output
400000it [00:34, 11520.90it/s]
###Markdown
Building the Neural Network
###Code
embedding_dim = 200

# Get the 200-dim dense GloVe vector for each word in our vocabulary
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in wordtoidx.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        # Words not found in the embedding index will be all zeros
        embedding_matrix[i] = embedding_vector
embedding_matrix.shape
inputs1 = Input(shape=(OUTPUT_DIM,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
caption_model = Model(inputs=[inputs1, inputs2], outputs=outputs)
embedding_dim
caption_model.summary()
caption_model.layers[2].set_weights([embedding_matrix])
caption_model.layers[2].trainable = False
caption_model.compile(loss='categorical_crossentropy', optimizer='adam')
from keras.models import model_from_json
from keras.models import load_model
caption_model.save("/content/drive/MyDrive/Image captioning data/caption-model.hdf5")
###Output
_____no_output_____
###Markdown
Train the Neural Network
###Code
number_pics_per_batch = 3
steps = len(train_descriptions)//number_pics_per_batch
model_path = os.path.join(root_captioning,'caption-model.hdf5')
if not os.path.exists(model_path):
  for i in tqdm(range(EPOCHS*2)):
      generator = data_generator(train_descriptions, encoding_train,
                    wordtoidx, max_length, number_pics_per_batch)
      caption_model.fit_generator(generator, epochs=1,
                      steps_per_epoch=steps, verbose=1)
  # Drop the learning rate and raise the batch size for a second round of training
  caption_model.optimizer.lr = 1e-4
  number_pics_per_batch = 6
  steps = len(train_descriptions)//number_pics_per_batch
  for i in range(EPOCHS):
      generator = data_generator(train_descriptions, encoding_train,
                    wordtoidx, max_length, number_pics_per_batch)
      caption_model.fit_generator(generator, epochs=1,
                      steps_per_epoch=steps, verbose=1)
  caption_model.save_weights(model_path)
  print(f"\nTraining took: {hms_string(time()-start)}")
else:
  caption_model.load_weights(model_path)
###Output
_____no_output_____
###Markdown
Generating Captions
###Code
def generateCaption(photo):
    in_text = START
    for i in range(max_length):
        sequence = [wordtoidx[w] for w in in_text.split() if w in wordtoidx]
        sequence = pad_sequences([sequence], maxlen=max_length)
        yhat = caption_model.predict([photo,sequence], verbose=0)
        yhat = np.argmax(yhat)
        word = idxtoword[yhat]
        in_text += ' ' + word
        if word == STOP:
            break
    final = in_text.split()
    final = final[1:-1]
    final = ' '.join(final)
    return final
###Output
_____no_output_____
###Markdown
Evaluate Performance on Test Data from Flickr8k
###Code
for z in range(2): # set higher to see more examples
  pic = list(encoding_test.keys())[z]
  image = encoding_test[pic].reshape((1,OUTPUT_DIM))
  print(os.path.join(root_captioning,'flicker8k_dataset', pic))
  x=plt.imread(os.path.join(root_captioning,'flicker8k_dataset', pic))
  plt.imshow(x)
  plt.show()
  print("Caption:",generateCaption(image))
  print("_____________________________________")
encoding_test[pic].shape
###Output
_____no_output_____
###Markdown
Image Captioning (Soumitra Dnyaneshwar Edake)
Auto Image Caption Generator
Steps:
- Feature Extraction
- Descriptions Generation
- Model Training
- Model Evaluation
- Caption Generator

Initial Step
###Code
#imports
import os
import numpy as np
from numpy import array
from time import time
from pickle import dump
from pickle import load
import string

from keras import Input, Model
from keras.backend import set_value
from keras.layers import Dropout, Embedding, Dense, LSTM, add
from keras.utils import to_categorical
from keras.applications.inception_v3 import InceptionV3
from keras.applications.inception_v3 import preprocess_input
from keras_preprocessing.image import load_img, img_to_array
from keras_preprocessing.sequence import pad_sequences

import matplotlib.pyplot as plt
from PIL import Image
import scipy
import scipy.misc
import scipy.cluster
###Output
Bad key "text.kerning_factor" on line 4 in
D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test_patch.mplstyle.
You probably need to get an updated matplotlibrc file from
https://github.com/matplotlib/matplotlib/blob/v3.1.3/matplotlibrc.template
or from the matplotlib source distribution
###Markdown
Define Paths to appropriate directories and files
###Code
# input paths
path_dataset = "dataset\\flicker8k-dataset\\Flickr8k_Dataset\\Flicker8k_Dataset\\"
path_tokens = "dataset\\flicker8k-dataset\\Flickr8k_text\\Flickr8k.token.txt"
path_train_set = "dataset\\flicker8k-dataset\\Flickr8k_text\\Flickr_8k.trainImages.txt"
path_test_set = "dataset\\flicker8k-dataset\\Flickr8k_text\\Flickr_8k.testImages.txt"
path_glove_txt = "dataset\\pre-trained-glove\\glove.6B.200d.txt"

# output paths
path_desc = "descriptions.txt"
path_extracted_train_features = "extracted_train_features.enc"
path_extracted_test_features = "extracted_test_features.enc"
###Output
_____no_output_____
###Markdown
The lines below help us avoid the Keras ***scratch graph*** error
###Code
import tensorflow as tf

# enable GPU memory growth so TensorFlow does not grab all GPU memory at once
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
###Output
_____no_output_____
###Markdown
1. Feature Extraction
Features are extracted from every image once, and the results for the train and test sets are each saved to a pickled file.
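As a sketch of the caching pattern this enables (the guard below is an illustration, not part of the original pipeline), a re-run can skip the expensive extraction entirely:
###Code
# Hypothetical cache check (illustrative only): reload the saved features if
# they already exist, otherwise fall through to the extraction loop below.
if os.path.exists(path_extracted_train_features):
    cached_train_features = load(open(path_extracted_train_features, 'rb'))
    print('Loaded', len(cached_train_features), 'cached train feature vectors')
###Output
_____no_output_____
###Markdown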
Define the modified InceptionV3 model
###Code
# We need InceptionV3 only to extract features; that is why we drop its last layer.
model = InceptionV3(weights='imagenet')
model_popped = Model(inputs=model.input, outputs=model.layers[-2].output)
# To open sets
def set_opener(path):
    load_set = open(path, 'r')
    data = load_set.readlines()
    load_set.close()
    return data
# pre-processing and feature extraction
def feature_extractor(image, in_model):
    img = load_img(image, target_size=(299, 299))
    x = img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    ext_ft = in_model.predict(x)
    ext_ft = np.reshape(ext_ft, ext_ft.shape[1])
    return ext_ft
# get a set of all train images
train_images = set_opener(path_train_set)
# get a set of all test images
test_images = set_opener(path_test_set)
print("Length: Train:", len(train_images))
print("Length: Test: ", len(test_images))
all_sets = [train_images, test_images]
outputs = [path_extracted_train_features, path_extracted_test_features]
total_count = 0
# set initial time
start_time = time()
for i, dataset in enumerate(all_sets):
    count = 0
    features_encoded = dict()
    for name in dataset:
        count += 1
        name = name.strip()
        image_path = path_dataset + name
        feature_vector = feature_extractor(image_path, model_popped)
        image_name = name.split('.')[0]
        features_encoded[image_name] = feature_vector
        print('> Processing {}/{} : {}'.format(count, len(dataset), name))
    total_count += count
    # store to file
    dump(features_encoded, open(outputs[i], 'wb'))
    print("\nFeatures extracted :", len(features_encoded))
    print('Features saved to :', outputs[i], end='\n\n')
print("Total Features Extracted :", total_count)
print("Processing Time :", time() - start_time, "sec")
###Output
> Processing 1/6000 : 2513260012_03d33305cf.jpg
> Processing 2/6000 : 2903617548_d3e38d7f88.jpg
> Processing 3/6000 : 3338291921_fe7ae0c8f8.jpg
> Processing 4/6000 : 488416045_1c6d903fe0.jpg
> Processing 5/6000 : 2644326817_8f45080b87.jpg
...
> Processing 170/6000 : 47870024_73a4481f7d.jpg
> Processing 171/6000 : 3165826902_6bf9c4bdb2.jpg
###Markdown
Two files, ***extracted_train_features.enc*** and ***extracted_test_features.enc***, are created. These files store the features extracted from each set respectively.

2.
Descriptions Generating ###Code # load descriptions descriptions_tokens = open(path_tokens, 'r') raw_descriptions = descriptions_tokens.read() def load_descriptions(file_name): desc_mappings = dict() for line in file_name.split('\n'): tokens = line.split() if len(line) < 2: continue image_name, image_desc = tokens[0], tokens[1:] image_name = image_name.split('.')[0] image_desc = ' '.join(image_desc) if image_name not in desc_mappings: desc_mappings[image_name] = list() desc_mappings[image_name].append(image_desc) return desc_mappings def clean_descriptions(descriptions): table = str.maketrans('', '', string.punctuation) for key, desc_list in descriptions.items(): for i in range(len(desc_list)): desc = desc_list[i] desc = desc.split() desc = [word.lower() for word in desc] desc = [w.translate(table) for w in desc] desc = [word for word in desc if len(word) > 1] desc = [word for word in desc if word.isalpha()] desc_list[i] = ' '.join(desc) return descriptions def save_descriptions(descriptions, file_name): count = 0 lines = list() for key, desc_list in descriptions.items(): for desc in desc_list: lines.append(key + ' ' + desc) count += 1 data = '\n'.join(lines) file = open(file_name, 'w') file.write(data) file.close() return count # parse descriptions all_descriptions = load_descriptions(raw_descriptions) print('Images: %d ' % len(all_descriptions)) # clean descriptions all_descriptions = clean_descriptions(all_descriptions) # save to file count = save_descriptions(all_descriptions, path_desc) print('Descriptions :', count) print('File saved to :', path_desc) ###Output Descriptions : 40460 File saved to : descriptions.txt ###Markdown 3. Model Training 3.1 Define Functions and initiate pre training stage ###Code def pick_load(path): file = open(path, "rb") data = load(file) file.close() return data def desc_loader(filename): load_desc = open(filename, 'r') data = load_desc.read() load_desc.close() return data def load_set(filename): doc = desc_loader(filename) dataset = list() for line in doc.split('\n'): if len(line) < 1: continue i_name = line.split('.')[0] dataset.append(i_name) return set(dataset) def load_clean_descriptions(filename, dataset): doc = desc_loader(filename) descriptions = dict() for line in doc.split('\n'): tokens = line.split() image_id, image_desc = tokens[0], tokens[1:] if image_id in dataset: if image_id not in descriptions: descriptions[image_id] = list() desc = '<start> ' + ' '.join(image_desc) + ' <end>' descriptions[image_id].append(desc) return descriptions def caption_creator(descriptions): captions = [] for key, val in descriptions.items(): for cap in val: captions.append(cap) return captions def to_lines(descriptions): all_desc = list() for key in descriptions.keys(): [all_desc.append(d) for d in descriptions[key]] return all_desc def get_max_length(descriptions): lines = to_lines(descriptions) return max(len(d.split()) for d in lines) train_features = pick_load(path_extracted_train_features) train = load_set(path_train_set) train_descriptions = load_clean_descriptions(path_desc, train) print('Train Samples: %d' % len(train_descriptions)) all_train_captions = caption_creator(train_descriptions) print('Total Captions:', len(all_train_captions)) max_length = get_max_length(train_descriptions) print('Description Length: %d' % max_length) ###Output Description Length: 34 ###Markdown 3.2 Load Embeddings ###Code def get_all_set(directory_path): dataset_all = os.listdir(directory_path) all_set = list() for line in dataset_all: if len(line) < 1: continue i_name = 
line.split('.')[0] all_set.append(i_name) return set(all_set) def minimize_words_count(captions): word_threshold = 10 word_counts = dict() words_used = 0 for word in captions: words_used += 1 for w in word.split(): word_counts[w] = word_counts.get(w, 0) + 1 vocab = [w for w in word_counts if word_counts[w] >= word_threshold] print('Minimized Vocabulary (Words) : %d -> %d' % (len(word_counts) + 1, len(vocab) + 1)) int_to_word_mappings = dict() word_to_int_mappings = dict() integer = 1 for w in vocab: word_to_int_mappings[w] = integer int_to_word_mappings[integer] = w integer += 1 vocab_size = len(int_to_word_mappings) + 1 data = vocab_size, word_to_int_mappings, int_to_word_mappings save_path = 'token_mappings.tk' dump(data, open(save_path, 'wb')) def load_mappings(): save_path = 'token_mappings.tk' while True: if os.path.exists(save_path): print('Old Word to Vector embeddings found, ' 'Loading them!') return pick_load(save_path) else: print('No Old Word to Vector embeddings found, ' 'Creating a new one!') all_set = get_all_set(path_dataset) all_descriptions = load_clean_descriptions(path_desc, all_set) all_captions = [] for key, val in all_descriptions.items(): for cap in val: all_captions.append(cap) minimize_words_count(all_captions) vocab_size, word_to_int, int_to_word = load_mappings() def emb_load(vocab_size, word_to_int): embeddings_index = {} f = open(path_glove_txt, encoding="utf-8") for line in f: values = line.split() word = values[0] coefficients = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefficients f.close() print('Found %s word vectors' % len(embeddings_index)) embedding_dim = 200 embedding_matrix = np.zeros((vocab_size, embedding_dim)) for word, i in word_to_int.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector return embedding_dim, embedding_matrix print('Loading Glove Word2Vec model, please wait...') embedding_dim, embedding_matrix = emb_load(vocab_size, word_to_int) ###Output Loading Glove Word2Vec model, please wait... 
Found 400000 word vectors ###Markdown 3.3 Train a Model ###Code def create_model(vocab_size, embedding_dim, embedding_matrix, max_length): # LSTM Model inputs_image = Input(shape=(2048,)) feature_layer_1 = Dropout(0.2)(inputs_image) feature_layer_2 = Dense(256, activation='relu')(feature_layer_1) inputs_sequence = Input(shape=(max_length,)) sequence_layer_1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs_sequence) sequence_layer_2 = Dropout(0.2)(sequence_layer_1) sequence_layer_3 = LSTM(256)(sequence_layer_2) decoder1 = add([feature_layer_2, sequence_layer_3]) decoder2 = Dense(256, activation='relu')(decoder1) outputs = Dense(vocab_size, activation='softmax')(decoder2) model = Model(inputs=[inputs_image, inputs_sequence], outputs=outputs) model.layers[2].set_weights([embedding_matrix]) model.layers[2].trainable = False model.compile(loss='categorical_crossentropy', optimizer='adam') return model def data_generator(descriptions, image, word_to_int, max_length, num_photos_per_batch, vocab_size): list_photos = list() list_in_seq = list() list_out_seq = list() n = 0 while True: for key, desc_list in descriptions.items(): n += 1 photo = image[key] for desc in desc_list: seq = [word_to_int[word] for word in desc.split(' ') if word in word_to_int] for i in range(1, len(seq)): in_seq, out_seq = seq[:i], seq[i] in_seq = pad_sequences([in_seq], maxlen=max_length)[0] out_seq = to_categorical([out_seq], num_classes=vocab_size)[0] list_photos.append(photo) list_in_seq.append(in_seq) list_out_seq.append(out_seq) if n == num_photos_per_batch: yield [[array(list_photos), array(list_in_seq)], array(list_out_seq)] list_photos, list_in_seq, list_out_seq = list(), list(), list() n = 0 def train_model(idn, model, epochs, model_parameters_alpha, model_parameters_omega): train_descriptions = model_parameters_alpha[0] train_features = model_parameters_alpha[1] word_to_int = model_parameters_alpha[2] max_length = model_parameters_alpha[3] vocab_size = model_parameters_alpha[4] number_pics_per_bath = model_parameters_omega[0] steps = model_parameters_omega[1] if len(model_parameters_omega) == 3: extras = model_parameters_omega[2] set_value(model.optimizer.lr, extras[0]) for i in range(epochs): generator = data_generator(train_descriptions, train_features, word_to_int, max_length, number_pics_per_bath, vocab_size ) history = model.fit_generator(generator, epochs=1, steps_per_epoch=steps, verbose=1, ) # pull out metrics from the model loss = history.history.get('loss')[0] # model naming model_name = 'model_' + str(idn) + '_' + str(i) + '_(loss_%.3f' % loss + ').h5' # saving the model to local storage model.save(str(model_name)) print('\nModel saved : ' + model_name, end="\n\n") use_model = create_model(vocab_size, embedding_dim, embedding_matrix, max_length) ###Output _____no_output_____ ###Markdown Defining Training Parameters ###Code epochs = 10 number_pics_per_bath = 3 steps = len(train_descriptions) model_parameters_alpha = [train_descriptions, train_features, word_to_int, max_length, vocab_size] model_parameters_omega = [number_pics_per_bath, steps] ###Output _____no_output_____ ###Markdown The ACTUAL Training Process ###Code train_model(1, use_model, epochs, model_parameters_alpha, model_parameters_omega) ###Output Epoch 1/1 6000/6000 [==============================] - 242s 40ms/step - loss: 3.4276 Model saved : model_1_0_(loss_3.431).h5 Epoch 1/1 6000/6000 [==============================] - 243s 40ms/step - loss: 2.8096 Model saved : model_1_1_(loss_2.816).h5 Epoch 1/1 6000/6000 
[==============================] - 241s 40ms/step - loss: 2.5633 Model saved : model_1_2_(loss_2.571).h5 Epoch 1/1 6000/6000 [==============================] - 244s 41ms/step - loss: 2.4074 Model saved : model_1_3_(loss_2.416).h5 Epoch 1/1 6000/6000 [==============================] - 246s 41ms/step - loss: 2.3031 Model saved : model_1_4_(loss_2.312).h5 Epoch 1/1 6000/6000 [==============================] - 237s 40ms/step - loss: 2.2288 Model saved : model_1_5_(loss_2.238).h5 Epoch 1/1 6000/6000 [==============================] - 240s 40ms/step - loss: 2.1760 Model saved : model_1_6_(loss_2.185).h5 Epoch 1/1 6000/6000 [==============================] - 242s 40ms/step - loss: 2.1348 Model saved : model_1_7_(loss_2.144).h5 Epoch 1/1 6000/6000 [==============================] - 238s 40ms/step - loss: 2.1015 Model saved : model_1_8_(loss_2.111).h5 Epoch 1/1 6000/6000 [==============================] - 237s 39ms/step - loss: 2.0741 Model saved : model_1_9_(loss_2.084).h5
###Markdown
Model Evaluation
###Code
def evaluate_model(eval_model, descriptions, features, max_length, word_to_int, int_to_word):
    actual, predicted = list(), list()
    count = 0
    for key, desc_list in descriptions.items():
        # generate description
        count += 1
        print('Eval Progress : {}/{}'.format(count, len(descriptions)))
        y_hat = pred_caption_greedy(features[key], eval_model, max_length, word_to_int, int_to_word)
        # store actual and predicted
        references = [d.split() for d in desc_list]
        actual.append(references)
        predicted.append(y_hat.split())
    # calculate BLEU score
    print('BLEU-1: %f' % corpus_bleu(actual, predicted, weights=(1.0, 0, 0, 0)))
    print('BLEU-2: %f' % corpus_bleu(actual, predicted, weights=(0.5, 0.5, 0, 0)))
    print('BLEU-3: %f' % corpus_bleu(actual, predicted, weights=(0.3, 0.3, 0.3, 0)))
    print('BLEU-4: %f' % corpus_bleu(actual, predicted, weights=(0.25, 0.25, 0.25, 0.25)))
###Output
_____no_output_____
###Markdown
We use a greedy decoding method to build a caption
###Code
def pred_caption_greedy(photo, model, max_length, word_to_int, int_to_word):
    photo = np.array(photo)
    photo = np.expand_dims(photo, axis=0)
    in_text = '<start>'
    for i in range(max_length):
        sequence = [word_to_int[w] for w in in_text.split() if w in word_to_int]
        sequence = pad_sequences([sequence], maxlen=max_length)
        y_hat = model.predict([photo, sequence], verbose=0)
        y_hat = np.argmax(y_hat)
        word = int_to_word[y_hat]
        in_text += ' ' + word
        if word == '<end>':
            break
    pred_caption = in_text.split()
    pred_caption = pred_caption[1:-1]
    pred_caption = ' '.join(pred_caption)
    return pred_caption
test_features = load(open(path_extracted_test_features, "rb"))
test = load_set(path_test_set)
test_descriptions = load_clean_descriptions(path_desc, test)
print('Test Samples: %d' % len(test_descriptions))
###Output
Test Samples: 1000
###Markdown
Load a model and run the evaluation on it
###Code
from keras.engine.saving import load_model
from nltk.translate.bleu_score import corpus_bleu
use_model = load_model('model_1_9_(loss_2.084).h5')
evaluate_model(use_model, test_descriptions, test_features, max_length, word_to_int, int_to_word)
###Output
Eval Progress : 1/1000
Eval Progress : 2/1000
Eval Progress : 3/1000
Eval Progress : 4/1000
Eval Progress : 5/1000
Eval Progress : 6/1000
Eval Progress : 7/1000
Eval Progress : 8/1000
Eval Progress : 9/1000
Eval Progress : 10/1000
Eval Progress : 11/1000
Eval Progress : 12/1000
Eval Progress : 13/1000
Eval Progress : 14/1000
Eval Progress : 15/1000
Eval Progress : 16/1000
Eval Progress : 17/1000
Eval Progress : 18/1000
Eval Progress : 19/1000
Eval Progress : 20/1000
Eval Progress : 21/1000
Eval Progress : 22/1000
...
Eval Progress : 305/1000
Eval Progress : 306/1000
Eval Progress :
307/1000 Eval Progress : 308/1000 Eval Progress : 309/1000 Eval Progress : 310/1000 Eval Progress : 311/1000 Eval Progress : 312/1000 Eval Progress : 313/1000 Eval Progress : 314/1000 Eval Progress : 315/1000 Eval Progress : 316/1000 Eval Progress : 317/1000 Eval Progress : 318/1000 Eval Progress : 319/1000 Eval Progress : 320/1000 Eval Progress : 321/1000 Eval Progress : 322/1000 Eval Progress : 323/1000 Eval Progress : 324/1000 Eval Progress : 325/1000 Eval Progress : 326/1000 Eval Progress : 327/1000 Eval Progress : 328/1000 Eval Progress : 329/1000 Eval Progress : 330/1000 Eval Progress : 331/1000 Eval Progress : 332/1000 Eval Progress : 333/1000 ###Markdown Caption Generator ###Code os.listdir() def get_avg(inp): size = len(inp) tot = 0 for i in inp: tot += i return tot / size def get_dominant_color(image): clusters = 5 im = Image.open(image) im = im.resize((150, 150)) ar = np.asarray(im) shape = ar.shape ar = ar.reshape(scipy.product(shape[:2]), shape[2]).astype(float) codes, dist = scipy.cluster.vq.kmeans(ar, clusters) vec, dist = scipy.cluster.vq.vq(ar, codes) counts, bins = scipy.histogram(vec, len(codes)) index_max = scipy.argmax(counts) peak_color = codes[index_max] return peak_color def process_text(text): pro_txt = '' word = "" for i in range(len(text)): word += text[i] if i % max_length == 0 and i != 0: pro_txt += '\n' if text[i] == ' ': pro_txt += word word = '' if word != '': pro_txt += word return pro_txt def draw(image_name, text): img = plt.imread(image_name) fig, ax = plt.subplots() plt.imshow(img) ax.spines['top'].set_visible(False) ax.spines['left'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['right'].set_visible(False) ax.set_xticks([]) ax.set_yticks([]) txt = process_text(text) lines = txt.split('\n') max_val = 0 for line in lines: if max_val < len(line): max_val = len(line) plot_shape = plt.rcParams["figure.figsize"] plot_width = plot_shape[0] fs = int((plot_width / max_val) * 100) if fs not in range(10, 21): fs = 16 b_color = get_dominant_color(image_name) b_color = [x / 255.0 for x in b_color] f_color = get_avg(b_color) if f_color > 0.5: f_color = 'black' else: f_color = 'white' plt.xlabel(txt, fontsize=fs, style='italic', color=f_color, bbox=dict(facecolor=b_color, edgecolor='white', alpha=0.9, boxstyle='round'), labelpad=9) plt.show() image_name = "image_sample.JPG" img = feature_extractor(image_name, model_popped) pred_caption = pred_caption_greedy(img, use_model, max_length, word_to_int, int_to_word) draw(image_name, pred_caption) print("\nInput :", image_name) print("Caption :", pred_caption) ###Output D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\ipykernel_launcher.py:14: DeprecationWarning: scipy.product is deprecated and will be removed in SciPy 2.0.0, use numpy.product instead D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\ipykernel_launcher.py:17: DeprecationWarning: scipy.histogram is deprecated and will be removed in SciPy 2.0.0, use numpy.histogram instead D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\ipykernel_launcher.py:18: DeprecationWarning: scipy.argmax is deprecated and will be removed in SciPy 2.0.0, use numpy.argmax instead
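###Markdown
The DeprecationWarnings above come from `scipy.product`, `scipy.histogram`, and `scipy.argmax`. A drop-in fix (shown here as a sketch, not the code that produced the figure) is to use the NumPy equivalents the warnings recommend:
###Code
# Sketch of get_dominant_color with the deprecated scipy aliases replaced by
# their NumPy counterparts; the clustering logic is otherwise unchanged.
def get_dominant_color_np(image, clusters=5):
    im = Image.open(image)
    im = im.resize((150, 150))
    ar = np.asarray(im)
    shape = ar.shape
    ar = ar.reshape(np.prod(shape[:2]), shape[2]).astype(float)
    codes, dist = scipy.cluster.vq.kmeans(ar, clusters)
    vec, dist = scipy.cluster.vq.vq(ar, codes)
    counts, bins = np.histogram(vec, len(codes))
    return codes[np.argmax(counts)]
###Output
_____no_output_____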
step2_1_model1_predict_save.ipynb
###Markdown
Split the data (overly long articles are split up; 404 pages are handled), run it through the first model to predict names, and save the results
###Code
# train_full_content.csv is the table provided by the competition organizers, filled in by web scraping
df = pd.read_csv('train_full_content.csv' ,encoding='utf-8',index_col=0)
df.head()
# hyperlink: the article link  # article: the full text  # name_list: the list of AML person names
name_list = df.name_list
all_name_list = []
for i in range(len(name_list)):
    ii = [j for j in name_list.iloc[i][1:-1].replace(" ", "").split("'") if len(j)>=2]
    s = []
    for k in ii:
        #print(k)
        s.append(k)
    all_name_list.append(s)
df['name_list'] = all_name_list

def split_content(x):
    # Chunks overlap by a few characters so a name that falls on a boundary
    # is less likely to be cut in half.
    if len(x)<=500:
        return [x]
    elif (len(x)>=500) and (len(x)<1000):
        return [ x[:500+3], x[500-6:] ]
    elif (len(x)>=1000) and (len(x)<1500):
        return [ x[:500+3], x[500-3:1000+3], x[1000-3:] ]
    elif (len(x)>=1500) and (len(x)<2000):
        return [ x[:500+3], x[500-3:1000+3], x[1000-3:1500+3], x[1500-3:] ]
    else:
        return [ x[:500+3], x[500-3:1000+3], x[1000-3:1500+3], x[1500-3:2000-3], x[2000-3:2000-3+500] ]

# Split long articles into chunks of roughly 500 characters; the resulting list holds three to five segments per article
df['article_split'] = df['article'].apply(lambda x: split_content(x))
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import BertTokenizer, BertConfig
from keras.preprocessing.sequence import pad_sequences #2.2.4
from sklearn.model_selection import train_test_split
from tqdm import tqdm, trange
tokenizer_chinese = BertTokenizer.from_pretrained("bert-base-chinese", do_lower_case=False)
# define the model's tag classes
tag_values = ['O', 'B_person_name', 'M_person_name', 'E_person_name', 'PAD']
tag2idx = { 'O': 0, 'B_person_name': 1, 'M_person_name': 2, 'E_person_name': 3, 'PAD': 4}
# load model
PATH = 'step1_1output_bertmode_step1_ner.pth'#'bertmode_asia.pth'
model = torch.load(PATH)
model.eval()
# find all the person names in each article
cols_name = []
for p in range(len(df)):
    row_name = []
    for sentence in df['article_split'].iloc[p]:
        # BERT prediction
        tokenized_sentence = tokenizer_chinese.encode(sentence)
        input_ids = torch.tensor([tokenized_sentence]).cuda()
        with torch.no_grad():
            output = model(input_ids)
        label_indices = np.argmax(output[0].to('cpu').numpy(), axis=2)
        tokens = tokenizer_chinese.convert_ids_to_tokens(input_ids.to('cpu').numpy()[0])
        new_tokens, new_labels = [], []
        for token, label_idx in zip(tokens, label_indices[0]):
            if token.startswith("##"):
                new_tokens[-1] = new_tokens[-1] + token[2:]
            else:
                new_labels.append(tag_values[label_idx]) # ex: ['O','O','O','O',...]
                new_tokens.append(token) # ex: ['[CLS]', '益', '公', '司', '債', '或', '新',...]
        texto = ''
        for i in range(len(new_labels)):
            if new_labels[i] != 'O':
                texto += new_tokens[i]
            else:
                texto += 'O'
        # ex: 'OOO張堯勇OOOOOOOOOOOOO'
        for i in texto.split('O'):
            if len(i)>1: # ex: ['張堯勇', '張堯勇']; single characters and empty strings are dropped
                row_name.append(i)
    uniq_name = list(set(row_name)) # ex: ['鄭心芸', '巴菲特', '詹姆斯·西蒙斯', '堯勇', '索羅斯', '張堯勇']
    cols_name.append(uniq_name)
list(set(row_name))
cols_name[:10]
df['all_name'] = cols_name
df.head()
import pickle
df.to_pickle("step2_1_output_train_full_data.pkl")
###Output
_____no_output_____
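###Markdown
A quick sanity check one could add at this point (not in the original notebook): compare the predicted names in `all_name` against the labelled `name_list` with a lenient substring match to estimate recall.
###Code
# Rough recall estimate for the NER pass (sketch only; substring matching is
# deliberately loose so partial names like '堯勇' still count as hits).
hits, total = 0, 0
for true_names, pred_names in zip(df['name_list'], df['all_name']):
    total += len(true_names)
    hits += sum(any(t in p or p in t for p in pred_names) for t in true_names)
print('approximate recall: {:.2%}'.format(hits / total if total else 0.0))
###Output
_____no_output_____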
Final/DATA643_Final_Project.ipynb
###Markdown DATA 643 - Final Project Sreejaya Nair and Suman K Polavarapu Description: *Explore the Apache Spark Cluster Computing Framework by analysing the movielens dataset. Provide recommendations using MLLib* ###Code import os import sys import urllib2 import collections import matplotlib.pyplot as plt import math from time import time, sleep %pylab inline ###Output Populating the interactive namespace from numpy and matplotlib ###Markdown Prepare the pySpark Environment ###Code spark_home = os.environ.get('SPARK_HOME', None) if not spark_home: raise ValueError("Please set SPARK_HOME environment variable!") # Add the py4j to the path. sys.path.insert(0, os.path.join(spark_home, 'python')) sys.path.insert(0, os.path.join(spark_home, 'C:/spark/python/lib/py4j-0.9-src.zip')) ###Output _____no_output_____ ###Markdown Initialize Spark Context ###Code from pyspark.mllib.recommendation import ALS, Rating from pyspark import SparkConf, SparkContext conf = SparkConf().setMaster("local[*]").setAppName("MovieRecommendationsALS").set("spark.executor.memory", "2g") sc = SparkContext(conf = conf) ###Output _____no_output_____ ###Markdown Load and Analyse Data ###Code def loadMovieNames(): movieNames = {} for line in urllib2.urlopen("https://raw.githubusercontent.com/psumank/DATA643/master/WK5/ml-100k/u.item"): fields = line.split('|') movieNames[int(fields[0])] = fields[1].decode('ascii', 'ignore') return movieNames print "\nLoading movie names..." nameDict = loadMovieNames() print "\nLoading ratings data..." data = sc.textFile("file:///C:/Users/p_sum/.ipynb_checkpoints/ml-100k/u.data") ratings = data.map(lambda x: x.split()[2]) #action -- just to trigger the driver [ lazy evaluation ] rating_results = ratings.countByValue() sortedResults = collections.OrderedDict(sorted(rating_results.items())) for key, value in sortedResults.iteritems(): print "%s %i" % (key, value) ###Output 1 6110 2 11370 3 27145 4 34174 5 21201 ###Markdown Ratings Histogram ###Code ratPlot = plt.bar(range(len(sortedResults)), sortedResults.values(), align='center') plt.xticks(range(len(sortedResults)), list(sortedResults.keys())) ratPlot[3].set_color('g') print "Ratings Histogram" ###Output Ratings Histogram ###Markdown Most popular movies ###Code movies = data.map(lambda x: (int(x.split()[1]), 1)) movieCounts = movies.reduceByKey(lambda x, y: x + y) flipped = movieCounts.map( lambda (x, y) : (y, x)) sortedMovies = flipped.sortByKey(False) sortedMoviesWithNames = sortedMovies.map(lambda (count, movie) : (nameDict[movie], count)) results = sortedMoviesWithNames.collect() subset = results[0:10] popular_movieNm = [str(i[0]) for i in subset] popularity_strength = [int(i[1]) for i in subset] popMovplot = plt.barh(range(len(subset)), popularity_strength, align='center') plt.yticks(range(len(subset)), popular_movieNm) popMovplot[0].set_color('g') print "Most Popular Movies from the Dataset" ###Output Most Popular Movies from the Dataset ###Markdown Similar Movies Find similar movies for a given movie using cosine similarity ###Code ratingsRDD = data.map(lambda l: l.split()).map(lambda l: (int(l[0]), (int(l[1]), float(l[2])))) ratingsRDD.takeOrdered(10, key = lambda x: x[0]) ratingsRDD.take(4) # Movies rated by same user. 
==> [ user ID ==> ( (movieID, rating), (movieID, rating)) ] userJoinedRatings = ratingsRDD.join(ratingsRDD) userJoinedRatings.takeOrdered(10, key = lambda x: x[0]) # Remove dups def filterDups( (userID, ratings) ): (movie1, rating1) = ratings[0] (movie2, rating2) = ratings[1] return movie1 < movie2 uniqueUserJoinedRatings = userJoinedRatings.filter(filterDups) uniqueUserJoinedRatings.takeOrdered(10, key = lambda x: x[0]) # Now key by (movie1, movie2) pairs ==> (movie1, movie2) => (rating1, rating2) def makeMovieRatingPairs((user, ratings)): (movie1, rating1) = ratings[0] (movie2, rating2) = ratings[1] return ((movie1, movie2), (rating1, rating2)) moviePairs = uniqueUserJoinedRatings.map(makeMovieRatingPairs) moviePairs.takeOrdered(10, key = lambda x: x[0]) #collect all ratings for each movie pair and compute similarity. (movie1, movie2) = > (rating1, rating2), (rating1, rating2) ... moviePairRatings = moviePairs.groupByKey() moviePairRatings.takeOrdered(10, key = lambda x: x[0]) #Compute Similarity def cosineSimilarity(ratingPairs): numPairs = 0 sum_xx = sum_yy = sum_xy = 0 for ratingX, ratingY in ratingPairs: sum_xx += ratingX * ratingX sum_yy += ratingY * ratingY sum_xy += ratingX * ratingY numPairs += 1 numerator = sum_xy denominator = sqrt(sum_xx) * sqrt(sum_yy) score = 0 if (denominator): score = (numerator / (float(denominator))) return (score, numPairs) moviePairSimilarities = moviePairRatings.mapValues(cosineSimilarity).cache() moviePairSimilarities.takeOrdered(10, key = lambda x: x[0]) ###Output _____no_output_____ ###Markdown Lets find similar movies for Toy Story (Movie ID: 1) ###Code scoreThreshold = 0.97 coOccurenceThreshold = 50 inputMovieID = 1 #Toy Story. # Filter for movies with this sim that are "good" as defined by our quality thresholds. filteredResults = moviePairSimilarities.filter(lambda((pair,sim)): \ (pair[0] == inputMovieID or pair[1] == inputMovieID) and sim[0] > scoreThreshold and sim[1] > coOccurenceThreshold) #Top 10 by quality score. results = filteredResults.map(lambda((pair,sim)): (sim, pair)).sortByKey(ascending = False).take(10) print "Top 10 similar movies for " + nameDict[inputMovieID] for result in results: (sim, pair) = result # Display the similarity result that isn't the movie we're looking at similarMovieID = pair[0] if (similarMovieID == inputMovieID): similarMovieID = pair[1] print nameDict[similarMovieID] + "\tscore: " + str(sim[0]) + "\tstrength: " + str(sim[1]) ###Output Top 10 similar movies for Toy Story (1995) Hamlet (1996) score: 0.974543871512 strength: 67 Raiders of the Lost Ark (1981) score: 0.974084217219 strength: 273 Cinderella (1950) score: 0.974002987747 strength: 105 Winnie the Pooh and the Blustery Day (1968) score: 0.973415495885 strength: 58 Cool Hand Luke (1967) score: 0.97334234772 strength: 98 Great Escape, The (1963) score: 0.973270581613 strength: 77 African Queen, The (1951) score: 0.973151271508 strength: 101 Apollo 13 (1995) score: 0.972395120538 strength: 207 12 Angry Men (1957) score: 0.971987295102 strength: 81 Wrong Trousers, The (1993) score: 0.971814306667 strength: 90 ###Markdown Recommender using MLLib Training the recommendation model ###Code ratings = data.map(lambda l: l.split()).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2]))).cache() ratings.take(3) ratings.take(1)[0] nratings = ratings.count() nUsers = ratings.keys().distinct().count() nMovies = ratings.values().distinct().count() print "We have Got %d ratings from %d users on %d movies." 
% (nratings, nUsers, nMovies) # Build the recommendation model using Alternating Least Squares #Train a matrix factorization model given an RDD of ratings given by users to items, in the form of #(userID, itemID, rating) pairs. We approximate the ratings matrix as the product of two lower-rank matrices #of a given rank (number of features). To solve for these features, we run a given number of iterations of ALS. #The level of parallelism is determined automatically based on the number of partitions in ratings. #Our ratings are in the form of ==> [userid, (movie id, rating)] ==> [ (1, (61, 4.0)), (1, (189, 3.0)) etc. ] start = time() seed = 5L iterations = 10 rank = 8 model = ALS.train(ratings, rank, iterations) duration = time() - start print "Model trained in %s seconds" % round(duration,3) ###Output Model trained in 4.084 seconds ###Markdown Recommendations ###Code #Lets recommend movies for the user id - 2 userID = 2 print "\nTop 10 recommendations:" recommendations = model.recommendProducts(userID, 10) for recommendation in recommendations: print nameDict[int(recommendation[1])] + \ " score " + str(recommendation[2]) ###Output Top 10 recommendations: Angel Baby (1995) score 7.30157994119 Burnt By the Sun (1994) score 5.91702154482 Horseman on the Roof, The (Hussard sur le toit, Le) (1995) score 5.91615270541 Duoluo tianshi (1995) score 5.72715083338 Alphaville (1965) score 5.71454149871 Boys, Les (1997) score 5.65218523752 Whole Wide World, The (1996) score 5.57786180842 Funny Face (1957) score 5.53967043305 Ruling Class, The (1972) score 5.48367186049 Once Were Warriors (1994) score 5.48150506587
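###Markdown
One way to sanity-check the factorization (a sketch following the standard MLlib pattern, not something run in this notebook) is to score the training ratings themselves and compute the mean squared error:
###Code
# Predict a rating for every (user, movie) pair we trained on and compare
# against the observed rating. Rating objects index as (user, product, rating).
user_products = ratings.map(lambda r: (r[0], r[1]))
predictions = model.predictAll(user_products).map(lambda r: ((r[0], r[1]), r[2]))
rates_and_preds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)
mse = rates_and_preds.map(lambda r: (r[1][0] - r[1][1])**2).mean()
print "Training MSE = %f" % mse
###Output
_____no_output_____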
models/Classifiers-SBC-ICA-Dogs-3-c3.ipynb
###Markdown Data Preparation and loading ###Code from __future__ import print_function, division import torch import torch.nn as nn import torch.optim as optim from torch.optim import lr_scheduler import numpy as np import torchvision from torchvision import datasets, models, transforms import matplotlib.pyplot as plt import time import os import copy plt.ion() # interactive mode # Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]), } data_dir = 'Stanford Dogs_3/c3' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=64, shuffle=True, num_workers=2) for x in ['train', 'val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") import pandas as pd train_results = pd.DataFrame(columns=['model', 'epoch', 'epoch_loss', 'epoch_acc']) val_results = pd.DataFrame(columns=['model', 'epoch', 'epoch_loss', 'epoch_acc']) model_times = pd.DataFrame(columns=['Model', 'Time']) def train_model(model, model_name, criterion, optimizer, scheduler, num_epochs=25): since = time.time() best_model_wts = copy.deepcopy(model.state_dict()) best_acc = 0.0 for epoch in range(num_epochs): print('Epoch {}/{}'.format(epoch, num_epochs - 1)) print('-' * 10) # Each epoch has a training and validation phase for phase in ['train', 'val']: if phase == 'train': model.train() # Set model to training mode else: model.eval() # Set model to evaluate mode running_loss = 0.0 running_corrects = 0 # Iterate over data. 
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history only if in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            if phase == 'train':
                scheduler.step()

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            if(phase == 'train'):
                train_results.loc[len(train_results.index)] = [model_name, epoch, float("{:.4f}".format(epoch_loss)), float("{:.4f}".format(epoch_acc))]
            elif(phase == 'val'):
                val_results.loc[len(val_results.index)] = [model_name, epoch, float("{:.4f}".format(epoch_loss)), float("{:.4f}".format(epoch_acc))]

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    model_times.loc[len(model_times.index)] = [model_name, str('{:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))]

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model

def visualize_model(model, num_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plt.figure()

    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders['val']):
            inputs = inputs.to(device)
            labels = labels.to(device)

            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)

            for j in range(inputs.size()[0]):
                images_so_far += 1
                ax = plt.subplot(num_images//2, 2, images_so_far)
                ax.axis('off')
                ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                # NOTE: imshow refers to the image-display helper from the PyTorch
                # transfer-learning tutorial; it is not defined in this notebook.
                imshow(inputs.cpu().data[j])

                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)
###Output
_____no_output_____
###Markdown
VGG19 with BN
###Code
torch.cuda.empty_cache()
model_ft = models.vgg19_bn(pretrained=True)
model_name = "VGG-19"
num_ftrs = model_ft.classifier[6].in_features
# Replace the final layer: the output size is set to 10, one unit per class.
# It can also be generalized as nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 10)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=20)
###Output
_____no_output_____
###Markdown
VGG16 with BN
###Code
torch.cuda.empty_cache()
model_ft = models.vgg16_bn(pretrained=True)
model_name = "VGG-16"
num_ftrs = model_ft.classifier[6].in_features
# Replace the final layer: the output size is set to 10, one unit per class.
# It can also be generalized as nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 10)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=20)
###Output
_____no_output_____
###Markdown
ResNet50
###Code
torch.cuda.empty_cache()
model_ft = models.resnet50(pretrained=True)
model_name = "ResNet50"
num_ftrs = model_ft.fc.in_features
# Replace the final layer: the output size is set to 10, one unit per class.
# It can also be generalized as nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 10)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=20)
###Output
_____no_output_____
###Markdown
ResNeXt50
###Code
torch.cuda.empty_cache()
model_ft = models.resnext50_32x4d(pretrained=True)
model_name = "ResNeXt50"
num_ftrs = model_ft.fc.in_features
# Replace the final layer: the output size is set to 10, one unit per class.
# It can also be generalized as nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 10)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=20)
###Output
_____no_output_____
###Markdown
AlexNet
###Code
torch.cuda.empty_cache()
model_ft = models.alexnet(pretrained=True)
model_name = "AlexNet"
num_ftrs = model_ft.classifier[6].in_features
# Replace the final layer: the output size is set to 10, one unit per class.
# It can also be generalized as nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 10)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
                       num_epochs=20)
train_results.to_excel('train_results_dogs_3_c3.xlsx')
val_results.to_excel('val_results_dogs_3_c3.xlsx')
model_times.to_excel('training_times_dogs_3_c3.xlsx')
###Output
_____no_output_____
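###Markdown
With the three DataFrames written out, a small follow-up cell (a sketch, not part of the original run) can compare validation accuracy across the five architectures:
###Code
# Plot validation accuracy per epoch for each model recorded in val_results.
fig, ax = plt.subplots(figsize=(8, 5))
for model_name, grp in val_results.groupby('model'):
    ax.plot(grp['epoch'], grp['epoch_acc'], label=model_name)
ax.set_xlabel('Epoch')
ax.set_ylabel('Validation accuracy')
ax.legend()
plt.show()
###Output
_____no_output_____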
python/archive/cosyne_figures.ipynb
###Markdown OT-based image alignment ###Code %load_ext autoreload %autoreload 2 %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.ndimage import affine_transform from scipy.stats import multivariate_normal from scipy.io import loadmat from otimage import readers, imagerep, imagereg from otimage.utils import plot_maxproj idx = range(2, 8) img_path = '/home/mn2822/Desktop/WormOT/data/zimmer/raw/mCherry_v00065-00115.hdf5' out_dir = '/home/mn2822/Desktop/WormOT/cosyne_figs' with readers.ZimmerReader(img_path) as reader: for i in idx: img = reader.get_frame(i) plt.figure() plot_maxproj(img) plt.axis('off') plt.savefig(f'{out_dir}/frame_{i}.png') # Select frames t1 = 6 t2 = t1 + 1 # Load two successive frames from dataset img_path = '/home/mn2822/Desktop/WormOT/data/zimmer/raw/mCherry_v00065-00115.hdf5' with readers.ZimmerReader(img_path) as reader: frame_1 = reader.get_frame(t1) frame_2 = reader.get_frame(t2) img_shape = frame_1.shape # Load MP components n_mps = 50 mp_path = '/home/mn2822/Desktop/WormOT/data/zimmer/mp_components/mp_0000_0050.mat' mp_data = loadmat(mp_path) cov = mp_data['cov'] pts_1 = mp_data['means'][t1, 0:n_mps, :] pts_2 = mp_data['means'][t2, 0:n_mps, :] wts_1 = mp_data['weights'][t1, 0:n_mps, 0] wts_2 = mp_data['weights'][t2, 0:n_mps, 0] alpha, beta, _ = imagereg.ot_reg_linear(pts_1, pts_2, wts_1, wts_2) # Apply linear transform to first frame to reconstruct frame at time t inv_beta = np.linalg.inv(beta) inv_alpha = -inv_beta @ alpha rec_img = affine_transform(frame_1, inv_beta, inv_alpha, mode='nearest') # MP reconstruction #rec_pts_t = reg_data['rec_pts'][t, :, :].astype(int) #rec_img_t = imagerep.reconstruct_image(rec_pts_t, [cov], wts_0, img_shape) #plt.figure(figsize=(15, 15)) #plt.subplot(131) #plot_maxproj(frame_1) #plt.title(f'frame {t1}') #plt.axis('off') #plt.subplot(132) #plot_maxproj(frame_2) #plt.title(f'frame {t2}') #plt.axis('off') #plt.subplot(133) #plot_maxproj(rec_img) #plt.title(f'frame {t2} (reconstruction)'); #plt.axis('off') plt.figure() plot_maxproj(rec_img) plt.axis('off') plt.savefig(f'{out_dir}/trans_{t1}_{t2}.png') ###Output _____no_output_____
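###Markdown
A note on the inversion above: `scipy.ndimage.affine_transform` pulls each *output* coordinate `o` back to the input coordinate `matrix @ o + offset`. So if `ot_reg_linear` returns `(alpha, beta)` for a forward map `y = beta @ x + alpha` (as the variable names suggest), warping `frame_1` forward requires `matrix = inv(beta)` and `offset = -inv(beta) @ alpha`, which is exactly the `inv_beta` / `inv_alpha` pair computed above. A minimal round-trip check on synthetic values, independent of the imaging data:
###Code
import numpy as np

rng = np.random.default_rng(0)
beta = np.eye(3) + 0.05 * rng.standard_normal((3, 3))  # near-identity affine matrix
alpha = rng.standard_normal(3)

inv_beta = np.linalg.inv(beta)
inv_alpha = -inv_beta @ alpha

x = rng.standard_normal(3)
y = beta @ x + alpha
# pulling y back through (inv_beta, inv_alpha) recovers x
assert np.allclose(inv_beta @ y + inv_alpha, x)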
code/Taking_A_Step_Back.ipynb
###Markdown So in explore_SLFV_GP.ipynb, I tried a bunch of different things on a VERY big lightcurve. But I think I'm getting ahead of myself, so I'm gonna take a step back here... ###Code import numpy as np import pandas as pd from TESStools import * import os import warnings from multiprocessing import Pool, cpu_count from scipy.stats import multivariate_normal from tqdm.notebook import tqdm import h5py as h5 import pymc3 as pm import pymc3_ext as pmx import aesara_theano_fallback.tensor as tt from celerite2.theano import terms, GaussianProcess from pymc3_ext.utils import eval_in_model import arviz as az import exoplanet print(f"exoplanet.__version__ = '{exoplanet.__version__}'") from aesara_theano_fallback import __version__ as tt_version from celerite2 import __version__ as c2_version pm.__version__, pmx.__version__, tt_version, c2_version ###Output _____no_output_____ ###Markdown Ok here is our example data we're going to be working with. It's almost two years of TESS observations, with a year in between them ###Code cool_sgs = pd.read_csv('sample.csv',index_col=0) example = cool_sgs[cool_sgs['CommonName']=='HD 269953'] tic = example.index[0] lc, lc_smooth = lc_extract(get_lc_from_id(tic), smooth=128) time, flux, err = lc['Time'].values, lc['Flux'].values, lc['Err'].values ###Output _____no_output_____ ###Markdown Let's parse the lightcurve into TESS Sectors. ###Code orbit_times = pd.read_csv('../data/orbit_times_20210629_1340.csv',skiprows=5) sector_group = orbit_times.groupby('Sector') sector_starts = sector_group['Start TJD'].min() sector_ends = sector_group['End TJD'].max() sectors = pd.DataFrame({'Sector':sector_starts.index,'Start TJD':sector_starts.values,'End TJD':sector_ends.values}) fig = plt.figure(dpi=300) plt.scatter(time, flux, s=1, c='k') for i,row in sectors.iterrows(): plt.axvline(x=row['Start TJD'], c='C0') plt.axvline(x=row['End TJD'], c='C3') plt.text(0.5*(row['Start TJD']+row['End TJD']),1.007,int(row['Sector'])) sector_lcs = [] for i,row in sectors.iterrows(): sec_lc = lc[(lc['Time']>=row['Start TJD'])&(lc['Time']<=row['End TJD'])] if len(sec_lc) > 0: sec_lc.insert(3,'Sector',np.tile(int(row['Sector']),len(sec_lc))) sector_lcs.append(sec_lc) lc_new = pd.concat(sector_lcs) lc_new all_sectors = np.unique(lc_new['Sector']) this_sector = lc_new[lc_new['Sector'] == all_sectors[0]] this_sector this_time, this_flux, this_err = this_sector['Time'].values, this_sector['Flux'].values, this_sector['Err'].values pseudo_NF = 0.5 / (np.mean(np.diff(this_time))) rayleigh = 1.0 / (this_time.max() - this_time.min()) ls = LombScargle(this_time,this_flux,dy=this_err,) freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF) power /= len(this_time) fig, ax = plt.subplots(2, 1, dpi=300) ax[0].scatter(this_time, this_flux,s=1,c='k') ax[0].plot(lc_smooth['Time'],lc_smooth['Flux'],c='C2') ax[0].set(xlim=(this_time.min(),this_time.max())) ax[1].loglog(freq, power) ###Output _____no_output_____ ###Markdown Let's fit the GP to this! ###Code # Here's a cute function that does that, but the mean can be any number of sinusoids! 
def pm_fit_gp_sin(time, flux, err, fs=None, amps=None, phases=None, model=None, return_var=False, thin=50): """ Use PyMC3 to do a maximum likelihood fit for a GP + multiple periodic signals Inputs ------ time : array-like Times of observations flux : array-like Observed fluxes err : array-like Observational uncertainties fs : array-like, elements are PyMC3 distributions Array with frequencies to fit, default None (i.e., only the GP is fit) amps : array-like, elements are PyMC3 distributions Array with amplitudes to fit, default None (i.e., only the GP is fit) phases : array-like, elements are PyMC3 distributions Array with phases to fit, default None (i.e., only the GP is fit) model : `pymc3.model.Model` PyMC3 Model object, will fail unless given return_var : bool, default True If True, returns the variance of the GP thin : integer, default 50 Calculate the variance of the GP every `thin` points. Returns ------- map_soln : dict Contains best-fit parameters and the gp predictions logp : float The log-likelihood of the model bic : float The Bayesian Information Criterion, -2 ln P + m ln N var : float If `return_var` is True, returns the variance of the GP """ assert model is not None, "Must provide a PyMC3 model object" #Step 1: Mean model mean_flux = pm.Normal("mean_flux", mu = 1.0, sigma=np.std(flux)) if fs is not None: #Making a callable for celerite mean_model = tt.sum([a * tt.sin(2.0*np.pi*f*time + phi) for a,f,phi in zip(amps,fs,phases)],axis=0) + mean_flux #And add it to the model pm.Deterministic("mean", mean_model) else: mean_model = mean_flux mean = pm.Deterministic("mean", mean_flux) #Step 2: Compute Lomb-Scargle Periodogram pseudo_NF = 0.5 / (np.mean(np.diff(time))) rayleigh = 1.0 / (time.max() - time.min()) ls = LombScargle(time,flux) freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF) power /= len(time) #Step 3: Do the basic peridogram fit to guess nu_char and alpha_0 popt, pcov, resid = fit_red_noise(freq, power) a0, tau_char, gamma, aw = popt nu_char = 1.0/(2*np.pi*tau_char) # A jitter term describing excess white noise (analogous to C_w) log_jitter = pm.Uniform("log_jitter", lower=np.log(aw)-15, upper=np.log(aw)+15, testval=np.log(np.median(np.abs(np.diff(flux))))) # A term to describe the SLF variability # sigma is the standard deviation of the GP, tau roughly corresponds to the #breakoff in the power spectrum. rho and tau are related by a factor of #pi/Q (the quality factor) #guesses for our parameters omega_0_guess = 2*np.pi*nu_char Q_guess = 1/np.sqrt(2) sigma_guess = a0 * np.sqrt(omega_0_guess*Q_guess) * np.power(np.pi/2.0, 0.25) #sigma logsigma = pm.Uniform("log_sigma", lower=np.log(sigma_guess)-10, upper=np.log(sigma_guess)+10) sigma = pm.Deterministic("sigma",tt.exp(logsigma)) #rho (characteristic timescale) logrho = pm.Uniform("log_rho", lower=np.log(0.01/nu_char), upper=np.log(100.0/nu_char)) rho = pm.Deterministic("rho", tt.exp(logrho)) nuchar = pm.Deterministic("nu_char", 1.0 / rho) #tau (damping timescale) logtau = pm.Uniform("log_tau", lower=np.log(0.01*2.0*Q_guess/omega_0_guess),upper=np.log(100.0*2.0*Q_guess/omega_0_guess)) tau = pm.Deterministic("tau", tt.exp(logtau)) nudamp = pm.Deterministic("nu_damp", 1.0 / tau) #We also want to track Q, as it's a good estimate of how stochastic the #process is. 
Q = pm.Deterministic("Q", np.pi*tau/rho) kernel = terms.SHOTerm(sigma=sigma, rho=rho, tau=tau) gp = GaussianProcess( kernel, t=time, diag=err ** 2.0 + tt.exp(2 * log_jitter), quiet=True, ) # Compute the Gaussian Process likelihood and add it into the # the PyMC3 model as a "potential" gp.marginal("gp", observed=flux-mean_model) # Compute the mean model prediction for plotting purposes pm.Deterministic("pred", gp.predict(flux-mean_model)) # Optimize to find the maximum a posteriori parameters map_soln = pmx.optimize() logp = model.logp(map_soln) # parameters are tau, sigma, Q/rho, mean, jitter, plus 3 per frequency (rho is fixed) if fs is not None: n_par = 5.0 + (3.0 * len(fs)) else: n_par = 5.0 bic = -2.0*logp + n_par * np.log(len(time)) #compute variance as well... if return_var: eval_in_model(gp.compute(time[::thin],yerr=err[::thin]), map_soln) mu, var = eval_in_model(gp.predict(flux[::thin], t=time[::thin], return_var=True), map_soln) return map_soln, logp, bic, var return map_soln, logp, bic with pm.Model() as model: map_soln, logp, bic = pm_fit_gp_sin(this_time, this_flux, this_err, model=model) fig = plt.figure(dpi=300) plt.scatter(this_time, this_flux, c='k', s=1) plt.plot(this_time, map_soln['pred']+map_soln['mean_flux']) plt.scatter(this_time, resid_flux,c='k',s=1) resid_flux = this_flux - (map_soln['pred']+map_soln['mean_flux']) ls_resid = LombScargle(this_time,resid_flux,dy=this_err,) freq_r,power_r=ls_resid.autopower(normalization='psd',maximum_frequency=pseudo_NF) power_r /= len(this_time) fig, ax = plt.subplots(2, 1, dpi=300) ax[0].scatter(this_time, resid_flux,s=1,c='k') ax[0].set(xlim=(this_time.min(),this_time.max())) ax[1].loglog(freq_r, power_r) ###Output _____no_output_____ ###Markdown Let's try this with two sectors of data! ###Code two_sec = lc_new[lc_new['Sector'] < 3] two_sec time, flux, err = lc[['Time','Flux','Err']].values.T time def gp_multisector(lc, fs=None, amps=None, phases=None, model=None, return_var=False, thin=50): """ Use PyMC3 to do a maximum likelihood fit for a GP + multiple periodic signals, but now with a twist: handles multiple sectors! Inputs ------ ls : `pandas.DataFrame` Dataframe containing the lightcurve. Must have Time, Flux, Err, and Sector as columns. fs : array-like, elements are PyMC3 distributions Array with frequencies to fit, default None (i.e., only the GP is fit) amps : array-like, elements are PyMC3 distributions Array with amplitudes to fit, default None (i.e., only the GP is fit) phases : array-like, elements are PyMC3 distributions Array with phases to fit, default None (i.e., only the GP is fit) model : `pymc3.model.Model` PyMC3 Model object, will fail unless given return_var : bool, default True If True, returns the variance of the GP thin : integer, default 50 Calculate the variance of the GP every `thin` points. 
Returns ------- map_soln : dict Contains best-fit parameters and the gp predictions logp : float The log-likelihood of the model bic : float The Bayesian Information Criterion, -2 ln P + m ln N var : float If `return_var` is True, returns the variance of the GP """ assert model is not None, "Must provide a PyMC3 model object" time, flux, err, sectors = lc[['Time','Flux','Err','Sector']].values.T #Step 1: Mean model mean_flux = pm.Normal("mean_flux", mu = 1.0, sigma=np.std(flux)) if fs is not None: #Making a callable for celerite mean_model = tt.sum([a * tt.sin(2.0*np.pi*f*time + phi) for a,f,phi in zip(amps,fs,phases)],axis=0) + mean_flux #And add it to the model pm.Deterministic("mean", mean_model) else: mean_model = mean_flux mean = pm.Deterministic("mean", mean_flux) #Step 2: Compute Lomb-Scargle Periodogram pseudo_NF = 0.5 / (np.mean(np.diff(time))) rayleigh = 1.0 / (time.max() - time.min()) ls = LombScargle(time,flux) freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF) power /= len(time) #Step 3: Do the basic peridogram fit to guess nu_char and alpha_0 popt, pcov, resid = fit_red_noise(freq, power) a0, tau_char, gamma, aw = popt nu_char = 1.0/(2*np.pi*tau_char) # A jitter term per sector describing excess white noise (analogous to C_w) jitters = [pm.Uniform(f"log_jitter_S{int(s)}", lower=np.log(aw)-15, upper=np.log(aw)+15, testval=np.log(np.median(np.abs(np.diff(flux))))) for s in np.unique(sectors)] # A term to describe the SLF variability, shared across sectors #guesses for our parameters omega_0_guess = 2*np.pi*nu_char Q_guess = 1/np.sqrt(2) sigma_guess = a0 * np.sqrt(omega_0_guess*Q_guess) * np.power(np.pi/2.0, 0.25) #sigma logsigma = pm.Uniform("log_sigma", lower=np.log(sigma_guess)-10, upper=np.log(sigma_guess)+10) sigma = pm.Deterministic("sigma",tt.exp(logsigma)) #rho (characteristic timescale) logrho = pm.Uniform("log_rho", lower=np.log(0.01/nu_char), upper=np.log(100.0/nu_char)) rho = pm.Deterministic("rho", tt.exp(logrho)) nuchar = pm.Deterministic("nu_char", 1.0 / rho) #tau (damping timescale) logtau = pm.Uniform("log_tau", lower=np.log(0.01*2.0*Q_guess/omega_0_guess),upper=np.log(100.0*2.0*Q_guess/omega_0_guess)) tau = pm.Deterministic("tau", tt.exp(logtau)) nudamp = pm.Deterministic("nu_damp", 1.0 / tau) #We also want to track Q, as it's a good estimate of how stochastic the #process is. Q = pm.Deterministic("Q", np.pi*tau/rho) kernel = terms.SHOTerm(sigma=sigma, rho=rho, tau=tau) #A number of GP objects with shared hyperparameters gps = [GaussianProcess( kernel, t=time[sectors==s], diag=err[sectors==s] ** 2.0 + tt.exp(2 * j), quiet=True,) for s,j in zip(np.unique(sectors),jitters) ] for s,gp in zip(np.unique(sectors),gps): # Compute the Gaussian Process likelihood and add it into the # the PyMC3 model as a "potential" gp.marginal(f"gp_S{int(s)}", observed=(flux-mean_model)[sectors==s]) # Compute the mean model prediction for plotting purposes pm.Deterministic(f"pred_S{int(s)}", gp.predict((flux-mean_model)[sectors==s])) # Optimize to find the maximum a posteriori parameters map_soln = pmx.optimize() logp = model.logp(map_soln) # parameters are logtau, logsigma, logrho, mean, jitter*n_sectors, plus 3 per frequency (rho is fixed) base_par = 4 + len(np.unique(sectors)) if fs is not None: n_par = base_par + (3.0 * len(fs)) else: n_par = base_par bic = -2.0*logp + n_par * np.log(len(time)) #compute variance as well... 
if return_var: eval_in_model(gp.compute(time[::thin],yerr=err[::thin]), map_soln) mu, var = eval_in_model(gp.predict(flux[::thin], t=time[::thin], return_var=True), map_soln) return map_soln, logp, bic, var return map_soln, logp, bic with pm.Model() as model_m: map_soln, logp, bic = gp_multisector(two_sec, model=model_m) with pm.Model() as model_all: map_soln, logp, bic = gp_multisector(lc_new, model=model_all) ###Output optimizing logp for variables: [log_tau, log_rho, log_sigma, log_jitter_S39, log_jitter_S38, log_jitter_S36, log_jitter_S35, log_jitter_S34, log_jitter_S33, log_jitter_S32, log_jitter_S31, log_jitter_S30, log_jitter_S29, log_jitter_S28, log_jitter_S13, log_jitter_S12, log_jitter_S11, log_jitter_S10, log_jitter_S9, log_jitter_S8, log_jitter_S6, log_jitter_S5, log_jitter_S4, log_jitter_S3, log_jitter_S2, log_jitter_S1, mean_flux]
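###Markdown
For reference, the `(sigma, rho, tau)` parameterization used in both fitting functions maps onto the usual simple-harmonic-oscillator quantities via the relations documented for celerite2's `SHOTerm`: `rho = 2*pi/omega_0` and `tau = 2*Q/omega_0`, which is where the tracked quantity `Q = pi*tau/rho` comes from. A quick numeric check with arbitrary illustrative values:
###Code
import numpy as np

rho, tau = 2.5, 4.0                # illustrative timescales, in days
omega_0 = 2.0 * np.pi / rho        # undamped angular frequency
Q = omega_0 * tau / 2.0            # quality factor implied by the damping timescale
assert np.isclose(Q, np.pi * tau / rho)
print(f"Q = {Q:.3f}")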
notebooks/Vera's Experiments.ipynb
###Markdown New Word2Vec ###Code # print "Generating %d-dim word embedding ..." %ndim # int2ch, ch2int = get_vocab() # ch_lists = [] # quatrains = get_quatrains() # for idx, poem in enumerate(quatrains): # for sentence in poem['sentences']: # ch_lists.append(filter(lambda ch: ch in ch2int, sentence)) # # the i-th characters in the poem, used to boost Dui Zhang # i_characters = [[sentence[j] for sentence in poem['sentences']] for j in range(len(poem['sentences'][0]))] # for characters in i_characters: # ch_lists.append(filter(lambda ch: ch in ch2int, characters)) # if 0 == (idx+1)%10000: # print "[Word2Vec] %d/%d poems have been processed." %(idx+1, len(quatrains)) # print "Hold on. This may take some time ..." # model = models.Word2Vec(ch_lists, size = ndim, min_count = 5) # embedding = uniform(-1.0, 1.0, [VOCAB_SIZE, ndim]) # for idx, ch in enumerate(int2ch): # if ch in model.wv: # embedding[idx,:] = model.wv[ch] # np.save(_w2v_path, embedding) # print "Word embedding is saved." ###Output _____no_output_____
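###Markdown
The archived cell above targets the old gensim API (Python 2 `print` statements and the `size=` keyword). If it were revived, a minimal Python 3 sketch under gensim >= 4 could look like the following; `get_vocab`, `get_quatrains`, `VOCAB_SIZE`, and `_w2v_path` are the notebook's own helpers and are assumed to behave as in the commented code, and the per-position character lists used to boost Dui Zhang are omitted for brevity.
###Code
import numpy as np
from numpy.random import uniform
from gensim import models

def build_embedding(ndim):
    """Sketch of the archived cell under gensim >= 4 (`size=` became `vector_size=`)."""
    int2ch, ch2int = get_vocab()
    ch_lists = []
    for idx, poem in enumerate(get_quatrains()):
        for sentence in poem['sentences']:
            ch_lists.append([ch for ch in sentence if ch in ch2int])
    model = models.Word2Vec(ch_lists, vector_size=ndim, min_count=5)
    embedding = uniform(-1.0, 1.0, [VOCAB_SIZE, ndim])
    for idx, ch in enumerate(int2ch):
        if ch in model.wv:  # KeyedVectors supports membership tests in gensim 4
            embedding[idx, :] = model.wv[ch]
    np.save(_w2v_path, embedding)
    return embedding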
PythonCode/experiments/benchmark_vs_others/tax-credit-data/ipynb/runtime/compute-runtimes.ipynb
###Markdown Prepare the environment-----------------------First we'll import various functions that we'll need for generating the report and configure the environment. ###Code from os.path import join, expandvars, abspath from joblib import Parallel, delayed from tax_credit.framework_functions import (runtime_make_test_data, runtime_make_commands, clock_runtime) ## project_dir should be the directory where you've downloaded (or cloned) the ## tax-credit repository. project_dir = '../..' data_dir = join(project_dir, "data") results_dir = join(project_dir, 'temp_results_runtime') runtime_results = join(results_dir, 'runtime_results.txt') tmpdir = join(results_dir, 'tmp') ref_db_dir = join(project_dir, 'data/ref_dbs/gg_13_8_otus') ref_seqs = join(ref_db_dir, '99_otus_clean.fasta') ref_taxa = join(ref_db_dir, '99_otu_taxonomy_clean.tsv') num_iters = 1 sampling_depths = [1, 4000] #[1] + list(range(2000,10001,2000)) ###Output _____no_output_____ ###Markdown Generate test datasetsSubsample reference sequences to create a series of test datasets and references. ###Code runtime_make_test_data(ref_seqs, tmpdir, sampling_depths) ###Output _____no_output_____ ###Markdown Import to qiime for q2-feature-classifier methods, train scikit-learn classifiers. We do not include the training step in the runtime analysis, because under normal operating conditions a reference dataset will be trained once, then re-used many times for any datasets that use the same marker gene (e.g., 16S rRNA). Separating the training step from the classification step was a conscious decision on part of the designers to make classification as quick as possible, and removing redundant training steps! ###Code ! qiime tools import --input-path {ref_taxa} --output-path {ref_taxa}.qza --type "FeatureData[Taxonomy]" --input-format HeaderlessTSVTaxonomyFormat for depth in sampling_depths: tmpfile = join(tmpdir, str(depth)) + '.fna' ! qiime tools import --input-path {tmpfile} --output-path {tmpfile}.qza --type "FeatureData[Sequence]" ! 
qiime feature-classifier fit-classifier-naive-bayes --o-classifier {tmpfile}.nb.qza --i-reference-reads {tmpfile}.qza --i-reference-taxonomy {ref_taxa}.qza
###Output
Imported ../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy_clean.tsv as HeaderlessTSVTaxonomyFormat to ../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy_clean.tsv.qza
Imported ../../temp_results_runtime/tmp/1.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/1.fna.qza
Saved TaxonomicClassifier to: ../../temp_results_runtime/tmp/1.fna.nb.qza
Imported ../../temp_results_runtime/tmp/2000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/2000.fna.qza
Saved TaxonomicClassifier to: ../../temp_results_runtime/tmp/2000.fna.nb.qza
Imported ../../temp_results_runtime/tmp/4000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/4000.fna.qza
Saved TaxonomicClassifier to: ../../temp_results_runtime/tmp/4000.fna.nb.qza
Imported ../../temp_results_runtime/tmp/6000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/6000.fna.qza
Saved TaxonomicClassifier to: ../../temp_results_runtime/tmp/6000.fna.nb.qza
Imported ../../temp_results_runtime/tmp/8000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/8000.fna.qza
Saved TaxonomicClassifier to: ../../temp_results_runtime/tmp/8000.fna.nb.qza
Imported ../../temp_results_runtime/tmp/10000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/10000.fna.qza
Saved TaxonomicClassifier to: ../../temp_results_runtime/tmp/10000.fna.nb.qza
###Markdown
Preparing the method/parameter combinations

Finally we define the method/parameter combinations that we want to test and the command templates to execute.

Template fields must adhere to the following format:

{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
###Code
blast_template = ('qiime feature-classifier classify-consensus-blast --i-query {1}.qza --o-classification '
                  '{0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')
vsearch_template = ('qiime feature-classifier classify-consensus-vsearch --i-query {1}.qza '
                    '--o-classification {0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')
naive_bayes_template = ('qiime feature-classifier classify-sklearn '
                        '--o-classification {0}/assign.tmp --i-classifier {2}.nb.qza --i-reads {1}.qza {5}')
mindivlp_template = ('python ../../../classify_mindivlp.py -i {1} -o {0} -r {2} -t {3} -p')  # PythonCode/experiments/benchmark_vs_others

# {method: (template, method-specific params)}
methods = {
    #'blast+' : (blast_template, '--p-evalue 0.001'),
    #'vsearch' : (vsearch_template, '--p-perc-identity 0.90'),
    #'naive-bayes': (naive_bayes_template, '--p-confidence 0.7'),
    'mindivlp': (mindivlp_template, '-s 8 -l 12 -c 1000 -q 0.01')
}
###Output
_____no_output_____
###Markdown
Generate the list of commands and run them

First we will vary the size of the reference database and search a single sequence against it.
###Code
commands_a = runtime_make_commands(tmpdir, tmpdir, methods, ref_taxa, sampling_depths,
                                   num_iters=1, subsample_ref=True)
###Output
_____no_output_____
###Markdown
Next, we will vary the number of query seqs and keep the number of ref seqs constant.
###Code
commands_b = runtime_make_commands(tmpdir, tmpdir, methods, abspath(ref_taxa), sampling_depths,
                                   num_iters=1, subsample_ref=False)
###Output
_____no_output_____
###Markdown
Let's look at the first command in each list and the total number of commands as a sanity check...
###Code
# manually override the command lists to benchmark a single mindivlp run for now
commands_a = [('python ../../../classify_mindivlp.py -i ../../temp_results_runtime/tmp/1.fna -o ../../temp_results_runtime/tmp -r ../../temp_results_runtime/tmp/4000.fna -t ../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy_clean.tsv -p',
               'mindivlp', '1', '4000', 0)]
commands_b = []
print(len(commands_a + commands_b))
print(commands_a[0])
if commands_b:
    # only inspect the tail of commands_b when it is non-empty
    print(commands_b[-1])
Parallel(n_jobs=1)(delayed(clock_runtime)(command, runtime_results, force=False)
                   for command in list(set(commands_a + commands_b)));
###Output
_____no_output_____
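###Markdown
To make the template contract defined earlier concrete, here is how one assembled command would look; the paths below are illustrative placeholders, not files from the repository:
###Code
blast_template = ('qiime feature-classifier classify-consensus-blast --i-query {1}.qza --o-classification '
                  '{0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')

# {0}=output dir, {1}=input data, {2}=reference seqs, {3}=reference taxonomy,
# {4}=method name (unused by this particular template), {5}=other parameters
cmd = blast_template.format('out_dir', 'tmp/1.fna', 'tmp/4000.fna',
                            'ref_taxonomy.tsv', 'blast+', '--p-evalue 0.001')
print(cmd)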
library/model_analysis/Model_Visualization.ipynb
###Markdown Visualizing Model States We often simulate a simple free recall experiment and visualize model states throughout to explore their capacity toexhibit classical patterns of primacy, recency, and temporal contiguity. Any arbitrary configuration of parameters canbe specified for the model, including an `experiment_count`, determining the number of simulations with the givenparameters.In each experiment:1. A specified number of unique items are each experienced once,2. Context is momentarily drifted toward its pre-experimental state, and3. The model freely recalls items until it stops, with retrieval of previously experienced items disallowed.To visualize model state, we add to our `model_analysis` submodule three basic categories of visualizations. Tovisualize model state throughout encoding, we track the state of `context` and the amount of `support` for recall ofeach item based on contextual state. We also prepare a visualization of the final state of `memory` once encoding isfinished. To visualize model state throughout retrieval, we similarly track `context` and `support` at each step ofrecall. An additional visualization makes clearer the distribution of outcome probabilities at a particular index ofrecall (e.g. after a second item has been recalled). While the previous sets of analyses focus on behavior of aparticular instantiation of the model, a final set of analysis focuses on model behavior across many simulations. Wetrack recall probability as a function of serial position, probability of starting recall with each serial position,and conditional response probability as a function of lag. Parameter ConfigurationPick some parameters for Instance_CMR and CMR to organize comparisons. EncodingFirst we create simulations and visualizations to track model state throughout encoding of new memories. To do this,we produce two parallel functions, `encoding_states` and `plot_states` that collect and visualize encoding states,respectively. An additional wrapper function called `encoding_visualizations` plots these states in addition to thefinal overall state of model memory. ###Code icmr_parameters = { } cmr_parameters = { } #hide import numpy as np def encoding_states(model): """ Tracks state of context, and item supports across encoding. Model is also advanced to a state of fully encoded memories. **Required model attributes**: - item_count: specifies number of items encoded into memory - context: vector representing an internal contextual state - experience: adding a new trace to the memory model - activations: function returning item activations given a vector probe - outcome_probabilities: function returning item supports given a set of activations **Returns** array representations of context and support for retrieval of each item at each increment of item encoding. Each has shape model.item_count by model.item_count + 1. 
""" experiences = np.eye(model.item_count, model.item_count + 1, 1) cmr_experiences = np.eye(model.item_count, model.item_count) encoding_contexts, encoding_supports = model.context, [] # track model state across experiences for i in range(len(experiences)): try: model.experience(experiences[i].reshape((1, -1))) except ValueError: # special case for CMR model.experience(cmr_experiences[i].reshape((1, -1))) # track model contexts and item supports encoding_contexts = np.vstack((encoding_contexts, model.context)) if model.__class__.__name__ == 'CMR': activation_cue = lambda model: model.context else: activation_cue = lambda model: np.hstack((np.zeros(model.item_count + 1), model.context)) if len(encoding_supports) > 0: encoding_supports = np.vstack((encoding_supports, model.outcome_probabilities(activation_cue(model)))) else: encoding_supports = model.outcome_probabilities(activation_cue(model)) return encoding_contexts, encoding_supports show_doc(encoding_states, title_level=3) # hide # collapse_input import seaborn as sns import matplotlib.pyplot as plt def plot_states(matrix, title, figsize=(15, 15), savefig=False): """ Plots an array of model states as a value-annotated heatmap with an arbitrary title. **Arguments**: - matrix: an array of model states, ideally with columns representing unique feature indices and rows representing unique update indices - title: a title for the generated plot, ideally conveying what array values represent at each entry - savefig: boolean deciding whether generated figure is saved (True if Yes) """ plt.figure(figsize=figsize) sns.heatmap(matrix, annot=True, linewidths=.5) plt.title(title) plt.xlabel('Feature Index') plt.ylabel('Update Index') if savefig: plt.savefig('figures/{}.jpeg'.format(title).replace(' ', '_').lower(), bbox_inches='tight') plt.show() show_doc(plot_states, title_level=3) def encoding_visualizations(model, savefig=True): """ Plots encoding contexts, encoding supports as heatmaps. **Required model attributes**: - item_count: specifies number of items encoded into memory - context: vector representing an internal contextual state - experience: adding a new trace to the memory model - activations: function returning item activations given a vector probe - outcome_probabilities: function returning item supports given a set of activations - memory: a unitary representation of the current state of memory **Also** requires savefig: boolean deciding if generated figure is saved """ encoding_contexts, encoding_supports = encoding_states(model) plot_states(encoding_contexts, 'Encoding Contexts', savefig=savefig) plot_states(encoding_supports, 'Supports For Each Item At Each Increment of Encoding', savefig=savefig) try: show_doc(encoding_visualizations, title_level=3) except: pass ###Output _____no_output_____ ###Markdown Demo ICMR ###Code from instance_cmr.models import InstanceCMR model = InstanceCMR(**icmr_parameters) encoding_visualizations(model) ###Output _____no_output_____ ###Markdown ![](figures/icmr_encoding_contexts.jpeg)![](figures/icmr_supports_for_each_item_at_each_increment_of_encoding.jpeg) CMR ###Code from instance_cmr.models import CMR model = CMR(**cmr_parameters) encoding_visualizations(model) ###Output _____no_output_____ ###Markdown ![](figures/cmr_encoding_contexts.jpeg)![](figures/cmr_supports_for_each_item_at_each_increment_of_encoding.jpeg) Latent Mfc/Mcf ###Code def latent_mfc_mcf(model): """ Generates the latent $M^{FC}$ and $M^{CF}$ in the specified ICMR instance. 
For exploring and demonstrating model equivalence, we can calculate for any state of ICMR's dual-store memory array $M$ a corresponding $M^{FC}$ (or $M^{CF}$) by computing for each orthogonal $f_i$ (or $c_i$) the model's corresponding echo representation.
    """
    encoding_states(model)

    # start by finding latent mfc: the contextual representation cued when each orthogonal f_i is presented
    latent_mfc = np.zeros((model.item_count, model.item_count+1))
    cue = np.zeros(model.item_count*2 + 2)
    for i in range(model.item_count):
        cue *= 0
        cue[i+1] = 1
        latent_mfc[i] = model.echo(cue)[model.item_count + 1:]

    # now the latent mcf
    latent_mcf = np.zeros((model.item_count+1, model.item_count))
    for i in range(model.item_count+1):
        cue *= 0
        cue[model.item_count+1+i] = 1
        latent_mcf[i] = model.echo(cue)[1:model.item_count + 1]  # start at 1 due to dummy column in F

    return latent_mfc, latent_mcf

if True:
    # ICMR
    model = InstanceCMR(**icmr_parameters)
    latent_mfc, latent_mcf = latent_mfc_mcf(model)
    print(model.__class__.__name__)
    plot_states(model.memory, 'ICMR Memory')
    plot_states(latent_mfc, 'ICMR Latent Mfc')
    plot_states(latent_mcf, 'ICMR Latent Mcf')

    # CMR
    model = CMR(**cmr_parameters)
    encoding_states(model)
    print(model.__class__.__name__)
    plot_states(model.mfc, 'CMR Mfc')
    plot_states(model.mcf, 'CMR Mcf')
###Output
_____no_output_____
###Markdown
Retrieval

Tracking model state across each step of retrieval. Since retrieval is stochastic, these values change with each random seed. An additional optional parameter `first_recall_item` can control which item is recalled first by the model (`0` denotes termination of recall while actual items are 1-indexed); it is useful for testing hypotheses about model dynamics during recall. We leave the parameter set at `None` for now, indicating no controlled first recall.
###Code
import numpy as np

def retrieval_states(model, first_recall_item=None):
    """
    Tracks state of context and item supports across retrieval. Model is also advanced into a state of completed free recall.

    **Required model attributes**:
    - item_count: specifies number of items encoded into memory
    - context: vector representing an internal contextual state
    - experience: adding a new trace to the memory model
    - activations: function returning item activations given a vector probe
    - outcome_probabilities: function returning item supports given a set of activations
    - free_recall: function that freely recalls a given number of items or until recall stops
    - state: indicates whether model is encoding or engaged in recall with a string

    **Also** optionally uses first_recall_item: can specify an item for first recall

    **Returns** array representations of context and support for retrieval of each item at each increment of item retrieval. Also returns the recall train associated with the simulation.
""" if model.__class__.__name__ == 'CMR': activation_cue = lambda model: model.context else: activation_cue = lambda model: np.hstack((np.zeros(model.item_count + 1), model.context)) # encoding items, presuming model is freshly initialized encoding_states(model) retrieval_contexts, retrieval_supports = model.context, model.outcome_probabilities(activation_cue(model)) # pre-retrieval distraction model.free_recall(0) retrieval_contexts = np.vstack((retrieval_contexts, model.context)) retrieval_supports = np.vstack((retrieval_supports, model.outcome_probabilities(activation_cue(model)))) # optional forced first item recall if first_recall_item is not None: model.force_recall(first_recall_item) retrieval_contexts = np.vstack((retrieval_contexts, model.context)) retrieval_supports = np.vstack((retrieval_supports, model.outcome_probabilities(activation_cue(model)))) # actual recall while model.retrieving: model.free_recall(1) retrieval_contexts = np.vstack((retrieval_contexts, model.context)) retrieval_supports = np.vstack((retrieval_supports, model.outcome_probabilities(activation_cue(model)))) return retrieval_contexts, retrieval_supports, model.recall[:model.recall_total] try: show_doc(retrieval_states, title_level=3) except: pass def outcome_probs_at_index(model, support_index_to_plot=1, savefig=True): """ Plots outcome probability distribution at a specific index of free recall. **Required model attributes**: - item_count: specifies number of items encoded into memory - context: vector representing an internal contextual state - experience: adding a new trace to the memory model - activations: function returning item activations given a vector probe - outcome_probabilities: function returning item supports given a set of activations - free_recall: function that freely recalls a given number of items or until recall stops - state: indicates whether model is encoding or engaged in recall with a string **Other arguments**: - support_index_to_plot: index of retrieval to plot - savefig: whether to save or display the figure of interest **Generates** a plot of outcome probabilities as a line graph. Also returns vector representation of the generated probabilities. """ retrieval_supports = retrieval_states(model)[1] plt.plot(np.arange(model.item_count + 1), retrieval_supports[support_index_to_plot]) plt.xlabel('Choice Index') plt.ylabel('Outcome Probability') plt.title('Outcome Probabilities At Recall Index {}'.format(support_index_to_plot)) plt.show() return retrieval_supports[support_index_to_plot] try: show_doc(outcome_probs_at_index, title_level=3) except: pass def retrieval_visualizations(model, savefig=True): """ Plots incremental retrieval contexts and supports, as heatmaps, and prints recalled items. 
**Required model attributes**: - item_count: specifies number of items encoded into memory - context: vector representing an internal contextual state - experience: adding a new trace to the memory model - activations: function returning item activations given a vector probe - outcome_probabilities: function returning item supports given a set of activations **Also** uses savefig: boolean deciding whether figures are saved (True) or displayed """ retrieval_contexts, retrieval_supports, recall = retrieval_states(model) plot_states(retrieval_contexts, 'Retrieval Contexts', savefig=savefig) plot_states(retrieval_supports, 'Supports For Each Item At Each Increment of Retrieval', savefig=savefig) return recall try: show_doc(retrieval_visualizations, title_level=3) except: pass ###Output _____no_output_____ ###Markdown Demo ICMR ###Code model = InstanceCMR(**icmr_parameters) retrieval_visualizations(model) ###Output _____no_output_____ ###Markdown Outputs can look like...![](figures/retrieval_contexts.jpeg)![](figures/supports_for_each_item_at_each_increment_of_retrieval.jpeg) CMR ###Code model = CMR(**cmr_parameters) retrieval_visualizations(model) ###Output _____no_output_____ ###Markdown ![](figures/retrieval_contexts.jpeg)![](figures/supports_for_each_item_at_each_increment_of_retrieval.jpeg) Organizational AnalysesUpon completion, the `psifr` toolbox is used to generate three plots corresponding to the contents of Figure4 in Morton & Polyn, 2016:1. Recall probability as a function of serial position2. Probability of starting recall with each serial position3. Conditional response probability as a function of lagWhereas previous visualizations were based on an arbitrary model simulation, the current figures are based onaverages over a simulation of the model some specified amount of times. ###Code import pandas as pd from psifr import fr def temporal_organization_analyses(model, experiment_count, savefig=False, figsize=(15, 15), first_recall_item=None): """ Visualization of the outcomes of a trio of organizational analyses of model performance on a free recall task. **Required model attributes**: - item_count: specifies number of items encoded into memory - context: vector representing an internal contextual state - experience: adding a new trace to the memory model - free_recall: function that freely recalls a given number of items or until recall stops **Other arguments**: - experiment_count: number of simulations to compute curves over - savefig: whether to save or display the figure of interest **Returns** three plots corresponding to the contents of Figure 4 in Morton & Polyn, 2016: 1. Recall probability as a function of serial position 2. Probability of starting recall with each serial position 3. 
Conditional response probability as a function of lag """ # encode items try: model.experience(np.eye(model.item_count, model.item_count + 1, 1)) except ValueError: # so we can apply to CMR model.experience(np.eye(model.item_count, model.item_count)) # simulate retrieval for the specified number of times, tracking results in df data = [] for experiment in range(experiment_count): data += [[experiment, 0, 'study', i + 1, i] for i in range(model.item_count)] for experiment in range(experiment_count): if first_recall_item is not None: model.force_recall(first_recall_item) data += [[experiment, 0, 'recall', i + 1, o] for i, o in enumerate(model.free_recall())] data = pd.DataFrame(data, columns=['subject', 'list', 'trial_type', 'position', 'item']) merged = fr.merge_free_recall(data) # visualizations # spc recall = fr.spc(merged) g = fr.plot_spc(recall) plt.title('Serial Position Curve') if savefig: plt.savefig('figures/spc.jpeg', bbox_inches='tight') else: plt.show() # P(Start Recall) For Each Serial Position prob = fr.pnr(merged) pfr = prob.query('output <= 1') g = fr.plot_spc(pfr).add_legend() plt.title('Probability of Starting Recall With Each Serial Position') if savefig: plt.savefig('figures/pfr.jpeg', bbox_inches='tight') else: plt.show() # Conditional response probability as a function of lag crp = fr.lag_crp(merged) g = fr.plot_lag_crp(crp) plt.title('Conditional Response Probability') if savefig: plt.savefig('figures/crp.jpeg', bbox_inches='tight') else: plt.show() try: show_doc(temporal_organization_analyses, title_level=3) except: pass ###Output _____no_output_____ ###Markdown Demo ###Code from instance_cmr.models import InstanceCMR model = InstanceCMR(**icmr_parameters) temporal_organization_analyses(model, 100, True) from instance_cmr.models import CMR model = CMR(**cmr_parameters) temporal_organization_analyses(model, 100, True) ###Output _____no_output_____
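###Markdown
As a pointer for interpreting the third panel: the lag-CRP computed by `fr.lag_crp` is the standard conditional response probability from the free-recall literature,

$$\mathrm{CRP}(\ell) = \frac{N_{\mathrm{actual}}(\ell)}{N_{\mathrm{possible}}(\ell)},$$

where $N_{\mathrm{actual}}(\ell)$ counts recall transitions of serial-position lag $\ell$ that were actually made, and $N_{\mathrm{possible}}(\ell)$ counts the transitions of lag $\ell$ that were available (i.e., to not-yet-recalled items) at each output position.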
House Sales_in_King_Count_USA.ipynb
###Markdown
Data Analysis with Python

House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle.
It includes homes sold between May 2014 and May 2015.

- id: a notation for a house
- date: date the house was sold
- price: price (the prediction target)
- bedrooms: number of bedrooms
- bathrooms: number of bathrooms
- sqft_living: square footage of the home
- sqft_lot: square footage of the lot
- floors: total floors (levels) in the house
- waterfront: whether the house has a view to a waterfront
- view: has been viewed
- condition: how good the condition is overall
- grade: overall grade given to the housing unit, based on the King County grading system
- sqft_above: square footage of the house apart from the basement
- sqft_basement: square footage of the basement
- yr_built: year built
- yr_renovated: year when the house was renovated
- zipcode: zip code
- lat: latitude coordinate
- long: longitude coordinate
- sqft_living15: living room area in 2015 (implies some renovations); this might or might not have affected the lot size area
- sqft_lot15: lot size area in 2015 (implies some renovations)

You will require the following libraries:
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.0 Importing the Data

Load the csv:
###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output
_____no_output_____
###Markdown
We use the method head to display the first 5 rows of the dataframe.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Question 1

Display the data types of each column using the attribute dtypes, then take a screenshot and submit it; include your code in the image.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We use the method describe to obtain a statistical summary of the dataframe.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
2.0 Data Wrangling

Question 2

Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it; make sure the inplace parameter is set to True.
###Code
df.drop(['id', 'Unnamed: 0'], axis=1, inplace=True)
df.describe()
###Output
_____no_output_____
###Markdown
We can see we have missing values for the columns bedrooms and bathrooms.
###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 10
###Markdown
We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms' using the method replace.
Don't forget to set the inplace parameter to True.
###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan, mean, inplace=True)
###Output
_____no_output_____
###Markdown
We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace. Don't forget to set the inplace parameter to True.
###Code
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan, mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 0
###Markdown
3.0 Exploratory Data Analysis

Question 3

Use the method value_counts to count the number of houses with unique floor values; use the method .to_frame() to convert it to a dataframe.
###Code
df1=df["floors"].value_counts()
df1.to_frame()
###Output
_____no_output_____
###Markdown
Question 4

Use the function boxplot in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers.
###Code
sns.boxplot(x="waterfront", y="price", data=df)
###Output
_____no_output_____
###Markdown
Question 5

Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.
###Code
sns.regplot(x="sqft_above", y="price", data=df)
plt.ylim(0,)
###Output
_____no_output_____
###Markdown
We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
###Code
df.corr()['price'].sort_values()
###Output
_____no_output_____
###Markdown
Module 4: Model Development

Import libraries:
###Code
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
###Code
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X, Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 6

Fit a linear regression model to predict the 'price' using the feature 'sqft_living', then calculate the R^2. Take a screenshot of your code and the value of the R^2.
###Code
X=df[['sqft_living']]
Y=df[['price']]
lm1=LinearRegression()
lm1.fit(X, Y)
lm1.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 7

Fit a linear regression model to predict the 'price' using the list of features:
###Code
features = ["floors", "waterfront", "lat", "bedrooms", "sqft_basement", "view", "bathrooms", "sqft_living15", "sqft_above", "grade", "sqft_living"]
###Output
_____no_output_____
###Markdown
Then calculate the R^2.
Take a screenshot of your code.
###Code
X = df[features]
Y = df[['price']]
lm2 = LinearRegression()
lm2.fit(X, Y)
lm2.score(X, Y)
###Output
_____no_output_____
###Markdown
This will help with Question 8.

Create a list of tuples; the first element in each tuple contains the name of the estimator:

'scale'
'polynomial'
'model'

The second element in each tuple contains the model constructor:

StandardScaler()
PolynomialFeatures(include_bias=False)
LinearRegression()
###Code
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
###Output
_____no_output_____
###Markdown
Question 8

Use the list to create a pipeline object to predict the 'price', fit the object using the features in the list features, and calculate the R^2.
###Code
pipe=Pipeline(Input)
pipe.fit(X, Y)
pipe.score(X, Y)
###Output
/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/pipeline.py:511: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
  Xt = transform.transform(Xt)
###Markdown
Module 5: Model Evaluation and Refinement

Import the necessary modules:
###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
###Output
done
###Markdown
We will split the data into training and testing sets:
###Code
features = ["floors", "waterfront", "lat", "bedrooms", "sqft_basement", "view", "bathrooms", "sqft_living15", "sqft_above", "grade", "sqft_living"]
X = df[features]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:", x_train.shape[0])
###Output
number of test samples : 3242
number of training samples: 18371
###Markdown
Question 9

Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1, and calculate the R^2 using the test data.
###Code
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=0.1)
ridge.fit(x_train, y_train)
ridge.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Question 10

Perform a second order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, setting the regularisation parameter to 0.1. Calculate the R^2 utilising the test data provided. Take a screenshot of your code and the R^2.
###Code
Input=[('polynomial', PolynomialFeatures(include_bias=False)),('model',Ridge(alpha=0.1))]
pipe=Pipeline(Input)
pipe.fit(x_train, y_train)
pipe.score(x_test, y_test)
###Output
_____no_output_____
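###Markdown
As a closing aside: `cross_val_score` is imported above but never exercised. A minimal sketch of how it could complement the single train/test split, assuming the `X` and `Y` defined in the split cell; the default scorer for `Ridge` is R^2.
###Code
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge

# four-fold cross-validated R^2 for the alpha=0.1 ridge model
scores = cross_val_score(Ridge(alpha=0.1), X, Y, cv=4)
print(scores.mean(), scores.std())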
id :a notation for a house date: Date house was soldprice: Price is prediction targetbedrooms: Number of Bedrooms/Housebathrooms: Number of bathrooms/bedroomssqft_living: square footage of the homesqft_lot: square footage of the lotfloors :Total floors (levels) in housewaterfront :House which has a view to a waterfrontview: Has been viewedcondition :How good the condition is Overallgrade: overall grade given to the housing unit, based on King County grading systemsqft_above :square footage of house apart from basementsqft_basement: square footage of the basementyr_built :Built Yearyr_renovated :Year when house was renovatedzipcode:zip codelat: Latitude coordinatelong: Longitude coordinatesqft_living15 :Living room area in 2015(implies-- some renovations) This might or might not have affected the lotsize areasqft_lot15 :lotSize area in 2015(implies-- some renovations) You will require the following libraries ###Code import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler,PolynomialFeatures %matplotlib inline ###Output _____no_output_____ ###Markdown 1.0 Importing the Data Load the csv: ###Code file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv' df=pd.read_csv(file_name) ###Output _____no_output_____ ###Markdown we use the method head to display the first 5 columns of the dataframe. ###Code df.head() ###Output _____no_output_____ ###Markdown Question 1 Display the data types of each column using the attribute dtype, then take a screenshot and submit it, include your code in the image. ###Code df.dtypes ###Output _____no_output_____ ###Markdown We use the method describe to obtain a statistical summary of the dataframe. ###Code df.describe() ###Output _____no_output_____ ###Markdown 2.0 Data Wrangling Question 2 Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it, make sure the inplace parameter is set to True ###Code df.drop(['id','Unnamed: 0'], axis =1, inplace = True) df.describe() ###Output _____no_output_____ ###Markdown we can see we have missing values for the columns bedrooms and bathrooms ###Code print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum()) print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum()) ###Output number of NaN values for the column bedrooms : 13 number of NaN values for the column bathrooms : 10 ###Markdown We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms' using the method replace. 
Don't forget to set the inplace parameter to True. ###Code
mean = df['bedrooms'].mean()
df['bedrooms'].replace(np.nan, mean, inplace=True)
###Output _____no_output_____
###Markdown We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace. Don't forget to set the inplace parameter to True. ###Code
mean = df['bathrooms'].mean()
df['bathrooms'].replace(np.nan, mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output number of NaN values for the column bedrooms : 0 number of NaN values for the column bathrooms : 0
###Markdown 3.0 Exploratory Data Analysis. Question 3: Use the method value_counts to count the number of houses with unique floor values; use the method .to_frame() to convert the result to a dataframe. ###Code
df['floors'].value_counts().to_frame()
###Output _____no_output_____
###Markdown Question 4: Use the function boxplot in the seaborn library to determine whether houses with or without a waterfront view have more price outliers. ###Code
sns.boxplot(x='waterfront', y='price', data=df)
###Output _____no_output_____
###Markdown Question 5: Use the function regplot in the seaborn library to determine whether the feature sqft_above is negatively or positively correlated with price. ###Code
sns.regplot(x='sqft_above', y='price', data=df)
###Output _____no_output_____
###Markdown We can use the Pandas method corr() to find the feature other than price that is most correlated with price. ###Code
df.corr()['price'].sort_values()
###Output _____no_output_____
###Markdown Module 4: Model Development. Import libraries: ###Code
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
###Output _____no_output_____
###Markdown We can fit a linear regression model using the longitude feature 'long' and calculate the R^2. ###Code
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm
lm.fit(X, Y)
lm.score(X, Y)
###Output _____no_output_____
###Markdown Question 6: Fit a linear regression model to predict the 'price' using the feature 'sqft_living', then calculate the R^2. Take a screenshot of your code and the value of the R^2. ###Code
X = df[['sqft_living']]
Y = df['price']
lm2 = LinearRegression()
lm2
lm2.fit(X, Y)
lm2.score(X, Y)
###Output _____no_output_____
###Markdown Question 7: Fit a linear regression model to predict the 'price' using the list of features: ###Code
features = df[["floors", "waterfront", "lat", "bedrooms", "sqft_basement", "view", "bathrooms", "sqft_living15", "sqft_above", "grade", "sqft_living"]]
###Output _____no_output_____
###Markdown then calculate the R^2.
Take a screenshot of your code. ###Code
multi_ = LinearRegression()
multi_
multi_.fit(features, df['price'])
multi_.score(features, df['price'])
###Output _____no_output_____
###Markdown This will help with Question 8. Create a list of tuples; the first element in each tuple contains the name of the estimator: 'scale', 'polynomial', 'model'. The second element contains the model constructor: StandardScaler(), PolynomialFeatures(include_bias=False), LinearRegression(). ###Code
Input = [('scale', StandardScaler()), ('polynomial', PolynomialFeatures(include_bias=False)), ('model', LinearRegression())]
###Output _____no_output_____
###Markdown Question 8: Use the list to create a pipeline object to predict the 'price', fit the object using the features in the list features, then calculate the R^2. ###Code
pipe = Pipeline(Input)
pipe
pipe.fit(features, Y)
pipe.score(features, Y)
###Output /opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/pipeline.py:511: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler. Xt = transform.transform(Xt)
###Markdown Module 5: MODEL EVALUATION AND REFINEMENT. Import the necessary modules: ###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
###Output done
###Markdown We will split the data into training and testing sets: ###Code
features = ["floors", "waterfront", "lat", "bedrooms", "sqft_basement", "view", "bathrooms", "sqft_living15", "sqft_above", "grade", "sqft_living"]
X = df[features]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:", x_train.shape[0])
###Output number of test samples : 3242 number of training samples: 18371
###Markdown Question 9: Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1, and calculate the R^2 using the test data. ###Code
from sklearn.linear_model import Ridge
Ridge_Model = Ridge(alpha=0.1)
Ridge_Model.fit(x_train, y_train)
# score on the held-out test data, as the question asks
Ridge_Model.score(x_test, y_test)
###Output _____no_output_____
###Markdown Question 10: Perform a second-order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, setting the regularisation parameter to 0.1. Calculate the R^2 using the test data provided. Take a screenshot of your code and the R^2. ###Code
poly_trans = PolynomialFeatures(degree=2)
x_train_poly = poly_trans.fit_transform(x_train)
# reuse the fitted transformer on the test data instead of refitting it
x_test_poly = poly_trans.transform(x_test)
Ridge_Model2 = Ridge(alpha=0.1)
# fit on the training data only, then score on the test data
Ridge_Model2.fit(x_train_poly, y_train)
Ridge_Model2.score(x_test_poly, y_test)
###Output _____no_output_____
###Markdown House Sales in King County, USA. This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.
id: a notation for a house
date: date the house was sold
price: price (the prediction target)
bedrooms: number of bedrooms
bathrooms: number of bathrooms
sqft_living: square footage of the home
sqft_lot: square footage of the lot
floors: total floors (levels) in the house
waterfront: whether the house has a view of a waterfront
view: has been viewed
condition: how good the condition is overall
grade: overall grade given to the housing unit, based on the King County grading system
sqft_above: square footage of the house apart from the basement
sqft_basement: square footage of the basement
yr_built: year built
yr_renovated: year when the house was renovated
zipcode: zip code
lat: latitude coordinate
long: longitude coordinate
sqft_living15: living room area in 2015 (implies some renovations); this might or might not have affected the lot size area
sqft_lot15: lot size area in 2015 (implies some renovations)
You will require the following libraries: ###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
from sklearn.linear_model import LinearRegression
%matplotlib inline
###Output _____no_output_____
###Markdown Importing Data Sets. Load the csv: ###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output _____no_output_____
###Markdown We use the method head to display the first 5 rows of the dataframe. ###Code
df.head()
# Checking the data types
df.dtypes
###Output _____no_output_____
###Markdown We use the method describe to obtain a statistical summary of the dataframe. ###Code
df.describe()
###Output _____no_output_____
###Markdown Data Wrangling ###Code
df.drop(['id', 'Unnamed: 0'], axis=1, inplace=True)
df.describe()
###Output _____no_output_____
###Markdown Checking for NaN values ###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output number of NaN values for the column bedrooms : 13 number of NaN values for the column bathrooms : 10
###Markdown Replacing NaN values ###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output number of NaN values for the column bedrooms : 0 number of NaN values for the column bathrooms : 0
###Markdown Exploratory Data Analysis ###Code
# Using the value_counts function to count the number of unique floor values and .to_frame() to get the output as a dataframe
df['floors'].value_counts().to_frame()
# Use the boxplot function in the seaborn library to determine whether houses with or without a waterfront view have more price outliers.
box = sns.boxplot(x="waterfront", y='price', data=df)
# Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.
x = df['sqft_above']
y = df['price']
sns.regplot(x, y)
# We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
df.corr()['price'].sort_values()
###Output _____no_output_____
###Markdown Model Development ###Code
# We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X, Y)
lm.score(X, Y)
# Fit a linear regression model to predict the 'price' using the feature 'sqft_living', then calculate the R^2.
X = df[['sqft_living']]
Y = df['price']
lm = LinearRegression()
lm.fit(X, Y)
lm.score(X, Y)
# Fit a linear regression model to predict the 'price' using the list of features, then calculate the R^2.
features = ["floors", "waterfront", "lat", "bedrooms", "sqft_basement", "view", "bathrooms", "sqft_living15", "sqft_above", "grade", "sqft_living"]
X = df[features]
Y = df['price']
lm = LinearRegression()
lm
lm.fit(X, Y)
lm.score(X, Y)
###Output _____no_output_____
###Markdown Create a list of tuples; the first element in each tuple contains the name of the estimator: 'scale', 'polynomial', 'model'. The second element contains the model constructor: StandardScaler(), PolynomialFeatures(include_bias=False), LinearRegression(). ###Code
Input = [('scale', StandardScaler()), ('polynomial', PolynomialFeatures(include_bias=False)), ('model', LinearRegression())]
# Use the list to create a pipeline object to predict the 'price', fit the object using the features in the list features, and calculate the R^2.
pipe = Pipeline(Input)
pipe
pipe.fit(df[features], df['price'])
prediction = pipe.predict(df[features])
print(prediction)
pipe.score(X, Y)
###Output [351928.15625 560712.15625 454712.15625 ... 419512.15625 458352.15625 419512.15625]
###Markdown Model Evaluation and Refinement ###Code
# Import the necessary modules:
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
# We will split the data into training and testing sets:
features = ["floors", "waterfront", "lat", "bedrooms", "sqft_basement", "view", "bathrooms", "sqft_living15", "sqft_above", "grade", "sqft_living"]
X = df[features]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples:", x_test.shape[0])
print("number of training samples:", x_train.shape[0])
from sklearn.linear_model import Ridge
# fit the ridge model on the training data and score it on the held-out test data
RG = Ridge(alpha=0.1)
RG.fit(x_train, y_train)
RG.score(x_test, y_test)
P2 = PolynomialFeatures(degree=2)
x_train_P2 = P2.fit_transform(x_train)
# reuse the fitted transformer on the test data instead of refitting it
x_test_P2 = P2.transform(x_test)
RG2 = Ridge(alpha=0.1)
RG2.fit(x_train_P2, y_train)
RG2.score(x_test_P2, y_test)
###Output _____no_output_____
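###Markdown cross_val_score was imported in the model evaluation section above but never used. As a minimal sketch (assuming X = df[features] and Y = df['price'] as defined in the cells above), a 4-fold cross-validation gives a quick sanity check of the ridge model: ###Code
# Minimal sketch: 4-fold cross-validation of the ridge model on the full data.
# Assumes X and Y are the feature matrix and price target defined above.
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(Ridge(alpha=0.1), X, Y, cv=4)
print("R^2 per fold:", cv_scores)
print("mean R^2:", cv_scores.mean())
###Output _____no_output_____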
textual_augmenter.ipynb
###Markdown Example of Textual Augmenter Usage:
* [Character Augmenter](chara_aug)
  * [OCR](ocr_aug)
  * [Keyboard](keyboard_aug)
  * [Random](random_aug)
* [Word Augmenter](word_aug)
  * [Spelling](spelling_aug)
  * [Word Embeddings](word_embs_aug)
  * [TF-IDF](tfidf_aug)
  * [Contextual Word Embeddings](context_word_embs_aug)
  * [Synonym](synonym_aug)
  * [Antonym](antonym_aug)
  * [Random Word](random_word_aug)
  * [Split](split_aug)
  * [Back Translation](back_translation_aug)
  * [Reserved Word](reserved_aug)
* [Sentence Augmenter](sent_aug)
  * [Contextual Word Embeddings for Sentence](context_word_embs_sentence_aug)
  * [Abstractive Summarization](abst_summ_aug)
###Code
import os
os.environ["MODEL_DIR"] = '../model'
###Output _____no_output_____
###Markdown Config ###Code
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
import nlpaug.flow as nafc
from nlpaug.util import Action

text = 'The quick brown fox jumps over the lazy dog .'
print(text)
###Output The quick brown fox jumps over the lazy dog .
###Markdown Character Augmenter. Augmenting data at the character level. Possible scenarios include image-to-text and chatbots. When recognizing text from an image, we need an optical character recognition (OCR) model, but OCR introduces errors such as confusing "o" and "0". `OcrAug` simulates these errors to perform data augmentation. For chatbots, typos still occur even though most applications come with word correction, so `KeyboardAug` is introduced to simulate this kind of error. OCR Augmenter: substitute characters by pre-defined OCR errors ###Code
aug = nac.OcrAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Texts: ['The quick bkown fox jumps ovek the lazy dog .', 'The quick 6rown fox jumps ovek the lazy dog .', 'The quick brown f0x jomps over the la2y dog .']
###Markdown Keyboard Augmenter: substitute characters by keyboard distance ###Code
aug = nac.KeyboardAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: The quick brown Gox juJps ocer the lazy dog .
###Markdown Random Augmenter: insert characters randomly ###Code
aug = nac.RandomCharAug(action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: T3he quicNk @brown fEox juamps $over th6e la1zy d*og
###Markdown Substitute characters randomly ###Code
aug = nac.RandomCharAug(action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: ThN qDick brow0 foB jumks oveE t+e laz6 dBg
###Markdown Swap characters randomly ###Code
aug = nac.RandomCharAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: Hte quikc borwn fxo jupms ovre teh lzay dgo
###Markdown Delete characters randomly ###Code
aug = nac.RandomCharAug(action="delete")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: Te quic rown fx jump ver he laz og
###Markdown Word Augmenter. Besides character augmentation, the word level is important as well. We make use of word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), fasttext (Joulin et al., 2016), BERT (Devlin et al., 2018) and WordNet to insert and substitute similar words. `Word2vecAug`, `GloVeAug` and `FasttextAug` use word embeddings to find the most similar group of words to replace the original word. `BertAug`, on the other hand, uses a language model to predict possible target words. `WordNetAug` uses a statistical approach to find similar groups of words. Spelling Augmenter: substitute words using a spelling-mistake dictionary ###Code
aug = naw.SpellingAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
###Output Original: The quick brown fox jumps over the lazy dog .
Augmented Texts: ['They quick browb fox jumps over se lazy dog.', 'The quikly brown fox jumps over tge lazy dod.', 'Tha quick brown fox jumps ower their lazy dog.']
###Markdown Word Embeddings Augmenter: insert words randomly by word-embedding similarity ###Code
# model_type: word2vec, glove or fasttext
# note: the model path is built from the MODEL_DIR set in the first cell
aug = naw.WordEmbsAug(
    model_type='word2vec', model_path=os.environ.get("MODEL_DIR") + '/GoogleNews-vectors-negative300.bin',
    action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: The quick brown fox jumps Alzeari over the lazy Superintendents dog
###Markdown Substitute words by word2vec similarity ###Code
# model_type: word2vec, glove or fasttext
aug = naw.WordEmbsAug(
    model_type='word2vec', model_path=os.environ.get("MODEL_DIR") + '/GoogleNews-vectors-negative300.bin',
    action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: The easy brown fox jumps around the lazy dog
###Markdown TF-IDF Augmenter: insert words by TF-IDF similarity ###Code
aug = naw.TfIdfAug(
    model_path=os.environ.get("MODEL_DIR"),
    action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: sinks The quick brown fox jumps over the lazy Sidney dog
###Markdown Substitute words by TF-IDF similarity ###Code
aug = naw.TfIdfAug(
    model_path=os.environ.get("MODEL_DIR"),
    action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: The quick brown fox Baked over the polygraphy dog
###Markdown Contextual Word Embeddings Augmenter: insert words by contextual word embeddings (BERT, DistilBERT, RoBERTa or XLNet) ###Code
aug = naw.ContextualWordEmbsAug(
    model_path='bert-base-uncased', action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: even the quick brown fox usually jumps over the lazy dog
###Markdown Substitute words by contextual word embeddings (BERT, DistilBERT, RoBERTa or XLNet) ###Code
aug = naw.ContextualWordEmbsAug(
    model_path='bert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = naw.ContextualWordEmbsAug(
    model_path='distilbert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = naw.ContextualWordEmbsAug(
    model_path='roberta-base', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: The quick brown fox jumps Into the bull dog .
###Markdown Synonym Augmenter: substitute words by WordNet's synonyms ###Code
aug = naw.SynonymAug(aug_src='wordnet')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog .
Augmented Text: The speedy brown fox jumps complete the lazy dog .
###Markdown Substitute words by PPDB's synonyms ###Code
# note: the model path is built from the MODEL_DIR set in the first cell
aug = naw.SynonymAug(aug_src='ppdb', model_path=os.environ.get("MODEL_DIR") + '/ppdb-2.0-s-all')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: The quick brown fox climbs over the lazy dog .
###Markdown Antonym Augmenter: substitute words by antonyms ###Code
aug = naw.AntonymAug()
_text = 'Good boy'
augmented_text = aug.augment(_text)
print("Original:")
print(_text)
print("Augmented Text:")
print(augmented_text)
###Output Original: Good boy Augmented Text: Good daughter
###Markdown Random Word Augmenter: swap words randomly ###Code
aug = naw.RandomWordAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: Quick the brown fox jumps over the lazy dog .
###Markdown Delete words randomly ###Code
aug = naw.RandomWordAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog Augmented Text: The brown jumps over the lazy dog
###Markdown Delete a contiguous set of words randomly ###Code
aug = naw.RandomWordAug(action='crop')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: The quick brown fox jumps dog .
###Markdown Split Augmenter: split a word into two tokens randomly ###Code
aug = naw.SplitAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: The q uick b rown fox jumps o ver the lazy dog .
###Markdown Back Translation Augmenter ###Code
import nlpaug.augmenter.word as naw

text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
    from_model_name='transformer.wmt19.en-de',
    to_model_name='transformer.wmt19.de-en'
)
back_translation_aug.augment(text)

# Load models from a local path
import nlpaug.augmenter.word as naw

from_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.en-de')
to_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.de-en')

text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
    from_model_name=from_model_dir, from_model_checkpt='model1.pt',
    to_model_name=to_model_dir, to_model_checkpt='model1.pt',
    is_load_from_github=False)
back_translation_aug.augment(text)
###Output _____no_output_____
###Markdown Reserved Word Augmenter ###Code
import nlpaug.augmenter.word as naw

text = 'Fwd: Mail for solution'
reserved_tokens = [
    ['FW', 'Fwd', 'F/W', 'Forward'],
]
reserved_aug = naw.ReservedAug(reserved_tokens=reserved_tokens)
augmented_text = reserved_aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output _____no_output_____
###Markdown Sentence Augmentation. Contextual Word Embeddings for Sentence Augmenter: insert a sentence by contextual word embeddings (GPT2 or XLNet) ###Code
# model_path: xlnet-base-cased or gpt2
aug = nas.ContextualWordEmbsForSentenceAug(model_path='xlnet-base-cased')
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='gpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='distilgpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output Original: The quick brown fox jumps over the lazy dog . Augmented Text: The quick brown fox jumps over the lazy dog . She keeps running around the house.
###Markdown Abstractive Summarization Augmenter ###Code
article = """
The history of natural language processing (NLP) generally started in the 1950s, although work can be found from earlier periods.
In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence.
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English.
The authors claimed that within three or five years, machine translation would be a solved problem.
However, real progress was much slower, and after the ALPAC report in 1966, which found that ten-year-long research had failed to fulfill the expectations, funding for machine translation was dramatically reduced.
Little further research in machine translation was conducted until the late 1980s when the first statistical machine translation systems were developed.
""" aug = nas.AbstSummAug(model_path='t5-base', num_beam=3) augmented_text = aug.augment(article) print("Original:") print(article) print("Augmented Text:") print(augmented_text) ###Output Original: The history of natural language processing (NLP) generally started in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three or five years, machine translation would be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966, which found that ten-year-long research had failed to fulfill the expectations, funding for machine translation was dramatically reduced. Little further research in machine translation was conducted until the late 1980s when the first statistical machine translation systems were developed. Augmented Text: the history of natural language processing (NLP) generally started in the 1950s. work can be found from earlier periods, such as the Georgetown experiment in 1954. little further research in machine translation was conducted until the late 1980s ###Markdown Example of Textual Augmenter Usage:* [Character Augmenter](chara_aug) * [OCR](ocr_aug) * [Keyboard](keyboard_aug) * [Random](random_aug)* [Word Augmenter](word_aug) * [Spelling](spelling_aug) * [Word Embeddings](word_embs_aug) * [TF-IDF](tfidf_aug) * [Contextual Word Embeddings](context_word_embs_aug) * [Synonym](synonym_aug) * [Antonym](antonym_aug) * [Random Word](random_word_aug) * [Split](split_aug) * [Back Translatoin](back_translation_aug) * [Reserved Word](reserved_aug)* [Sentence Augmenter](sent_aug) * [Contextual Word Embeddings for Sentence](context_word_embs_sentence_aug) * [Abstractive Summarization](abst_summ_aug) ###Code import os os.environ["MODEL_DIR"] = '../model' ###Output _____no_output_____ ###Markdown Config ###Code import nlpaug.augmenter.char as nac import nlpaug.augmenter.word as naw import nlpaug.augmenter.sentence as nas import nlpaug.flow as nafc from nlpaug.util import Action text = 'The quick brown fox jumps over the lazy dog .' print(text) ###Output The quick brown fox jumps over the lazy dog . ###Markdown Character AugmenterAugmenting data in character level. Possible scenarios include image to text and chatbot. During recognizing text from image, we need to optical character recognition (OCR) model to achieve it but OCR introduces some errors such as recognizing "o" and "0". `OCRAug` simulate these errors to perform the data augmentation. For chatbot, we still have typo even though most of application comes with word correction. Therefore, `KeyboardAug` is introduced to simulate this kind of errors. OCR Augmenter Substitute character by pre-defined OCR error ###Code aug = nac.OcrAug() augmented_texts = aug.augment(text, n=3) print("Original:") print(text) print("Augmented Texts:") print(augmented_texts) ###Output Original: The quick brown fox jumps over the lazy dog . 
projects/causal moneyball/Causal-analysis-on-football-transfer-prices/Causal Model notebooks/Causal Inference, Interventions, and Counterfactuals.ipynb
###Markdown Pgmpy ###Code import random import pandas as pd from pgmpy.models import BayesianModel from pgmpy.estimators import BayesianEstimator import networkx as nx import pylab as plt random.seed(42) ###Output _____no_output_____ ###Markdown Reading all the data to see the column headers ###Code data = pd.read_csv("../data/modelling datasets/transfers_final.csv") data.head() data.describe(include='all').loc['unique'] data.describe(include='all') ###Output _____no_output_____ ###Markdown Renaming all the columns to match the nodes of the DAG ###Code data.rename(columns={"arrival_league": "AL", "year": "Y", "origin_league": "OL", "grouping_position": "P", "arrival_club_tier": "AC", "origin_club_tier": "OC", "age_grouping_2": "A", "transfer_price_group2": "T", "potential_fifa": "Pot", "overall_fifa": "Ovr", "new_height": "H", "appearances": "App"}, inplace=True) data = data[["A", "N", "Y", "P", "Pot", "Ovr", "App", "AL", "AC", "OL", "OC", "T"]] data.head() ###Output _____no_output_____ ###Markdown Using the functions in the PGMPY library to replicate the DAG from bnlearn ###Code bn_model = BayesianModel([('OL', 'OC'), ('AL', 'AC'), ('Ovr', 'Pot'), ('A', 'App'), ('OC', 'T'), ('AC', 'T'), ('N', 'T'), ('Y', 'T'), ('Ovr', 'T'), ('Pot', 'T'), ('P', 'Ovr'), ('P', 'Pot'), ('A', 'T'), ('A', 'Ovr'), ('A', 'Pot'), ('App', 'T'), ('P', 'T')]) nx.draw(bn_model, with_labels=True) plt.show() ###Output _____no_output_____ ###Markdown Fitting the DAG with the data using a Bayesian Estimator ###Code bn_model.fit(data, estimator=BayesianEstimator, prior_type="BDeu", equivalent_sample_size=10) # default equivalent_sample_size=5 ###Output _____no_output_____ ###Markdown The next step is to extract all the CPTs that the model fitting built, in order to transfer them to Pyro ###Code # Demo of how to extract CPD a = bn_model.get_cpds(node="Ovr") a.state_names a.get_evidence() a.variables a.values.T ###Output _____no_output_____ ###Markdown Pyro ###Code from statistics import mean import torch import numpy as np import pyro import pyro.distributions as dist from pyro.infer import Importance, EmpiricalMarginal import matplotlib.pyplot as plt import pandas as pd %matplotlib inline pyro.set_rng_seed(101) ###Output _____no_output_____ ###Markdown Defining the labels with the categories of all the variables ###Code # labels N_label = bn_model.get_cpds(node="N").state_names["N"] print(N_label) P_label = bn_model.get_cpds(node="P").state_names["P"] print(P_label) Age_label = bn_model.get_cpds(node="A").state_names["A"] print(Age_label) OC_label = bn_model.get_cpds(node="OC").state_names["OC"] print(OC_label) OL_label = bn_model.get_cpds(node="OL").state_names["OL"] print(OL_label) AC_label = bn_model.get_cpds(node="AC").state_names["AC"] print(AC_label) AL_label = bn_model.get_cpds(node="AL").state_names["AL"] print(AL_label) Ovr_label = bn_model.get_cpds(node="Ovr").state_names["Ovr"] print(Ovr_label) Pot_label = bn_model.get_cpds(node="Pot").state_names["Pot"] print(Pot_label) Y_label = bn_model.get_cpds(node="Y").state_names["Y"] print(Y_label) TP_label = bn_model.get_cpds(node="T").state_names["T"] print(TP_label) ###Output ['AF', 'AS', 'EU', 'N_A', 'OC', 'SA'] ['D', 'F', 'GK', 'M'] ['Above30', 'Under23', 'Under30'] ['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4'] ['1 Bundesliga', 'Ligue 1', 'Other', 'Premier League', 'Primera Division', 'Serie A'] ['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4'] ['1 Bundesliga', 'Ligue 1', 'Other', 'Premier League', 'Primera Division', 'Serie A'] ['65to74', '75to84', '85above', 'below65'] 
['65to74', '75to84', '85above', 'below65'] ['After2016', 'Before2016'] ['20Mto5M', '60Mto20M', 'Above60M']
###Markdown Transferring the CPTs learnt by fitting the model in pgmpy over to Pyro for modelling ###Code
Age_probs = torch.tensor(bn_model.get_cpds(node="A").values.T)
Position_probs = torch.tensor(bn_model.get_cpds(node="P").values.T)
Nationality_probs = torch.tensor(bn_model.get_cpds(node="N").values.T)
year_probs = torch.tensor(bn_model.get_cpds(node="Y").values.T)
arrival_league_probs = torch.tensor(bn_model.get_cpds(node="AL").values.T)
origin_league_probs = torch.tensor(bn_model.get_cpds(node="OL").values.T)
arrival_club_probs = torch.tensor(bn_model.get_cpds(node="AC").values.T)
origin_club_probs = torch.tensor(bn_model.get_cpds(node="OC").values.T)
overall_probs = torch.tensor(bn_model.get_cpds(node="Ovr").values.T)
potential_probs = torch.tensor(bn_model.get_cpds(node="Pot").values.T)
app_probs = torch.tensor(bn_model.get_cpds(node="App").values.T)
transfer_price_probs = torch.tensor(bn_model.get_cpds(node="T").values.T)
###Output _____no_output_____
###Markdown Defining the Pyro model that will be the base of all the experiments/interventions ###Code
def pyro_model():
    Age = pyro.sample("A", dist.Categorical(probs=Age_probs))
    Position = pyro.sample("P", dist.Categorical(probs=Position_probs))
    Nationality = pyro.sample("N", dist.Categorical(probs=Nationality_probs))
    Year = pyro.sample("Y", dist.Categorical(probs=year_probs))
    Arrival_league = pyro.sample("AL", dist.Categorical(probs=arrival_league_probs))
    Origin_league = pyro.sample('OL', dist.Categorical(probs=origin_league_probs))
    Arrival_club = pyro.sample('AC', dist.Categorical(probs=arrival_club_probs[Arrival_league]))
    Origin_club = pyro.sample('OC', dist.Categorical(probs=origin_club_probs[Origin_league]))
    Overall = pyro.sample('Ovr', dist.Categorical(probs=overall_probs[Position][Age]))
    Potential = pyro.sample('Pot', dist.Categorical(probs=potential_probs[Position][Overall][Age]))
    Appearances = pyro.sample('App', dist.Categorical(probs=app_probs[Age]))
    transfer_price = pyro.sample('TP', dist.Categorical(probs=transfer_price_probs[Year][Potential][Position][Overall][Origin_club][Nationality][Appearances][Arrival_club][Age]))
    return {'A': Age, 'P': Position, 'N': Nationality, 'Y': Year, 'AL': Arrival_league, 'OL': Origin_league, 'AC': Arrival_club, 'OC': Origin_club, 'Ovr': Overall, 'Pot': Potential, 'App': Appearances, 'TP': transfer_price}

print(pyro_model())
###Output {'A': tensor(2), 'P': tensor(3), 'N': tensor(2), 'Y': tensor(0), 'AL': tensor(3), 'OL': tensor(4), 'AC': tensor(1), 'OC': tensor(0), 'Ovr': tensor(1), 'Pot': tensor(2), 'App': tensor(0), 'TP': tensor(0)}
###Markdown Defining an importance-sampling helper that computes the posterior with importance sampling, draws samples via EmpiricalMarginal, and plots a histogram of the required variable ###Code
def importance_sampling(model, title, xlabel, ylabel, marginal_on="TP", label=TP_label):
    posterior = pyro.infer.Importance(model, num_samples=5000).run()
    marginal = EmpiricalMarginal(posterior, marginal_on)
    samples = [marginal().item() for _ in range(5000)]
    unique, counts = np.unique(samples, return_counts=True)

    plt.bar(unique, counts, align='center', alpha=0.5)
    plt.xticks(unique, label)
    plt.ylabel(ylabel)
    plt.xlabel(xlabel)
    for i in range(len(label)):
        plt.text(i, counts[i]+10, str(counts[i]))
    plt.title(title)
###Output _____no_output_____
###Markdown Experiment 1: Intervention on Nationality = SA and Position = F. The first experiment is to
intervene on all South American forward players. The intuition is that South American forwards tend to command a higher transfer fee than other forwards, and we want to see whether our model validates this intuition. ###Code
# Intervening on South American forwards
do_on_SA_F = pyro.do(pyro_model, data={'N': torch.tensor(5), 'P': torch.tensor(1)})
importance_sampling(model=do_on_SA_F,
                    title="P(TP | do(N = 'SA', P = 'F')) - Importance Sampling",
                    xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output _____no_output_____
###Markdown Experiment 2: Intervention on ArrivalLeague = Premier League and OriginLeague = Premier League. The second experiment is to intervene on the origin and arrival leagues both being the Premier League. The intuition here is that intra-league transfers within the Premier League extract a higher average transfer fee. ###Code
# transfers between English teams
do_on_PremierL = pyro.do(pyro_model, data={'AL': torch.tensor(3), 'OL': torch.tensor(3)})
importance_sampling(model=do_on_PremierL,
                    title="P(TP | do(AL = 'Premier League', OL = 'Premier League')) - Importance Sampling",
                    xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output _____no_output_____
###Markdown Experiment 3: Intervention on ArrivalClub = Tier1 and OriginClub = Tier1. The third experiment is to intervene on the arrival and origin clubs both being Tier 1. The intuition here is that transfers between Tier 1 clubs extract a higher average transfer fee. ###Code
# intervening on transfers between tier 1 clubs
do_on_Tier1 = pyro.do(pyro_model, data={'AC': torch.tensor(0), 'OC': torch.tensor(0)})
importance_sampling(model=do_on_Tier1,
                    title="P(TP | do(AC = 'Tier 1', OC = 'Tier 1')) - Importance Sampling",
                    xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output _____no_output_____
###Markdown Experiment 4: Intervention on Age = Under23 and Potential = 85above. The fourth experiment explores the intervention where the player is under 23 years old and his potential rating for the year of the transfer is 85 or above. The intuition here is that a young player with a very high potential rating should extract a higher average transfer fee. ###Code
# intervening on young, high-potential stars to test intuition about our transfer strategy
do_on_young_stars = pyro.do(pyro_model, data={'A': torch.tensor(1), 'Pot': torch.tensor(2)})
importance_sampling(model=do_on_young_stars,
                    title="P(TP | do(A = 'Under23', Pot = '85above')) - Importance Sampling",
                    xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output _____no_output_____
###Markdown Experiment 5: Intervening on Year = Before2016 and then on Year = After2016. This experiment tests something we want our model to capture: as mentioned earlier, we believe the inflation in transfer fees for high-potential players set in around 2016.
So we do a before-and-after intervention to see whether our model captures this change. ###Code
# intervening on year to see the inflated probabilities for the price brackets
# intervening on players transferred before 2016
do_before2016 = pyro.do(pyro_model, data={'Y': torch.tensor(1)})
do_before2016_conditioned_model = pyro.condition(do_before2016, data={'Pot': torch.tensor(2)})
importance_sampling(model=do_before2016_conditioned_model,
                    title="P(TP | do(Y = 'Before2016'), Pot = '85above') - Importance Sampling",
                    xlabel='Transfer Price', ylabel='count', marginal_on='TP')
# intervening on players transferred after 2016
do_after2016 = pyro.do(pyro_model, data={'Y': torch.tensor(0)})
do_after2016_conditioned_model = pyro.condition(do_after2016, data={'Pot': torch.tensor(2)})
importance_sampling(model=do_after2016_conditioned_model,
                    title="P(TP | do(Y = 'After2016'), Pot = '85above') - Importance Sampling",
                    xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output _____no_output_____
###Markdown Finding the causal effect of all variables on a transfer price above 20M ###Code
def causal_effect(model1, model2, marginal_on, marginal_val, n_samples=5000):
    posterior1 = pyro.infer.Importance(model1, num_samples=n_samples).run()
    marginal1 = EmpiricalMarginal(posterior1, marginal_on)
    samples1 = [marginal1().item() for _ in range(n_samples)]
    unique1, counts1 = np.unique(samples1, return_counts=True)

    posterior2 = pyro.infer.Importance(model2, num_samples=n_samples).run()
    marginal2 = EmpiricalMarginal(posterior2, marginal_on)
    samples2 = [marginal2().item() for _ in range(n_samples)]
    unique2, counts2 = np.unique(samples2, return_counts=True)

    return counts1[marginal_val] / n_samples - counts2[marginal_val] / n_samples

# Causal effect of year on a transfer price above 60M
do_before2016 = pyro.do(pyro_model, data={'Y': torch.tensor(1)})
do_after2016 = pyro.do(pyro_model, data={'Y': torch.tensor(0)})
# P(TP = Above60M | do(Y = Before2016)) - P(TP = Above60M | do(Y = After2016))
causal_effect(model1=do_before2016, model2=do_after2016, marginal_on='TP', marginal_val=2)

# Causal effect of age on a transfer price above 60M
# Age_label = ['Above30', 'Under23', 'Under30']
do_above30 = pyro.do(pyro_model, data={'A': torch.tensor(0)})
do_under30 = pyro.do(pyro_model, data={'A': torch.tensor(2)})
# P(TP = Above60M | do(A = Above30)) - P(TP = Above60M | do(A = Under30))
causal_effect(model1=do_above30, model2=do_under30, marginal_on='TP', marginal_val=2)

# Causal effect of potential rating on a transfer price between 20M and 60M
# Pot_label = ['65to74', '75to84', '85above', 'below65']
do_above85_pot = pyro.do(pyro_model, data={'Pot': torch.tensor(2)})
# index 3 corresponds to 'below65' in Pot_label
do_below65_pot = pyro.do(pyro_model, data={'Pot': torch.tensor(3)})
# P(TP = 60Mto20M | do(Pot = 85above)) - P(TP = 60Mto20M | do(Pot = below65))
causal_effect(model1=do_above85_pot, model2=do_below65_pot, marginal_on='TP', marginal_val=1)

# Causal effect of overall rating on a transfer price between 20M and 60M
# Ovr_label = ['65to74', '75to84', '85above', 'below65']
do_above85_ovr = pyro.do(pyro_model, data={'Ovr': torch.tensor(2)})
do_below65_ovr = pyro.do(pyro_model, data={'Ovr': torch.tensor(3)})
# P(TP = 60Mto20M | do(Ovr = 85above)) - P(TP = 60Mto20M | do(Ovr = below65))
causal_effect(model1=do_above85_ovr, model2=do_below65_ovr, marginal_on='TP', marginal_val=1)

# Causal effect of the arrival club on a transfer price between 20M and 60M
# AC_label = ['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4']
do_tier1 = pyro.do(pyro_model, data={'AC': torch.tensor(0)})
do_tier3 = pyro.do(pyro_model, data={'AC': torch.tensor(2)})
# P(TP = 60Mto20M | do(AC = Tier_1)) - P(TP = 60Mto20M | do(AC = Tier_3))
causal_effect(model1=do_tier1, model2=do_tier3, marginal_on='TP', marginal_val=1)

# Causal effect of the origin club on a transfer price between 20M and 60M
# OC_label = ['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4']
oc_do_tier1 = pyro.do(pyro_model, data={'OC': torch.tensor(0)})
oc_do_tier3 = pyro.do(pyro_model, data={'OC': torch.tensor(2)})
# P(TP = 60Mto20M | do(OC = Tier_1)) - P(TP = 60Mto20M | do(OC = Tier_3))
causal_effect(model1=oc_do_tier1, model2=oc_do_tier3, marginal_on='TP', marginal_val=1)

# Counterfactual query on Potential changing from 'below65' to '85above'
conditioned_model_for_cf = pyro.condition(pyro_model, data={'Pot': torch.tensor(3)})
cf_posterior = Importance(conditioned_model_for_cf, num_samples=1000).run()
marginal_cf = EmpiricalMarginal(cf_posterior, "TP")
samples_cf = [marginal_cf().item() for _ in range(1000)]
unique_cf, counts_cf = np.unique(samples_cf, return_counts=True)

tp_samples = []
for _ in range(1000):
    trace_handler_1000 = pyro.poutine.trace(conditioned_model_for_cf)
    trace = trace_handler_1000.get_trace()
    N = trace.nodes["N"]['value']
    A = trace.nodes["A"]['value']
    P = trace.nodes["P"]['value']
    Y = trace.nodes["Y"]['value']
    Ovr = trace.nodes["Ovr"]['value']
    AC = trace.nodes["AC"]['value']
    OC = trace.nodes["OC"]['value']
    AL = trace.nodes["AL"]['value']
    OL = trace.nodes["OL"]['value']
    App = trace.nodes["App"]['value']
    intervention_model_q1_1000 = pyro.do(pyro_model, data={'Pot': torch.tensor(2)})
    counterfact_model_q1_1000 = pyro.condition(intervention_model_q1_1000, data={'N': N, 'A': A, 'P': P, "Y": Y, "Ovr": Ovr, "AC": AC, "OC": OC, "AL": AL, "OL": OL, "App": App})
    tp_samples.append(counterfact_model_q1_1000()['TP'])
unique_tp, counts_tp = np.unique(tp_samples, return_counts=True)
# P(TP = 60Mto20M | Pot = below65) = counts_cf[1] / 1000
# P(TP = 60Mto20M | do(Pot = 85above)) = counts_tp[1] / 1000

# Query: Are teams paying for nationality 'X' because they think those players are great, or are they actually better?
# Compare them to performance conditional on Nationality = {SA, EU, AF, AS}
# N_label = ['AF', 'AS', 'EU', 'N_A', 'OC', 'SA']
# TP_label = ['20Mto5M', '60Mto20M', 'Above60M']
cond_on_N = pyro.condition(pyro_model, data={'TP': torch.tensor(2)})
importance_sampling(model=cond_on_N,
                    title="P(N | TP = 'Above60M') - Importance Sampling",
                    xlabel='Nationality', ylabel='count', marginal_on='N', label=N_label)
# We determine X = EU
cond_on_SA = pyro.condition(pyro_model, data={'N': torch.tensor(5)})
importance_sampling(model=cond_on_SA,
                    title="P(Ovr | N = 'SA') - Importance Sampling",
                    xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
136/5000  # share of good ('85above') players among SA samples
cond_on_EU = pyro.condition(pyro_model, data={'N': torch.tensor(2)})
importance_sampling(model=cond_on_EU,
                    title="P(Ovr | N = 'EU') - Importance Sampling",
                    xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
176/5000  # share of good ('85above') players among EU samples
cond_on_AF = pyro.condition(pyro_model, data={'N': torch.tensor(0)})
importance_sampling(model=cond_on_AF,
                    title="P(Ovr | N = 'AF') - Importance Sampling",
                    xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
100/5000  # share of good ('85above') players among AF samples
cond_on_AS = pyro.condition(pyro_model, data={'N': torch.tensor(1)})
importance_sampling(model=cond_on_AS,
                    title="P(Ovr | N = 'AS') - Importance Sampling",
                    xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
129/5000  # share of good ('85above') players among AS samples
###Output _____no_output_____
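###Markdown As a minimal sketch, the conditional and counterfactual estimates from the counterfactual query above can be printed side by side. This assumes that index 1 of the counts arrays corresponds to the '60Mto20M' bracket, as stated in the comments above: ###Code
# Minimal sketch: compare the estimates from the counterfactual query above.
# Assumption: index 1 of counts_cf / counts_tp corresponds to the '60Mto20M' bracket.
p_cond = counts_cf[1] / 1000  # P(TP = 60Mto20M | Pot = below65)
p_cf = counts_tp[1] / 1000    # P(TP = 60Mto20M) after setting Pot = 85above, all else held fixed
print("P(TP = 60Mto20M | Pot = below65):", p_cond)
print("P(TP = 60Mto20M | do(Pot = 85above)):", p_cf)
print("counterfactual lift:", p_cf - p_cond)
###Output _____no_output_____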
natural-language-processing/word-embedding/word2vec.ipynb
###Markdown WORD2VEC The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text, and each learned vector carries some semantic meaning. It is built with a shallow two-layer neural network that reconstructs the linguistic context of words, producing an embedding for each word. It comes in either of two model architectures: 1. CBOW (Continuous Bag of Words) - the model predicts the current word from the surrounding words (word order is ignored; faster; handles distant context reasonably well). 2. Skip-gram - the model predicts the surrounding window from the current word (word order matters; slower; closer context words are weighted more heavily). Hyperparameters involved: 1. Training algorithm - hierarchical softmax and/or negative sampling. Hierarchical softmax works better for infrequent words, while negative sampling works better for frequent words and with low-dimensional vectors. 2. Subsampling - high-frequency words often provide little information, so words with a frequency above a certain threshold may be subsampled to increase training speed. 3. Dimensionality - beyond a certain embedding size there is little gain; usually the size is between 100 and 1000. 4. Context window - the number of surrounding words; typically 10 for skip-gram and 5 for CBOW. The exercise is to train our own word2vec model and to play with a pretrained model. ###Code import nltk from gensim.models import Word2Vec from nltk.corpus import stopwords import re paragraph = """WORD2VEC The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text. Each vector has some semantic meaning to it. Created with shallow 2 layered NN that reconstruct the context of words. Helps in developing context for each word using embeddings. Developed in either of the two model archs: CBOW - Continuous Bag of Words - model predicts current word from surrounding words. ( No order of context, faster, distant also better) Skip Gram - model predicts surrounding windows from current word. (context is order, slower, closer ones more important) Hyper Parameters involved: Training algorithm - hierarchical softmax and/or negative sampling. hierarchical softmax works better for infrequent words while negative sampling works better for frequent words and better with low dimensional vectors. Sub Sampling - High-frequency words often provide little information. Words with a frequency above a certain threshold may be subsampled to increase training speed Dimensionality - After a point of increased embedding size, no point. Usually 100 to 1000 is the size. 
Context Window - number of Surrounding words - 10 for skip gram, 5 for CBOW""" #preprocess the data using regex sentences = nltk.sent_tokenize(paragraph) processed_sentences = [] for sentence in sentences: print("\nSentence before processing : ", sentence) sentence = re.sub('[^a-zA-Z0-9]', ' ',sentence) sentence = re.sub('\s+', ' ', sentence) sentence = sentence.lower() words = nltk.word_tokenize(sentence) processed_sentence = [word for word in words if word not in stopwords.words('english')] processed_sentences.append(processed_sentence) print("\nSentence after processing : ", processed_sentence) model = Word2Vec(processed_sentences, min_count = 1) vocab = model.wv.vocab for key, value in vocab.items(): print(key, " : ", value) vector = model.wv['skip'] similar = model.wv.most_similar('skip') similar # pretrained models from the gensim repository (listing all available models) import gensim.downloader print(list(gensim.downloader.info()['models'].keys())) glove_wiki = gensim.downloader.load('glove-wiki-gigaword-300') glove_wiki.most_similar('wikipedia') ###Output _____no_output_____
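###Markdown As a small illustration (our addition, assuming gensim < 4.0, as implied by `model.wv.vocab` above), the CBOW/skip-gram choice and the hyperparameters discussed at the top can be set explicitly via keyword arguments: ###Code
# sg=0 -> CBOW, sg=1 -> skip-gram; window, negative, hs and sample correspond to
# the context window, negative sampling, hierarchical softmax and subsampling knobs.
cbow_model = Word2Vec(processed_sentences, sg=0, size=100, window=5,
                      negative=5, hs=0, sample=1e-3, min_count=1)
skipgram_model = Word2Vec(processed_sentences, sg=1, size=100, window=10,
                          negative=5, hs=0, sample=1e-3, min_count=1)
print(cbow_model.wv.most_similar('skip'))
print(skipgram_model.wv.most_similar('skip'))
###Output _____no_output_____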
Fairness/error-fairness.ipynb
###Markdown Fair share of errors Consider three variables of interest: - $S$: a sensitive variable - $\hat{Y}$: a prediction or decision - $Y$: the ground truth (often unobserved) For example, $Y$ could be the ability to pay for a mortgage, $\hat{Y}$ is a decision whether to offer a person a home loan, and $S$ is the person's race. ###Code import pandas as pd import numpy as np from itertools import product from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection import matplotlib.pyplot as plt %matplotlib notebook # Illustrating the use of itertools product for ix,value in enumerate(product(range(2), repeat=3)): print(ix, value) type(value[0]) ###Output 0 (0, 0, 0) 1 (0, 0, 1) 2 (0, 1, 0) 3 (0, 1, 1) 4 (1, 0, 0) 5 (1, 0, 1) 6 (1, 1, 0) 7 (1, 1, 1) ###Markdown Explanation of the conditions The meanings of 0 and 1 for $Y$ and $\hat{Y}$ are the standard ones (negative and positive). We add the interpretation that a value of $S=0$ indicates a minority or disadvantaged part of the community, and $S=1$ otherwise. If $Y$ is the same as $\hat{Y}$, then there is no bias, as the predictions are correct. - If they are both zero, then this is a **true negative**, and we label it TN0 or TN1 based on the sensitive variable. - If they are both one, then this is a **true positive**, and we label it TP0 or TP1 based on the sensitive variable. The interesting cases are the **false positive** and **false negative** cases. When the prediction is one but the ground truth is zero, this is a **false positive** (predicting positive, but falsely): - If the sensitive variable is zero, this is **A**ffirmative action. The minority group receives a positive decision even though, according to the ground truth, it should not. - If the sensitive variable is one, this is **C**ronyism. The majority group benefits from a positive decision, even though it is not warranted. When the prediction is zero but the ground truth is one, this is a **false negative** (predicting negative, but falsely): - If the sensitive variable is zero, this is **D**iscrimination. The minority group is negatively affected, since it should receive a positive decision but does not. - If the sensitive variable is one, this is **B**acklash or Byproduct. The majority group is negatively affected, as a side effect of decision making based on aggregate information. ###Code def naming(y, yhat, s): if y == 0 and yhat == 0 and s == 0: return (y, yhat, s, 'TN0') if y == 0 and yhat == 0 and s == 1: return (y, yhat, s, 'TN1') if y == 0 and yhat == 1 and s == 0: return (y, yhat, s, 'A') if y == 0 and yhat == 1 and s == 1: return (y, yhat, s, 'C') if y == 1 and yhat == 0 and s == 0: return (y, yhat, s, 'D') if y == 1 and yhat == 0 and s == 1: return (y, yhat, s, 'B') if y == 1 and yhat == 1 and s == 0: return (y, yhat, s, 'TP0') if y == 1 and yhat == 1 and s == 1: return (y, yhat, s, 'TP1') def name2position(variables): ix_y = np.where(np.array(variables) == 'Y')[0][0] ix_yhat = np.where(np.array(variables) == 'Yhat')[0][0] ix_s = np.where(np.array(variables) == 'S')[0][0] return (ix_y, ix_yhat, ix_s) #variables = ['S', 'Yhat', 'Y', 'condition'] variables = ['Y', 'Yhat', 'S', 'condition'] ix_y, ix_yhat, ix_s = name2position(variables) all_possibilities = pd.DataFrame(index=range(8), columns=variables, dtype='int') for ix, value in enumerate(product([0,1], repeat=len(variables)-1)): all_possibilities.iloc[ix] = naming(value[ix_y], value[ix_yhat], value[ix_s]) # Bug in pandas, creates a dataframe of floats. Workaround. 
for col in all_possibilities.columns[:-1]: all_possibilities[col] = pd.to_numeric(all_possibilities[col], downcast='integer') all_possibilities def plot_cube(ax, cube_definition): """ From https://stackoverflow.com/questions/44881885/python-draw-3d-cube """ cube_definition_array = [ np.array(list(item)) for item in cube_definition ] points = [] points += cube_definition_array vectors = [ cube_definition_array[1] - cube_definition_array[0], cube_definition_array[2] - cube_definition_array[0], cube_definition_array[3] - cube_definition_array[0] ] points += [cube_definition_array[0] + vectors[0] + vectors[1]] points += [cube_definition_array[0] + vectors[0] + vectors[2]] points += [cube_definition_array[0] + vectors[1] + vectors[2]] points += [cube_definition_array[0] + vectors[0] + vectors[1] + vectors[2]] points = np.array(points) edges = [ [points[0], points[3], points[5], points[1]], [points[1], points[5], points[7], points[4]], [points[4], points[2], points[6], points[7]], [points[2], points[6], points[3], points[0]], [points[0], points[2], points[4], points[1]], [points[3], points[6], points[7], points[5]] ] faces = Poly3DCollection(edges, linewidths=1, edgecolors='k') faces.set_facecolor((0,0,1,0.1)) ax.add_collection3d(faces) # Plot the points themselves to force the scaling of the axes ax.scatter(points[:,0], points[:,1], points[:,2], s=50) ax.set_aspect('equal') ax.set_xlabel(variables[ix_s]) ax.set_ylabel(variables[ix_yhat]) ax.set_zlabel(variables[ix_y]) ax.grid(False) return cube_definition = [ (0,0,0), (0,1,0), (1,0,0), (0,0,1) ] fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(111, projection='3d') plot_cube(ax, cube_definition) for ix, row in all_possibilities.iterrows(): ax.text(row[ix_s], row[ix_yhat], row[ix_y], row[3], size=30) ###Output _____no_output_____ ###Markdown Studying the trade offFocusing on the plane traced out by A, C, B ,D, we get a two dimensional plot which provides insight into the trade off between 1. false positives and false negatives2. Favouritism, how much the majority group benefits ###Code fig = plt.figure(figsize=(6,6)) ax = fig.add_subplot(111) ax.plot([0,0,1,1], [0,1,0,1], 'bo') ax.set_xlabel('FN -- FP') ax.set_ylabel('favouritism') ax.text(0, 0, naming(1, 0, 0)[3], size=30) ax.text(0, 1, naming(1, 0, 1)[3], size=30) ax.text(1, 0, naming(0, 1, 0)[3], size=30) ax.text(1, 1, naming(0, 1, 1)[3], size=30) ###Output _____no_output_____
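###Markdown The cube and plane above are purely combinatorial. As a minimal numeric sketch (our addition, using synthetic data), the four error cells translate directly into group-wise error rates: A and C are the false positive rates of the $S=0$ and $S=1$ groups, while D and B are their false negative rates. ###Code
# Synthetic (y, yhat, s) triples; in a real audit these would be observed data.
rng = np.random.RandomState(0)
y = rng.randint(0, 2, 1000)     # ground truth
yhat = rng.randint(0, 2, 1000)  # predictions / decisions
s = rng.randint(0, 2, 1000)     # sensitive variable

for group in (0, 1):
    m = s == group
    fpr = np.mean(yhat[m][y[m] == 0])      # rate of A (S=0) or C (S=1)
    fnr = np.mean(1 - yhat[m][y[m] == 1])  # rate of D (S=0) or B (S=1)
    print('S=%d: FPR=%.3f FNR=%.3f' % (group, fpr, fnr))
###Output _____no_output_____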
assignment1-UMJCS-master/Homework1_partB(coding)/softmax.ipynb
###Markdown Softmax exercise *Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.* This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights ###Code import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt from __future__ import print_function %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500): """ Load the CIFAR-10 dataset from disk and perform preprocessing to prepare it for the linear classifier. These are the same steps as we used for the SVM, but condensed to a single function. """ # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # subsample the data mask = list(range(num_training, num_training + num_validation)) X_val = X_train[mask] y_val = y_train[mask] mask = list(range(num_training)) X_train = X_train[mask] y_train = y_train[mask] mask = list(range(num_test)) X_test = X_test[mask] y_test = y_test[mask] mask = np.random.choice(num_training, num_dev, replace=False) X_dev = X_train[mask] y_dev = y_train[mask] # Preprocessing: reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_val = np.reshape(X_val, (X_val.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) X_dev = np.reshape(X_dev, (X_dev.shape[0], -1)) # Normalize the data: subtract the mean image mean_image = np.mean(X_train, axis = 0) X_train -= mean_image X_val -= mean_image X_test -= mean_image X_dev -= mean_image # add bias dimension and transform into columns X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))]) X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))]) X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))]) X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))]) return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev # Invoke the above function to get our data. 
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data() print('Train data shape: ', X_train.shape) print('Train labels shape: ', y_train.shape) print('Validation data shape: ', X_val.shape) print('Validation labels shape: ', y_val.shape) print('Test data shape: ', X_test.shape) print('Test labels shape: ', y_test.shape) print('dev data shape: ', X_dev.shape) print('dev labels shape: ', y_dev.shape) ###Output Train data shape: (49000, 3073) Train labels shape: (49000,) Validation data shape: (1000, 3073) Validation labels shape: (1000,) Test data shape: (1000, 3073) Test labels shape: (1000,) dev data shape: (500, 3073) dev labels shape: (500,) ###Markdown Softmax Classifier Your code for this section will all be written inside **cs231n/classifiers/softmax.py**. ###Code # First implement the naive softmax loss function with nested loops. # Open the file cs231n/classifiers/softmax.py and implement the # softmax_loss_naive function. from cs231n.classifiers.softmax import softmax_loss_naive import time # Generate a random softmax weight matrix and use it to compute the loss. W = np.random.randn(3073, 10) * 0.0001 loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0) # As a rough sanity check, our loss should be something close to -log(0.1). print('loss: %f' % loss) print('sanity check: %f' % (-np.log(0.1))) ###Output loss: 1205.640317 sanity check: 2.302585 ###Markdown Inline Question 1: Why do we expect our loss to be close to -log(0.1)? Explain briefly.* **Your answer:** *Because the weights are initialized to small random values, the scores are all close to zero and the softmax assigns roughly uniform probability to each of the 10 classes. The probability of the correct class is therefore about 0.1, so the cross-entropy loss is about -log(0.1).* ###Code # Complete the implementation of softmax_loss_naive and implement a (naive) # version of the gradient that uses nested loops. loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0) # As we did for the SVM, use numeric gradient checking as a debugging tool. # The numeric gradient should be close to the analytic gradient. from cs231n.gradient_check import grad_check_sparse f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0] grad_numerical = grad_check_sparse(f, W, grad, 10) # similar to SVM case, do another gradient check with regularization loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1) f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0] grad_numerical = grad_check_sparse(f, W, grad, 10) # Now that we have a naive implementation of the softmax loss function and its gradient, # implement a vectorized version in softmax_loss_vectorized. # The two versions should compute the same results, but the vectorized version should be # much faster. tic = time.time() loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005) toc = time.time() print('naive loss: %e computed in %fs' % (loss_naive, toc - tic)) from cs231n.classifiers.softmax import softmax_loss_vectorized tic = time.time() loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005) toc = time.time() print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic)) # As we did for the SVM, we use the Frobenius norm to compare the two versions # of the gradient. grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro') print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized)) print('Gradient difference: %f' % grad_difference) # Use the validation set to tune hyperparameters (regularization strength and # learning rate). 
You should experiment with different ranges for the learning # rates and regularization strengths; if you are careful you should be able to # get a classification accuracy of over 0.35 on the validation set. from cs231n.classifiers import Softmax results = {} best_val = -1 best_softmax = None learning_rates = [1e-7, 5e-7] regularization_strengths = [2.5e4, 5e4] ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained softmax classifer in best_softmax. # ################################################################################ num_iters = 3000 for lr in learning_rates: for reg in regularization_strengths: softmax = Softmax() set_tuple = (lr,reg) softmax.train(X_train, y_train, lr, reg, num_iters) train_pred = softmax.predict(X_train) corr = np.sum(y_train == train_pred) train_acc = corr / len(y_train) val_pred = softmax.predict(X_val) corr = np.sum(y_val == val_pred) val_acc = corr / len(y_val) if val_acc >= best_val: best_val = val_acc best_softmax = softmax results[(lr, reg)] = (train_acc, val_acc) ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print('lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy)) print('best validation accuracy achieved during cross-validation: %f' % best_val) # evaluate on test set # Evaluate the best softmax on test set y_test_pred = best_softmax.predict(X_test) test_accuracy = np.mean(y_test == y_test_pred) print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, )) # Visualize the learned weights for each class w = best_softmax.W[:-1,:] # strip out the bias w = w.reshape(32, 32, 3, 10) w_min, w_max = np.min(w), np.max(w) classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for i in range(10): plt.subplot(2, 5, i + 1) # Rescale the weights to be between 0 and 255 wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min) plt.imshow(wimg.astype('uint8')) plt.axis('off') plt.title(classes[i]) ###Output _____no_output_____
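###Markdown For reference, a possible fully-vectorized implementation of the softmax loss and gradient looks like the sketch below (our addition; the course's `cs231n/classifiers/softmax.py` may use a slightly different regularization convention): ###Code
def softmax_loss_vectorized_sketch(W, X, y, reg):
    """Softmax loss and gradient, vectorized. W: (D, C), X: (N, D), y: (N,)."""
    N = X.shape[0]
    scores = X.dot(W)                            # (N, C) class scores
    scores -= scores.max(axis=1, keepdims=True)  # shift for numeric stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)    # softmax probabilities
    loss = -np.mean(np.log(probs[np.arange(N), y])) + 0.5 * reg * np.sum(W * W)
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1                # dL/dscores = p - 1{correct class}
    dW = X.T.dot(dscores) / N + reg * W
    return loss, dW

loss_sketch, grad_sketch = softmax_loss_vectorized_sketch(W, X_dev, y_dev, 0.000005)
print('sketch loss: %e' % loss_sketch)
###Output _____no_output_____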
scripts/Analysis.ipynb
###Markdown Who Is J? Analysing JOTB diversity network One of the main goals of the ‘Yes We Tech’ community is contributing to create an inclusive space where we can celebrate diversity, provide visibility to women-in-tech, and ensure that everybody has an equal chance to learn, share and enjoy technology-related disciplines. As co-organisers of the event, we have concentrated our efforts on getting more women speakers on board, under the assumption that a more diverse panel would also enrich the conversation around technology. Certainly, we have doubled the number of women giving talks this year, but is this diversity enough? How can we know that we have succeeded in our goal? And, more importantly, what can we learn to create a more diverse event in future editions? The work that we are sharing here talks about two things: data and people. Both data and people should help us to find out some answers and understand the reasons why. Let's start with a story about data. Data is pretty simple compared with people. Just take a look at the numbers, the small ones, the ones that better describe what happened in the 2016 and 2017 J On The Beach editions. ###Code import pandas as pd import numpy as np import scipy as sp import pygal import operator from iplotter import GCPlotter plotter = GCPlotter() ###Output _____no_output_____ ###Markdown Small data analysis Small data says that last year, our 'J' engaged up to 48 speakers and 299 attendees into this big data thing. I'm not considering here any member of the organisation. ###Code data2016 = pd.read_csv('../input/small_data_2016.csv') data2016['Women Rate'] = pd.Series(data2016['Women']*100/data2016['Total']) data2016['Men Rate'] = pd.Series(data2016['Men']*100/data2016['Total']) data2016 ###Output _____no_output_____ ###Markdown This year there are 40 speakers, a few less than last year, while attendance has reached 368 people (compare the increase in attendees, 368 vs 299, computed below). ###Code data2017 = pd.read_csv('../input/small_data_2017.csv') data2017['Women Rate'] = pd.Series(data2017['Women']*100/data2017['Total']) data2017['Men Rate'] = pd.Series(data2017['Men']*100/data2017['Total']) data2017 increase = 100 - 299*100.00/368 increase ###Output _____no_output_____ ###Markdown It is also noticeable that big data is bigger than ever, and this year we have included workshops and a hackathon. The more the better, right? Let's continue, because there are more numbers behind those ones. Numbers that will give us some signs of diversity. Diversity When it comes to speakers, this year **27.5%** of the people speaking at J are women, compared with a rough **10.4%** last year. ###Code data = [ ['Tribe', 'Women', 'Men', {"role": 'annotation'}], ['2016', data2016['Women Rate'][0], data2016['Men Rate'][0],''], ['2017', data2017['Women Rate'][0], data2017['Men Rate'][0],''], ] options = { "title": 'Speakers at JOTB', "width": 600, "height": 400, "legend": {"position": 'top', "maxLines": 3}, "bar": {"groupWidth": '50%'}, "isStacked": "true", "colors": ['#984e9e', '#ed1c40'], } plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options) ###Output _____no_output_____ ###Markdown However, and this is the worrying thing, the participation of women as attendees has slightly dropped from a not too ambitious **13%** to a disappointing **9.8%**. So we have roughly 19% more attendees but zero impact on reaching a wider variety of people. 
###Code data = [ ['Tribe', 'Women', 'Men', {"role": 'annotation'}], ['2016', data2016['Women Rate'][1], data2016['Men Rate'][1],''], ['2017', data2017['Women Rate'][1], data2017['Men Rate'][1],''], ] options = { "title": 'Attendees at JOTB', "width": 600, "height": 400, "legend": {"position": 'top', "maxLines": 3}, "bar": {"groupWidth": '55%'}, "isStacked": "true", "colors": ['#984e9e', '#ed1c40'], } plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options) ###Output _____no_output_____ ###Markdown Why did this happen? We don’t really know. But we continued looking at the numbers and realised that **30** of the **45** companies that enrolled two or more people didn't include any women on their lists. These companies account for **31%** of the attendees. Correlate team size with the percentage of women to test whether smaller teams are less likely to include women on their lists: ###Code companies_team = data2017['Total'][3] + data2017['Total'][4] mass_represented = pd.Series(data2017['Total'][4]*100/companies_team) women_represented = pd.Series(100 - mass_represented) mass_represented ###Output _____no_output_____ ###Markdown For us this is not a good sign. Despite the fact that our ability to summon has increased at our monthly meetups (the ones that attempt to create this culture of equality in Málaga), the engagement at other events doesn’t show a big impact. Again, I'm not blaming companies here, because if we try to identify the participation rate of women who are not part of a team, the representation also decreased by almost **50%**. ###Code data = [ ['Tribe', 'Women', 'Men', {"role": 'annotation'}], [data2016['Tribe'][2], data2016['Women Rate'][2], data2016['Men Rate'][2],''], [data2016['Tribe'][3], data2016['Women Rate'][3], data2016['Men Rate'][3],''], [data2016['Tribe'][5], data2016['Women Rate'][5], data2016['Men Rate'][5],''], ] options = { "title": '2016 JOTB Edition', "width": 600, "height": 400, "legend": {"position": 'top', "maxLines": 3}, "bar": {"groupWidth": '55%'}, "isStacked": "true", "colors": ['#984e9e', '#ed1c40'], } plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options) data = [ ['Tribe', 'Women', 'Men', {"role": 'annotation'}], [data2017['Tribe'][2], data2017['Women Rate'][2], data2017['Men Rate'][2],''], [data2017['Tribe'][3], data2017['Women Rate'][3], data2017['Men Rate'][3],''], [data2017['Tribe'][5], data2017['Women Rate'][5], data2017['Men Rate'][5],''], ] options = { "title": '2017 JOTB Edition', "width": 600, "height": 400, "legend": {"position": 'top', "maxLines": 3}, "bar": {"groupWidth": '55%'}, "isStacked": "true", "colors": ['#984e9e', '#ed1c40'], } plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options) ###Output _____no_output_____ ###Markdown Before blaming anyone or falling too quickly into self-indulgence, there are still more data to play with. Note aside: the following is nothing but an experiment; nothing is categorical or has been made with the intention of offending anybody. Like our t-shirt label says: no programmers have been injured in the creation of the following data game. Social network analysis The next story talks about people. The people around J, the ones who follow, are followed by, interact with, and create the chances of a more diverse and interesting conference. It is also a story about the people who organise this conference. 
Because when we started to plan a conference like this, we did nothing but think about what could be interesting for the people who come. To do that, we used the prior knowledge we have about cool people who do amazing things with data and JVM technologies. And this means looking into our own networks and following the suggestions of people we trust. So, if we assume that we are biased by the people around us, we thought it was a good idea to first understand what the network of people around J looks like, to see what chances we have to bring in someone different and unusual who can add value to the conference. For the moment, since this is an experiment that wants to trigger your reaction, we will look at J's Twitter account. Indeed, a real-world network would have far more numbers and people to look at, but a digital social network is still about human interactions, conversations and knowledge sharing. For this experiment we've used the `sexmachine` python library https://pypi.python.org/pypi/SexMachine/ and the 'Twitter Gender Distribution' project published on GitHub https://github.com/ajdavis/twitter-gender-distribution to find out the gender of a specific twitter account. ###Code run index.py jotb2018 ###Output _____no_output_____ ###Markdown From the small **50%** of J's friends that could be identified with a gender, the women/men distribution is **20/80**. Friends are the ones who both follow and are followed by J. ###Code # Read the file and take some important information whoisj = pd.read_json('../out/jotb2018.json', orient = 'columns') people = pd.read_json(whoisj['jotb2018'].to_json()) following_total = whoisj['jotb2018']['friends_count'] followers_total = whoisj['jotb2018']['followers_count'] followers = pd.read_json(people['followers_list'].to_json(), orient = 'index') following = pd.read_json(people['friends_list'].to_json(), orient = 'index') whoisj ###Output _____no_output_____ ###Markdown J follows... ###Code # J follows... following_total ###Output _____no_output_____ ###Markdown J is followed by... ###Code # J is followed by... 
followers_total ###Output _____no_output_____ ###Markdown Gender distribution ###Code followers['gender'].value_counts() following['gender'].value_counts() followers_dist = followers['gender'].value_counts() genders = followers['gender'].value_counts().keys() followers_map = pygal.Pie(height=400) followers_map.title = 'Followers Gender Map' for i in genders: followers_map.add(i,followers_dist[i]*100.00/followers_total) followers_map.render_in_browser() following_dist = following['gender'].value_counts() genders = following['gender'].value_counts().keys() following_map = pygal.Pie(height=400) following_map.title = 'Following Gender Map' for i in genders: following_map.add(i,following_dist[i]*100.00/following_total) following_map.render_in_browser() ###Output file:///tmp/tmpdyrMnq.html ###Markdown Language distribution ###Code lang_counts = followers['lang'].value_counts() languages = followers['lang'].value_counts().keys() followers_dist = followers['gender'].value_counts() lang_followers_map = pygal.Treemap(height=400) lang_followers_map.title = 'Followers Language Map' for i in languages: lang_followers_map.add(i,lang_counts[i]*100.00/followers_total) lang_followers_map.render_in_browser() lang_counts = following['lang'].value_counts() languages = following['lang'].value_counts().keys() following_dist = following['gender'].value_counts() lang_following_map = pygal.Treemap(height=400) lang_following_map.title = 'Following Language Map' for i in languages: lang_following_map.add(i,lang_counts[i]*100.00/following_total) lang_following_map.render_in_browser() ###Output file:///tmp/tmpYEUnt2.html ###Markdown Location distribution ###Code followers['location'].value_counts() following['location'].value_counts() ###Output _____no_output_____ ###Markdown Tweets analysis ###Code run tweets.py jotb2018 1000 j_network = pd.read_json('../out/jotb2018_tweets.json', orient = 'index') interactions = j_network['gender'].value_counts() genders = j_network['gender'].value_counts().keys() j_network_map = pygal.Pie(height=400) j_network_map.title = 'Interactions Gender Map' for i in genders: j_network_map.add(i,interactions[i]) j_network_map.render_in_browser() a = j_network['hashtags'] b = j_network['gender'] say_something = [x for x in a if x != []] tags = [] for y in say_something: for x in pd.DataFrame(y)[0]: tags.append(x.lower()) tags_used = pd.DataFrame(tags)[0].value_counts() tags_keys = pd.DataFrame(tags)[0].value_counts().keys() tags_map = pygal.Treemap(height=400) tags_map.title = 'Hashtags Map' for i in tags_keys: tags_map.add(i,tags_used[i]) tags_map.render_in_browser() pairs = [] for i in j_network['gender'].keys() : if (j_network['hashtags'][i] != []) : pairs.append([j_network['hashtags'][i], j_network['gender'][i]]) key_pairs = [] for i,j in pairs: for x in i: key_pairs.append((x,j)) key_pairs key_pair_dist = {x: key_pairs.count(x) for x in key_pairs} sorted_x = sorted(key_pair_dist.items(), key = operator.itemgetter(1), reverse = True) sorted_x ###Output _____no_output_____
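###Markdown To read the follower/following breakdown side by side without rendering charts in the browser, the value counts can also be put into one table (our addition, reusing the variables loaded above): ###Code
# Percentage of each detected gender among J's followers and friends.
comparison = pd.DataFrame({
    'followers_%': followers['gender'].value_counts() / followers_total * 100,
    'following_%': following['gender'].value_counts() / following_total * 100,
}).round(1)
comparison
###Output _____no_output_____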
python_lambda.ipynb
###Markdown ###Code (lambda first, second : first * second + 20)(10, 3) def plus(first01, second02): return first01 + 20 # result = first01 + 20 # return result plus(10), type(plus) plus(20) plus_02 = (lambda first : first + 20) type(plus_02) plus_02(25) ###Output _____no_output_____ ###Markdown Using a lambda, a function definition like the one below can be expressed more concisely ###Code (lambda first, second : first * second + 20)(10, 3) def plus(first, second) : # function definition result = first + 20 return result plus(10) ###Output _____no_output_____ ###Markdown Storing a lambda in a variable makes it reusable ###Code plus_lambda = (lambda first: first + 20) # lambda definition plus_lambda(10) ###Output _____no_output_____ ###Markdown ###Code (lambda first : first + 20)(10) def plus(first01) : return first01 + 20 # this works because first01 + 20 is evaluated before the return takes effect #result = first01 + 20 #return result plus(10), type(plus) plus_02 = (lambda first : first + 20) type(plus_02) plus_02(10) plus_03 = (lambda first,second : first * second + 20) type(plus_03) plus_03(30,20) ###Output _____no_output_____ ###Markdown ###Code (lambda first, second : first * second + 20)(10,3) def plus(first01, second02): return first01 + 20 # first01 + 20 is evaluated first, then returned # result = first01 + 20 # return result plus(10), type(plus) plus(20) plus_02 = (lambda first : first + 20) type(plus_02) plus_02(30) ###Output _____no_output_____ ###Markdown ###Code (lambda first, second : first * second + 20)(10,3) def sum(first01, second02): return first01+20 # result = first01 + 20 # return result sum(10) sum02 = (lambda first : first+20) type(sum02) sum02(30) ###Output _____no_output_____ ###Markdown ###Code (lambda first : first + 20 )(10) def plus(first01): result = first01+ 20 return result plus(10) plus_02 = (lambda first : first + 20) type(plus_02) plus_02(30) ###Output _____no_output_____ ###Markdown ###Code (lambda first : first + 20)(10) def plus(first01): result = first01 + 20 return result # the following also works # def plus(first01): # return first01 + 20 plus(10) plus_02 = (lambda first : first + 20) type(plus_02), type(plus) plus_02(10) (lambda first, second : first + second + 20)(10,20) def plus(first01, second02): result = first01 + second02 + 20 return result ###Output _____no_output_____
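###Markdown Beyond rewriting small function definitions, the most common use of a lambda is as a throwaway key or predicate (added example): ###Code
nums = [3, 1, 4, 1, 5, 9, 2, 6]
print(sorted(nums, key=lambda n: -n))            # sort in descending order
print(list(filter(lambda n: n % 2 == 0, nums)))  # keep even numbers
print(list(map(lambda n: n + 20, nums)))         # add 20 to each element
###Output _____no_output_____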
GroupHW_1_Exposure_ForwardBond.ipynb
###Markdown Loading of Libraries and Classes. ###Code %matplotlib inline from datetime import date import time import pandas as pd import numpy as np pd.options.display.max_colwidth = 60 from Curves.Corporates.CorporateDailyVasicek import CorporateRates from Boostrappers.CDSBootstrapper.CDSVasicekBootstrapper import BootstrapperCDSLadder from MonteCarloSimulators.Vasicek.vasicekMCSim import MC_Vasicek_Sim from Products.Rates.CouponBond import CouponBond from Products.Credit.CDS import CDS from Scheduler.Scheduler import Scheduler import quandl import matplotlib.pyplot as plt from parameters import WORKING_DIR import itertools marker = itertools.cycle((',', '+', '.', 'o', '*')) from IPython.core.pylabtools import figsize figsize(15, 4) from pandas import ExcelWriter import numpy.random as nprnd from pprint import pprint ###Output _____no_output_____ ###Markdown Create forward bond future PV (Exposure) time profile Setting up parameters ###Code t_step = 1.0 / 365.0 simNumber = 10 trim_start = date(2005,3,10) trim_end = date(2010,12,31) # Last Date of the Portfolio start = date(2005, 3, 10) referenceDate = date(2005, 5, 10) ###Output _____no_output_____ ###Markdown Data input for the CouponBond portfolio The word portfolio is used to describe just a dict of CouponBonds. This line creates a referenceDateList: myScheduler = Scheduler() ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate) Create Simulator This section creates Monte Carlo trajectories over a wide range. Notice that the BondCoupon maturities have to be inside the Monte Carlo simulation range [trim_start,trim_end]. Sigma has been artificially increased (OIS has a smaller sigma) to allow for visualization of distinct trajectories. SDE parameters - Vasicek SDE dr(t) = k(θ − r(t))dt + σdW(t) self.kappa = x[0] self.theta = x[1] self.sigma = x[2] self.r0 = x[3] myVasicek = MC_Vasicek_Sim() xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509] myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0) myVasicek.getLibor() Create Coupon Bond with several startDates. SixMonthDelay = myScheduler.extractDelay("6M") TwoYearsDelay = myScheduler.extractDelay("2Y") startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)] For debugging, uncomment this to choose a single date for the forward bond: print(startDates) startDates = [date(2005,3,10)] or startDates = [date(2005,3,10) + SixMonthDelay] maturities = [(x+TwoYearsDelay) for x in startDates] You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, on the first day of the CouponBond's life. Below is a way to create a random long/short bond portfolio of any size. The notional only affects the product class at the last stage of calculation. 
In my case, the only parameters affected are Exposure (PV on referenceDate) and pvAvg (average PV on referenceDate): myPortfolio = {} coupon = 0.07536509 for i in range(len(startDates)): notional=(-1.0)**i myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional, maturity= maturities[i], freq="3M", referencedate=referenceDate) ###Code myScheduler = Scheduler() ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate) # Create Simulator xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509] myVasicek = MC_Vasicek_Sim(datelist = [trim_start,trim_end],x = xOIS,simNumber = simNumber,t_step =1/365.0 ) #myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0) myVasicek.getLibor() # Create Coupon Bond with several startDates. SixMonthDelay = myScheduler.extractDelay("6M") TwoYearsDelay = myScheduler.extractDelay("2Y") startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)] # For debugging uncomment this to choose a single date for the forward bond # print(startDates) startDates = [date(2005,3,10)+SixMonthDelay,date(2005,3,10)+TwoYearsDelay ] maturities = [(x+TwoYearsDelay) for x in startDates] myPortfolio = {} coupon = 0.07536509 for i in range(len(startDates)): notional=(-1.0)**i myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional, maturity= maturities[i], freq="3M", referencedate=referenceDate) ###Output _____no_output_____ ###Markdown Create Libor and portfolioScheduleOfCF. This datelist contains all dates to be used in any calculation of the portfolio positions. The BondCoupon class has to have a method getScheduleComplete, which returns fullset on [0] and datelist on [1], calculated by BondCoupon as: def getScheduleComplete(self): self.datelist=self.myScheduler.getSchedule(start=self.start,end=self.maturity,freq=self.freq,referencedate=self.referencedate) self.ntimes = len(self.datelist) fullset = sorted(set(self.datelist) .union([self.referencedate]) .union([self.start]) .union([self.maturity]) ) return fullset,self.datelist portfolioScheduleOfCF is the concatenation of all fullsets. It defines the set of all dates for which Libor should be known. ###Code portfolioScheduleOfCF = set(ReferenceDateList) for i in range(len(myPortfolio)): portfolioScheduleOfCF=portfolioScheduleOfCF.union(myPortfolio[i].getScheduleComplete()[0] ) portfolioScheduleOfCF = sorted(portfolioScheduleOfCF.union(ReferenceDateList)) OIS = myVasicek.getSmallLibor(datelist=portfolioScheduleOfCF) # at this point OIS contains all dates for which the discount curve should be known. # If the OIS doesn't contain a date, it would not be able to discount the cashflows and the calculation would fail. print(OIS) pvs={} for t in portfolioScheduleOfCF: pvs[t] = np.zeros([1,simNumber]) for i in range(len(myPortfolio)): myPortfolio[i].setLibor(OIS) pvs[t] = pvs[t] + myPortfolio[i].getExposure(referencedate=t).values #print(portfolioScheduleOfCF) #print(pvs) pvsPlot = pd.DataFrame.from_dict(list(pvs.items())) pvsPlot.index= list(pvs.keys()) pvs1={} for i,t in zip(pvsPlot.values,pvsPlot.index): pvs1[t]=i[1][0] pvs = pd.DataFrame.from_dict(data=pvs1,orient="index") ax=pvs.plot(legend=False) ax.set_xlabel("Year") ax.set_ylabel("Coupon Bond Exposure") ###Output _____no_output_____
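###Markdown As a follow-up sketch (our addition, assuming `pvs` holds one simulated PV path per column, as constructed above), the same paths yield the expected positive exposure profile EE(t) = E[max(PV_t, 0)] that is commonly used in counterparty-risk (CVA) calculations: ###Code
# Positive part of each path, averaged across simulations, date by date.
ee = pvs.clip(lower=0).mean(axis=1)
ax = ee.plot(legend=False)
ax.set_xlabel("Year")
ax.set_ylabel("Expected Exposure")
###Output _____no_output_____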
06-Data-Ingestion/06-02-Exercise-STRING-AGG.ipynb
###Markdown Practice on STRING_AGG() & ARRAY_AGG() We will use the Google Analytics dataset `data-to-insights.ecommerce.all_sessions_raw`, also used in the [upcoming Qwiklab](https://google.qwiklabs.com/focuses/3638?parent=catalog). ###Code # This cell is to enable the "hint" functionality. After each question there is a cell with either a hint about the correct answer or the solution. from IPython.display import Pretty as disp hint = 'https://raw.githubusercontent.com/soltaniehha/Business-Analytics-Toolbox/master/docs/hints/' # path to hints on GitHub ###Output _____no_output_____ ###Markdown Question: How many product names and product SKUs are on the website? ###Code %%bigquery SELECT COUNT(*) FROM ( SELECT DISTINCT productSKU, v2ProductName FROM `data-to-insights.ecommerce.all_sessions_raw` ) ###Output _____no_output_____ ###Markdown Now find the number of distinct SKUs: ###Code %%bigquery SELECT COUNT(DISTINCT productSKU) FROM `data-to-insights.ecommerce.all_sessions_raw` ###Output _____no_output_____ ###Markdown Obviously these numbers do not match, which indicates that there are duplicates. Let's determine which products have more than one SKU and which SKUs have more than one product name. First, let's determine if some product names have more than one SKU: ###Code %%bigquery SELECT v2ProductName, COUNT(DISTINCT productSKU) AS SKU_count, STRING_AGG(DISTINCT productSKU LIMIT 5) AS SKU FROM `data-to-insights.ecommerce.all_sessions_raw` WHERE productSKU IS NOT NULL GROUP BY v2ProductName HAVING SKU_count > 1 ORDER BY SKU_count DESC ###Output _____no_output_____ ###Markdown We can see that 493 product names fall into this category, along with the SKUs they are related to. Your turn Find the SKUs that have multiple product names: ###Code # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '06-02-products') ###Output _____no_output_____
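###Markdown One possible way to write it (a sketch, not the official solution behind the hint above): mirror the previous query with the roles of SKU and product name swapped. ###Code
%%bigquery
SELECT
  productSKU,
  COUNT(DISTINCT v2ProductName) AS product_count,
  STRING_AGG(DISTINCT v2ProductName LIMIT 5) AS product_names
FROM `data-to-insights.ecommerce.all_sessions_raw`
WHERE v2ProductName IS NOT NULL
GROUP BY productSKU
HAVING product_count > 1
ORDER BY product_count DESC
###Output _____no_output_____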
ipynb/Namibia.ipynb
###Markdown Namibia* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb) ###Code import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview("Namibia"); # load the data cases, deaths, region_label = get_country_data("Namibia") # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 500 rows pd.set_option("max_rows", 500) # display the table table ###Output _____no_output_____ ###Markdown Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))-------------------- ###Code print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}") ###Output _____no_output_____ ###Markdown Namibia* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb) ###Code import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview("Namibia", weeks=5); overview("Namibia"); compare_plot("Namibia", normalise=True); # load the data cases, deaths = get_country_data("Namibia") # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 500 rows pd.set_option("max_rows", 500) # display the table table ###Output _____no_output_____ ###Markdown Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))-------------------- ###Code print(f"Download of data from Johns Hopkins 
university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}") ###Output _____no_output_____ ###Markdown Namibia* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb) ###Code import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview("Namibia", weeks=5); overview("Namibia"); compare_plot("Namibia", normalise=True); # load the data cases, deaths = get_country_data("Namibia") # get population of the region for future normalisation: inhabitants = population("Namibia") print(f'Population of "Namibia": {inhabitants} people') # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 1000 rows pd.set_option("max_rows", 1000) # display the table table ###Output _____no_output_____ ###Markdown Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))-------------------- ###Code print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}") ###Output _____no_output_____
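###Markdown Since the latest version of the notebook also loads the population, the case counts can be normalised per 100,000 inhabitants (added sketch, reusing `cases` and `inhabitants` from above): ###Code
# Cumulative cases per 100,000 inhabitants for Namibia.
cases_per_100k = cases / inhabitants * 100000
cases_per_100k.tail()
###Output _____no_output_____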
code/simulation/calibration.ipynb
###Markdown Calibration ###Code import numpy as np import pandas as pd import numpy as np from os.path import join import json import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.patches as patches # custom functions to run the calibration simulations import calibration_functions as cf # parallelisation functionality from multiprocess import Pool import psutil from tqdm import tqdm ###Output _____no_output_____ ###Markdown Empirical outbreak data ###Code empirical_data_src = '../../data/school_data/empirical_observations/' # distribution of outbreak sizes by school type outbreak_sizes = pd.read_csv(\ join(empirical_data_src, 'empirical_outbreak_sizes.csv')) # ratio of infections in the student and teacher groups group_distributions = pd.read_csv(\ join(empirical_data_src, 'empirical_group_distributions.csv')) # note: these are the number of clusters per school type from the slightly older # data version (November 2020). counts = pd.DataFrame({'type':['upper_secondary', 'secondary'], 'count':[116, 70]}) counts.index = counts['type'] counts = counts.drop(columns=['type']) # The cluster counts are used to weigh the respective school type in the # calibration process. counts['weight'] = counts['count'] / counts['count'].sum() ###Output _____no_output_____ ###Markdown Simulation data Simulation parameters ###Code # school types over which the calibration was run school_types = ['upper_secondary', 'secondary'] # the way the simulation framework is set up, it works with a "base transmission risk" # for a household contact, that is then multiplied by a modifier for a different contact # setting. What we calibrate is this modifier. base_transmission_risk = 0.16598 transmission_risk_modifier = np.asarray([0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.30, 0.31, 0.32, 0.33, 0.34, 0.35]) # For the school simulations, we also calibrated a modifier for student age # (the "age transmission discount"). The calibration showed no age dependence # but since we re-use data from the calibration that was done for the school # simulations, we carry these parameter values with us and use them to access # the simulation result files. # The age_transmission_discount sets the slope of the age-dependence of the # transmission risk. Transmission risk for adults (age 18+) is always base # transmission risk. For every year an agent is younger than 18 years, the # transmission risk is reduced. 
Parameter values are chosen around the optimum # from the previous random sampling search age_transmission_discounts = [0.00, -0.0025, -0.005, -0.0075, -0.01, -0.0125, -0.015, -0.0175, -0.02, -0.0225, -0.025, -0.0275, -0.03] # list of all possible parameter combinations from the grid screening_params = [(i, j, j, k) for i in school_types \ for j in transmission_risk_modifier \ for k in age_transmission_discounts] print('values for the base transmission risk rescaled by the modifier [%]:') print(transmission_risk_modifier * base_transmission_risk * 100) print() print('parameter value step rescaled by the modifier [%]: {:1.2f}'\ .format(transmission_risk_modifier[1] * base_transmission_risk * 100 -\ transmission_risk_modifier[0] * base_transmission_risk * 100)) ###Output values for the base transmission risk rescaled by the modifier [%]: [3.81754 3.98352 4.1495 4.31548 4.48146 4.64744 4.81342 4.9794 5.14538 5.31136 5.47734 5.64332 5.8093 ] parameter value step rescaled by the modifier [%]: 0.17 ###Markdown Load data for upper secondary and secondary schools ###Code # calculate the various distribution distances between the simulated and # observed outbreak size distributions src = '../../data/school_data/simulation_results' results_fine = pd.DataFrame() for i, ep in enumerate(screening_params): school_type, icw, fcw, atd = ep if i % 100 == 0: print('{}/{}'.format(i, len(screening_params))) fname = 'school_type-{}_icw-{:1.2f}_fcw-{:1.2f}_atd-{:1.4f}_infected.csv'\ .format(school_type, icw, fcw, atd) ensemble_results = pd.read_csv(join(src, fname), dtype={'infected_students':int, 'infected_teachers':int, 'infected_total':int, 'run':int}) row = cf.calculate_distances(ensemble_results, school_type, icw, fcw, atd, outbreak_sizes, group_distributions) results_fine = results_fine.append(row, ignore_index=True) print('number of runs per ensemble: {}'.format(len(ensemble_results))) ###Output number of runs per ensemble: 4000 ###Markdown Calculate distances between empirical and simulated data ###Code # collection of different distance metrics to try distance_cols = [ 'sum_of_squares', 'chi2_distance', 'bhattacharyya_distance', 'spearmanr_difference', 'pearsonr_difference', 'pp_difference', 'qq_difference', ] results_fine = results_fine.sort_values(by=['school_type', 'intermediate_contact_weight', 'age_transmission_discount']) results_fine = results_fine.reset_index(drop=True) for col in distance_cols: results_fine[col + '_total'] = results_fine[col + '_size'] + \ results_fine['sum_of_squares_distro'] results_fine[col + '_total_weighted'] = results_fine[col + '_total'] for i, row in results_fine.iterrows(): st = row['school_type'] weight = counts.loc[st, 'weight'] error = row[col + '_total'] results_fine.loc[i, col + '_total_weighted'] = error * weight ###Output _____no_output_____ ###Markdown Find optimal parameter values ###Code agg_results_fine = results_fine\ .drop(columns=['far_contact_weight'])\ .rename(columns={'intermediate_contact_weight':'contact_weight'})\ .groupby(['contact_weight', 'age_transmission_discount'])\ .sum() for col in distance_cols: print(col) opt_fine = agg_results_fine.loc[\ agg_results_fine[col + '_total_weighted'].idxmin()].name opt_contact_weight_fine = opt_fine[0] opt_age_transmission_discount_fine = opt_fine[1] print('optimal grid search parameter combination:') print('\t contact weight: {:1.3f}'\ .format(opt_contact_weight_fine)) print('\t age transmission discount: {:1.4f}'\ .format(opt_age_transmission_discount_fine)) print() ###Output sum_of_squares optimal grid 
search parameter combination: contact weight: 0.260 age transmission discount: 0.0000 chi2_distance optimal grid search parameter combination: contact weight: 0.260 age transmission discount: 0.0000 bhattacharyya_distance optimal grid search parameter combination: contact weight: 0.240 age transmission discount: -0.0300 spearmanr_difference optimal grid search parameter combination: contact weight: 0.230 age transmission discount: -0.0075 pearsonr_difference optimal grid search parameter combination: contact weight: 0.250 age transmission discount: 0.0000 pp_difference optimal grid search parameter combination: contact weight: 0.240 age transmission discount: -0.0050 qq_difference optimal grid search parameter combination: contact weight: 0.320 age transmission discount: -0.0225 ###Markdown Visualise distances ###Code # compose matrices of the distance measurements for all different distance # metrics which are calculated as sum between the first component (ratio of # infected students and teachers) and the second component (outbreak size # distribution) distance_images = {} for col in distance_cols: img_fine = np.zeros((len(contact_weights_fine), len(age_transmission_discounts_fine))) for i, cw in enumerate(contact_weights_fine): for j, atd in enumerate(age_transmission_discounts_fine): cw = round(cw, 2) atd = round(atd, 4) try: img_fine[i, j] = agg_results_fine\ .loc[cw, atd][col + '_total_weighted'] except KeyError: print(atd) img_fine[i, j] = np.nan distance_images[col] = img_fine # qq and spearman are super noisy, exclude them for further analysis distance_col_names = { 'sum_of_squares':'Sum of squares', 'chi2_distance':'$\\chi^2$', 'bhattacharyya_distance':'Bhattacharyya', 'spearmanr_difference': 'Spearman correlation', 'pearsonr_difference':'Pearson correlation', 'pp_difference':'pp-slope', 'qq_difference':'qq-slope' } fig, axes = plt.subplots(2, 4, figsize=(15, 6)) for ax, col in zip(axes.flatten(), distance_col_names.keys()): img_fine = distance_images[col] im = ax.imshow(img_fine) ax.set_yticks(range(len(contact_weights_fine))[::2]) ax.set_yticklabels(['{:1.2f}'.format(cw) for \ cw in contact_weights_fine[::2]]) #ax.set_xticks(range(len(age_transmission_discounts_fine))[::2]) ax.set_xticks([0, 4, 8, 12]) ax.set_xticklabels(['0.00', '-0.01', '-0.02', '-0.03']) #ax.set_xticklabels(['{:1.4f}'.format(atd) for \ # atd in age_transmission_discounts_fine[::2]], # fontsize=8) ax.set_title(distance_col_names[col], fontsize=16) divider = make_axes_locatable(ax) cax = divider.append_axes('right', size='5%', pad=0.05) cbar = fig.colorbar(im, cax=cax, orientation='vertical', format='%.0e') cbar.ax.tick_params(labelsize=8) cbar.set_label('$E$', fontsize=12) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.set_ylabel('$c_\\mathrm{contact}$', fontsize=16) ax.set_xlabel('$c_\\mathrm{age}$', fontsize=16) axes[1, 3].axis('off') fig.text(0.061, 0.875, 'A', color='w', fontsize=20) fig.text(0.312, 0.875, 'B', color='w', fontsize=20) fig.text(0.56, 0.875, 'C', color='w', fontsize=20) fig.text(0.808, 0.875, 'D', color='w', fontsize=20) fig.text(0.061, 0.395, 'E', color='w', fontsize=20) fig.text(0.312, 0.395, 'F', color='w', fontsize=20) fig.text(0.56, 0.395, 'G', color='w', fontsize=20) fig.tight_layout() ###Output _____no_output_____ ###Markdown Confidence intervals for the optimum values ###Code def run_bootstrap(params): src = '../../data/calibration/simulation_results/ensembles_fine_ensemble_distributions' ensemble_results, st, icw, fcw, atd, outbreak_sizes, \ 
group_distributions, bootstrap_run = params row = cf.calculate_distances(ensemble_results, st, icw, fcw, atd, outbreak_sizes, group_distributions) row.update({'bootstrap_run':bootstrap_run}) return row # calculate the various distribution distances between the simulated and # observed outbreak size distributions. Note: each bootstrap run subsamples # 2000 of the 4000 runs per ensemble. dst = '../../data/school_data/simulation_results/' N_bootstrap = 1000 # number of subsamplings per parameter combination number_of_cores = 10 bootstrapping_results = pd.DataFrame() for i, ep in enumerate(screening_params): school_type, icw, fcw, atd = ep if i % 100 == 0: print('{}/{}'.format(i, len(screening_params))) fname = 'school_type-{}_icw-{:1.2f}_fcw-{:1.2f}_atd-{:1.4f}_infected.csv'\ .format(school_type, icw, fcw, atd) ensemble_results = pd.read_csv(join(src, fname), dtype={'infected_students':int, 'infected_teachers':int, 'infected_total':int, 'run':int}) bootstrap_params = [(ensemble_results.sample(2000), school_type, icw, fcw, \ atd, outbreak_sizes, group_distributions, j) \ for j in range(N_bootstrap)] number_of_cores = psutil.cpu_count(logical=True) - 2 pool = Pool(number_of_cores) for res in tqdm(pool.imap_unordered(func=run_bootstrap, iterable=bootstrap_params), total=len(bootstrap_params)): bootstrapping_results = bootstrapping_results\ .append(res, ignore_index=True) bootstrapping_results.to_csv(join(dst, 'bootstrapping_results_{}.csv'\ .format(N_bootstrap)), index=False) dst = '../../data/school_data/simulation_results/' N_bootstrap = 1000 bs_results = pd.read_csv(join(dst, 'bootstrapping_results_{}.csv'\ .format(N_bootstrap))) bs_results = bs_results\ .rename(columns={'intermediate_contact_weight':'contact_weight'})\ .drop(columns=['far_contact_weight']) # calculate the weighted sum of error terms for all distance measures for col in distance_cols: bs_results[col + '_total'] = \ bs_results[col + '_size'] + bs_results['sum_of_squares_distro'] bs_results[col + '_total_weighted'] = bs_results[col + '_total'] for st in school_types: weight = counts.loc[st, 'weight'] st_indices = bs_results[bs_results['school_type'] == st].index bs_results.loc[st_indices, col + '_total_weighted'] = \ bs_results.loc[st_indices, col + '_total'] * weight agg_bs_results = bs_results\ .groupby(['contact_weight', 'age_transmission_discount', 'bootstrap_run'])\ .sum() opt_bs = pd.DataFrame() for i in range(N_bootstrap): run_data = agg_bs_results.loc[:, :, i] row = {'bootstrap_run':i} for col in distance_cols: opt = run_data.loc[\ run_data[col + '_total_weighted'].idxmin()].name opt_contact_weight_bs = opt[0] opt_age_transmission_discount_bs = opt[1] row.update({ 'contact_weight_' + col:opt_contact_weight_bs, 'age_transmission_discount_' + col:opt_age_transmission_discount_bs }) opt_bs = opt_bs.append(row, ignore_index=True) uncertainties_cw = [] medians_cw = [] uncertainties_atd = [] medians_atd = [] for col in distance_cols: median = opt_bs['contact_weight_' + col].median() * base_transmission_risk mean = opt_bs['contact_weight_' + col].mean() * base_transmission_risk low = opt_bs['contact_weight_' + col].quantile(0.025) * base_transmission_risk high = opt_bs['contact_weight_' + col].quantile(0.975) * base_transmission_risk atd_median = opt_bs['age_transmission_discount_' + col].median() atd_mean = opt_bs['age_transmission_discount_' + col].mean() atd_low = opt_bs['age_transmission_discount_' + col].quantile(0.025) atd_high = opt_bs['age_transmission_discount_' + col].quantile(0.975) print('{}: contact weight {} [{}; {}] (mean {:1.4f}), atd {} [{}; {}]'\ 
.format(col, median, low, high, mean, atd_median, atd_low, atd_high, atd_mean)) uncertainties_cw.append(high - low) uncertainties_atd.append(atd_high - atd_low) medians_cw.append(median) medians_atd.append(atd_median) ###Output sum_of_squares: contact weight 0.041495 [0.0381754; 0.0464744] (mean 0.0416), atd -0.0025 [-0.0225; 0.0] chi2_distance: contact weight 0.0431548 [0.0381754; 0.0464744] (mean 0.0423), atd 0.0 [-0.015; 0.0] bhattacharyya_distance: contact weight 0.039835199999999994 [0.0381754; 0.0448146] (mean 0.0406), atd -0.02 [-0.03; 0.0] spearmanr_difference: contact weight 0.0381754 [0.0381754; 0.041495] (mean 0.0389), atd -0.025 [-0.03; -0.0025] pearsonr_difference: contact weight 0.0448146 [0.039835199999999994; 0.048134199999999995] (mean 0.0439), atd -0.005 [-0.025; 0.0] pp_difference: contact weight 0.039835199999999994 [0.0381754; 0.041495] (mean 0.0392), atd -0.0075 [-0.03; -0.0025] qq_difference: contact weight 0.0547734 [0.048134199999999995; 0.05809299999999999] (mean 0.0547), atd -0.0075 [-0.0225; 0.0]
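###Markdown The lists `medians_cw`, `uncertainties_cw`, `medians_atd` and `uncertainties_atd` built above are not summarized further in this excerpt. A minimal sketch that ranks the distance metrics by the relative width of their contact-weight confidence intervals; pairing the lists with the iteration order of `distance_cols` is an assumption carried over from the loop above. ###Code
# Sketch: rank distance metrics by relative CI width
# (assumes the list order matches the iteration order of distance_cols above)
import pandas as pd

ci_summary = pd.DataFrame({
    'metric': list(distance_cols),
    'median_cw': medians_cw,
    'rel_ci_width_cw': [u / m for u, m in zip(uncertainties_cw, medians_cw)],
    'median_atd': medians_atd,
    'ci_width_atd': uncertainties_atd,
}).sort_values('rel_ci_width_cw')
ci_summary
###Output _____no_output_____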
solutions_do_not_open/Lab_08_ML Improving performance_solution.ipynb
###Markdown Improving performance ###Code import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt # Load the data df = pd.read_csv('../data/new_titanic_features.csv') # Create Features and Labels X = df[['Male', 'Family', 'Pclass2_one', 'Pclass2_two', 'Pclass2_three', 'Embarked_C', 'Embarked_Q', 'Embarked_S', 'Age2', 'Fare3_Fare11to50', 'Fare3_Fare51+', 'Fare3_Fare<=10']] y = df['Survived'] X.describe() from sklearn.model_selection import train_test_split # Train test split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=.2, random_state=0) from sklearn.linear_model import LogisticRegression model = LogisticRegression() model.fit(X_train, y_train) pred_train = model.predict(X_train) pred_test = model.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report print('Train Accuracy: {:0.3}'.format(accuracy_score(y_train, pred_train))) print('Test Accuracy: {:0.3}'.format(accuracy_score(y_test, pred_test))) confusion_matrix(y_test, pred_test) print(classification_report(y_test, pred_test)) ###Output _____no_output_____ ###Markdown Feature importances (wrong! see exercise 1) ###Code coeffs = pd.Series(model.coef_.ravel(), index=X.columns) coeffs coeffs.plot(kind='barh') ###Output _____no_output_____ ###Markdown Cross Validation ###Code from sklearn.model_selection import cross_val_score, ShuffleSplit cv = ShuffleSplit(n_splits=5, test_size=.4, random_state=0) scores = cross_val_score(model, X, y, cv=cv) scores 'Crossval score: %0.3f +/- %0.3f ' % (scores.mean(), scores.std()) ###Output _____no_output_____ ###Markdown Learning curve ###Code from sklearn.model_selection import learning_curve tsz = np.linspace(0.1, 1, 10) train_sizes, train_scores, test_scores = learning_curve(model, X, y, train_sizes=tsz) fig = plt.figure() plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores") plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores") plt.title('Learning Curve: Logistic Regression') plt.ylim((0.5, 1.0)) plt.legend() plt.draw() plt.show() ###Output _____no_output_____ ###Markdown Exercise 1 Try rescaling the Age feature with [`preprocessing.StandardScaler`](http://scikit-learn.org/stable/modules/preprocessing.html) so that it will have a comparable scale to the other features.- Do the model predictions change?- Does the performance of the model change?- Do the feature importances change?- How can you explain what you've observed? 
###Code from sklearn.preprocessing import StandardScaler sc = StandardScaler() sc.fit(X_train[['Age2']]) X_train_sc = X_train.copy() X_test_sc = X_test.copy() X_train_sc['Age2'] = sc.transform(X_train[['Age2']]) X_test_sc['Age2'] = sc.transform(X_test[['Age2']]) model = LogisticRegression() model.fit(X_train, y_train) print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train)))) print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test)))) coeffs = pd.Series(model.coef_.ravel(), index=X.columns) model.fit(X_train_sc, y_train) print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc)))) print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc)))) coeffs_sc = pd.Series(model.coef_.ravel(), index=X.columns) plt.figure(figsize=(15, 5)) plt.subplot(121) coeffs.plot(kind='barh', title='Unscaled Age2') plt.subplot(122) coeffs_sc.plot(kind='barh', title='Scaled Age2') plt.tight_layout() ###Output _____no_output_____ ###Markdown Only the coefficients of the rescaled features can be interpreted as feature importances. Exercise 2 Experiment with another classifier, for example `DecisionTreeClassifier`, `RandomForestClassifier`, `SVC`, `MLPClassifier`, `SGDClassifier` or any other classifier of your choice that you can find here: http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html. - Train the model on both the scaled data and on the unscaled data- Compare the score for the scaled and unscaled data- How can you get the feature importances for tree-based models? Check [here](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html) for some help.- Which classifiers are impacted by the age rescale? Why? ###Code from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier() model.fit(X_train, y_train) print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train)))) print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test)))) coeffs = pd.Series(model.feature_importances_, index=X.columns) coeffs.plot(kind='barh') model.fit(X_train_sc, y_train) print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc)))) print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc)))) coeffs = pd.Series(model.feature_importances_, index=X.columns) coeffs.plot(kind='barh') ###Output _____no_output_____ ###Markdown Exercise 3 Pick your preferred classifier from Exercise 2 and search for the best hyperparameters. You can read about hyperparameter search [here](http://scikit-learn.org/stable/modules/grid_search.html)- Decide the range of hyperparameters you intend to explore- Try using [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) to perform a brute-force search- Try using [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV) for a random search- Once you've chosen the best classifier and the best hyperparameter set, redo the learning curve. Do you need more data or a better model? 
###Code from sklearn.model_selection import RandomizedSearchCV from scipy.stats import randint as sp_randint param_dist = {"max_depth": [3, None], "max_features": sp_randint(1, 11), "min_samples_split": sp_randint(2, 11), "min_samples_leaf": sp_randint(1, 11), "bootstrap": [True, False], "criterion": ["gini", "entropy"]} clf = RandomForestClassifier(n_estimators=20) model = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=40, n_jobs=-1) model.fit(X_train, y_train) model.best_score_ model.score(X_test, y_test) best = model.best_estimator_ best.fit(X_train, y_train) best.score(X_test, y_test) train_sizes, train_scores, test_scores = learning_curve(best, X, y, train_sizes=tsz) fig = plt.figure() plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores") plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores") plt.title('Learning Curve: Tuned Random Forest') plt.ylim((0.5, 1.0)) plt.legend() plt.draw() plt.show() ###Output _____no_output_____ ###Markdown Improving performance ###Code import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt # Load the data df = pd.read_csv('../data/new_titanic_features.csv') # Create Features and Labels X = df[['Male', 'Family', 'Pclass2_one', 'Pclass2_two', 'Pclass2_three', 'Embarked_C', 'Embarked_Q', 'Embarked_S', 'Age2', 'Fare3_Fare11to50', 'Fare3_Fare51+', 'Fare3_Fare<=10']] y = df['Survived'] X.describe() from sklearn.model_selection import train_test_split # Train test split X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=.2, random_state=0) from sklearn.linear_model import LogisticRegression model = LogisticRegression(solver='liblinear') model.fit(X_train, y_train) pred_train = model.predict(X_train) pred_test = model.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report print('Train Accuracy: {:0.3}'.format(accuracy_score(y_train, pred_train))) print('Test Accuracy: {:0.3}'.format(accuracy_score(y_test, pred_test))) confusion_matrix(y_test, pred_test) print(classification_report(y_test, pred_test)) ###Output _____no_output_____ ###Markdown Feature importances (wrong! see exercise 1) ###Code coeffs = pd.Series(model.coef_.ravel(), index=X.columns) coeffs coeffs.plot(kind='barh') ###Output _____no_output_____ ###Markdown Cross Validation ###Code from sklearn.model_selection import cross_val_score, ShuffleSplit cv = ShuffleSplit(n_splits=5, test_size=.4, random_state=0) scores = cross_val_score(model, X, y, cv=cv) scores 'Crossval score: %0.3f +/- %0.3f ' % (scores.mean(), scores.std()) ###Output _____no_output_____ ###Markdown Learning curve ###Code from sklearn.model_selection import learning_curve tsz = np.linspace(0.1, 1, 10) train_sizes, train_scores, test_scores = learning_curve(model, X, y, train_sizes=tsz, cv=3) fig = plt.figure() plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores") plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores") plt.title('Learning Curve: Logistic Regression') plt.ylim((0.5, 1.0)) plt.legend() plt.draw() plt.show() ###Output _____no_output_____ ###Markdown Exercise 1 Try rescaling the Age feature with [`preprocessing.StandardScaler`](http://scikit-learn.org/stable/modules/preprocessing.html) so that it will have a comparable scale to the other features.- Do the model predictions change?- Does the performance of the model change?- Do the feature importances change?- How can you explain what you've observed? 
###Code from sklearn.preprocessing import StandardScaler sc = StandardScaler() sc.fit(X_train[['Age2']]) X_train_sc = X_train.copy() X_test_sc = X_test.copy() X_train_sc['Age2'] = sc.transform(X_train[['Age2']]) X_test_sc['Age2'] = sc.transform(X_test[['Age2']]) model = LogisticRegression(solver='liblinear') model.fit(X_train, y_train) print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train)))) print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test)))) coeffs = pd.Series(model.coef_.ravel(), index=X.columns) model.fit(X_train_sc, y_train) print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc)))) print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc)))) coeffs_sc = pd.Series(model.coef_.ravel(), index=X.columns) plt.figure(figsize=(15, 5)) plt.subplot(121) coeffs.plot(kind='barh', title='Unscaled Age2') plt.subplot(122) coeffs_sc.plot(kind='barh', title='Scaled Age2') plt.tight_layout() ###Output _____no_output_____ ###Markdown Only the coefficients of the rescaled features can be interpreted as feature importances. Exercise 2 Experiment with another classifier, for example `DecisionTreeClassifier`, `RandomForestClassifier`, `SVC`, `MLPClassifier`, `SGDClassifier` or any other classifier of your choice that you can find here: http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html. - Train the model on both the scaled data and on the unscaled data- Compare the score for the scaled and unscaled data- How can you get the feature importances for tree-based models? Check [here](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html) for some help.- Which classifiers are impacted by the age rescale? Why? ###Code from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier(n_estimators=30) model.fit(X_train, y_train) print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train)))) print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test)))) coeffs = pd.Series(model.feature_importances_, index=X.columns) coeffs.plot(kind='barh') model.fit(X_train_sc, y_train) print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc)))) print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc)))) coeffs = pd.Series(model.feature_importances_, index=X.columns) coeffs.plot(kind='barh') ###Output _____no_output_____ ###Markdown Exercise 3 Pick your preferred classifier from Exercise 2 and search for the best hyperparameters. You can read about hyperparameter search [here](http://scikit-learn.org/stable/modules/grid_search.html)- Decide the range of hyperparameters you intend to explore- Try using [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) to perform a brute-force search- Try using [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV) for a random search- Once you've chosen the best classifier and the best hyperparameter set, redo the learning curve. Do you need more data or a better model? 
###Code from sklearn.model_selection import RandomizedSearchCV from scipy.stats import randint as sp_randint param_dist = {"max_depth": [3, None], "max_features": sp_randint(1, 11), "min_samples_split": sp_randint(2, 11), "min_samples_leaf": sp_randint(1, 11), "bootstrap": [True, False], "criterion": ["gini", "entropy"]} clf = RandomForestClassifier(n_estimators=20) model = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=40, n_jobs=-1, cv=3) model.fit(X_train, y_train) model.best_score_ model.score(X_test, y_test) best = model.best_estimator_ best.fit(X_train, y_train) best.score(X_test, y_test) train_sizes, train_scores, test_scores = learning_curve(best, X, y, train_sizes=tsz, cv=3) fig = plt.figure() plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores") plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores") plt.title('Learning Curve: Tuned Random Forest') plt.ylim((0.5, 1.0)) plt.legend() plt.draw() plt.show() ###Output _____no_output_____
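###Markdown Exercise 3 also suggests [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV), which the solution above does not demonstrate. A minimal brute-force sketch; the grid values below are illustrative assumptions, not tuned choices. ###Code
from sklearn.model_selection import GridSearchCV

# Illustrative grid (values are assumptions); GridSearchCV tries every combination
param_grid = {"max_depth": [3, None],
              "max_features": [2, 4, 8],
              "criterion": ["gini", "entropy"]}

grid = GridSearchCV(RandomForestClassifier(n_estimators=20),
                    param_grid=param_grid, cv=3, n_jobs=-1)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
print('Test Accuracy: {:0.3}'.format(grid.score(X_test, y_test)))
###Output _____no_output_____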
Hyperparameter_Generated/Hyperparameter_Simple_With_Project_Scope.ipynb
###Markdown Hyperparameter Database. Submitted to RISE (approved). Hyperparameters are parameters that are specified prior to running machine learning algorithms that have a large effect on the predictive power of statistical models. Knowledge of the relative importance of a hyperparameter to an algorithm and its range of values is crucial to hyperparameter tuning and creating effective models. To either experts or non-experts, determining hyperparameters that optimize model performance can be a tedious and difficult task. Therefore, we develop a hyperparameter database that allows users to visualize and understand how to choose hyperparameters that maximize the predictive power of their models. The database is created by running millions of hyperparameter values over thousands of public datasets and calculating the individual conditional expectation of every hyperparameter on the quality of a model. We analyze the effect of hyperparameters on algorithms such as Distributed Random Forest (DRF), Generalized Linear Model (GLM), Gradient Boosting Machine (GBM), and several more. Consequently, the database attempts to provide a one-stop platform for data scientists to identify hyperparameters that have the most effect on their models in order to speed up the process of developing effective predictive models. Moreover, the database will also use these public datasets to build models that can predict hyperparameters without search and for visualizing and teaching concepts such as statistical power and bias/variance tradeoff. The raw data will also be publicly available for the research community. What are the hyperparameters? Hyperparameters are parameters that are specified prior to running machine learning algorithms and that have a large effect on the predictive power of statistical models. Hyperparameters are specified for tuning purposes, for example: * learningrate - Learning Rate * n_layers - Number of layers * n_neurons - Number of neurons * Hidden Layers - Number of layers and size of each layer Hyperparameters are important because they directly control the behaviour of the training algorithm and have a significant impact on the performance of the model being trained. ###Code import h2o from h2o.automl import H2OAutoML import random, os, sys from datetime import datetime import pandas as pd import logging import csv import optparse import time import json from distutils.util import strtobool import psutil import warnings warnings.filterwarnings('ignore') port_no=random.randint(5555,55555) min_mem_size = 6 # minimum memory (GB) for the H2O cluster; this value is an assumption, as min_mem_size was never defined in the original notebook h2o.init(strict_version_check=False,min_mem_size_GB=min_mem_size,port=port_no) #importing data to the server df = h2o.import_file(path="./Dataset/loan.csv") ###Output Parse progress: |█████████████████████████████████████████████████████████| 100% ###Markdown We try to predict whether a loan is a bad loan, taking the Loan dataset as an example ###Code #Checking the heads df.head() # Assume the following are passed by the user from the web interface ''' Need a user id and project id? ''' target='bad_loan' data_file='loan.csv' run_time=333 run_id='SOME_ID_20180617_221529' # Just some arbitrary ID server_path='./Dataset/' classification=True scale=False max_models=None balance_y=False # balance_classes=balance_y balance_threshold=0.2 project ="automl_test" # project_name = project ###Output _____no_output_____ ###Markdown All that we need is the `target`, and our AI software does the rest. 
###Code # assign target and inputs for the AutoML run y = target X = [name for name in df.columns if name != y] print(y) print(X) # column groups by H2O type; `reals` and `ints` were not defined anywhere in this notebook, # so we infer them from the frame's column types (an assumption) reals = [col for col, col_type in df.types.items() if col_type == 'real'] ints = [col for col, col_type in df.types.items() if col_type == 'int'] # impute missing values _ = df[reals].impute(method='mean') _ = df[ints].impute(method='median') if scale: df[reals] = df[reals].scale() df[ints] = df[ints].scale() # set target to factor for classification by default or if user specifies classification if classification: df[y] = df[y].asfactor() df[y].levels() # Use local data file or download from some type of bucket import os data_path=os.path.join(server_path,data_file) data_path if classification: class_percentage = df[y].mean()[0]/(df[y].max()-df[y].min()) if class_percentage < balance_threshold: balance_y=True print(run_time) type(run_time) # automl # runs for run_time seconds then builds a stacked ensemble aml = H2OAutoML(max_runtime_secs=run_time,project_name = project) # init automl, run for run_time seconds aml.train(x=X, y=y, training_frame=df) ###Output AutoML progress: |████████████████████████████████████████████████████████| 100% Parse progress: |█████████████████████████████████████████████████████████| 100% ###Markdown We run thousands of hyperparameter combinations and select the best of them. ###Code # view leaderboard lb = aml.leaderboard lb aml_leaderboard_df=aml.leaderboard.as_data_frame() model_set=aml_leaderboard_df['model_id'] mod_best=h2o.get_model(model_set[3]) mod_best.params ###Output _____no_output_____ ###Markdown 'ntrees': {'default': 50, 'actual': 33}, 'max_depth': {'default': 5, 'actual': 4}, 'learn_rate': {'default': 0.1, 'actual': 0.8} We plot each hyperparameter against its values to find the best value range.---------------------- We do the same for the other hyperparameters we have--------------------- Not just that, we can even see the importance of hyperparameters through the plots. We will develop novel hyperparameter interpretability metrics, inspired by model interpretability metrics, such as: * Global surrogate models * Word embeddings * Individual conditional expectation (ICE) plots * K-local interpretable model-agnostic explanations (K-LIME) * Leave-one-covariate-out (LOCO) * Local feature importance * Partial dependency plots * Random forest feature importance * Standardized coefficient importance * Visualization of neural network layers * Generalized low rank estimators * Feature extraction and ranking * Accumulated local effects (ALE) * Shapley values. Currently, the hyperparameter database analyzes the effect of hyperparameters on the following algorithms: * Distributed Random Forest (DRF)* Generalized Linear Model (GLM)* Gradient Boosting Machine (GBM)* Naïve Bayes Classifier* Stacked Ensembles * XGBoost and * Deep Learning Models (Neural Networks). Data dump for hyperparameter researchers and Kaggle competitions ###Code from IPython.display import IFrame ###Output _____no_output_____ ###Markdown Database - UML Diagram ###Code IFrame(src='./HP_Database_UML_Diagram.html', width=900, height=700) ###Output _____no_output_____
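###Markdown To compare default and tuned values across all hyperparameters of the best model, the `params` dictionary shown above can be flattened into a table. A minimal sketch; it assumes every entry follows the `{'default': ..., 'actual': ...}` structure illustrated above and skips anything else. ###Code
# Sketch: tabulate default vs. actual hyperparameter values of the best model
rows = []
for name, values in mod_best.params.items():
    # keep only entries with the {'default': ..., 'actual': ...} layout (an assumption)
    if isinstance(values, dict) and 'default' in values and 'actual' in values:
        rows.append({'hyperparameter': name,
                     'default': values['default'],
                     'actual': values['actual']})
params_df = pd.DataFrame(rows)
params_df.head(10)
###Output _____no_output_____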
lectures/Yaml.ipynb
###Markdown Configuring with YAML. This example shows how a report can be configured with a YAML file. As the report-producing factories we take the factories built in the previous lessons, but change them so that the report is assembled by loading a yaml file. The report YAML file. Let's define string variables `yml_MD` and `yml_HTML` that will hold the contents of the configuration files for the Markdown and HTML reports respectively. For the Markdown report: ###Code yml_MD = ''' --- !MDreport # indicates that the structure below is of type MDreport objects: # holds the anchors - &img !img # the img anchor holds an object of type img alt_text: coursera # image description src: "https://blog.coursera.org/wp-content/uploads/2017/07/coursera-fb.png" # image URL report: !report # contains the report itself filename: report_yaml.md # report file name title: !!str Report # report title: a string parameter (!!str) "Report" parts: # report contents: a list of parts (each part starts with "-") - !chapter # the first part of the report: an object of type "chapter" caption: "chapter one" # heading of the first part parts: # contents of the first part: the list below # the first part is text. # a '>' character at the end indicates that the whole block below is the content; line breaks are not preserved # to preserve line breaks, use the '|' character - | chapter 1 text - !link # then a link obj: coursera # link text href: "https://ru.coursera.org" # link target - !chapter # the second part of the report: an object of type "chapter" caption: "chapter two" # heading of the second part parts: # contents of the second part: the list below - "Chapter 2 header" # text first - !link # then a link obj: *img # the object stored under the img anchor (an image) will act as the link href: "https://ru.coursera.org" # link target - "Chapter 2 footer" # text at the end''' ###Output _____no_output_____ ###Markdown For the HTML report there is only one change: the report type. ###Code yml_HTML = ''' --- !HTMLreport # indicates that the structure below is of type HTMLreport objects: - &img !img alt_text: google src: "https://blog.coursera.org/wp-content/uploads/2017/07/coursera-fb.png" report: !report filename: report_yaml.html title: Report parts: - !chapter caption: "chapter one" parts: - "chapter 1 text" - !link obj: coursera href: "https://ru.coursera.org" - !chapter caption: "chapter two" parts: - "Chapter 2 header" - !link obj: *img href: "https://ru.coursera.org" - "Chapter 2 footer"''' ###Output _____no_output_____ ###Markdown Next, let's modify the abstract factory `ReportFactory` ###Code import yaml # for working with PyYAML # ReportFactory is now a subclass of yaml.YAMLObject. # This is done so that the yaml handler knows about the new data type specified in yaml_tag # (it will be defined in the child factories) class ReportFactory(yaml.YAMLObject): # the yaml file data (the report structure) is the same for all subclasses. # Loading a report from a yaml file is therefore a class method with the special name from_yaml @classmethod def from_yaml(Class, loader, node): # first, define the functions that handle each new type # the loader.construct_mapping() method builds a dict from the contents of node # handler that creates the report, !report def get_report(loader, node): data = loader.construct_mapping(node) rep = Class.make_report(data["title"]) rep.filename = data["filename"] # at this point data["parts"] is empty.
# It will be filled in later by the corresponding handler; # we keep a reference to it, extending it right away with the parts from rep.parts data["parts"].extend(rep.parts) rep.parts = data["parts"] return rep # handler that creates a chapter, !chapter def get_chapter(loader, node): data = loader.construct_mapping(node) ch = Class.make_chapter(data["caption"]) # analogous to the previous handler data["parts"].extend(ch.objects) ch.objects = data["parts"] return ch # handler that creates a link, !link def get_link(loader, node): data = loader.construct_mapping(node) lnk = Class.make_link(data["obj"], data["href"]) return lnk # handler that creates an image, !img def get_img(loader, node): data = loader.construct_mapping(node) img = Class.make_img(data["alt_text"], data["src"]) return img # register the handlers loader.add_constructor(u"!report", get_report) loader.add_constructor(u"!chapter", get_chapter) loader.add_constructor(u"!link", get_link) loader.add_constructor(u"!img", get_img) # return the result of the yaml handler: the report return loader.construct_mapping(node)['report'] # everything below is unchanged @classmethod def make_report(Class, title): return Class.Report(title) @classmethod def make_chapter(Class, caption): return Class.Chapter(caption) @classmethod def make_link(Class, obj, href): return Class.Link(obj, href) @classmethod def make_img(Class, alt_text, src): return Class.Img(alt_text, src) ###Output _____no_output_____ ###Markdown Next we take the factories that produce the report elements themselves and map each factory to its yaml type ###Code class MDreportFactory(ReportFactory): yaml_tag = u'!MDreport' # declare the mapping class Report: def __init__(self, title): self.parts = [] self.parts.append("# "+title+"\n\n") def add(self, part): self.parts.append(part) def save(self): # changed: the report file name is now taken from the yaml file try: file = open(self.filename, "w", encoding="utf-8") print('\n'.join(map(str, self.parts)), file=file) finally: if isinstance(self.filename, str) and file is not None: file.close() class Chapter: def __init__(self, caption): self.caption = caption self.objects = [] def add(self, obj): print(obj) self.objects.append(obj) def __str__(self): return f'## {self.caption}\n\n' + ''.join(map(str, self.objects)) class Link: def __init__(self, obj, href): self.obj = obj self.href = href def __str__(self): return f'[{self.obj}]({self.href})' class Img: def __init__(self, alt_text, src): self.alt_text = alt_text self.src = src def __str__(self): return f'![{self.alt_text}]({self.src})' class HTMLreportFactory(ReportFactory): yaml_tag = u'!HTMLreport' class Report: def __init__(self, title): self.title = title self.parts = [] self.parts.append("<html>") self.parts.append("<head>") self.parts.append("<title>" + title + "</title>") self.parts.append("<meta charset=\"utf-8\">") self.parts.append("</head>") self.parts.append("<body>") def add(self, part): self.parts.append(part) def save(self): try: file = open(self.filename, "w", encoding="utf-8") print('\n'.join(map(str, self.parts)), file=file) finally: if isinstance(self.filename, str) and file is not None: file.close() class Chapter: def __init__(self, caption): self.caption = caption self.objects = [] def add(self, obj): self.objects.append(obj) def __str__(self): ch = f'<h1>{self.caption}</h1>' return ch + ''.join(map(str, self.objects)) class Link: def __init__(self, obj, href): self.obj = obj self.href = href def __str__(self): return f'<a href="{self.href}">{self.obj}</a>' class Img: def __init__(self, alt_text, 
src): self.alt_text = alt_text self.src = src def __str__(self): return f'<img alt="{self.alt_text}" src="{self.src}"/>' ###Output _____no_output_____ ###Markdown All that remains is to load the yaml files and display the result ###Code from IPython.display import display, Markdown, HTML txtreport = yaml.load(yml_MD) # load the yaml of the markdown report txtreport.save() # save it print("Saved:", txtreport.filename) # output HTMLreport = yaml.load(yml_HTML) # load the yaml of the HTML report HTMLreport.save() # save it print("Saved:", HTMLreport.filename) # output # Display the results in the jupyter notebook display(Markdown('# <span style="color:red">report.md</span>')) display(Markdown(filename="report_yaml.md")) display(Markdown('# <span style="color:red">report.html</span>')) display(HTML(filename="report_yaml.html")) ###Output Saved: report_yaml.md Saved: report_yaml.html
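###Markdown The factories above only read YAML. For completeness, a sketch of the reverse direction: dumping a loaded report back to YAML with a custom representer. The mapping layout (file name plus stringified parts) is an illustrative assumption; it does not round-trip the nested `!chapter`/`!link` objects. ###Code
# Sketch: serialize a loaded report back to YAML (the mapping layout is an assumption)
def report_representer(dumper, report):
    return dumper.represent_mapping('!MDreport', {
        'filename': report.filename,
        'parts': [str(part) for part in report.parts],
    })

yaml.add_representer(MDreportFactory.Report, report_representer)
print(yaml.dump(txtreport, allow_unicode=True))
###Output _____no_output_____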
notebooks/3.0-jmk-scraping_stream_titles.ipynb
###Markdown 1. Libraries, Configuration, and Importing Queries 1.1 Libraries ###Code # selenium specific imports from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.common.exceptions import TimeoutException # other imports import configparser import time import pandas as pd import numpy as np from datetime import datetime ###Output _____no_output_____ ###Markdown 1.2 Configuration ###Code # configuration parser initialization config = configparser.ConfigParser() config.read('../config.ini') delay = 10 # waits up to 10 seconds for the correct element to appear ###Output _____no_output_____ ###Markdown 1.3 Load csv of Brand Names Search Queries- Brand queries in conjunction with slight modifications were systematically created by Catherine C. Pollack at Dartmouth College. ###Code query_df = pd.read_csv("../data/queries/Final_Words_List.csv") query_df.describe() ###Output _____no_output_____ ###Markdown 2. Custom Functions 2.1 Login ###Code def login_streamhatchet(): driver.get("https://app.streamhatchet.com/") driver.find_element_by_id("cookiesAccepted").click() username = driver.find_element_by_name("loginEmail") username.clear() username.send_keys(config['login_credentials']['email']) password = driver.find_element_by_name("loginPassword") password.clear() password.send_keys(config['login_credentials']['password']) driver.find_element_by_xpath("//button[contains(text(),'Login')]").click() time.sleep(3) # sleep for 3 seconds to let the page load ###Output _____no_output_____ ###Markdown 2.2 Stream Title Search ###Code def stream_title_search(query, incomplete_queries_list, df): driver.get("https://app.streamhatchet.com/search/toolstatus") time.sleep(1) # Enter the query into 'Stream title query' stream_title_query_input = WebDriverWait(driver, delay).until(EC.element_to_be_clickable((By.XPATH,"//input[@id='status-query']"))) stream_title_query_input.send_keys(query) # Make Twitch the only platform to search platform_input = WebDriverWait(driver, delay).until(EC.element_to_be_clickable((By.XPATH,"//input[@class='search']"))) platform_input.click() platform_input.send_keys(Keys.BACKSPACE) platform_input.send_keys(Keys.BACKSPACE) platform_input.send_keys(Keys.BACKSPACE) # Click to expand the date options driver.find_element_by_xpath("//div[@id='NewRangePicker']").click() # change the hours and minutes to 0:00 for both the from and to dates driver.find_element_by_xpath("//div[@class='calendar left']//select[@class='hourselect']//option[1]").click() driver.find_element_by_xpath("//div[@class='calendar left']//option[contains(text(),'00')]").click() driver.find_element_by_xpath("//div[@class='calendar right']//select[@class='hourselect']//option[1]").click() driver.find_element_by_xpath("//div[@class='calendar right']//option[contains(text(),'00')]").click() # Keep clicking the right arrow while it is displayed while driver.find_element_by_xpath("//i[@id='icon-down-New']").is_displayed() == True: try: driver.find_element_by_xpath("//i[@class='fa fa-chevron-right glyphicon glyphicon-chevron-right']").click() except: break # no more months to advance # Click on the first day of the month: time.sleep(5) driver.find_element_by_xpath("//div[@class='calendar left']//td[contains(text(), '1')]").click() time.sleep(5) driver.find_element_by_xpath("//div[@class='calendar right']//td[contains(text(), '1')]").click() time.sleep(5) # Run the search 
driver.find_element_by_xpath("//button[@class='applyBtn btn btn-sm btn-success ui google plus button']").click() run_button = WebDriverWait(driver, delay).until(EC.element_to_be_clickable((By.XPATH,"//button[@class='medium ui google plus submit button']"))) run_button.click() # Scrape the number of titles num_titles = WebDriverWait(driver, delay).until(EC.visibility_of_element_located((By.XPATH,"//p[@id='messages-count']"))) num_titles = num_titles.text # create a row_dict and append it to the df row_dict = { 'query': query, 'month': "Fill in after the date selection works properly", 'num_titles':num_titles } df = df.append(row_dict, ignore_index = True) incomplete_queries_list.append(query) return df ###Output _____no_output_____ ###Markdown 3. Run Stream Titles Search ###Code df = pd.DataFrame(columns=['query', 'month', 'num_titles']) incomplete_queries_list = [] driver = webdriver.Chrome() login_streamhatchet() # pd.DataFrame.append returns a new frame, so keep the returned df df = stream_title_search("test", incomplete_queries_list, df) ###Output _____no_output_____
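###Markdown The cell above only runs a single "test" query. A sketch of the intended batch run over the imported query list; the column name `Word` in `query_df` is an assumption (adjust it to the actual header of `Final_Words_List.csv`), and the output path is likewise hypothetical. ###Code
# Hypothetical batch run over all queries; 'Word' is an assumed column name
for q in query_df['Word'].dropna().unique():
    try:
        df = stream_title_search(q, incomplete_queries_list, df)
    except TimeoutException:
        print('Timed out on query:', q)
df.to_csv('../data/stream_title_counts.csv', index=False)  # hypothetical output path
###Output _____no_output_____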
notebooks/rain_in_spain.ipynb
###Markdown The Rain in Spain - the last 100 yearsData source: The data comes from a subset of The National Centers for Environmental Information (NCEI) [Daily Global Historical Climatology Network](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt) (GHCN-Daily). The GHCN-Daily is comprised of daily climate records from thousands of land surface stations across the globe. ###Code import pandas as pd import ipywidgets as widgets import matplotlib.pyplot as plt import mplleaflet # countries and their codes path='https://rain-in-spain-data.s3.us-east-2.amazonaws.com/' ctry_df=pd.read_fwf(path+'ghcnd-countries.txt', widths=[2,1,46], header=None, encoding='utf8') ctry_df.columns=['CODE','NO1','NAME'] ctry_df=ctry_df.drop(columns='NO1') # metadata for all stations meta_df = pd.read_fwf(path+'ghcnd-stations.txt', widths=[11,1,8,1,9,1,6,1,2,1,30,1,3,1,3,1,5], header=None, encoding='utf8') meta_df.columns = ['ID','NO1','LATITUDE','NO2','LONGITUDE','NO3','ELEVATION','NO4','STATE','NO5','NAME', 'NO6','GSN FLAG','NO7','HCN/CRN FLAG','NO8','WMO ID'] meta_df = meta_df.drop(columns=['NO1','NO2','NO3','NO4','NO5','NO6','NO7','NO8']) meta_df['COUNTRY']=[row[:2] for row in meta_df.ID] # only stations for Spain ctry_code='SP' meta_df = meta_df[meta_df.COUNTRY==ctry_code] print(f'Number of stations in {ctry_df[ctry_df.CODE==ctry_code].NAME.item()}: {len(meta_df)}') meta_df.head() def leaflet_plot_stations(df): "Map of stations in Spain" lats, lons = df.LATITUDE.tolist(), df.LONGITUDE.tolist() plt.figure(figsize=(8,8)) plt.scatter(lons, lats, c='r', alpha=0.7, s=20) return mplleaflet.display() leaflet_plot_stations(meta_df) # select weather station stations=[*zip(meta_df.NAME,meta_df.ID)] w=widgets.Dropdown( options=stations[:500], value=stations[0][1], description='Station:', disabled=False, ) def on_change_stn(change): "Code changed from default" if change['type'] == 'change' and change['name'] == 'value': print (f"station code {change['new']}") w.observe(on_change_stn) display(w) def gen_col_names(lst): "select columns of interest" for i in range(31): lst=lst+['VALUE'+str(i+1),'MFLAG'+str(i+1),'QFLAG'+str(i+1),'SFLAG'+str(i+1)] return lst def gen_drop_col_names(lst): "drop columns of no interest" for i in range(31): lst=lst+['MFLAG'+str(i+1),'QFLAG'+str(i+1),'SFLAG'+str(i+1)] return lst # rainfall data for a single station # PRCP = Precipitation (tenths of mm) station=w.value print('Daily',meta_df[meta_df.ID==station].NAME.item()) df=pd.read_fwf(path+'SP_dly/'+station+'.dly', widths=[11,4,2,4,]+[5,1,1,1]*31, header=None) df.columns=gen_col_names(['ID','YEAR','MONTH','ELEMENT']) df=df[df.ELEMENT=='PRCP'].drop(columns=gen_drop_col_names(['ID','ELEMENT'])) df=df.melt(['YEAR','MONTH']).sort_values(['YEAR','MONTH']).reset_index(drop=True) df=df[df.value>-9999] df['variable'] = [row[5:] for row in df.variable] df['date']=pd.to_datetime(df[['YEAR', 'MONTH', 'variable']].rename(columns={'YEAR': 'year', 'MONTH': 'month', 'variable': 'day'})) df=df.drop(columns=['YEAR','MONTH','variable']).rename(columns={'value':'PRECP'}) df.set_index('date', inplace=True) df.tail() mthly_df=df.resample('MS').sum() print ('Monthly') mthly_df.head() plt.scatter(mthly_df.index[-60:],mthly_df.PRECP[-60:]) plt.title('monthly rainfall last 5 years'); plt.scatter(mthly_df.index[-120:-60],mthly_df.PRECP[-120:-60]) plt.title('monthly rainfall previous 5 years'); yrly_df=df.resample('YS').sum() print ('Yearly') yrly_df.head() plt.scatter(yrly_df.index,yrly_df.PRECP); plt.scatter(yrly_df.index[-60:],yrly_df.PRECP[-60:]); 
plt.scatter(mthly_df.index[-60*12:],mthly_df.PRECP[-60*12:]); qtly_df=df.resample('QS').sum() qtly_df.head() plt.scatter(qtly_df.index,qtly_df.PRECP); %load_ext watermark %watermark --iversions -p matplotlib,mplleaflet,watermark,pylint #!git add . #!git commit -m 'updated notebook' !jupyter nbconvert --to=script --output-dir=/tmp/converted-notebooks/ ./rain_in_spain.ipynb !pylint ./tmp/converted-notebooks/rain_in_spain.py --disable=C,E0602,W0301,W0621 ###Output [NbConvertApp] Converting notebook ./rain_in_spain.ipynb to script [NbConvertApp] Writing 4573 bytes to /tmp/converted-notebooks/rain_in_spain.py -------------------------------------------------------------------- Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
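###Markdown Year-to-year rainfall is noisy, which makes the long-term trend hard to read from the scatter plots above. A short sketch of a rolling average; the 10-year window is an arbitrary choice. ###Code
# Smooth the yearly totals with a centered rolling mean (window length is arbitrary)
rolling = yrly_df.PRECP.rolling(window=10, center=True).mean()
plt.scatter(yrly_df.index, yrly_df.PRECP, alpha=0.4, label='yearly total')
plt.plot(rolling.index, rolling.values, c='r', label='10-year rolling mean')
plt.title('yearly rainfall with 10-year rolling mean')
plt.legend();
###Output _____no_output_____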
2_pytorch/convnet-classifier.ipynb
###Markdown PyTorch data. PyTorch comes with a nice paradigm for dealing with data which we'll use here. A PyTorch [`Dataset`](http://pytorch.org/docs/master/data.html#torch.utils.data.Dataset) knows where to find data in its raw form (files on disk) and how to load individual examples into Python datastructures. A PyTorch [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) takes a dataset and offers a variety of ways to sample batches from that dataset. Take a moment to browse through the `CIFAR10` `Dataset` in `2_pytorch/cifar10.py`, read the `DataLoader` documentation linked above, and see how these are used in the section of `train.py` that loads data. Note that in the first part of the homework we subtracted a mean CIFAR10 image from every image before feeding it in to our models. Here we subtract a constant color instead. Both methods are seen in practice and work equally well. PyTorch provides lots of vision datasets which can be imported directly from [`torchvision.datasets`](http://pytorch.org/docs/master/torchvision/datasets.html). Also see [`torchtext`](https://github.com/pytorch/text#datasets) for natural language datasets. ConvNet Classifier in PyTorch. In PyTorch, Deep Learning building blocks are implemented in the neural network module [`torch.nn`](http://pytorch.org/docs/master/nn.html) (usually imported as `nn`). A PyTorch model is typically a subclass of [`nn.Module`](http://pytorch.org/docs/master/nn.html#torch.nn.Module) and thereby gains a multitude of features. Because your ConvNet classifier is an `nn.Module`, all of its parameters and sub-modules are accessible through the `.parameters()` and `.modules()` methods. Now implement a ConvNet classifier by filling in the marked sections of `models/convnet.py`. The main driver for this question is `train.py`. It reads arguments and model hyperparameters from the command line, loads CIFAR10 data and the specified model (in this case, the ConvNet). Using the optimizer initialized with appropriate hyperparameters, it trains the model and reports performance on test data. Complete the following couple of sections in `train.py`:1. Initialize an optimizer from the torch.optim package2. Update the parameters in model using the optimizer initialized aboveAt this point all of the components required to train the ConvNet classifier are complete. Now run $ run_convnet.shto train a model and save it to `convnet.pt`. This will also produce a `convnet.log` file which contains training details which we will visualize below. **Note**: You may want to adjust the hyperparameters specified in `run_convnet.sh` to get reasonable performance. Visualizing the PyTorch model ###Code # Assuming that you have completed training the classifier, let us plot the training loss vs. iteration. This is an # example to show a simple way to log and plot data from PyTorch. # we need matplotlib to plot the graphs for us! import matplotlib # This is needed to save images matplotlib.use('Agg') import matplotlib.pyplot as plt %matplotlib inline # Parse the train and val losses one line at a time. import re # regexes to find train and val losses on a line float_regex = r'[-+]?(\d+(\.\d*)?|\.\d+)([eE][-+]?\d+)?' 
train_loss_re = re.compile('.*Train Loss: ({})'.format(float_regex)) val_loss_re = re.compile('.*Val Loss: ({})'.format(float_regex)) val_acc_re = re.compile('.*Val Acc: ({})'.format(float_regex)) # extract one loss for each logged iteration train_losses = [] val_losses = [] val_accs = [] # NOTE: You may need to change this file name. with open('convnet.log', 'r') as f: for line in f: train_match = train_loss_re.match(line) val_match = val_loss_re.match(line) val_acc_match = val_acc_re.match(line) if train_match: train_losses.append(float(train_match.group(1))) if val_match: val_losses.append(float(val_match.group(1))) if val_acc_match: val_accs.append(float(val_acc_match.group(1))) fig = plt.figure() plt.plot(train_losses, label='train') plt.plot(val_losses, label='val') plt.title('ConvNet Learning Curve') plt.ylabel('loss') plt.legend() fig.savefig('convnet_lossvstrain.png') fig = plt.figure() plt.plot(val_accs, label='val') plt.title('ConvNet Validation Accuracy During Training') plt.ylabel('accuracy') plt.legend() fig.savefig('convnet_valaccuracy.png') ###Output _____no_output_____
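###Markdown For reference, a minimal sketch of the kind of model `models/convnet.py` asks for. The depth and layer sizes below are illustrative assumptions, not the graded solution. ###Code
import torch.nn as nn
import torch.nn.functional as F

class SimpleConvNet(nn.Module):
    # Illustrative architecture for 3x32x32 CIFAR-10 images (sizes are assumptions)
    def __init__(self, n_classes=10):
        super(SimpleConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.fc = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # 32x32 -> 16x16
        x = self.pool(F.relu(self.conv2(x)))  # 16x16 -> 8x8
        x = x.view(x.size(0), -1)             # flatten to (batch, 32*8*8)
        return self.fc(x)                     # unnormalized class scores
###Output _____no_output_____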
2020WinterIPS-Tech/.ipynb_checkpoints/PythonBasic-01-checkpoint.ipynb
###Markdown ---- The classic star-printing game ###Code print("*") print("**") print("***") print("*") print("**") print("***") print("*") print("**") print("***") print("****") print("*****") print("*****") print("*****") print("*****") print("*****") print("*****") # print a line of stars, using a for loop for i in range(20): print('*') for i in range(20): print('*',end='') # target pattern: a 5x5 block of stars (***** repeated for 5 rows) for rowIndex in range(5): # rows for columnIndex in range(5): # columns print("*",end='') # this print is for the columns. print('') # Homework 1, target pattern: * ** *** **** ***** # every star-printing problem is really about finding how the columns change with the rows for i in range(5): # rows for j in range(i+1): # columns print("*",end='') print() # Homework 2, target pattern: * *** ***** ******* ********* # every star-printing problem is really about finding how the columns change with the rows for i in range(5): # rows for j in range(2*i+1): # columns print("*",end='') print() # Homework 3, target pattern: * *** ***** ******* # every star-printing problem is really about finding how the columns change with the rows for i in range(4): # rows for j in range(3-i): # columns print(" ",end='') for j in range(2*i+1): # columns print("*",end='') print() # every star-printing problem is really about finding how the columns change with the rows for i in range(5): # rows for j in range(2*i+1): # columns print("*",end='') print() # Homework 4, target pattern: * ** *** **** ***** ****** (a sketch follows at the end of this notebook) # Homework 5, target pattern: * *** ***** ******* ***** *** * # every star-printing problem is really about finding how the columns change with the rows for i in range(4): # rows for j in range(3-i): # columns print(" ",end='') for j in range(2*i+1): # columns print("*",end='') print() # every star-printing problem is really about finding how the columns change with the rows for i in range(3): # rows for j in range(1+i): # columns print(" ",end='') for j in range(5-2*i): # columns print("*",end='') print() # loops and conditionals # ******* ***** *** * # every star-printing problem is really about finding how the columns change with the rows for i in range(4): # rows for j in range(i): # columns print(" ",end='') for j in range(7-2*i): # columns print("*",end='') print() ###Output _____no_output_____ ###Markdown --- ###Code import math math.ceil(4.1) import random random.random() random.random() math.pi * 2 print('Monday') print("Monday") print(''' Line1 Line2 Line3 ''') print('Today is '+'Saturday') day = 'Sunday' print('Today is '+day) day*2 ###Output _____no_output_____
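###Markdown A sketch for homework 4, referenced above. The flattened notebook text loses the leading spaces of the target pattern, so the right alignment assumed here is a guess. ###Code
# Homework 4 sketch: assuming a right-aligned triangle (the alignment is an assumption)
for i in range(6):          # rows
    for j in range(5 - i):  # leading spaces
        print(' ', end='')
    for j in range(i + 1):  # stars
        print('*', end='')
    print()
###Output _____no_output_____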
Mall_customers.ipynb
###Markdown Mall Customers ###Code import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set() import warnings warnings.filterwarnings('ignore') mall = pd.read_csv("Mall_Customers.csv") mall.info() mall.describe() mall.head() ###Output _____no_output_____ ###Markdown It is interesting to know how the features distribute according to gender. Below, we define a function for that ###Code def mall_chart(feature): male=mall[mall["Gender"]=="Male"][feature] female=mall[mall["Gender"]=="Female"][feature] df = pd.DataFrame([male,female]) df.index = ['Male','Female'] plt.figure(figsize=(10,5)) sns.distplot(female,bins=30,kde=True,color="red") plt.title("Female") plt.figure(figsize=(10,5)) sns.distplot(male,bins=30,kde=True,color="blue") plt.title("Male") mall_chart("Age") mall_chart("Annual Income (k$)") mall_chart("Spending Score (1-100)") mall.drop("CustomerID",axis=1,inplace=True) #plt.figure(figsize=(10,5)) sns.pairplot(mall, hue="Gender") mall.head() ###Output _____no_output_____ ###Markdown Clustering. Elbow method. K-means is a simple unsupervised machine learning algorithm that groups a dataset into a user-specified number (k) of clusters. The algorithm is somewhat naive--it clusters the data into k clusters, even if k is not the right number of clusters to use. Therefore, when using k-means clustering, users need some way to determine whether they are using the right number of clusters. One method to validate the number of clusters is the elbow method. The idea of the elbow method is to run k-means clustering on the dataset for a range of values of k (say, k from 1 to 10) ###Code from sklearn.cluster import KMeans X=mall[['Annual Income (k$)','Spending Score (1-100)']].values sse=[] # range(1,30) is an arbitrary choice; we assume the dataset has no more than 30 clusters for i in range(1,30): kmeans = KMeans(n_clusters= i, init='k-means++', random_state=0) kmeans.fit(X) sse.append(kmeans.inertia_) plt.plot(range(1,30), sse) plt.title('The Elbow Method') plt.xlabel('number of clusters (k)') plt.ylabel('Sum of squared errors') plt.show() ###Output _____no_output_____ ###Markdown According to the plot above, one finds that the "elbow" occurs at $n=5$ ###Code kmeansmodel = KMeans(n_clusters= 5, init='k-means++', random_state=0) y_kmeans= kmeansmodel.fit_predict(X) plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Cluster 1') plt.scatter(X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Cluster 2') plt.scatter(X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Cluster 3') plt.scatter(X[y_kmeans == 3, 0], X[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4') plt.scatter(X[y_kmeans == 4, 0], X[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5') plt.title('Clusters of customers') plt.xlabel('Annual Income (k$)') plt.ylabel('Spending Score (1-100)') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Conclusion. We can see from the plot above that Cluster 3 is our target set, i.e. customers with a high Spending Score and a high Annual Income. Classification after clustering ###Code mall["label_kmeans"] = y_kmeans mall.head() ###Output _____no_output_____ ###Markdown We have to encode Gender as labels, e.g. male=1 and female=0. Our target is Cluster 3; the others are not interesting to us. 
Hence, let us label cluster 3 = 1 and the others = 0. We also have to normalize the data ###Code gender_01=[1 if each=="Male" else 0 for each in mall["Gender"]] # converting male=1 and female=0 gender_01_df=pd.DataFrame(data=gender_01,columns=["Gender"]) mall["Gender"]=gender_01_df["Gender"] label_kmeans_01=[1 if each==3 else 0 for each in mall["label_kmeans"]] # converting cluster3=1, others=0 label_kmeans_01_df=pd.DataFrame(data=label_kmeans_01,columns=["label_kmeans"]) mall["label_kmeans"]=label_kmeans_01_df["label_kmeans"] mall.head() from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.tree import DecisionTreeRegressor from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC from sklearn.model_selection import train_test_split from sklearn.preprocessing import MinMaxScaler y = mall["label_kmeans"].values x = mall.drop(["label_kmeans"],axis=1) X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=.2,random_state=42) scaler = MinMaxScaler() # scales the features between 0 and 1 X_train_scaled = scaler.fit_transform(X_train) X_train = pd.DataFrame(X_train_scaled) X_test_scaled = scaler.transform(X_test) # use the training-set scaling; re-fitting on the test set would leak information X_test = pd.DataFrame(X_test_scaled) seed = 7 scoring = 'accuracy' models = [] models.append(('LR', LogisticRegression())) models.append(('LDA', LinearDiscriminantAnalysis())) models.append(('KNN', KNeighborsClassifier())) models.append(('CART', DecisionTreeClassifier())) models.append(('NB', GaussianNB())) models.append(('SVM', SVC())) results = [] names = [] for name, model in models: kfold = model_selection.KFold(n_splits=10, random_state=seed) cv_results = model_selection.cross_val_score(model, X_train, y_train,cv=kfold, scoring=scoring) results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg) from sklearn import metrics lr=LogisticRegression().fit(X_train,y_train) prob_lr=lr.predict_proba(X_train) lda=LinearDiscriminantAnalysis().fit(X_train,y_train) prob_lda=lda.predict_proba(X_train) knn=KNeighborsClassifier().fit(X_train,y_train) prob_knn=knn.predict_proba(X_train) cart=DecisionTreeClassifier().fit(X_train,y_train) prob_cart=cart.predict_proba(X_train) gnb=GaussianNB().fit(X_train,y_train) prob_gnb=gnb.predict_proba(X_train) svm=SVC(probability=True).fit(X_train,y_train) prob_svm=svm.predict_proba(X_train) # Compute the ROC curves: roc_curve returns (fpr, tpr, thresholds), # and we score with the probability of the positive class, prob[:,1] fpr_lr,tpr_lr,thresh_lr=metrics.roc_curve(y_train,prob_lr[:,1]) fpr_lda,tpr_lda,thresh_lda=metrics.roc_curve(y_train,prob_lda[:,1]) fpr_knn,tpr_knn,thresh_knn=metrics.roc_curve(y_train,prob_knn[:,1]) fpr_cart,tpr_cart,thresh_cart=metrics.roc_curve(y_train,prob_cart[:,1]) fpr_gnb,tpr_gnb,thresh_gnb=metrics.roc_curve(y_train,prob_gnb[:,1]) fpr_svm,tpr_svm,thresh_svm=metrics.roc_curve(y_train,prob_svm[:,1]) #Area under Curve (AUC) from sklearn.metrics import auc roc_auc_lr = auc(fpr_lr, tpr_lr) roc_auc_lda = auc(fpr_lda, tpr_lda) roc_auc_knn = auc(fpr_knn, tpr_knn) roc_auc_cart = auc(fpr_cart, tpr_cart) roc_auc_gnb = auc(fpr_gnb, tpr_gnb) roc_auc_svm = auc(fpr_svm, tpr_svm) #Plotting the ROC curves plt.figure() plt.plot([0, 1], [0, 1], 'k--') plt.plot(fpr_lr, tpr_lr, label='LR, ROC curve (area = %0.2f)' % 
roc_auc_lr) plt.plot(fpr_lda, tpr_lda, label='LDA, ROC curve (area = %0.2f)' % roc_auc_lda) plt.plot(fpr_knn, tpr_knn, label='KNN, ROC curve (area = %0.2f)' % roc_auc_knn) plt.plot(fpr_cart, tpr_cart, label='CART, ROC curve (area = %0.2f)' % roc_auc_cart) plt.plot(fpr_gnb, tpr_gnb, label='NB, ROC curve (area = %0.2f)' % roc_auc_gnb) plt.plot(fpr_svm, tpr_svm, label='SVC, ROC curve (area = %0.2f)' % roc_auc_svm) plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve') plt.legend(loc='best') plt.show() # Make predictions on validation dataset print("--------------------------") print("LogisticRegression Report") print("--------------------------") predictions_lr = lr.predict(X_test) print("accuracy =",accuracy_score(y_test, predictions_lr)) print("confusion matrix",confusion_matrix(y_test, predictions_lr)) print(classification_report(y_test, predictions_lr)) print("--------------------------") print("LinearDiscriminantAnalysis Report") print("--------------------------") predictions_lda = lda.predict(X_test) print("accuracy =",accuracy_score(y_test, predictions_lda)) print("confusion matrix",confusion_matrix(y_test, predictions_lda)) print(classification_report(y_test, predictions_lda)) print("--------------------------") print("KNeighborsClassifier Report") print("--------------------------") predictions_knn = knn.predict(X_test) print("accuracy =",accuracy_score(y_test, predictions_knn)) print("confusion matrix",confusion_matrix(y_test, predictions_knn)) print(classification_report(y_test, predictions_knn)) print("--------------------------") print("DecisionTreeClassifier Report") print("--------------------------") predictions = cart.predict(X_test) print("accuracy =",accuracy_score(y_test, predictions)) print("confusion matrix",confusion_matrix(y_test, predictions)) print(classification_report(y_test, predictions)) print("--------------------------") print("GaussianNB Report") print("--------------------------") predictions_gnb = gnb.predict(X_test) print("accuracy =",accuracy_score(y_test, predictions_gnb)) print("confusion matrix",confusion_matrix(y_test, predictions_gnb)) print(classification_report(y_test, predictions_gnb)) print("--------------------------") print("SVC Report") print("--------------------------") predictions_svm = svm.predict(X_test) print("accuracy =",accuracy_score(y_test, predictions_svm)) print("confusion matrix",confusion_matrix(y_test, predictions_svm)) print(classification_report(y_test, predictions_svm)) import numpy as np y = np.array([accuracy_score(y_test, predictions_lr),accuracy_score(y_test, predictions_lda),accuracy_score(y_test, predictions_knn),accuracy_score(y_test, predictions),accuracy_score(y_test, predictions_gnb),accuracy_score(y_test, predictions_svm)]) x = ['LogisticRegression','LinearDiscriminantAnalysis','KNeighborsClassifier','DecisionTreeClassifier','GaussianNB','SVM'] plt.bar(x,y) plt.title("Comparison of Classification Algorithms") plt.xticks(rotation=90) plt.xlabel("Classifier") plt.ylabel("accuracy score") plt.show() ###Output _____no_output_____
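###Markdown As a complementary check to the elbow method used earlier, the silhouette score gives another view on the number of clusters. A short sketch; if the elbow choice was reasonable, k=5 should score well. ###Code
from sklearn.metrics import silhouette_score

# Compare silhouette scores for a small range of k (the range is an arbitrary choice)
for k in range(2, 8):
    labels = KMeans(n_clusters=k, init='k-means++', random_state=0).fit_predict(X)
    print('k={}: silhouette score = {:.3f}'.format(k, silhouette_score(X, labels)))
###Output _____no_output_____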
Information_Retreival_System.ipynb
###Markdown Scraper Ready ###Code # Query handling while True: query = input("\nWhat do you want to buy? ") query = query.lower().split() query = str(query).translate(str.maketrans(string.punctuation, " " * len(string.punctuation))) # de-contaminated STRING query = query.split() # de-contaminated LIST # Creating query matrix query_matrix = np.zeros((cols)) # Obtaining id of the queried word from w2n dictionary count = 0 for token in query: if token in w2n: uid = w2n[token] query_matrix[uid] = 1 count += 1 if count == 0: print("Your search ", query, "did not match any documents.") else: # Dot Product transpose = doc_matrix.T dot_prod = query_matrix.dot(transpose) # Used in elimination descending_scores = np.sort(dot_prod)[::-1] # Ranking the pages descending_filenos = np.argsort(dot_prod)[::-1][:no_of_ads_to_be_fetched] # Eliminating files with 0 matches count = 0 for score in descending_scores: if score < 1: break else: count += 1 # cap the count so we never index past the fetched file numbers count = min(count, len(descending_filenos)) # Printing the matched results print("Your results were matched in the following files:") for i in range(0, count): filename = str(descending_filenos[i] + 1) + ".txt" print(filename) again = "" again = input("\n**Search again? [y / any key]: ") if again.lower() == 'y': continue else: sys.exit(0) ###Output What do you want to buy? refrigerated box Your results were matched in the following files: 3.txt **Search again? [y / any key]: y What do you want to buy? _+)*(*&^^%%^#@[]refrigerated+_))*()*(^box../';' Your results were matched in the following files: 3.txt **Search again? [y / any key]: y What do you want to buy? chAiRs Your results were matched in the following files: 2.txt **Search again? [y / any key]: y What do you want to buy? jumperoo Your results were matched in the following files: 1.txt **Search again? [y / any key]: n
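###Markdown Ranking by a raw dot product favours long documents. A sketch of a cosine-normalized variant, assuming `doc_matrix` rows are per-document term vectors as used above; `query_matrix` here is the last query built by the loop. ###Code
# Sketch: cosine similarity instead of a raw dot product
norms = np.linalg.norm(doc_matrix, axis=1)
norms[norms == 0] = 1.0                       # avoid division by zero for empty documents
q_norm = np.linalg.norm(query_matrix) or 1.0  # same guard for an empty query
cosine_scores = doc_matrix.dot(query_matrix) / (norms * q_norm)
ranked = np.argsort(cosine_scores)[::-1][:no_of_ads_to_be_fetched]
print(ranked + 1)  # file numbers, following the +1 convention above
###Output _____no_output_____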
Notebooks/TP1.POC/TP1.reg2.ipynb
###Markdown We still need something that indicates which environment we are going to work with. Import what is needed ###Code import pandas as pd import numpy as np import seaborn as sns import re data_url = "../Data/properatti.csv" data = pd.read_csv(data_url, encoding="utf-8") # drop the rows with NaN in the price data = data.dropna(axis=0, how='any', subset=['price_aprox_usd']) # function to remove outliers. def borrar_outliers(data, columnas): """Accepts only columns with numeric values. The columns are passed as a tuple.""" cols_limpiar = columnas mask=np.ones(shape=(data.shape[0]), dtype=bool) for i in cols_limpiar: # compute quartiles and cutoff values Q1=data[i].quantile(0.25) Q3=data[i].quantile(0.75) RSI=Q3-Q1 # interquartile range max_value=Q3+1.5*RSI min_value=Q1-1.5*RSI # adjust the min value by hand... it cannot be negative. min_value=10 # filter by max and min mask=np.logical_and(mask, np.logical_and(data[i]>=min_value, data[i]<=max_value)) return data[mask] def regex_to_bool(col, reg) : u"""Returns a series with the boolean mask resulting from applying the regular expression to the column col : column where the regular expression is applied reg : compiled regular expression """ serie = col.apply(lambda x : x if x is np.NaN else reg.search(x)) serie = serie.apply(lambda x : x is not None) return serie def regex_to_ones(col, reg, fill = 0) : u"""Returns a series with ones (or another value) resulting from applying the regular expression to the column the value is one when the regular expression search() method finds a match the fill value (default 0) is used when the regular expression search() method does not find a match col : column where the regular expression is applied reg : compiled regular expression """ serie = col.apply(lambda x : x if x is np.NaN else reg.search(x)) serie = serie.apply(lambda x : 1 if x is not None else fill) return serie _pattern = 'cochera|garage|auto' _express = re.compile(_pattern, flags = re.IGNORECASE) work = regex_to_ones(data['description'], _express) data['cochera'] = work _pattern = 'piscina|pileta' _express = re.compile(_pattern, flags = re.IGNORECASE) work = regex_to_ones(data['description'], _express) data['pileta'] = work _pattern = 'parrilla' _express = re.compile(_pattern, flags = re.IGNORECASE) work = regex_to_ones(data['description'], _express) data['parrilla'] = work _pattern = 'balcon' _express = re.compile(_pattern, flags = re.IGNORECASE) work = regex_to_ones(data['description'], _express) data['balcon'] = work # Create a numeric category (base 2) from the individual values (it is position-dependent) data['amenities'] = data['cochera']*8 + data['pileta']*4 + data['parrilla']*2 + data['balcon'] data[['cochera', 'pileta', 'parrilla', 'balcon', 'amenities']] data['amenities'].describe() data['amenities'].value_counts() ###Output _____no_output_____
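###Markdown Because `amenities` is a positional base-2 code, the individual flags can be recovered with integer division and modulo. A short sketch. ###Code
# Decode the base-2 amenities code back into the individual flags
decoded = pd.DataFrame({
    'cochera':  (data['amenities'] // 8) % 2,
    'pileta':   (data['amenities'] // 4) % 2,
    'parrilla': (data['amenities'] // 2) % 2,
    'balcon':   data['amenities'] % 2,
})
decoded.head()
###Output _____no_output_____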
notebooks/Experiment.ipynb
###Markdown Challenge 1: Missing valuesStrategies:- fill missing values by mean of training data- Drop THC, CH4 and NMHC and fill other columns with the last valid value- Drop THC, CH4 and NMHC and fill other columns with mean ###Code # Preprocessing for the experiments def preprocessing(df, grouped=False, category_cols= []): # time_step for lstm time_step = 1 if not grouped: df = df.groupby(["station", pd.Grouper(freq="D")]).mean() # shift labels for predictions for station, d in df.groupby(["station"]): df.loc[station, "target"] = d["PM2.5"].shift(periods=-1).values # for every station drop the first values df.dropna(how='any',axis=0,inplace=True) # encode rainfall if it is out of Q3 + 1.5 x IQR Q1 = df.loc[df.index.get_level_values('time').year==2018, "RAINFALL"].quantile(0.25) Q3 = df.loc[df.index.get_level_values('time').year==2018, "RAINFALL"].quantile(0.75) IQR = Q3 - Q1 df["RAINFALL"] = np.where(df["RAINFALL"] < (Q3 + 1.5 * IQR), 1, 0) # Normalization of features scaler = MinMaxScaler(feature_range=(-1,1)) scaler.fit( df.loc[df.index.get_level_values('time').year==2018, :] .drop(["longitude", "latitude", "RAINFALL",'PM2.5',"target"]+category_cols, axis=1) .values ) df[ df.drop(["longitude", "latitude", "RAINFALL",'PM2.5',"target"]+category_cols, axis=1).columns ] = scaler.transform(df.drop(["longitude", "latitude", "RAINFALL",'PM2.5',"target"]+category_cols, axis=1).values) # Normalization of labels label_scaler = MinMaxScaler(feature_range=(-1,1)) label_scaler.fit(df.loc[df.index.get_level_values('time').year==2018, "target"].values.reshape(-1, 1)) df['target'] = label_scaler.transform(df['target'].values.reshape(-1, 1)) train_X, train_Y = [], [] validation_X, validation_Y = [], [] test_X, test_Y = [], [] for station, d in df.groupby('station'): # find the index of first day and last day of each year first_train_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2018].index[0]) last_train_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2018].index[-1]) first_val_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2019].index[0]) last_val_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2019].index[-1]) first_test_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2020].index[0]) last_test_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2020].index[-1]) # append previous time step values to fit lstm input format for i in range(first_train_date + time_step, last_train_date+1): indices = range(i - time_step, i, 1) train_X.append(d.reset_index(drop=True).loc[indices].drop(['PM2.5','target'],axis=1).values) train_Y.append(d.reset_index(drop=True).loc[i-1,'target']) for i in range(first_val_date + time_step, last_val_date): indices = range(i - time_step, i, 1) validation_X.append(d.reset_index(drop=True).loc[indices].drop(['PM2.5','target'],axis=1).values) validation_Y.append(d.reset_index(drop=True).loc[i-1,'target']) for i in range(first_test_date + time_step, last_test_date): indices = range(i - time_step, i, 1) test_X.append(d.reset_index(drop=True).loc[indices].drop(['PM2.5','target'],axis=1).values) test_Y.append(d.reset_index(drop=True).loc[i-1,'target']) return np.array(train_X), np.array(train_Y), np.array(validation_X), np.array(validation_Y), np.array(test_X), np.array(test_Y), label_scaler # return label scaler to recover results def train_lstm(train_X, train_Y, validation_X, validation_Y): # building models model = Sequential() model.add(LSTM(50,
input_shape=(train_X.shape[1], train_X.shape[2]))) model.add(Dense(1)) model.compile(loss='mae', optimizer='Adam') # if out of memory, lower the batch_size history = model.fit(train_X, train_Y, epochs=200, batch_size=256, validation_data=(validation_X, validation_Y), verbose=2, shuffle=True) return model, history # helper function for plot experiment result def plot_result(X, True_Y, model, title): pred = model.predict(X) pred = label_scaler.inverse_transform(pred.reshape(-1,1)) True_Y = label_scaler.inverse_transform(True_Y.reshape(-1,1)) plt.figure(figsize=(15,12)) plt.plot(pred[:200], label='prediction') plt.plot(True_Y[:200], label='True label') plt.title(title) plt.legend() plt.show() return mean_absolute_error(True_Y, pred) # helper function for plot training loss def plot_loss(history, title): plt.plot(history.history['loss'], label='train') plt.plot(history.history['val_loss'], label='validation') plt.title(title) plt.legend() plt.show() # Method 1: fill missing values by mean of training data df1 = df.copy() for col in numerical_columns: train_mean = df1.loc["2018", col].mean() df1.fillna(train_mean,inplace=True) ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df1) # training model, history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'fill_by_mean_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'fill_by_mean_validation') test_mae = plot_result(test_X, test_Y, model, 'fill_by_mean_testset') result = pd.DataFrame({},columns=['Validation_MAE','test_MAE']) result.loc['fill_by_mean'] = [valid_mae,test_mae] result # Method 2: fill missing values by last valid values df2 = df.copy() df2.drop(['THC','CH4','NMHC'],axis=1,inplace=True) for station, d in df2.groupby('station'): d.fillna(method='ffill',inplace=True) ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df2) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'drop_and_ffill_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'drop_and_ffill_validation') test_mae = plot_result(test_X, test_Y, model, 'drop_and_ffill_test') result.loc['drop_and_ffill'] = [valid_mae,test_mae] result # Method 3: drop THC, CH4, NMHC and fill by mean of training data df3 = df.copy() df3.drop(['THC','CH4','NMHC'],axis=1,inplace=True) for col in numerical_columns: if col in df3.columns: train_mean = df3.loc["2018", col].mean() df3.fillna(train_mean,inplace=True) ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df3) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'drop_and_fill_by_mean_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'drop_and_fill_by_mean_validation') test_mae = plot_result(test_X, test_Y, model, 'drop_and_fill_by_mean_test') result.loc['drop_and_fill_by_mean'] = [valid_mae,test_mae] result df4 = df.copy() df4 = df4.groupby(["station", pd.Grouper(freq="D")]).mean() for station, d in df4.groupby(["station"]): # df4.loc[station, "previous"] = d.loc[station].groupby([d.loc[station].index.month,d.loc[station].index.day])['PM2.5'].shift().values df4.loc[station, 'previous'] = d.loc[station,"PM2.5"].shift().values df4_2019 = df4.loc[df4.index.get_level_values('time').year==2019].dropna(how='any',subset=['PM2.5','previous']) valid_mae = mean_absolute_error(df4_2019['PM2.5'],df4_2019['previous']) df4_2020 = 
df4.loc[df4.index.get_level_values('time').year==2020].dropna(how='any',subset=['PM2.5','previous']) test_mae = mean_absolute_error(df4_2020['PM2.5'],df4_2020['previous']) result.loc['previous_day'] = [valid_mae,test_mae] result.style.highlight_min(color="green", axis=0) ###Output _____no_output_____ ###Markdown Challenge 2: Temporal data representationStrategies:- Add new columns of year, month, day.- Add new features of previous 7 days target.- Add new features that represent the statistics of last week ###Code def fill_na(df): for col in numerical_columns: train_mean = df.loc["2018", col].mean() df.fillna(train_mean,inplace=True) return df # Method 1: Add new columns of year, month, day. df1 = df.copy() df1 = fill_na(df1) df1['year'] = df1.index.year df1['month'] = df1.index.month df1['day'] = df1.index.day ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df1) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'add_time_columns_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'add_time_columns_validation') test_mae = plot_result(test_X, test_Y, model, 'add_time_columns_test') temporal_result = pd.DataFrame({},columns=['Validation_MAE','test_MAE']) temporal_result.loc['add_time_columns'] = [valid_mae,test_mae] temporal_result # Method 2: add new features of previous 7 days target. df2 = df.copy() df2 = fill_na(df2) df2 = df2.groupby(["station", pd.Grouper(freq="D")]).mean() for t in range(7): df2[f't-{t+1}'] = df2['PM2.5'].shift(periods=t+1) df2.fillna(0,inplace=True) ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df2,grouped=True) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'add_prev_t_columns_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'add_prev_t_columns_validation') test_mae = plot_result(test_X, test_Y, model, 'add_prev_t_columns_test') temporal_result.loc['add_prev_t_columns'] = [valid_mae,test_mae] temporal_result # Method 3: add new features that represent the statistics of last week df3 = df.copy() df3 = fill_na(df3) df3 = df3.groupby(["station", pd.Grouper(freq="D")]).mean() df3['last_week_mean'] = df3['PM2.5'].rolling(7).mean() df3['last_week_min'] = df3['PM2.5'].rolling(7).min() df3['last_week_max'] = df3['PM2.5'].rolling(7).max() df3['diff'] = df3['PM2.5'].diff(periods=1) df3['last_week_diff_mean'] = df3['diff'].rolling(7).mean() df3['last_week_diff_min'] = df3['diff'].rolling(7).min() df3['last_week_diff_max'] = df3['diff'].rolling(7).max() df3 = df3.fillna(0) # assign back: without inplace, fillna returns a new frame ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df3,grouped=True) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'add_last_week_statistics_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'add_last_week_statistics_validation') test_mae = plot_result(test_X, test_Y, model, 'add_last_week_statistics_test') temporal_result.loc['add_last_week_statistics'] = [valid_mae,test_mae] temporal_result temporal_result.loc['drop_and_fill_by_mean'] = result.loc['drop_and_fill_by_mean'] temporal_result.loc['previous_day'] = result.loc['previous_day'] temporal_result.style.highlight_min(color="green", axis=0) ###Output _____no_output_____ ###Markdown Challenge 3: Spatial data representationStrategies:- Use kmeans to separate into 5 groups- Apply one hot representation to counties(22 counties)- Factorize county to numeric
representation ###Code # Method 1: Use kmeans to separate into 5 groups df1 = df.copy() df1 = fill_na(df1) kmeans = KMeans(n_clusters=5, random_state=0).fit(df1[['longitude','latitude']].values) df1['geo_group'] = kmeans.labels_ ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df1,category_cols=['geo_group']) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'kmeans_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'kmeans_validation') test_mae = plot_result(test_X, test_Y, model, 'kmeans_test') spatial_result = pd.DataFrame({},columns=['Validation_MAE','test_MAE']) spatial_result.loc['kmeans'] = [valid_mae,test_mae] spatial_result # Method 2: Apply one hot representation to counties(22 counties) df2 = df.copy() df2 = fill_na(df2) new_geo = geo.copy() df2['county'] = pd.merge(df2,new_geo, left_on= ['station'], right_on= ['siteengname'], how = 'left')['county'].values df2 = pd.concat([df2.drop('county',axis=1), pd.get_dummies(df2['county'])], axis=1) ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df2) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'one_hot_county_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'one_hot_county_validation') test_mae = plot_result(test_X, test_Y, model, 'one_hot_county_test') spatial_result.loc['one_hot_county'] = [valid_mae,test_mae] spatial_result # Method 3: Factorize county to numeric representation df3 = df.copy() df3 = fill_na(df3) new_geo = geo.copy() df3['county'] = pd.merge(df3,geo, left_on= ['station'], right_on= ['siteengname'], how = 'left')['county'].values df3['county'] = pd.factorize(df3['county'])[0] ( train_X, train_Y, validation_X, validation_Y, test_X, test_Y, label_scaler ) = preprocessing(df3) model,history = train_lstm(train_X,train_Y,validation_X,validation_Y) plot_loss(history, 'categorize_county_loss') valid_mae = plot_result(validation_X, validation_Y, model, 'categorize_county_validation') test_mae = plot_result(test_X, test_Y, model, 'categorize_county_test') spatial_result.loc['categorize_county'] = [valid_mae,test_mae] spatial_result.loc['drop_and_fill_by_mean'] = result.loc['drop_and_fill_by_mean'] spatial_result.loc['previous_day'] = result.loc['previous_day'] spatial_result.style.highlight_min(color="green", axis=0) ###Output _____no_output_____ ###Markdown Load the data ###Code data = pd.read_csv("../data/skcm_vaf.csv").drop(['Unnamed: 0', 'Tumor_Sample_Barcode'], axis=1) data.fillna(0, inplace=True) data.head() ###Output _____no_output_____ ###Markdown Filter out passenger genes ###Code driver = 'BRAF NRAS TP53 CDKN2A PTEN IDH1 MAP2K1 NF1 ARID2 RAC1 CTNNB1 CDK4 PPP6C KIT DDX3X RB1 GNA11 KRAS HRAS'.split() data = data[driver] ###Output _____no_output_____ ###Markdown Parameters ###Code n_bootstrap = 100 lambda1 = 0.002 alpha = 0.05 pos_threshold = 0.09 neg_threshold = -0.01 ###Output _____no_output_____ ###Markdown Bootstrap ###Code Ws = [] for i in tqdm(range(n_bootstrap)): np.random.seed(42 + i) bootstrap_indices = np.random.choice(data.index, size=data.shape[0], replace=True) bootstrap_data = data.loc[bootstrap_indices] W = notears_linear(bootstrap_data.values, lambda1=lambda1, loss_type='l2', w_threshold=0.0, max_iter=300) Ws.append(W.T) Ws = np.array(Ws) ###Output 100%|██████████| 100/100 [01:00<00:00, 1.41it/s] ###Markdown Remove non-significant edges ###Code t_pos, p_pos = stats.ttest_1samp(Ws, pos_threshold, 
axis=0) t_pos = pd.DataFrame(t_pos, columns=data.columns, index=data.columns) p_pos = pd.DataFrame(p_pos, columns=data.columns, index=data.columns) t_neg, p_neg = stats.ttest_1samp(Ws, neg_threshold, axis=0) t_neg = pd.DataFrame(t_neg, columns=data.columns, index=data.columns) p_neg = pd.DataFrame(p_neg, columns=data.columns, index=data.columns) W = Ws.mean(axis=0) W_pos = ((t_pos > 0) & (p_pos < alpha)) * W W_neg = ((t_neg < 0) & (p_neg < alpha)) * W W = W_pos + W_neg G1 = nx.from_pandas_adjacency(W[W > 0].fillna(0), create_using=nx.DiGraph) G2 = nx.from_pandas_adjacency(W[W < 0].fillna(0), create_using=nx.DiGraph) plot_graph(G1, G2) plt.figure(figsize=(15, 5)); plt.subplot(1, 2, 1); sns.heatmap(W_pos != 0); plt.title('Positive Edges'); plt.subplot(1, 2, 2); sns.heatmap(W_neg != 0); plt.title('Negative Edges'); plt.figure(figsize=(10, 6)); sns.heatmap(W, cmap='RdBu_r', vmin=-np.max([W.max(), W.min()]), vmax=np.max([W.max(), W.min()]), annot=True, fmt='.2f', mask=np.isclose(W, 0)); plt.title('Adjacency Matrix'); ###Output _____no_output_____ ###Markdown Test pipeline1. Create dataset: sequence of preporcessed examples ready to feed to neuralnet 2. Create dataloader: define how dataset is loaded to neuralnet (batch size, order, computation optimizing ...)3. Create model : a bunch of matrixes math to transform input tensor to output tensor4. Training loop: + Forward + Calculate loss + Backward + Monitoring: + Evaluate metrics + Logger, back and forth + Visualize Import necessary packages ###Code import os import glob import sys import random import matplotlib.pylab as plt from PIL import Image, ImageDraw import torch from torch.utils.data import Dataset import torchvision.transforms.functional as TF import numpy as np from sklearn.model_selection import ShuffleSplit torch.manual_seed(0) np.random.seed(0) random.seed(0) %matplotlib inline sys.path.insert(0, '..') from src.models.util import pipeline, Cornell_Grasp_dataset ###Output _____no_output_____ ###Markdown Create a transformer ###Code def resize_img_label(image,label,target_size=(256,256)): w_orig,h_orig = image.size w_target,h_target = target_size label = label.view(-1,2) # resize image and label image_new = TF.resize(image,target_size) for i in range(len(label)): x, y = label[i] label[i][0] = x/w_orig*w_target label[i][1] = y/h_orig*h_target label = label.view(-1,8) return image_new,label def transformer(image, label, params): image,label=resize_img_label(image,label,params["target_size"]) if params["sample_output"]: # randoom choose a grasp to be the ground truth index = random.randint(0, len(label) -1) label = label[index] image=TF.to_tensor(image) return image, label ###Output _____no_output_____ ###Markdown Create Data loader ###Code def collate_fn(batch): imgs, labels = list(zip(*batch)) targets = [] for i in range(len(labels)): label = labels[i] target = torch.zeros(label.shape[0], label.shape[1] + 1) target[:,0] = i target[:, 1:] = label targets.append(target) targets = torch.cat(targets, 0) imgs = torch.stack([img for img in imgs]) return imgs, targets, trans_params_train={ "target_size" : (256, 256), "sample_output" : True } trans_params_val={ "target_size" : (256, 256), "sample_output" : False } path2data = "../data/processed/grasp.csv" # create data set train_ds = Cornell_Grasp_dataset(path2data,transformer,trans_params_train) val_ds = Cornell_Grasp_dataset(path2data,transformer,trans_params_val) sss = ShuffleSplit(n_splits=1, test_size=0.3, random_state=0) indices=range(len(train_ds)) for train_index, val_index in 
sss.split(indices): print(len(train_index)) print("-"*10) print(len(val_index)) from torch.utils.data import Subset train_ds = Subset(train_ds,train_index) print(len(train_ds)) val_ds = Subset(val_ds,val_index) print(len(val_ds)) import matplotlib.pyplot as plt def show(img,label=None): npimg = img.numpy().transpose((1,2,0)) plt.imshow(npimg) if label is not None: label = label.view(-1,2) for point in label: x,y= point plt.plot(x,y,'b+',markersize=10) plt.figure(figsize=(10,10)) for img,label in train_ds: show(img,label) break plt.figure(figsize=(10,10)) for img,label in val_ds: show(img,label) break from torch.utils.data import DataLoader train_dl = DataLoader(train_ds, batch_size=16, shuffle=True) val_dl = DataLoader(val_ds, batch_size=32, shuffle=False, collate_fn=collate_fn) for img_b, label_b in train_dl: print(img_b.shape,img_b.dtype) print(label_b.shape) break for img, label in val_dl: print(label.shape) break ###Output torch.Size([160, 9]) ###Markdown Create Model ###Code import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self, params): super(Net, self).__init__() def forward(self, x): return x def __init__(self, params): super(Net, self).__init__() C_in,H_in,W_in=params["input_shape"] init_f=params["initial_filters"] num_outputs=params["num_outputs"] self.conv1 = nn.Conv2d(C_in, init_f, kernel_size=3, stride=2, padding=1) self.conv2 = nn.Conv2d(init_f+C_in, 2*init_f, kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(3*init_f+C_in, 4*init_f, kernel_size=3, padding=1) self.conv4 = nn.Conv2d(7*init_f+C_in, 8*init_f, kernel_size=3, padding=1) self.conv5 = nn.Conv2d(15*init_f+C_in, 16*init_f, kernel_size=3, padding=1) self.fc1 = nn.Linear(16*init_f, num_outputs) def forward(self, x): identity=F.avg_pool2d(x,4,4) x = F.relu(self.conv1(x)) x = F.max_pool2d(x, 2, 2) x = torch.cat((x, identity), dim=1) identity=F.avg_pool2d(x,2,2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, 2, 2) x = torch.cat((x, identity), dim=1) identity=F.avg_pool2d(x,2,2) x = F.relu(self.conv3(x)) x = F.max_pool2d(x, 2, 2) x = torch.cat((x, identity), dim=1) identity=F.avg_pool2d(x,2,2) x = F.relu(self.conv4(x)) x = F.max_pool2d(x, 2, 2) x = torch.cat((x, identity), dim=1) x = F.relu(self.conv5(x)) x=F.adaptive_avg_pool2d(x,1) x = x.reshape(x.size(0), -1) x = self.fc1(x) return x Net.__init__= __init__ Net.forward = forward params_model={ "input_shape": (3,256,256), "initial_filters": 16, "num_outputs": 5, } model = Net(params_model) device = torch.device("cuda") model=model.to(device) ###Output _____no_output_____ ###Markdown Create optimizer ###Code from torch import optim from torch.optim.lr_scheduler import ReduceLROnPlateau opt = optim.Adam(model.parameters(), lr=1e-3) lr_scheduler = ReduceLROnPlateau(opt, mode='min',factor=0.5, patience=20,verbose=1) ###Output _____no_output_____ ###Markdown Training ###Code path2models= "../models/" mse_loss = nn.MSELoss(reduction="sum") params_loss={ "mse_loss": mse_loss, "gama": 5.0, } params_train={ "num_epochs": 10, "optimizer": opt, "params_loss": params_loss, "train_dl": train_dl, "val_dl": val_dl, "sanity_check": True, "lr_scheduler": lr_scheduler, "path2weights": path2models+"weights.pt", } pline = pipeline(model, params_train, device) model,loss_hist, metric_history = pline.train_val() # Train-Validation Progress num_epochs=params_train["num_epochs"] # plot loss progress plt.title("Train-Val Loss") plt.plot(range(1,num_epochs+1),loss_hist["train"],label="train") 
plt.plot(range(1,num_epochs+1),loss_hist["val"],label="val") plt.ylabel("Loss") plt.xlabel("Training Epochs") plt.legend() plt.show() # plot accuracy progress plt.title("Train-Val Accuracy") plt.plot(range(1,num_epochs+1),metric_history["train"],label="train") plt.plot(range(1,num_epochs+1),metric_history["val"],label="val") plt.ylabel("Accuracy") plt.xlabel("Training Epochs") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Prepare data Read data ###Code with open("small", "r") as f: text = f.read() text = re.sub(r"\([^\)]+\)", "", text).strip() text = '\n'.join([ t for t in text.split('\n') if re.search(r'[一-龯ぁ-んァ-ン]+', t)==None]) print("Text Sample >>\n{}\n".format(text[:200])) print("Length >>\n{}".format(len(text))) ###Output Text Sample >> T-50 골든이글 KAI T-50 골든이글()은 대한민국이 제작한 초음속 고등 훈련기이다. 2005년 10월부터 제작사인 한국항공우주산업에서 양산에 들어가, 2005년 12월에 1호기가 납품되었다. 2008년 3월 25일 초도분량 25대 도입이 모두 완료되어 기존의 T-38 탤론의 역할을 대체하였다. 현재 납품된 기체는 대한민국 공군 1 전투비행단소속 18 Length >> 7641228 ###Markdown Preprocess text dataLoad data and tokenize. Convert to id ###Code def encode_data(text, tokenize, vocab_size=None): tokens = [] for line in text.split("\n"): if len(line.strip())==0: continue tokens.extend(tokenize(line)) print("Tokenization done.") c = Counter(tokens) if vocab_size: vocabs = ['UNK'] + [ word for word, cnt in c.most_common(vocab_size - 1) ] else: vocabs = [ word for word, cnt in c.most_common() ] vocab_size = len(vocabs) print(f"Total number of tokens: {len(tokens)}. Vocab size: {vocab_size}") word2id = { word: idx for idx, word in enumerate(vocabs)} text_encoded = np.array([word2id.get(t,0) for t in tokens]) return text_encoded, vocab_size ###Output _____no_output_____ ###Markdown Make tensorflow datasetMake language model data(input, target) and convert to tensorflow batch data ###Code # embedding dimension embedding_dim = 64 # number of RNN units rnn_units = 256 batch_size = 64 buffer_size = 1000 seq_length = 100 def split_input_target(chunk): input_text = chunk[:-1] target_text = chunk[1:] return input_text, target_text def batch_dataset(text_encoded): token_dataset = tf.data.Dataset.from_tensor_slices(text_encoded) sequences = token_dataset.batch(seq_length+1, drop_remainder=True) dataset = sequences.map(split_input_target) dataset = dataset.shuffle(buffer_size).batch(batch_size, drop_remainder=True) dataset = dataset.repeat() return dataset ###Output _____no_output_____ ###Markdown Language model neural network Define model ###Code def build_model(vocab_size, embedding_dim, rnn_units, batch_size): model = tf.keras.Sequential([ tf.keras.layers.Embedding(vocab_size, embedding_dim, batch_input_shape=[batch_size, None]), tf.keras.layers.LSTM(rnn_units, return_sequences=True, stateful=False, recurrent_initializer='glorot_uniform'), tf.keras.layers.Dense(vocab_size) ]) return model ###Output _____no_output_____ ###Markdown TrainLog perplexity ###Code from collections import defaultdict class LossCallback(tf.keras.callbacks.Callback): def __init__(self, name, log_dict): super(LossCallback, self).__init__() self.name = name self.writer = None self.log_dict = log_dict def on_train_batch_end(self, batch, logs=None): if self.writer is None: self.writer = open(f"{self.name}.log", "w") self.writer.write("{}\t{:.4f}\n".format(batch, logs["loss"])) self.log_dict[self.name].append((batch, logs["loss"])) def on_train_end(self, logs=None): self.writer.close() def run_experiment(text, tokenize, name, log_dict): text_encoded, vocab_size = encode_data(text, tokenize, 30000) dataset = batch_dataset(text_encoded) model =
build_model(vocab_size, embedding_dim, rnn_units, batch_size) model.compile(optimizer='sgd', loss="sparse_categorical_crossentropy") examples_per_epoch = len(text_encoded)//seq_length steps_per_epoch = examples_per_epoch // batch_size logger = LossCallback(name, log_dict) model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=10, verbose=1, callbacks=[logger]) log = defaultdict(list) t1 = Tokenizer(decompose=True) tokenize = lambda x:t1.tokenize(x, as_id=True) run_experiment(text, tokenize, "bpe_with_decomposition", log) t2 = Tokenizer(decompose=False) tokenize = lambda x:t2.tokenize(x, as_id=True) run_experiment(text, tokenize, "bpe_no_decomposition", log) from konlpy.tag import Komoran k = Komoran() def tokenize(text): poses = k.pos(text) return [ a for a,b in poses ] run_experiment(text, tokenize, "morph_analyzer_komoran", log) import matplotlib.pyplot as plt from scipy.interpolate import make_interp_spline, BSpline def get_smooth(data): step = np.arange(len(data)) loss = data[:,1] xnew = np.linspace(step.min(), step.max(), 300) spl = make_interp_spline(step, loss, k=3) # type: BSpline smooth = spl(xnew) return xnew, smooth for name, data in log.items(): data = np.array(data) x, y = get_smooth(data) plt.plot(x, y, label=name) plt.xlabel("step") plt.ylabel("perplexity") plt.legend() plt.show() ###Output _____no_output_____
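###Markdown The curves above plot the logged cross-entropy loss under a "perplexity" label; since sparse_categorical_crossentropy is measured in nats, true perplexity is exp(loss). A minimal sketch for converting the logged (step, loss) pairs before plotting (the demo pairs are made-up numbers, not results from the runs above): ###Code
import numpy as np

def to_perplexity(pairs):
    """Convert a list of (step, loss-in-nats) pairs to (steps, perplexities)."""
    steps, losses = zip(*pairs)
    return np.array(steps), np.exp(np.array(losses))

demo = [(0, 6.9), (100, 5.2), (200, 4.1)]  # made-up loss values
steps, ppl = to_perplexity(demo)
print(ppl.round(1))  # [992.3 181.3  60.3]
###Output _____no_output_____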
tutorial-contents-notebooks/502_GPU.ipynb
###Markdown 502 GPUView more, visit my tutorial page: https://morvanzhou.github.io/tutorials/My Youtube Channel: https://www.youtube.com/user/MorvanZhou ###Code import torch import torch.nn as nn from torch.autograd import Variable import torch.utils.data as Data import torchvision torch.manual_seed(1) import matplotlib.pyplot as plt %matplotlib inline EPOCH = 1 BATCH_SIZE = 50 LR = 0.001 DOWNLOAD_MNIST = True train_data = torchvision.datasets.MNIST( root='./mnist/', train=True, transform=torchvision.transforms.ToTensor(), download=DOWNLOAD_MNIST,) train_loader = Data.DataLoader( dataset=train_data, batch_size=BATCH_SIZE, shuffle=True) test_data = torchvision.datasets.MNIST( root='./mnist/', train=False) # !!!!!!!! Change in here !!!!!!!!! # test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000].cuda()/255. # Tensor on GPU test_y = test_data.test_labels[:2000].cuda() class CNN(nn.Module): def __init__(self): super(CNN, self).__init__() self.conv1 = nn.Sequential( nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,), nn.ReLU(), nn.MaxPool2d(kernel_size=2),) self.conv2 = nn.Sequential( nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),) self.out = nn.Linear(32 * 7 * 7, 10) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = x.view(x.size(0), -1) output = self.out(x) return output cnn = CNN() # !!!!!!!! Change in here !!!!!!!!! # cnn.cuda() # Moves all model parameters and buffers to the GPU. optimizer = torch.optim.Adam(cnn.parameters(), lr=LR) loss_func = nn.CrossEntropyLoss() losses_his = [] ###Output _____no_output_____ ###Markdown Training ###Code for epoch in range(EPOCH): for step, (x, y) in enumerate(train_loader): # !!!!!!!! Change in here !!!!!!!!! # b_x = Variable(x).cuda() # Tensor on GPU b_y = Variable(y).cuda() # Tensor on GPU output = cnn(b_x) loss = loss_func(output, b_y) losses_his.append(loss.item()) optimizer.zero_grad() loss.backward() optimizer.step() if step % 50 == 0: test_output = cnn(test_x) # !!!!!!!! Change in here !!!!!!!!! # pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU accuracy = sum(pred_y == test_y).item() / test_y.size(0) print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy) plt.plot(losses_his, label='loss') plt.legend(loc='best') plt.xlabel('Steps') plt.ylabel('Loss') plt.ylim((0, 1)) plt.show() ###Output _____no_output_____ ###Markdown Test ###Code # !!!!!!!! Change in here !!!!!!!!! 
# test_output = cnn(test_x[:10]) pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU print(pred_y, 'prediction number') print(test_y[:10], 'real number') ###Output tensor([ 7, 2, 1, 0, 4, 1, 4, 9, 5, 9], device='cuda:0') prediction number tensor([ 7, 2, 1, 0, 4, 1, 4, 9, 5, 9], device='cuda:0') real number
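###Markdown The tutorial above moves the model and every tensor to the GPU explicitly with .cuda(). A device-agnostic variant (a sketch, not part of the tutorial) selects the device once and uses .to(), so the same code also runs on CPU-only machines: ###Code
import torch
import torch.nn.functional as F

# Pick the device once; everything below follows it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)  # stand-in for the CNN above
x = torch.randn(4, 10, device=device)      # batch created on the same device
y = torch.randint(0, 2, (4,), device=device)

loss = F.cross_entropy(model(x), y)
print(loss.item(), next(model.parameters()).device)
###Output _____no_output_____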
CIFAR10.ipynb
###Markdown Download the CIFAR10 dataset ###Code import torch import torchvision batch_size_train = 5000000 batch_size_test = 1000000 train_loader = torch.utils.data.DataLoader( torchvision.datasets.CIFAR10('/files/', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_train, shuffle=True) test_loader = torch.utils.data.DataLoader( torchvision.datasets.CIFAR10('/files/', train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_test, shuffle=True) examples = enumerate(train_loader) batch_idx, (example_data, example_targets) = next(examples) print(example_data.shape) examples_t = enumerate(test_loader) batch_idx_t, (example_data_t, example_targets_t) = next(examples_t) print(example_data_t.shape) sorted(list(set((example_targets_t).numpy().tolist()))) ###Output _____no_output_____ ###Markdown We are converting the dataset into five tasks and storing it in some JSON files to use in the future again and again so that we do not need to download the dataset always. Later, we can extract it from the driver to train and test models when we need the data. Though, we added a link for the zip file, so please skip the first few steps. ###Code def get_indices(example_targets): indices_list = [[] for i in range(5)] for i in range(example_targets.shape[0]): if example_targets[i].item() == 0 or example_targets[i].item() == 1: indices_list[0].append(i) elif example_targets[i].item() == 2 or example_targets[i].item() == 3: indices_list[1].append(i) elif example_targets[i].item() == 4 or example_targets[i].item() == 5: indices_list[2].append(i) elif example_targets[i].item() == 6 or example_targets[i].item() == 7: indices_list[3].append(i) elif example_targets[i].item() == 8 or example_targets[i].item() == 9: indices_list[4].append(i) return indices_list import json train_indices = get_indices(example_targets) test_indices = get_indices(example_targets_t) traindata_list =[] trainlabels_list = [] testdata_list = [] testlabels_list = [] for j in range(5): traindata = example_data[train_indices[j]] trainlabels = example_targets[train_indices[j]] testdata = example_data_t[test_indices[j]] testlabels = example_targets_t[test_indices[j]] testdata_list.append(testdata.numpy().tolist()) testlabels_list.append(testlabels.numpy().tolist()) traindata_list.append(traindata.detach().numpy().tolist()) trainlabels_list.append(trainlabels.numpy().tolist()) with open('traindata.json', 'w') as jsonfile: json.dump(traindata_list, jsonfile) with open('trainlabels.json', 'w') as jsonfile: json.dump(trainlabels_list, jsonfile) with open('testdata.json', 'w') as jsonfile: json.dump(testdata_list, jsonfile) with open('testlabels.json', 'w') as jsonfile: json.dump(testlabels_list, jsonfile) indices_list_t = [[] for i in range(5)] for i in range(example_data_t.shape[0]): if example_targets_t[i].item() == 0 or example_targets_t[i].item() == 1: indices_list_t[0].append(i) testdata = example_data_t[indices_list_t[0]] testlabels = example_targets_t[indices_list_t[0]] examples_test = enumerate(test_loader) batch_idx_t, (example_data_t, example_targets_t) = next(examples_test) traindata_t = [[] for i in range(10)] indices_list_t = [[] for i in range(10)] for j in range(10): indices_t = torch.where(example_targets_t == j) indices_list_t[j].append(indices_t) 
traindata_t[j].append((example_data_t[indices_t], example_targets_t[indices_t])) examples = enumerate(train_loader) batch_idx, (example_data, example_targets) = next(examples) traindata = [[] for i in range(10)] indices_list = [[] for i in range(10)] for j in range(10): indices = torch.where(example_targets == j) indices_list[j].append(indices) traindata[j].append((example_data[indices], example_targets[indices])) import matplotlib.pyplot as plt fig = plt.figure() for i in range(6): plt.subplot(2,3,i+1) plt.tight_layout() plt.imshow(example_data[i][0], cmap='gray', interpolation='none') plt.title("Ground Truth: {}".format(example_targets[i])) plt.xticks([]) plt.yticks([]) fig ###Output _____no_output_____ ###Markdown If you download the zip file start from here Please download the zip file, extracts four files and upload to a drive(your_path). Please click [here](https://drive.google.com/drive/folders/1tPBCC8DKl-uz3tixvRcQOpLk_FKDDdo9?usp=sharing) to download the data. ###Code import torch import torch.nn as nn class encoder(nn.Module): def __init__(self): super(encoder, self).__init__() self.nc_mnist = 1 self.nc_cifar10 = 3 self.conv1 = nn.Conv2d(self.nc_cifar10, 3, 3, 1, 1) self.conv2 = nn.Conv2d(3, 6, 2, 2, 0) self.conv3 = nn.Conv2d(6, 12, 2, 2, 0) self.conv4 = nn.Conv2d(12, 24, 2, 2, 0) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = self.conv3(x) x = self.conv4(x) return x class decoder(nn.Module): def __init__(self): super(decoder, self).__init__() self.nc_mnist = 118 self.nc_cifar10 = 202 self.nk_mnist = 3 self.nk_cifar10 = 4 self.decon1 = nn.ConvTranspose2d(self.nc_cifar10, 24, 3, 1, 0) self.decon2 = nn.ConvTranspose2d(24, 12, self.nk_cifar10, 2, 0) self.decon3 = nn.ConvTranspose2d(12, 6, 2, 2, 0) self.decon4 = nn.ConvTranspose2d(6, 3, 2, 2, 0) def forward(self, x): x = x.view(x.shape[0], self.nc_cifar10, 1, 1) x = self.decon1(x) x = self.decon2(x) x = self.decon3(x) x = self.decon4(x) return x class VAE(nn.Module): def __init__(self, eps): super(VAE, self).__init__() self.en = encoder() self.de = decoder() self.eps = eps self.mnist_z = 108 self.cifar10_z = 192 def forward(self, x, one_hot): x = self.en(x) x = x.view(x.shape[0], -1) mu = x[:, :self.cifar10_z] logvar = x[:, self.cifar10_z:] std = torch.exp(0.5 * logvar) z = mu + self.eps * std #print(z.shape, 'aaa', one_hot.shape) z1 = torch.cat((z, one_hot), axis = 1) #print(z1.shape, 'bbb') return self.de(z1), mu, logvar class private(nn.Module): def __init__(self, eps): super(private, self).__init__() self.task = torch.nn.ModuleList() self.eps = eps for _ in range(5): self.task.append(VAE(self.eps)) def forward(self, x, one_hot, task_id): return self.task[task_id].forward(x, one_hot) class NET(nn.Module): def __init__(self, eps): super(NET, self).__init__() self.eps = eps self.shared = VAE(self.eps) self.private = private(self.eps) self.head = torch.nn.ModuleList() self.mnist = 216 self.cifar10 = 384 self.in_mnist = 2 self.in_cifar10 = 6 for _ in range(5): self.head.append( nn.Sequential( nn.Conv2d(self.in_cifar10, 12, 3, 1, 1), nn.Conv2d(12, 24, 2, 2, 0), nn.Flatten(1, -1), nn.Linear(24*16*16, 100), nn.Linear(100, 10) ) ) def forward(self, x, one_hot, task_id): s_x, s_mu, s_logvar = self.shared(x, one_hot) p_x, p_mu, p_logvar = self.private(x, one_hot, task_id) x = torch.cat([s_x, p_x], dim = 1) return self.head[task_id].forward(x), (s_x, s_mu, s_logvar), (p_x, p_mu, p_logvar) ###Output _____no_output_____ ###Markdown Number of epochs and synthetic data If you wish to change the number of epochs and synthetic data 
used as a generative replay, check lines 113 and 64, respectively. Change according to your requirments. ###Code import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import numpy as np from collections import deque from torch.autograd import grad as torch_grad import torchvision.utils as vutils import os import os.path class CL_VAE(): def __init__(self): super(CL_VAE, self).__init__() self.batch_size = 64 self.mnist_z = 108 self.cifar10_z = 192 self.build_model() self.set_cuda() self.criterion = torch.nn.CrossEntropyLoss() self.recon = torch.nn.MSELoss() self.net_path = 'path/CIFAR10.pth' #give your preffered path where you want to save. self.accuracy_matrix = [[] for kk in range(5)] self.acc_25 = [] self.acc_50 = [] def build_model(self): self.eps = torch.randn(self.batch_size, self.cifar10_z) self.eps = self.eps.cuda() self.net = NET(self.eps) pytorch_total_params = sum(p.numel() for p in self.net.parameters() if p.requires_grad) print('pytorch_total_params:', pytorch_total_params) def set_cuda(self): self.net.cuda() def VAE_loss(self, recon, mu, sigma): kl_div = -0.5 * torch.sum(1 + sigma - mu.pow(2) - sigma.exp()) #print('kl_div', kl_div.item()) return recon + kl_div def train(self, all_traindata, all_trainlabels, all_testdata, all_testlabels, total_tasks): replay_classes = [] for i in range(total_tasks): traindata = torch.tensor(all_traindata[i]) trainlabels = torch.tensor(all_trainlabels[i]) testdata = torch.tensor(all_testdata[i]) testlabels = torch.tensor(all_testlabels[i]) #print(trainlabels, 'avfr') replay_classes.append(sorted(list(set(trainlabels.numpy().tolist())))) if i + 1 == 1: self.train_task(traindata, trainlabels, testdata, testlabels, i) #replay_classes.append(sorted(list(set(trainlabels.detach().numpy().tolist())))) else: num_gen_samples = 4 #z_dim = 108 for m in range(i): #print(replay_classes, 'replay_classes') replay_trainlabels = [] for ii in replay_classes[m]: for j in range(num_gen_samples): replay_trainlabels.append(ii) replay_trainlabels = torch.tensor(replay_trainlabels) replay_trainlabels_onehot = self.one_hot(replay_trainlabels) z = torch.randn(2 * num_gen_samples, self.cifar10_z) z_one_hot = torch.cat((z, replay_trainlabels_onehot), axis = 1) z_one_hot = z_one_hot.cuda() replay_data = self.net.private.task[m].de(z_one_hot).detach().cpu() traindata = torch.cat((replay_data, traindata), axis = 0) trainlabels = torch.cat((replay_trainlabels, trainlabels)) testdata = torch.cat((testdata, torch.tensor(all_testdata[m])), axis = 0) testlabels = torch.cat((testlabels, torch.tensor(all_testlabels[m]))) #print(sorted(list(set(trainlabels.detach().numpy().tolist()))), 'aaa', i + 1) self.train_task(traindata, trainlabels, testdata, testlabels, i) self.acc_mat(all_testdata, all_testlabels, total_tasks, i) #print(sorted(list(set(trainlabels.detach().numpy()))), '/n', sorted(list(set(testlabels.detach().numpy())))) self.forgetting_measure(self.accuracy_matrix, total_tasks) print(self.acc_25, 'acc_25', np.mean(self.acc_25)) print(self.acc_50, 'acc_50', np.mean(self.acc_50)) def one_hot(self, labels): matrix = torch.zeros(len(labels), 10) rows = np.arange(len(labels)) matrix[rows, labels] = 1 return matrix def model_save(self): torch.save(self.net.state_dict(), os.path.join(self.net_path)) def train_task(self, traindata, trainlabels, testdata, testlabels, task_id): net_opti = torch.optim.Adam(self.net.parameters(), lr = 1e-4) #data, label = traindata #batch_size = 64 num_iterations = int(traindata.shape[0]/self.batch_size) 
num_epochs = 50 for e in range(num_epochs): for i in range(num_iterations): self.net.zero_grad() self.net.train() batch_data = traindata[i * self.batch_size : (i + 1)*self.batch_size] #print(batch_data.shape, '41') batch_label = trainlabels[i * self.batch_size : (i + 1)*self.batch_size] batch_label_one_hot = self.one_hot(batch_label) batch_data = batch_data.cuda() batch_label = batch_label.cuda() batch_label_one_hot = batch_label_one_hot.cuda() out, shared_out, private_out = self.net(batch_data, batch_label_one_hot, task_id) s_x, s_mu, s_logvar = shared_out p_x, p_mu, p_logvar = private_out #print(out.shape, '12', batch_label.shape, s_x.shape) cross_en_loss = self.criterion(out, batch_label) s_recon = self.recon(batch_data, s_x) p_recon = self.recon(batch_data, p_x) s_VAE_loss = self.VAE_loss(s_recon, s_mu, s_logvar) p_VAE_loss = self.VAE_loss(p_recon, p_mu, p_logvar) all_loss = cross_en_loss + s_VAE_loss + p_VAE_loss all_loss.backward(retain_graph=True) net_opti.step() #print('epoch:', e + 1, 'task_loss', cross_en_loss.item(), 's_VAE:', s_VAE_loss.item(), 'p_VAE', p_VAE_loss.item()) if (e + 1) % 25 == 0: acc1, _ = self.evall(testdata, testlabels, task_id) print('Task:', task_id + 1, 'acc', acc1) if task_id + 1 == 5: self.model_save() def evall(self, testdata, testlabels, task_id): self.net.eval() num_iterations = int(testdata.shape[0]/self.batch_size) pred_labels_list = [] acc = [] for i in range(num_iterations): batch_data = testdata[i * self.batch_size : (i + 1) * self.batch_size] batch_labels = testlabels[i * self.batch_size : (i + 1) * self.batch_size] batch_label_one_hot = self.one_hot(batch_labels) batch_data = batch_data.cuda() batch_labels = batch_labels.cuda() batch_label_one_hot = batch_label_one_hot.cuda() out, _, _ = self.net(batch_data, batch_label_one_hot, task_id) pred_labels = torch.argmax(out, axis = 1) pred_labels_list.append(pred_labels.detach().cpu().numpy().tolist()) #print(pred_labels, 'aa') #print(pred_labels.shape, '1452', batch_labels) acc.append((torch.sum(batch_labels == pred_labels)/batch_data.shape[0] * 100).detach().cpu().numpy().tolist()) #print('acc:', acc) return np.mean(np.array(acc)), np.array(pred_labels_list).flatten() def forgetting_measure(self, accuracy_matrix, num_tasks): forgetting_measures = [] accuracy_matrix = np.array(accuracy_matrix) #print(accuracy_matrix, 'aa') for after_task_idx in range(1, num_tasks): after_task_num = after_task_idx + 1 #print(accuracy_matrix, 'accuracy_matrix') prev_acc = accuracy_matrix[:after_task_num - 1, :after_task_num - 1] forgettings = prev_acc.max(axis=0) - accuracy_matrix[after_task_num - 1, :after_task_num - 1] forgetting_measures.append(np.mean(forgettings).item()) #print('forgetting_measures', forgetting_measures) #print("the forgetting measure is...", np.mean(np.array(forgetting_measures))) def acc_mat(self, testData1, testLabels1, num_tasks, t): for kk in range(num_tasks): testData_tw = torch.tensor(testData1[kk]) testLabels_tw = torch.tensor(testLabels1[kk]) testLabels_tw_classes = sorted(list(set(testLabels_tw.detach().numpy().tolist()))) #pred_tw = (class_appr.test(testData_tw)).cpu() #classifier.predict(testData_tw) _, pred_tw = self.evall(testData_tw, testLabels_tw, kk) #pred_tw = torch.argmax(pred_tw, dim = 1) #pred_tw = pred_tw.cpu() testLabels_tw = testLabels_tw.detach().numpy()[:pred_tw.shape[0]] #print(pred_tw[0], '12', testLabels_tw[0]) dict_correct_tw = {} dict_total_tw = {} for ii in testLabels_tw_classes: dict_total_tw[ii] = 0 dict_correct_tw[ii] = 0 for ii in range(0, testLabels_tw.shape[0]): 
#print(testLabels_tw[ii],'aaa', pred_tw[ii]) if(testLabels_tw[ii] == pred_tw[ii]): dict_correct_tw[testLabels_tw[ii].item()] = dict_correct_tw[testLabels_tw[ii].item()] + 1 #print(testLabels_tw[ii], '1', dict_total_tw[testLabels_tw[ii]], '2', dict_total_tw[testLabels_tw[ii]]) dict_total_tw[testLabels_tw[ii].item()] = dict_total_tw[testLabels_tw[ii].item()] + 1 avgAcc_tw = 0.0 num_seen_tw = 0.0 for ii in testLabels_tw_classes: avgAcc_tw = avgAcc_tw + (dict_correct_tw[ii]*1.0)/(dict_total_tw[ii]) num_seen_tw = num_seen_tw + 1 avgAcc_tw = avgAcc_tw/num_seen_tw #testData_tw[jj].append(avgAcc_tw) self.accuracy_matrix[t].append(avgAcc_tw) ###Output _____no_output_____ ###Markdown Check your_path run %ls to see which directory you are currently in. You can change the directory using the command %cd dir_name. ###Code %ls %cd dir_name your_path = '/content/drive/MyDrive/' #change this path import json traindata_path = your_path + '/traindata.json' trainlabels_path = your_path + '/trainlabels.json' testdata_path = your_path + '/testdata.json' testlabels_path = your_path + '/testlabels.json' with open(traindata_path) as f: traindata = json.load(f) with open(trainlabels_path) as f: trainlabels = json.load(f) with open(testdata_path) as f: testdata = json.load(f) with open(testlabels_path) as f: testlabels = json.load(f) import time model = CL_VAE() st = time.time() model.train(traindata, trainlabels, testdata, testlabels, 5) fn = time.time() #print("time:", fn - st) ###Output pytorch_total_params: 3388428 ###Markdown CIFAR10 with Daisy and ResNet features (pytorch / skimage / sklearn) ###Code # Remember to select a GPU runtime when setting this to True USE_CUDA = False ###Output _____no_output_____ ###Markdown Download and uncompress the dataset ###Code import numpy as np import torch import torchvision import matplotlib.pyplot as plt %matplotlib inline from skimage.feature import daisy from sklearn.svm import LinearSVC from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.manifold import TSNE from sklearn.decomposition import PCA import pickle from tqdm import tqdm import wget import tarfile import os data_url = r'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz' download_path = './data/cifar-10-python.tar.gz' uncompressed_path = './data/cifar-10-python' batches_subdir = 'cifar-10-batches-py' batches_path = os.path.join(uncompressed_path, batches_subdir) os.makedirs(os.path.dirname(download_path), exist_ok=True) print('Downloading...') wget.download(data_url, download_path) print('Uncompressing...') with tarfile.open(download_path, "r:gz") as tar: tar.extractall(uncompressed_path) print('Uncompressed batches: {}'.format(', '.join(os.listdir(batches_path)))) print('Data ready!') ###Output Downloading... Uncompressing... Uncompressed batches: data_batch_1, readme.html, batches.meta, data_batch_2, data_batch_5, test_batch, data_batch_4, data_batch_3 Data ready! 
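###Markdown Re-running the cell above repeats a roughly 170 MB download. A small optional guard (a sketch; the size figure is approximate) that checks for an existing extraction first: ###Code
import os

# Same extraction path as in the download cell above.
batches_path = './data/cifar-10-python/cifar-10-batches-py'
have_batches = os.path.isdir(batches_path) and len(os.listdir(batches_path)) > 0
print('Batches already extracted, the download cell can be skipped.'
      if have_batches else 'Batches missing, run the download cell first.')
###Output _____no_output_____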
###Markdown
Load the data into memory and format it as a stack of RGB images
###Code
def load_batch(batches_path, batch_name):
    with open(os.path.join(batches_path, batch_name), 'rb') as f:
        data_batch = pickle.load(f, encoding='bytes')
    return data_batch

def data_to_images(data):
    # Each row is 3072 bytes: three 32x32 channel planes; unpack to HWC images.
    data_reshp = np.reshape(data, (-1, 3, 32, 32))
    imgs = np.moveaxis(data_reshp, (1, 2, 3), (3, 1, 2))
    return imgs

def batches_to_images_with_labels(batches):
    data_table = np.concatenate([batch[b'data'] for batch in batches], axis=0)
    labels = np.concatenate([np.asarray(batch[b'labels']) for batch in batches])
    images = data_to_images(data_table)
    return images, labels

train_batch_names = ['data_batch_{}'.format(i) for i in range(1, 5)]
val_batch_names = ['data_batch_5']
test_batch_names = ['test_batch']

train_batches = [load_batch(batches_path, batch_name) for batch_name in train_batch_names]
train_imgs, train_labels = batches_to_images_with_labels(train_batches)
print('Training set: Images shape = {}'.format(train_imgs.shape))
print('Training set: Labels shape = {}'.format(train_labels.shape))
print()

val_batches = [load_batch(batches_path, batch_name) for batch_name in val_batch_names]
val_imgs, val_labels = batches_to_images_with_labels(val_batches)
print('Validation set: Images shape = {}'.format(val_imgs.shape))
print('Validation set: Labels shape = {}'.format(val_labels.shape))
print()

test_batches = [load_batch(batches_path, batch_name) for batch_name in test_batch_names]
test_imgs, test_labels = batches_to_images_with_labels(test_batches)
print('Test set: Images shape = {}'.format(test_imgs.shape))
print('Test set: Labels shape = {}'.format(test_labels.shape))
###Output
Training set: Images shape = (40000, 32, 32, 3)
Training set: Labels shape = (40000,)

Validation set: Images shape = (10000, 32, 32, 3)
Validation set: Labels shape = (10000,)

Test set: Images shape = (10000, 32, 32, 3)
Test set: Labels shape = (10000,)
###Markdown
Display some random images from the training set
###Code
grid_num_rows = 10
grid_num_cols = 10
num_random_samples = grid_num_rows * grid_num_cols

def display_random_subset(imgs, labels, grid_num_rows=6, grid_num_cols=6, figsize=(12, 12)):
    fig, ax_objs = plt.subplots(nrows=grid_num_rows, ncols=grid_num_cols, figsize=figsize)
    for ax in np.ravel(ax_objs):
        rnd_id = np.random.randint(labels.shape[0])
        img = imgs[rnd_id]  # use the function argument, not the global train_imgs
        label = labels[rnd_id]
        ax.imshow(img)
        ax.axis('off')
        ax.set_title('{}'.format(label))

display_random_subset(train_imgs, train_labels)
###Output
_____no_output_____
###Markdown
Shallow baseline
###Code
def extract_daisy_features(images):
    feature_vecs = []
    for img in tqdm(images):
        img_grayscale = np.mean(img, axis=2)
        fvec = daisy(img_grayscale, step=4, radius=9).reshape(1, -1)
        feature_vecs.append(fvec)
    return np.concatenate(feature_vecs)

train_daisy_feature_vecs = extract_daisy_features(train_imgs)
val_daisy_feature_vecs = extract_daisy_features(val_imgs)
test_daisy_feature_vecs = extract_daisy_features(test_imgs)
print('Training set: Daisy features shape = {}'.format(train_daisy_feature_vecs.shape))
print('Training set: labels shape = {}'.format(train_labels.shape))
print()
print('Validation set: Daisy features shape = {}'.format(val_daisy_feature_vecs.shape))
print('Validation set: labels shape = {}'.format(val_labels.shape))
print()
print('Test set: Daisy features shape = {}'.format(test_daisy_feature_vecs.shape))
print('Test set: labels shape = {}'.format(test_labels.shape))
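# Quick illustrative sanity check on the descriptor geometry before fitting
# anything (assumes skimage's default rings=3, histograms=8, orientations=8):
# with step=4 and radius=9 on a 32x32 image, daisy should return a 4x4 grid of
# (3 * 8 + 1) * 8 = 200-dimensional descriptors, i.e. 3200 features per image.
sample_desc = daisy(np.mean(train_imgs[0], axis=2), step=4, radius=9)
print('Daisy descriptor grid and length = {}'.format(sample_desc.shape))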
daisy_svm_param_grid = {'C': np.logspace(-4, 4, num=9, endpoint=True, base=10)}
daisy_clf = GridSearchCV(estimator=LinearSVC(), param_grid=daisy_svm_param_grid,
                         cv=3, n_jobs=4, verbose=10)
daisy_clf.fit(X=train_daisy_feature_vecs, y=train_labels)
val_daisy_predictions = daisy_clf.predict(X=val_daisy_feature_vecs)
val_daisy_accuracy = np.mean(val_daisy_predictions == val_labels)
print('Validation Daisy accuracy = {} (use this to tune params)'.format(val_daisy_accuracy))
test_daisy_predictions = daisy_clf.predict(X=test_daisy_feature_vecs)
test_daisy_accuracy = np.mean(test_daisy_predictions == test_labels)
print('Test Daisy accuracy = {} (DO NOT use this to tune params!)'.format(test_daisy_accuracy))
###Output
Test Daisy accuracy = 0.5953 (DO NOT use this to tune params!)
###Markdown
ResNet feature extraction
###Code
r50 = torchvision.models.resnet50(pretrained=True)
# Throw away the classification layer and the pooling layer before it (not needed
# because the images are small anyway)
r50_fx_layers = list(r50.children())[:-2]
r50_fx = torch.nn.Sequential(*r50_fx_layers)

def extract_deep_features(images, model, use_cuda, batch_size=128):
    # Normalize with the ImageNet statistics expected by torchvision models
    torchvision_mean = np.array([0.485, 0.456, 0.406])
    torchvision_std = np.array([0.229, 0.224, 0.225])
    images_norm = (images / 255. - torchvision_mean) / torchvision_std
    # NHWC -> NCHW
    images_norm_tensor = torch.from_numpy(images_norm.astype(np.float32)).permute((0, 3, 1, 2))
    dset = torch.utils.data.TensorDataset(images_norm_tensor)
    dataloader = torch.utils.data.DataLoader(dset, batch_size, shuffle=False, drop_last=False)
    model.eval()
    if use_cuda:
        model.cuda()
    feature_vec_batches = []
    with tqdm(total=len(dataloader)) as pbar:
        for data_batch in dataloader:
            img_batch = data_batch[0]  # We get a tuple so have to unpack it
            if use_cuda:
                img_batch = img_batch.cuda()
            fvec_batch = model(img_batch)
            if use_cuda:
                fvec_batch = fvec_batch.cpu()
            fvec_batch_cl = fvec_batch.detach().clone()
            fvec_batch_np = fvec_batch_cl.view(img_batch.size(0), -1).numpy()
            feature_vec_batches.append(fvec_batch_np)
            pbar.update(1)
    if use_cuda:
        model.cpu()  # cleanup
    return np.concatenate(feature_vec_batches, axis=0)

train_resnet_feature_vecs = extract_deep_features(train_imgs, r50_fx, use_cuda=USE_CUDA)
val_resnet_feature_vecs = extract_deep_features(val_imgs, r50_fx, use_cuda=USE_CUDA)
test_resnet_feature_vecs = extract_deep_features(test_imgs, r50_fx, use_cuda=USE_CUDA)
print('Training set: ResNet features shape = {}'.format(train_resnet_feature_vecs.shape))
print('Training set: labels shape = {}'.format(train_labels.shape))
print()
print('Validation set: ResNet features shape = {}'.format(val_resnet_feature_vecs.shape))
print('Validation set: labels shape = {}'.format(val_labels.shape))
print()
print('Test set: ResNet features shape = {}'.format(test_resnet_feature_vecs.shape))
print('Test set: labels shape = {}'.format(test_labels.shape))
###Output
Training set: ResNet features shape = (40000, 2048)
Training set: labels shape = (40000,)

Validation set: ResNet features shape = (10000, 2048)
Validation set: labels shape = (10000,)

Test set: ResNet features shape = (10000, 2048)
Test set: labels shape = (10000,)
###Markdown
Visualize the ResNet features
###Code
pca = PCA(n_components=50)
embed = TSNE(n_components=2, init='pca')
dim_red = Pipeline([('pca', pca), ('embed', embed)])
dim_red_subset = np.random.choice(np.arange(train_resnet_feature_vecs.shape[0]), size=5000, replace=False)
train_resnet_feature_vecs_subset_dimred = dim_red.fit_transform(train_resnet_feature_vecs[dim_red_subset])
train_labels_subset = train_labels[dim_red_subset]

fig, ax = plt.subplots(figsize=(10, 10))
sc = ax.scatter(train_resnet_feature_vecs_subset_dimred[:, 0],
                train_resnet_feature_vecs_subset_dimred[:, 1],
                c=train_labels_subset, marker='.', cmap='nipy_spectral')
plt.colorbar(sc)
###Output
_____no_output_____
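###Markdown
The scatter plot only gives a qualitative impression of how well the ResNet features separate the classes. If a single number is preferred, a cluster-separation score can be computed on the same 2-D embedding; a minimal sketch using the silhouette score (one reasonable choice among several):
###Code
from sklearn.metrics import silhouette_score

# Higher is better; values near 0 mean heavily overlapping classes in the embedding.
sil = silhouette_score(train_resnet_feature_vecs_subset_dimred, train_labels_subset)
print('Silhouette score of the 2-D embedding = {:.3f}'.format(sil))
###Output
_____no_output_____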
###Markdown
Fit an SVM to ResNet features
###Code
resnet_svm_param_grid = {'C': np.logspace(-4, 4, num=9, endpoint=True, base=10)}
resnet_clf = GridSearchCV(estimator=LinearSVC(), param_grid=resnet_svm_param_grid,
                          cv=3, n_jobs=4, verbose=10)
resnet_clf.fit(X=train_resnet_feature_vecs, y=train_labels)
val_resnet_predictions = resnet_clf.predict(X=val_resnet_feature_vecs)
val_resnet_accuracy = np.mean(val_resnet_predictions == val_labels)
print('Validation Resnet accuracy = {} (use this to tune params)'.format(val_resnet_accuracy))
test_resnet_predictions = resnet_clf.predict(X=test_resnet_feature_vecs)
test_resnet_accuracy = np.mean(test_resnet_predictions == test_labels)
print('Test Resnet accuracy = {} (DO NOT use this to tune params!)'.format(test_resnet_accuracy))
###Output
Test Resnet accuracy = 0.6337 (DO NOT use this to tune params!)
###Markdown
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

tf.config.experimental.list_physical_devices()
(X_train, y_train), (X_test, y_test) = keras.datasets.cifar10.load_data()
X_train.shape
index = 5
plt.imshow(X_train[index])
X_train.shape
#X_train_flatten = X_train.reshape(X_train.shape[0],-1).T
#y_train_flatten = y_train.reshape(y_train.shape[0],-1).T
#X_test_flatten = X_test.reshape(X_test.shape[0],-1).T
#y_test_flatten = y_test.reshape(y_test.shape[0],-1).T
y_train.shape
X_train_final = X_train/255
#y_train_final = y_train_flatten/255
X_test_final = X_test/255
#y_test_final = y_test_flatten/255

def relu(z):
    return max(0, z)

y_train_categorical = keras.utils.to_categorical(y_train, num_classes=10, dtype='float32')
y_test_categorical = keras.utils.to_categorical(y_test, num_classes=10, dtype='float32')
y_train[0:5]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(32,32,3)),
    keras.layers.Dense(3000, activation='relu'),
    keras.layers.Dense(1000, activation='relu'),
    keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='SGD', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train_final, y_train_categorical, epochs=1)

def get_model():
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(32,32,3)),
        keras.layers.Dense(3000, activation='relu'),
        keras.layers.Dense(1000, activation='relu'),
        keras.layers.Dense(10, activation='sigmoid')
    ])
    model.compile(optimizer='SGD', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

prediction = model.predict(X_test_final)

%%timeit -n1 -r1
with tf.device('/CPU:0'):
    cpu_model = get_model()
    cpu_model.fit(X_train_final, y_train_categorical, epochs=1)

np.argmax(prediction[9])
y_test[9]
###Output
_____no_output_____
###Markdown
CNN Implementation
###Code
cnn = models.Sequential([
    layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
cnn.summary()
cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
cnn.fit(X_train, y_train, epochs=10)
###Output
Epoch 1/10
1563/1563 [==============================] - 57s 36ms/step - loss: 3.2933 - accuracy:
0.2945 Epoch 2/10 1563/1563 [==============================] - 57s 36ms/step - loss: 1.3525 - accuracy: 0.5197 Epoch 3/10 1563/1563 [==============================] - 57s 36ms/step - loss: 1.1631 - accuracy: 0.5923 Epoch 4/10 1563/1563 [==============================] - 56s 36ms/step - loss: 1.0229 - accuracy: 0.6470 Epoch 5/10 1563/1563 [==============================] - 57s 37ms/step - loss: 0.9365 - accuracy: 0.6783 Epoch 6/10 1563/1563 [==============================] - 57s 36ms/step - loss: 0.8721 - accuracy: 0.6983 Epoch 7/10 1563/1563 [==============================] - 57s 36ms/step - loss: 0.7981 - accuracy: 0.7231 Epoch 8/10 1563/1563 [==============================] - 58s 37ms/step - loss: 0.7493 - accuracy: 0.7417 Epoch 9/10 1563/1563 [==============================] - 58s 37ms/step - loss: 0.6969 - accuracy: 0.7558 Epoch 10/10 1563/1563 [==============================] - 58s 37ms/step - loss: 0.6580 - accuracy: 0.7745 ###Markdown ###Code import torch import torchvision import torchvision.transforms as transforms transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Assuming that we are on a CUDA machine, this should print a CUDA device: print(device) import matplotlib.pyplot as plt import numpy as np # functions to show an image def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) plt.show() # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 48, 3, 1) self.conv2 = nn.Conv2d(48, 96, 3, 1) self.conv3 = nn.Conv2d(96, 192, 3, 1) self.conv4 = nn.Conv2d(192, 256, 3, 1) self.pool = nn.MaxPool2d(2, 2) self.fc1 = nn.Linear(5*5*256, 512) self.fc2 = nn.Linear(512, 64) self.fc3 = nn.Linear(64, 10) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = self.pool(x) x = F.relu(self.conv3(x)) x = F.relu(self.conv4(x)) x = self.pool(x) x = x.view(-1, 5*5*256) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() net = net.to(device) import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for epoch in range(15): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs; data is a list of [inputs, labels] inputs, labels = data inputs, labels = data[0].to(device), data[1].to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # 
print every 2000 mini-batches
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0

print('Finished Training')

PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)

dataiter = iter(testloader)
images, labels = dataiter.next()

# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

net = Net()
net.load_state_dict(torch.load(PATH))
outputs = net(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))

correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
average_accuracy = 100 * correct / total

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1

class_vals = []
perAccuracy = []
for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
    class_vals.append(classes[i])
    perAccuracy.append(100 * class_correct[i] / class_total[i])

import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
plt.xlabel('Class')
plt.ylabel('Percentage')
ax.bar(class_vals, perAccuracy)
###Output
Accuracy of plane : 81 %
Accuracy of   car : 87 %
Accuracy of  bird : 67 %
Accuracy of   cat : 59 %
Accuracy of  deer : 77 %
Accuracy of   dog : 69 %
Accuracy of  frog : 84 %
Accuracy of horse : 80 %
Accuracy of  ship : 87 %
Accuracy of truck : 83 %
###Markdown
Building an Artificial Neural Network **ANN** first to check the Performance
###Code
ann = models.Sequential([
    layers.Flatten(input_shape=(32,32,3)),
    layers.Dense(3000, activation='relu'),
    layers.Dense(1000, activation='relu'),
    layers.Dense(10, activation='softmax')
])
ann.compile(optimizer='SGD', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
ann.fit(X_train, y_train, epochs=10)
ann.evaluate(X_test, y_test)

from sklearn.metrics import confusion_matrix, classification_report
import numpy as np
y_pred = ann.predict(X_test)
y_pred_classes = [np.argmax(element) for element in y_pred]
print("Classification Report: \n\n", classification_report(y_test, y_pred_classes))
###Output
Classification Report: 

               precision    recall  f1-score   support

           0       0.72      0.24      0.36      1000
           1       0.82      0.37      0.51      1000
           2       0.22      0.77      0.34      1000
           3       0.38      0.27      0.32      1000
           4       0.33      0.47      0.39      1000
           5       0.50      0.27      0.35      1000
           6       0.55      0.54      0.55      1000
           7       0.83      0.28      0.42      1000
           8       0.72      0.53      0.61      1000
           9       0.57      0.58      0.58      1000

    accuracy                           0.43     10000
   macro avg       0.57      0.43      0.44     10000
weighted avg       0.57      0.43      0.44     10000
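###Markdown
The cell above imports `confusion_matrix` but only prints the classification report. The confusion matrix makes the ANN's systematic mistakes (for instance, how often other classes get funneled into the over-predicted class 2) easier to see; a minimal sketch using the variables already defined:
###Code
# Rows are true classes, columns are predicted classes.
cm = confusion_matrix(y_test, y_pred_classes)
print(cm)
###Output
_____no_output_____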
###Markdown
Building a Convolutional Neural Network **(CNN)**
###Code
cnn = models.Sequential([
    # cnn layers
    layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu', input_shape=(32,32,3)),
    layers.MaxPooling2D((2,2)),
    layers.Conv2D(filters=32, kernel_size=(3,3), activation='relu'),
    layers.MaxPooling2D((2,2)),
    # dense layers
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
cnn.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=["accuracy"])
cnn.fit(X_train, y_train, epochs=10)
cnn.evaluate(X_test, y_test)

y_test[:5]  # 2-dimensional array
y_test = y_test.reshape(-1)  # converting to a 1-dimensional array
y_test[:5]

# plot_sample and classes are used below but were never defined in this
# notebook; minimal assumed versions:
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']

def plot_sample(X, y, index):
    plt.figure(figsize=(2, 2))
    plt.imshow(X[index])
    plt.xlabel(classes[int(y[index])])
    plt.show()

plot_sample(X_test, y_test, 1)
###Output
_____no_output_____
###Markdown
Predicting the Model and seeing its Performance
###Code
y_pred = cnn.predict(X_test)
y_pred[:5]
y_classes = [np.argmax(element) for element in y_pred]
y_classes[:5]
y_test[:5]
###Output
_____no_output_____
###Markdown
As you can see above, the model predicts the correct class for every sample shown except one, which it labels as class 6. The rest are predicted correctly, consistent with the roughly 68% test accuracy reported below.
###Code
plot_sample(X_test, y_test, 1)
classes[y_classes[1]]
###Output
_____no_output_____
###Markdown
Here (above) the model has predicted correctly
###Code
plot_sample(X_test, y_test, 4)
classes[y_classes[4]]
###Output
_____no_output_____
###Markdown
Here (above) the model has predicted incorrectly
###Code
from sklearn.metrics import confusion_matrix, classification_report
import numpy as np
y_pred = cnn.predict(X_test)
y_pred_classes = [np.argmax(element) for element in y_pred]
print("Classification Report: \n", classification_report(y_test, y_pred_classes))
###Output
Classification Report: 
               precision    recall  f1-score   support

           0       0.68      0.73      0.70      1000
           1       0.85      0.75      0.80      1000
           2       0.56      0.59      0.57      1000
           3       0.50      0.45      0.48      1000
           4       0.64      0.63      0.63      1000
           5       0.57      0.67      0.62      1000
           6       0.80      0.71      0.75      1000
           7       0.68      0.78      0.73      1000
           8       0.77      0.77      0.77      1000
           9       0.80      0.74      0.76      1000

    accuracy                           0.68     10000
   macro avg       0.69      0.68      0.68     10000
weighted avg       0.69      0.68      0.68     10000
###Markdown
Training a network on CIFAR10
Downloading the dataset using torchvision
###Code
from torchvision import datasets
cifar10 = datasets.CIFAR10("./", train=True, download=True)
cifar10
cifar10_val = datasets.CIFAR10("./", train=False, download=True)
cifar10_val
len(cifar10)
cifar10[80]
###Output
_____no_output_____
###Markdown
Accessing data
###Code
import matplotlib.pyplot as plt

class_names = ['airplane','automobile','bird','cat','deer',
               'dog','frog','horse','ship','truck']
fig = plt.figure(figsize=(8,3))
num_classes = 10
for i in range(num_classes):
    ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
    ax.set_title(class_names[i])
    img = next(img for img, label in cifar10 if label == i)
    plt.imshow(img)
plt.show()

img, label = cifar10[80]
class_names[label]
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Transforms
###Code
from torchvision import transforms
dir(transforms)
###Output
_____no_output_____
###Markdown
Converting the images to a tensor
###Code
to_tensor = transforms.ToTensor()
img_t = to_tensor(img)
img_t
img_t.shape
###Output
_____no_output_____
###Markdown
Directly getting the transformed dataset
###Code
tensor_cifar10 = datasets.CIFAR10("./", train=True, download=False,
                                  transform=transforms.ToTensor())
img_t, _ = tensor_cifar10[80]
img_t
img_t.max(), img_t.min(), img_t.shape, type(img_t)
plt.imshow(img_t.permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
Normalizing data
###Code
import torch
imgs = torch.stack([img_t for img_t, _ in tensor_cifar10], dim=3)
imgs.shape
imgs.view(3, -1).mean(dim=1)
imgs.view(3, -1).std(dim=1)
transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))
transformed_cifar10 = datasets.CIFAR10(
    "./", train=True, download=False,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))
    ]))
###Output
_____no_output_____
###Markdown
Making a birds v/s planes classifier
###Code
cifar10 = datasets.CIFAR10(
    "./", train=True, download=False,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))
    ]))
cifar10_val = datasets.CIFAR10(
    "./", train=False, download=False,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))
    ]))

label_map = {0: 0, 2: 1}
class_names = ['airplane', 'bird']
cifar2 = [(img, label_map[label]) for img, label in cifar10 if label in [0, 2]]
cifar2_val = [(img, label_map[label]) for img, label in cifar10_val if label in [0, 2]]
###Output
_____no_output_____
###Markdown
Defining the model
###Code
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2)
)
###Output
_____no_output_____
###Markdown
We want the model to output the probability of an image belonging to each class, so we add a `Softmax` layer
###Code
model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.Softmax(dim=1)
)
###Output
_____no_output_____
###Markdown
Let's try to run the model without even training it
###Code
img, _ = cifar2[0]
type(img)
plt.imshow(img.permute(1, 2, 0))
plt.show()
img.shape
batch_img = img.view(-1).unsqueeze(0)
batch_img.shape
model(batch_img)
###Output
_____no_output_____
###Markdown
The model needs to be penalized when it makes incorrect predictions, so we switch to `LogSoftmax` and pair it with the negative log-likelihood loss `NLLLoss`
###Code
model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1))

loss = nn.NLLLoss()
img, label = cifar2[0]
out = model(img.view(-1).unsqueeze(0))
loss(out, torch.tensor([label]))
###Output
_____no_output_____
###Markdown
Training the classifier
###Code
torch.cuda.set_device(0)
torch.cuda.get_device_name(0)
torch.device('cuda' if torch.cuda.is_available() else 'cpu')

import torch
import torch.nn as nn
from torch import optim

model = nn.Sequential(
    nn.Linear(3072, 512),
    nn.Tanh(),
    nn.Linear(512, 2),
    nn.LogSoftmax(dim=1)
)

learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.NLLLoss()
n_epochs = 100

for epoch in range(0, n_epochs):
    for img, label in cifar2:
        pred = model(img.view(-1).unsqueeze(0))
        loss = loss_fn(pred, torch.tensor([label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Note: this prints the loss of the last sample only, hence the noisy values
    print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
###Output
Epoch: 0, Loss: 4.885203
Epoch: 1, Loss: 8.398516
Epoch: 2, Loss: 12.194094
Epoch: 3, Loss: 8.400795
Epoch: 4, Loss: 7.035946
Epoch: 5, Loss: 6.672678
Epoch: 6, Loss: 14.702289
Epoch: 7, Loss: 2.806485
Epoch: 8, Loss: 9.093428
Epoch: 9, Loss: 1.937908
Epoch: 10, Loss: 0.096189
Epoch: 11, Loss: 7.063046
Epoch: 12, Loss: 10.701083
Epoch: 13, Loss: 8.897695
Epoch: 14, Loss: 10.900598
Epoch: 15, Loss: 3.986428
Epoch: 16, Loss: 0.029039
Epoch: 17, Loss: 1.960492
Epoch: 18, Loss: 10.484950
Epoch: 19, Loss: 1.311135
Epoch: 20, Loss: 7.059778
Epoch: 21, Loss: 8.214798
Epoch: 22, Loss: 11.473054
Epoch: 23, Loss: 4.624972
Epoch: 24, Loss: 0.464035
Epoch: 25, Loss: 5.108599
Epoch: 26, Loss: 0.656461
Epoch: 27, Loss: 4.596004
Epoch: 28, Loss: 1.365604
Epoch: 29, Loss: 3.978047
Epoch:
30, Loss: 13.991315 Epoch: 31, Loss: 0.012959 Epoch: 32, Loss: 9.526802 Epoch: 33, Loss: 5.412449 Epoch: 34, Loss: 5.310781 Epoch: 35, Loss: 7.506864 Epoch: 36, Loss: 7.706320 Epoch: 37, Loss: 13.320793 Epoch: 38, Loss: 8.017707 Epoch: 39, Loss: 14.925833 Epoch: 40, Loss: 19.361202 Epoch: 41, Loss: 11.212448 Epoch: 42, Loss: 15.842257 Epoch: 43, Loss: 9.839250 Epoch: 44, Loss: 4.075000 Epoch: 45, Loss: 6.826460 Epoch: 46, Loss: 8.869939 Epoch: 47, Loss: 11.255908 Epoch: 48, Loss: 1.516616 Epoch: 49, Loss: 3.173884 Epoch: 50, Loss: 15.334185 Epoch: 51, Loss: 17.397785 Epoch: 52, Loss: 4.067145 Epoch: 53, Loss: 13.176816 Epoch: 54, Loss: 0.064727 Epoch: 55, Loss: 17.370178 Epoch: 56, Loss: 7.199696 Epoch: 57, Loss: 20.260681 Epoch: 58, Loss: 18.920212 Epoch: 59, Loss: 12.034130 Epoch: 60, Loss: 13.000215 Epoch: 61, Loss: 12.489149 Epoch: 62, Loss: 12.272771 Epoch: 63, Loss: 12.774753 Epoch: 64, Loss: 10.542746 Epoch: 65, Loss: 15.277123 Epoch: 66, Loss: 3.086051 Epoch: 67, Loss: 16.968788 Epoch: 68, Loss: 14.771244 Epoch: 69, Loss: 3.802227 Epoch: 70, Loss: 10.204108 Epoch: 71, Loss: 5.821189 Epoch: 72, Loss: 7.695176 Epoch: 73, Loss: 1.036054 Epoch: 74, Loss: 2.437079 Epoch: 75, Loss: 9.829401 Epoch: 76, Loss: 0.046492 Epoch: 77, Loss: 9.588590 Epoch: 78, Loss: 10.307067 Epoch: 79, Loss: 6.878657 Epoch: 80, Loss: 1.898898 Epoch: 81, Loss: 0.028771 Epoch: 82, Loss: 0.000025 Epoch: 83, Loss: 4.961995 Epoch: 84, Loss: 10.551287 Epoch: 85, Loss: 1.690318 Epoch: 86, Loss: 0.106867 Epoch: 87, Loss: 3.279341 Epoch: 88, Loss: 4.938417 Epoch: 89, Loss: 16.231289 Epoch: 90, Loss: 11.445296 Epoch: 91, Loss: 17.599178 Epoch: 92, Loss: 20.918266 Epoch: 93, Loss: 19.979803 Epoch: 94, Loss: 18.109711 Epoch: 95, Loss: 16.496416 Epoch: 96, Loss: 12.549117 Epoch: 97, Loss: 15.137671 Epoch: 98, Loss: 15.985045 Epoch: 99, Loss: 6.681483 ###Markdown Using dataloader to form batches of training data ourselves ###Code import torch import torch.nn as nn from torch import optim torch.device('cuda' if torch.cuda.is_available() else 'cpu') train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True) model = nn.Sequential( nn.Linear(3072, 512), nn.Tanh(), nn.Linear(512, 2), nn.LogSoftmax(dim=1) ) learning_rate = 1e-2 optimizer = optim.SGD(model.parameters(), lr=learning_rate) loss_fn = nn.NLLLoss() n_epochs = 100 for epoch in range(0, n_epochs): for imgs, labels in train_loader: batch_size = imgs.shape[0] preds = model(imgs.view(batch_size, -1)) loss = loss_fn(preds, labels) optimizer.zero_grad() loss.backward() optimizer.step() print("Epoch: %d, Loss: %f" % (epoch, float(loss))) ###Output Streaming output truncated to the last 5000 lines. 
Epoch: 68, Loss: 0.020706
...
Epoch: 77, Loss: 0.060121
77, Loss: 0.033743 Epoch: 77, Loss: 0.034930 Epoch: 77, Loss: 0.020234 Epoch: 77, Loss: 0.026819 Epoch: 77, Loss: 0.022684 Epoch: 77, Loss: 0.013468 Epoch: 77, Loss: 0.034716 Epoch: 77, Loss: 0.018627 Epoch: 77, Loss: 0.022424 Epoch: 77, Loss: 0.036888 Epoch: 77, Loss: 0.043379 Epoch: 77, Loss: 0.019996 Epoch: 77, Loss: 0.030616 Epoch: 77, Loss: 0.031794 Epoch: 77, Loss: 0.022811 Epoch: 77, Loss: 0.031210 Epoch: 77, Loss: 0.022489 Epoch: 77, Loss: 0.030394 Epoch: 77, Loss: 0.091034 Epoch: 77, Loss: 0.021043 Epoch: 77, Loss: 0.034311 Epoch: 77, Loss: 0.022608 Epoch: 77, Loss: 0.018990 Epoch: 77, Loss: 0.034003 Epoch: 77, Loss: 0.039045 Epoch: 77, Loss: 0.022802 Epoch: 77, Loss: 0.027325 Epoch: 77, Loss: 0.015368 Epoch: 77, Loss: 0.050984 Epoch: 77, Loss: 0.045637 Epoch: 77, Loss: 0.026564 Epoch: 77, Loss: 0.045182 Epoch: 77, Loss: 0.028129 Epoch: 77, Loss: 0.027726 Epoch: 77, Loss: 0.044792 Epoch: 77, Loss: 0.021146 Epoch: 77, Loss: 0.023825 Epoch: 77, Loss: 0.027616 Epoch: 77, Loss: 0.020029 Epoch: 77, Loss: 0.033643 Epoch: 77, Loss: 0.021138 Epoch: 77, Loss: 0.028242 Epoch: 77, Loss: 0.018470 Epoch: 78, Loss: 0.035702 Epoch: 78, Loss: 0.021533 Epoch: 78, Loss: 0.019558 Epoch: 78, Loss: 0.058816 Epoch: 78, Loss: 0.037740 Epoch: 78, Loss: 0.023480 Epoch: 78, Loss: 0.024070 Epoch: 78, Loss: 0.027747 Epoch: 78, Loss: 0.034359 Epoch: 78, Loss: 0.019695 Epoch: 78, Loss: 0.048446 Epoch: 78, Loss: 0.020159 Epoch: 78, Loss: 0.025717 Epoch: 78, Loss: 0.032718 Epoch: 78, Loss: 0.033765 Epoch: 78, Loss: 0.017021 Epoch: 78, Loss: 0.020928 Epoch: 78, Loss: 0.029389 Epoch: 78, Loss: 0.022266 Epoch: 78, Loss: 0.024482 Epoch: 78, Loss: 0.025516 Epoch: 78, Loss: 0.025284 Epoch: 78, Loss: 0.021845 Epoch: 78, Loss: 0.050106 Epoch: 78, Loss: 0.027284 Epoch: 78, Loss: 0.018522 Epoch: 78, Loss: 0.018970 Epoch: 78, Loss: 0.019622 Epoch: 78, Loss: 0.022168 Epoch: 78, Loss: 0.020195 Epoch: 78, Loss: 0.027021 Epoch: 78, Loss: 0.015817 Epoch: 78, Loss: 0.025020 Epoch: 78, Loss: 0.054775 Epoch: 78, Loss: 0.036754 Epoch: 78, Loss: 0.038682 Epoch: 78, Loss: 0.023349 Epoch: 78, Loss: 0.015584 Epoch: 78, Loss: 0.016871 Epoch: 78, Loss: 0.017969 Epoch: 78, Loss: 0.019136 Epoch: 78, Loss: 0.028490 Epoch: 78, Loss: 0.018467 Epoch: 78, Loss: 0.034796 Epoch: 78, Loss: 0.041118 Epoch: 78, Loss: 0.039459 Epoch: 78, Loss: 0.042624 Epoch: 78, Loss: 0.042766 Epoch: 78, Loss: 0.029555 Epoch: 78, Loss: 0.031685 Epoch: 78, Loss: 0.081661 Epoch: 78, Loss: 0.068193 Epoch: 78, Loss: 0.034210 Epoch: 78, Loss: 0.034615 Epoch: 78, Loss: 0.049375 Epoch: 78, Loss: 0.027369 Epoch: 78, Loss: 0.053373 Epoch: 78, Loss: 0.072650 Epoch: 78, Loss: 0.073426 Epoch: 78, Loss: 0.077898 Epoch: 78, Loss: 0.107802 Epoch: 78, Loss: 0.038240 Epoch: 78, Loss: 0.066511 Epoch: 78, Loss: 0.032434 Epoch: 78, Loss: 0.026746 Epoch: 78, Loss: 0.023351 Epoch: 78, Loss: 0.041317 Epoch: 78, Loss: 0.053556 Epoch: 78, Loss: 0.025693 Epoch: 78, Loss: 0.023411 Epoch: 78, Loss: 0.040614 Epoch: 78, Loss: 0.014984 Epoch: 78, Loss: 0.025814 Epoch: 78, Loss: 0.020531 Epoch: 78, Loss: 0.037895 Epoch: 78, Loss: 0.019822 Epoch: 78, Loss: 0.021838 Epoch: 78, Loss: 0.032496 Epoch: 78, Loss: 0.029439 Epoch: 78, Loss: 0.016992 Epoch: 78, Loss: 0.041484 Epoch: 78, Loss: 0.023934 Epoch: 78, Loss: 0.026261 Epoch: 78, Loss: 0.016220 Epoch: 78, Loss: 0.023369 Epoch: 78, Loss: 0.017850 Epoch: 78, Loss: 0.021486 Epoch: 78, Loss: 0.024502 Epoch: 78, Loss: 0.027306 Epoch: 78, Loss: 0.028314 Epoch: 78, Loss: 0.021343 Epoch: 78, Loss: 0.037491 Epoch: 78, Loss: 0.026718 Epoch: 78, Loss: 0.029046 
Epoch: 78, Loss: 0.022663 Epoch: 78, Loss: 0.040218 Epoch: 78, Loss: 0.038127 Epoch: 78, Loss: 0.035646 Epoch: 78, Loss: 0.018805 Epoch: 78, Loss: 0.026508 Epoch: 78, Loss: 0.030841 Epoch: 78, Loss: 0.030878 Epoch: 78, Loss: 0.019452 Epoch: 78, Loss: 0.029641 Epoch: 78, Loss: 0.033521 Epoch: 78, Loss: 0.024390 Epoch: 78, Loss: 0.015438 Epoch: 78, Loss: 0.035379 Epoch: 78, Loss: 0.025372 Epoch: 78, Loss: 0.029129 Epoch: 78, Loss: 0.024562 Epoch: 78, Loss: 0.017905 Epoch: 78, Loss: 0.017415 Epoch: 78, Loss: 0.031114 Epoch: 78, Loss: 0.029875 Epoch: 78, Loss: 0.035511 Epoch: 78, Loss: 0.043871 Epoch: 78, Loss: 0.026204 Epoch: 78, Loss: 0.041917 Epoch: 78, Loss: 0.067197 Epoch: 78, Loss: 0.040102 Epoch: 78, Loss: 0.017230 Epoch: 78, Loss: 0.020809 Epoch: 78, Loss: 0.034969 Epoch: 78, Loss: 0.020872 Epoch: 78, Loss: 0.022804 Epoch: 78, Loss: 0.025811 Epoch: 78, Loss: 0.022242 Epoch: 78, Loss: 0.030012 Epoch: 78, Loss: 0.022970 Epoch: 78, Loss: 0.019004 Epoch: 78, Loss: 0.023064 Epoch: 78, Loss: 0.021283 Epoch: 78, Loss: 0.028819 Epoch: 78, Loss: 0.041741 Epoch: 78, Loss: 0.044308 Epoch: 78, Loss: 0.021193 Epoch: 78, Loss: 0.028327 Epoch: 78, Loss: 0.014566 Epoch: 78, Loss: 0.042085 Epoch: 78, Loss: 0.019113 Epoch: 78, Loss: 0.055333 Epoch: 78, Loss: 0.040152 Epoch: 78, Loss: 0.046577 Epoch: 78, Loss: 0.025686 Epoch: 78, Loss: 0.018889 Epoch: 78, Loss: 0.026123 Epoch: 78, Loss: 0.038181 Epoch: 78, Loss: 0.041655 Epoch: 78, Loss: 0.031683 Epoch: 78, Loss: 0.035920 Epoch: 78, Loss: 0.031226 Epoch: 78, Loss: 0.021042 Epoch: 78, Loss: 0.023990 Epoch: 78, Loss: 0.026806 Epoch: 78, Loss: 0.043859 Epoch: 78, Loss: 0.029039 Epoch: 79, Loss: 0.030665 Epoch: 79, Loss: 0.019594 Epoch: 79, Loss: 0.031890 Epoch: 79, Loss: 0.026515 Epoch: 79, Loss: 0.018597 Epoch: 79, Loss: 0.017360 Epoch: 79, Loss: 0.024780 Epoch: 79, Loss: 0.028416 Epoch: 79, Loss: 0.031669 Epoch: 79, Loss: 0.016833 Epoch: 79, Loss: 0.015698 Epoch: 79, Loss: 0.018830 Epoch: 79, Loss: 0.045630 Epoch: 79, Loss: 0.017258 Epoch: 79, Loss: 0.017537 Epoch: 79, Loss: 0.024419 Epoch: 79, Loss: 0.025602 Epoch: 79, Loss: 0.021385 Epoch: 79, Loss: 0.021552 Epoch: 79, Loss: 0.029905 Epoch: 79, Loss: 0.029747 Epoch: 79, Loss: 0.014425 Epoch: 79, Loss: 0.026719 Epoch: 79, Loss: 0.028642 Epoch: 79, Loss: 0.022294 Epoch: 79, Loss: 0.034625 Epoch: 79, Loss: 0.015681 Epoch: 79, Loss: 0.082662 Epoch: 79, Loss: 0.035699 Epoch: 79, Loss: 0.031454 Epoch: 79, Loss: 0.023470 Epoch: 79, Loss: 0.029066 Epoch: 79, Loss: 0.036740 Epoch: 79, Loss: 0.026734 Epoch: 79, Loss: 0.032371 Epoch: 79, Loss: 0.018516 Epoch: 79, Loss: 0.023901 Epoch: 79, Loss: 0.027211 Epoch: 79, Loss: 0.018783 Epoch: 79, Loss: 0.034517 Epoch: 79, Loss: 0.026012 Epoch: 79, Loss: 0.035641 Epoch: 79, Loss: 0.028837 Epoch: 79, Loss: 0.020539 Epoch: 79, Loss: 0.019945 Epoch: 79, Loss: 0.027498 Epoch: 79, Loss: 0.059355 Epoch: 79, Loss: 0.019549 Epoch: 79, Loss: 0.016673 Epoch: 79, Loss: 0.015869 Epoch: 79, Loss: 0.021467 Epoch: 79, Loss: 0.043096 Epoch: 79, Loss: 0.023246 Epoch: 79, Loss: 0.019774 Epoch: 79, Loss: 0.021306 Epoch: 79, Loss: 0.023949 Epoch: 79, Loss: 0.045051 Epoch: 79, Loss: 0.031815 Epoch: 79, Loss: 0.027073 Epoch: 79, Loss: 0.029337 Epoch: 79, Loss: 0.022333 Epoch: 79, Loss: 0.027299 Epoch: 79, Loss: 0.016165 Epoch: 79, Loss: 0.023122 Epoch: 79, Loss: 0.036501 Epoch: 79, Loss: 0.015835 Epoch: 79, Loss: 0.046966 Epoch: 79, Loss: 0.038912 Epoch: 79, Loss: 0.030052 Epoch: 79, Loss: 0.047189 Epoch: 79, Loss: 0.077168 Epoch: 79, Loss: 0.025150 Epoch: 79, Loss: 0.051018 Epoch: 79, Loss: 
0.031952 Epoch: 79, Loss: 0.043377 Epoch: 79, Loss: 0.021611 Epoch: 79, Loss: 0.054197 Epoch: 79, Loss: 0.050701 Epoch: 79, Loss: 0.039976 Epoch: 79, Loss: 0.063528 Epoch: 79, Loss: 0.047068 Epoch: 79, Loss: 0.022223 Epoch: 79, Loss: 0.023317 Epoch: 79, Loss: 0.019627 Epoch: 79, Loss: 0.028576 Epoch: 79, Loss: 0.029449 Epoch: 79, Loss: 0.027169 Epoch: 79, Loss: 0.020973 Epoch: 79, Loss: 0.019508 Epoch: 79, Loss: 0.020740 Epoch: 79, Loss: 0.018064 Epoch: 79, Loss: 0.017435 Epoch: 79, Loss: 0.030720 Epoch: 79, Loss: 0.015472 Epoch: 79, Loss: 0.044273 Epoch: 79, Loss: 0.044529 Epoch: 79, Loss: 0.045795 Epoch: 79, Loss: 0.033422 Epoch: 79, Loss: 0.026032 Epoch: 79, Loss: 0.030694 Epoch: 79, Loss: 0.021999 Epoch: 79, Loss: 0.020037 Epoch: 79, Loss: 0.056458 Epoch: 79, Loss: 0.045569 Epoch: 79, Loss: 0.097101 Epoch: 79, Loss: 0.040532 Epoch: 79, Loss: 0.041946 Epoch: 79, Loss: 0.039393 Epoch: 79, Loss: 0.027276 Epoch: 79, Loss: 0.031539 Epoch: 79, Loss: 0.020563 Epoch: 79, Loss: 0.022341 Epoch: 79, Loss: 0.029417 Epoch: 79, Loss: 0.032921 Epoch: 79, Loss: 0.021077 Epoch: 79, Loss: 0.039230 Epoch: 79, Loss: 0.020430 Epoch: 79, Loss: 0.029015 Epoch: 79, Loss: 0.021860 Epoch: 79, Loss: 0.030590 Epoch: 79, Loss: 0.028875 Epoch: 79, Loss: 0.034882 Epoch: 79, Loss: 0.022374 Epoch: 79, Loss: 0.028005 Epoch: 79, Loss: 0.024535 Epoch: 79, Loss: 0.013023 Epoch: 79, Loss: 0.025436 Epoch: 79, Loss: 0.014804 Epoch: 79, Loss: 0.027417 Epoch: 79, Loss: 0.034931 Epoch: 79, Loss: 0.045470 Epoch: 79, Loss: 0.020157 Epoch: 79, Loss: 0.038656 Epoch: 79, Loss: 0.045090 Epoch: 79, Loss: 0.018582 Epoch: 79, Loss: 0.030723 Epoch: 79, Loss: 0.038463 Epoch: 79, Loss: 0.026430 Epoch: 79, Loss: 0.027831 Epoch: 79, Loss: 0.025225 Epoch: 79, Loss: 0.029930 Epoch: 79, Loss: 0.027660 Epoch: 79, Loss: 0.020125 Epoch: 79, Loss: 0.030434 Epoch: 79, Loss: 0.034816 Epoch: 79, Loss: 0.025824 Epoch: 79, Loss: 0.024471 Epoch: 79, Loss: 0.020835 Epoch: 79, Loss: 0.018217 Epoch: 79, Loss: 0.024883 Epoch: 79, Loss: 0.022086 Epoch: 79, Loss: 0.028947 Epoch: 79, Loss: 0.017213 Epoch: 79, Loss: 0.029890 Epoch: 79, Loss: 0.014413 Epoch: 79, Loss: 0.030152 Epoch: 79, Loss: 0.025668 Epoch: 80, Loss: 0.028182 Epoch: 80, Loss: 0.043164 Epoch: 80, Loss: 0.020797 Epoch: 80, Loss: 0.029385 Epoch: 80, Loss: 0.012761 Epoch: 80, Loss: 0.022143 Epoch: 80, Loss: 0.017971 Epoch: 80, Loss: 0.027402 Epoch: 80, Loss: 0.018882 Epoch: 80, Loss: 0.017083 Epoch: 80, Loss: 0.019520 Epoch: 80, Loss: 0.023578 Epoch: 80, Loss: 0.031487 Epoch: 80, Loss: 0.022054 Epoch: 80, Loss: 0.022136 Epoch: 80, Loss: 0.041777 Epoch: 80, Loss: 0.024218 Epoch: 80, Loss: 0.020696 Epoch: 80, Loss: 0.028747 Epoch: 80, Loss: 0.023406 Epoch: 80, Loss: 0.018677 Epoch: 80, Loss: 0.023158 Epoch: 80, Loss: 0.026106 Epoch: 80, Loss: 0.024747 Epoch: 80, Loss: 0.035668 Epoch: 80, Loss: 0.032986 Epoch: 80, Loss: 0.026218 Epoch: 80, Loss: 0.013628 Epoch: 80, Loss: 0.022723 Epoch: 80, Loss: 0.044397 Epoch: 80, Loss: 0.020594 Epoch: 80, Loss: 0.044047 Epoch: 80, Loss: 0.033784 Epoch: 80, Loss: 0.034725 Epoch: 80, Loss: 0.036876 Epoch: 80, Loss: 0.047489 Epoch: 80, Loss: 0.023948 Epoch: 80, Loss: 0.019170 Epoch: 80, Loss: 0.023157 Epoch: 80, Loss: 0.022106 Epoch: 80, Loss: 0.025123 Epoch: 80, Loss: 0.032826 Epoch: 80, Loss: 0.018452 Epoch: 80, Loss: 0.023429 Epoch: 80, Loss: 0.013801 Epoch: 80, Loss: 0.024951 Epoch: 80, Loss: 0.024664 Epoch: 80, Loss: 0.021166 Epoch: 80, Loss: 0.031467 Epoch: 80, Loss: 0.023325 Epoch: 80, Loss: 0.027581 Epoch: 80, Loss: 0.024683 Epoch: 80, Loss: 0.020601 Epoch: 
80, Loss: 0.021431 Epoch: 80, Loss: 0.043550 Epoch: 80, Loss: 0.027550 Epoch: 80, Loss: 0.015750 Epoch: 80, Loss: 0.022639 Epoch: 80, Loss: 0.016660 Epoch: 80, Loss: 0.018773 Epoch: 80, Loss: 0.032814 Epoch: 80, Loss: 0.023292 Epoch: 80, Loss: 0.021572 Epoch: 80, Loss: 0.025029 Epoch: 80, Loss: 0.025745 Epoch: 80, Loss: 0.020238 Epoch: 80, Loss: 0.023956 Epoch: 80, Loss: 0.024583 Epoch: 80, Loss: 0.023269 Epoch: 80, Loss: 0.021380 Epoch: 80, Loss: 0.026917 Epoch: 80, Loss: 0.018747 Epoch: 80, Loss: 0.031050 Epoch: 80, Loss: 0.014160 Epoch: 80, Loss: 0.032063 Epoch: 80, Loss: 0.026551 Epoch: 80, Loss: 0.024234 Epoch: 80, Loss: 0.038256 Epoch: 80, Loss: 0.020715 Epoch: 80, Loss: 0.035436 Epoch: 80, Loss: 0.024685 Epoch: 80, Loss: 0.047389 Epoch: 80, Loss: 0.073081 Epoch: 80, Loss: 0.032810 Epoch: 80, Loss: 0.023436 Epoch: 80, Loss: 0.021992 Epoch: 80, Loss: 0.017634 Epoch: 80, Loss: 0.059348 Epoch: 80, Loss: 0.058196 Epoch: 80, Loss: 0.044848 Epoch: 80, Loss: 0.043653 Epoch: 80, Loss: 0.017248 Epoch: 80, Loss: 0.030855 Epoch: 80, Loss: 0.045246 Epoch: 80, Loss: 0.094294 Epoch: 80, Loss: 0.030830 Epoch: 80, Loss: 0.024178 Epoch: 80, Loss: 0.020950 Epoch: 80, Loss: 0.028191 Epoch: 80, Loss: 0.024405 Epoch: 80, Loss: 0.021318 Epoch: 80, Loss: 0.027478 Epoch: 80, Loss: 0.022230 Epoch: 80, Loss: 0.026314 Epoch: 80, Loss: 0.038950 Epoch: 80, Loss: 0.025416 Epoch: 80, Loss: 0.063130 Epoch: 80, Loss: 0.047659 Epoch: 80, Loss: 0.020589 Epoch: 80, Loss: 0.033031 Epoch: 80, Loss: 0.022765 Epoch: 80, Loss: 0.040852 Epoch: 80, Loss: 0.018290 Epoch: 80, Loss: 0.019081 Epoch: 80, Loss: 0.045626 Epoch: 80, Loss: 0.071960 Epoch: 80, Loss: 0.042506 Epoch: 80, Loss: 0.027203 Epoch: 80, Loss: 0.032923 Epoch: 80, Loss: 0.024469 Epoch: 80, Loss: 0.035348 Epoch: 80, Loss: 0.023363 Epoch: 80, Loss: 0.023104 Epoch: 80, Loss: 0.023637 Epoch: 80, Loss: 0.036632 Epoch: 80, Loss: 0.027676 Epoch: 80, Loss: 0.024729 Epoch: 80, Loss: 0.014664 Epoch: 80, Loss: 0.036181 Epoch: 80, Loss: 0.040058 Epoch: 80, Loss: 0.028087 Epoch: 80, Loss: 0.040182 Epoch: 80, Loss: 0.024350 Epoch: 80, Loss: 0.029598 Epoch: 80, Loss: 0.031449 Epoch: 80, Loss: 0.030953 Epoch: 80, Loss: 0.030719 Epoch: 80, Loss: 0.025501 Epoch: 80, Loss: 0.028641 Epoch: 80, Loss: 0.013634 Epoch: 80, Loss: 0.028478 Epoch: 80, Loss: 0.029620 Epoch: 80, Loss: 0.021646 Epoch: 80, Loss: 0.033369 Epoch: 80, Loss: 0.039135 Epoch: 80, Loss: 0.056957 Epoch: 80, Loss: 0.086490 Epoch: 80, Loss: 0.120457 Epoch: 80, Loss: 0.049302 Epoch: 80, Loss: 0.136819 Epoch: 80, Loss: 0.037823 Epoch: 80, Loss: 0.030718 Epoch: 80, Loss: 0.027863 Epoch: 80, Loss: 0.020707 Epoch: 80, Loss: 0.027097 Epoch: 80, Loss: 0.020557 Epoch: 80, Loss: 0.022339 Epoch: 81, Loss: 0.019947 Epoch: 81, Loss: 0.025887 Epoch: 81, Loss: 0.032834 Epoch: 81, Loss: 0.041457 Epoch: 81, Loss: 0.020670 Epoch: 81, Loss: 0.036588 Epoch: 81, Loss: 0.016320 Epoch: 81, Loss: 0.033665 Epoch: 81, Loss: 0.031640 Epoch: 81, Loss: 0.022635 Epoch: 81, Loss: 0.022335 Epoch: 81, Loss: 0.023106 Epoch: 81, Loss: 0.033801 Epoch: 81, Loss: 0.021845 Epoch: 81, Loss: 0.017416 Epoch: 81, Loss: 0.027183 Epoch: 81, Loss: 0.016281 Epoch: 81, Loss: 0.030358 Epoch: 81, Loss: 0.032816 Epoch: 81, Loss: 0.017773 Epoch: 81, Loss: 0.036381 Epoch: 81, Loss: 0.023573 Epoch: 81, Loss: 0.013039 Epoch: 81, Loss: 0.029577 Epoch: 81, Loss: 0.019426 Epoch: 81, Loss: 0.029602 Epoch: 81, Loss: 0.021279 Epoch: 81, Loss: 0.016848 Epoch: 81, Loss: 0.092213 Epoch: 81, Loss: 0.036452 Epoch: 81, Loss: 0.020133 Epoch: 81, Loss: 0.039546 Epoch: 81, Loss: 0.017286 
Epoch: 81, Loss: 0.025810 Epoch: 81, Loss: 0.031153 Epoch: 81, Loss: 0.030721 Epoch: 81, Loss: 0.024448 Epoch: 81, Loss: 0.026372 Epoch: 81, Loss: 0.009129 Epoch: 81, Loss: 0.021509 Epoch: 81, Loss: 0.015571 Epoch: 81, Loss: 0.033130 Epoch: 81, Loss: 0.022043 Epoch: 81, Loss: 0.024322 Epoch: 81, Loss: 0.016037 Epoch: 81, Loss: 0.029830 Epoch: 81, Loss: 0.020470 Epoch: 81, Loss: 0.026502 Epoch: 81, Loss: 0.022679 Epoch: 81, Loss: 0.029538 Epoch: 81, Loss: 0.036236 Epoch: 81, Loss: 0.031001 Epoch: 81, Loss: 0.011014 Epoch: 81, Loss: 0.039847 Epoch: 81, Loss: 0.027327 Epoch: 81, Loss: 0.021453 Epoch: 81, Loss: 0.043642 Epoch: 81, Loss: 0.069631 Epoch: 81, Loss: 0.029514 Epoch: 81, Loss: 0.019329 Epoch: 81, Loss: 0.020423 Epoch: 81, Loss: 0.020475 Epoch: 81, Loss: 0.038528 Epoch: 81, Loss: 0.033139 Epoch: 81, Loss: 0.030549 Epoch: 81, Loss: 0.015382 Epoch: 81, Loss: 0.021520 Epoch: 81, Loss: 0.034744 Epoch: 81, Loss: 0.022535 Epoch: 81, Loss: 0.029293 Epoch: 81, Loss: 0.026139 Epoch: 81, Loss: 0.024376 Epoch: 81, Loss: 0.020019 Epoch: 81, Loss: 0.028447 Epoch: 81, Loss: 0.017827 Epoch: 81, Loss: 0.024677 Epoch: 81, Loss: 0.023255 Epoch: 81, Loss: 0.027976 Epoch: 81, Loss: 0.017678 Epoch: 81, Loss: 0.030140 Epoch: 81, Loss: 0.022493 Epoch: 81, Loss: 0.012026 Epoch: 81, Loss: 0.025651 Epoch: 81, Loss: 0.023174 Epoch: 81, Loss: 0.031759 Epoch: 81, Loss: 0.016382 Epoch: 81, Loss: 0.018042 Epoch: 81, Loss: 0.021532 Epoch: 81, Loss: 0.019885 Epoch: 81, Loss: 0.022546 Epoch: 81, Loss: 0.018857 Epoch: 81, Loss: 0.056030 Epoch: 81, Loss: 0.039981 Epoch: 81, Loss: 0.053095 Epoch: 81, Loss: 0.031408 Epoch: 81, Loss: 0.010511 Epoch: 81, Loss: 0.025805 Epoch: 81, Loss: 0.016231 Epoch: 81, Loss: 0.028800 Epoch: 81, Loss: 0.022861 Epoch: 81, Loss: 0.019888 Epoch: 81, Loss: 0.035318 Epoch: 81, Loss: 0.022742 Epoch: 81, Loss: 0.014555 Epoch: 81, Loss: 0.022578 Epoch: 81, Loss: 0.019111 Epoch: 81, Loss: 0.016530 Epoch: 81, Loss: 0.022079 Epoch: 81, Loss: 0.041547 Epoch: 81, Loss: 0.024788 Epoch: 81, Loss: 0.074609 Epoch: 81, Loss: 0.105038 Epoch: 81, Loss: 0.085381 Epoch: 81, Loss: 0.037041 Epoch: 81, Loss: 0.029603 Epoch: 81, Loss: 0.029764 Epoch: 81, Loss: 0.034273 Epoch: 81, Loss: 0.024109 Epoch: 81, Loss: 0.017263 Epoch: 81, Loss: 0.032539 Epoch: 81, Loss: 0.037120 Epoch: 81, Loss: 0.016156 Epoch: 81, Loss: 0.036902 Epoch: 81, Loss: 0.032377 Epoch: 81, Loss: 0.041112 Epoch: 81, Loss: 0.017153 Epoch: 81, Loss: 0.059138 Epoch: 81, Loss: 0.047262 Epoch: 81, Loss: 0.052829 Epoch: 81, Loss: 0.025696 Epoch: 81, Loss: 0.019765 Epoch: 81, Loss: 0.021554 Epoch: 81, Loss: 0.035170 Epoch: 81, Loss: 0.025377 Epoch: 81, Loss: 0.022136 Epoch: 81, Loss: 0.022937 Epoch: 81, Loss: 0.017283 Epoch: 81, Loss: 0.032141 Epoch: 81, Loss: 0.026042 Epoch: 81, Loss: 0.015673 Epoch: 81, Loss: 0.015697 Epoch: 81, Loss: 0.021484 Epoch: 81, Loss: 0.022127 Epoch: 81, Loss: 0.019428 Epoch: 81, Loss: 0.024140 Epoch: 81, Loss: 0.082020 Epoch: 81, Loss: 0.085195 Epoch: 81, Loss: 0.039635 Epoch: 81, Loss: 0.051137 Epoch: 81, Loss: 0.027363 Epoch: 81, Loss: 0.045391 Epoch: 81, Loss: 0.030045 Epoch: 81, Loss: 0.040169 Epoch: 81, Loss: 0.015346 Epoch: 81, Loss: 0.035894 Epoch: 81, Loss: 0.011371 Epoch: 81, Loss: 0.028197 Epoch: 82, Loss: 0.016616 Epoch: 82, Loss: 0.025420 Epoch: 82, Loss: 0.019400 Epoch: 82, Loss: 0.020677 Epoch: 82, Loss: 0.018579 Epoch: 82, Loss: 0.022860 Epoch: 82, Loss: 0.019762 Epoch: 82, Loss: 0.019214 Epoch: 82, Loss: 0.017820 Epoch: 82, Loss: 0.036759 Epoch: 82, Loss: 0.019972 Epoch: 82, Loss: 0.025554 Epoch: 82, Loss: 
0.023276 Epoch: 82, Loss: 0.014279 Epoch: 82, Loss: 0.017026 Epoch: 82, Loss: 0.022097 Epoch: 82, Loss: 0.024791 Epoch: 82, Loss: 0.018127 Epoch: 82, Loss: 0.012317 Epoch: 82, Loss: 0.026758 Epoch: 82, Loss: 0.017946 Epoch: 82, Loss: 0.026362 Epoch: 82, Loss: 0.031035 Epoch: 82, Loss: 0.033996 Epoch: 82, Loss: 0.037765 Epoch: 82, Loss: 0.034097 Epoch: 82, Loss: 0.019299 Epoch: 82, Loss: 0.025998 Epoch: 82, Loss: 0.038613 Epoch: 82, Loss: 0.028214 Epoch: 82, Loss: 0.019536 Epoch: 82, Loss: 0.027453 Epoch: 82, Loss: 0.019985 Epoch: 82, Loss: 0.013629 Epoch: 82, Loss: 0.028769 Epoch: 82, Loss: 0.019066 Epoch: 82, Loss: 0.019075 Epoch: 82, Loss: 0.022225 Epoch: 82, Loss: 0.038308 Epoch: 82, Loss: 0.045723 Epoch: 82, Loss: 0.022796 Epoch: 82, Loss: 0.018095 Epoch: 82, Loss: 0.033217 Epoch: 82, Loss: 0.032200 Epoch: 82, Loss: 0.033674 Epoch: 82, Loss: 0.028058 Epoch: 82, Loss: 0.032392 Epoch: 82, Loss: 0.029491 Epoch: 82, Loss: 0.030358 Epoch: 82, Loss: 0.049472 Epoch: 82, Loss: 0.027772 Epoch: 82, Loss: 0.023317 Epoch: 82, Loss: 0.017479 Epoch: 82, Loss: 0.015621 Epoch: 82, Loss: 0.039291 Epoch: 82, Loss: 0.018522 Epoch: 82, Loss: 0.050252 Epoch: 82, Loss: 0.016636 Epoch: 82, Loss: 0.021370 Epoch: 82, Loss: 0.029557 Epoch: 82, Loss: 0.029530 Epoch: 82, Loss: 0.028664 Epoch: 82, Loss: 0.018889 Epoch: 82, Loss: 0.014054 Epoch: 82, Loss: 0.019394 Epoch: 82, Loss: 0.030123 Epoch: 82, Loss: 0.038544 Epoch: 82, Loss: 0.019451 Epoch: 82, Loss: 0.027465 Epoch: 82, Loss: 0.017205 Epoch: 82, Loss: 0.061356 Epoch: 82, Loss: 0.052544 Epoch: 82, Loss: 0.061794 Epoch: 82, Loss: 0.044608 Epoch: 82, Loss: 0.019173 Epoch: 82, Loss: 0.104929 Epoch: 82, Loss: 0.043982 Epoch: 82, Loss: 0.011847 Epoch: 82, Loss: 0.018654 Epoch: 82, Loss: 0.023180 Epoch: 82, Loss: 0.020295 Epoch: 82, Loss: 0.065594 Epoch: 82, Loss: 0.023924 Epoch: 82, Loss: 0.020822 Epoch: 82, Loss: 0.034837 Epoch: 82, Loss: 0.032183 Epoch: 82, Loss: 0.041810 Epoch: 82, Loss: 0.032490 Epoch: 82, Loss: 0.018477 Epoch: 82, Loss: 0.030567 Epoch: 82, Loss: 0.027974 Epoch: 82, Loss: 0.014971 Epoch: 82, Loss: 0.021770 Epoch: 82, Loss: 0.023944 Epoch: 82, Loss: 0.039768 Epoch: 82, Loss: 0.037010 Epoch: 82, Loss: 0.023051 Epoch: 82, Loss: 0.024203 Epoch: 82, Loss: 0.024404 Epoch: 82, Loss: 0.019718 Epoch: 82, Loss: 0.035992 Epoch: 82, Loss: 0.016286 Epoch: 82, Loss: 0.022071 Epoch: 82, Loss: 0.026646 Epoch: 82, Loss: 0.043582 Epoch: 82, Loss: 0.022597 Epoch: 82, Loss: 0.033843 Epoch: 82, Loss: 0.020663 Epoch: 82, Loss: 0.029552 Epoch: 82, Loss: 0.024868 Epoch: 82, Loss: 0.026833 Epoch: 82, Loss: 0.014722 Epoch: 82, Loss: 0.014636 Epoch: 82, Loss: 0.016955 Epoch: 82, Loss: 0.015878 Epoch: 82, Loss: 0.019178 Epoch: 82, Loss: 0.027910 Epoch: 82, Loss: 0.046189 Epoch: 82, Loss: 0.025550 Epoch: 82, Loss: 0.045834 Epoch: 82, Loss: 0.017303 Epoch: 82, Loss: 0.018316 Epoch: 82, Loss: 0.014703 Epoch: 82, Loss: 0.020989 Epoch: 82, Loss: 0.070784 Epoch: 82, Loss: 0.031710 Epoch: 82, Loss: 0.024131 Epoch: 82, Loss: 0.016292 Epoch: 82, Loss: 0.029843 Epoch: 82, Loss: 0.017677 Epoch: 82, Loss: 0.034199 Epoch: 82, Loss: 0.020420 Epoch: 82, Loss: 0.027302 Epoch: 82, Loss: 0.024253 Epoch: 82, Loss: 0.023505 Epoch: 82, Loss: 0.028228 Epoch: 82, Loss: 0.029531 Epoch: 82, Loss: 0.019523 Epoch: 82, Loss: 0.032696 Epoch: 82, Loss: 0.019294 Epoch: 82, Loss: 0.024507 Epoch: 82, Loss: 0.035880 Epoch: 82, Loss: 0.010688 Epoch: 82, Loss: 0.019174 Epoch: 82, Loss: 0.022286 Epoch: 82, Loss: 0.022804 Epoch: 82, Loss: 0.032932 Epoch: 82, Loss: 0.023682 Epoch: 82, Loss: 0.020215 Epoch: 
82, Loss: 0.020653 Epoch: 82, Loss: 0.030704 Epoch: 82, Loss: 0.036566 Epoch: 82, Loss: 0.026911 Epoch: 82, Loss: 0.013644 Epoch: 82, Loss: 0.019345 Epoch: 82, Loss: 0.019820 Epoch: 82, Loss: 0.014731 Epoch: 83, Loss: 0.025586 Epoch: 83, Loss: 0.036040 Epoch: 83, Loss: 0.031185 Epoch: 83, Loss: 0.026523 Epoch: 83, Loss: 0.026485 Epoch: 83, Loss: 0.035655 Epoch: 83, Loss: 0.033968 Epoch: 83, Loss: 0.020418 Epoch: 83, Loss: 0.024913 Epoch: 83, Loss: 0.025389 Epoch: 83, Loss: 0.017070 Epoch: 83, Loss: 0.027688 Epoch: 83, Loss: 0.018733 Epoch: 83, Loss: 0.027724 Epoch: 83, Loss: 0.021413 Epoch: 83, Loss: 0.029349 Epoch: 83, Loss: 0.030252 Epoch: 83, Loss: 0.024552 Epoch: 83, Loss: 0.029788 Epoch: 83, Loss: 0.017124 Epoch: 83, Loss: 0.011561 Epoch: 83, Loss: 0.025723 Epoch: 83, Loss: 0.016353 Epoch: 83, Loss: 0.024256 Epoch: 83, Loss: 0.027000 Epoch: 83, Loss: 0.016673 Epoch: 83, Loss: 0.017627 Epoch: 83, Loss: 0.030771 Epoch: 83, Loss: 0.027817 Epoch: 83, Loss: 0.016539 Epoch: 83, Loss: 0.020082 Epoch: 83, Loss: 0.016261 Epoch: 83, Loss: 0.043428 Epoch: 83, Loss: 0.022359 Epoch: 83, Loss: 0.022200 Epoch: 83, Loss: 0.015136 Epoch: 83, Loss: 0.017410 Epoch: 83, Loss: 0.039801 Epoch: 83, Loss: 0.037012 Epoch: 83, Loss: 0.038527 Epoch: 83, Loss: 0.038102 Epoch: 83, Loss: 0.022769 Epoch: 83, Loss: 0.017942 Epoch: 83, Loss: 0.016508 Epoch: 83, Loss: 0.012798 Epoch: 83, Loss: 0.031596 Epoch: 83, Loss: 0.009196 Epoch: 83, Loss: 0.028653 Epoch: 83, Loss: 0.017234 Epoch: 83, Loss: 0.015853 Epoch: 83, Loss: 0.029663 Epoch: 83, Loss: 0.019693 Epoch: 83, Loss: 0.021453 Epoch: 83, Loss: 0.025663 Epoch: 83, Loss: 0.019856 Epoch: 83, Loss: 0.025258 Epoch: 83, Loss: 0.014301 Epoch: 83, Loss: 0.028362 Epoch: 83, Loss: 0.023381 Epoch: 83, Loss: 0.024214 Epoch: 83, Loss: 0.028453 Epoch: 83, Loss: 0.017567 Epoch: 83, Loss: 0.020098 Epoch: 83, Loss: 0.021577 Epoch: 83, Loss: 0.025698 Epoch: 83, Loss: 0.018980 Epoch: 83, Loss: 0.022563 Epoch: 83, Loss: 0.038980 Epoch: 83, Loss: 0.022341 Epoch: 83, Loss: 0.024669 Epoch: 83, Loss: 0.018174 Epoch: 83, Loss: 0.018046 Epoch: 83, Loss: 0.037068 Epoch: 83, Loss: 0.052012 Epoch: 83, Loss: 0.036015 Epoch: 83, Loss: 0.016109 Epoch: 83, Loss: 0.015988 Epoch: 83, Loss: 0.015303 Epoch: 83, Loss: 0.019538 Epoch: 83, Loss: 0.016405 Epoch: 83, Loss: 0.010313 Epoch: 83, Loss: 0.014400 Epoch: 83, Loss: 0.038883 Epoch: 83, Loss: 0.026281 Epoch: 83, Loss: 0.020443 Epoch: 83, Loss: 0.024509 Epoch: 83, Loss: 0.030171 Epoch: 83, Loss: 0.022994 Epoch: 83, Loss: 0.027172 Epoch: 83, Loss: 0.026044 Epoch: 83, Loss: 0.039978 Epoch: 83, Loss: 0.024659 Epoch: 83, Loss: 0.011950 Epoch: 83, Loss: 0.027919 Epoch: 83, Loss: 0.014230 Epoch: 83, Loss: 0.018582 Epoch: 83, Loss: 0.009640 Epoch: 83, Loss: 0.023356 Epoch: 83, Loss: 0.026496 Epoch: 83, Loss: 0.101500 Epoch: 83, Loss: 0.051689 Epoch: 83, Loss: 0.021594 Epoch: 83, Loss: 0.015602 Epoch: 83, Loss: 0.016422 Epoch: 83, Loss: 0.034699 Epoch: 83, Loss: 0.052066 Epoch: 83, Loss: 0.054638 Epoch: 83, Loss: 0.075149 Epoch: 83, Loss: 0.091506 Epoch: 83, Loss: 0.034605 Epoch: 83, Loss: 0.029738 Epoch: 83, Loss: 0.044707 Epoch: 83, Loss: 0.056910 Epoch: 83, Loss: 0.012575 Epoch: 83, Loss: 0.030030 Epoch: 83, Loss: 0.018936 Epoch: 83, Loss: 0.027339 Epoch: 83, Loss: 0.014453 Epoch: 83, Loss: 0.031470 Epoch: 83, Loss: 0.024830 Epoch: 83, Loss: 0.027650 Epoch: 83, Loss: 0.024593 Epoch: 83, Loss: 0.034260 Epoch: 83, Loss: 0.034507 Epoch: 83, Loss: 0.031128 Epoch: 83, Loss: 0.028859 Epoch: 83, Loss: 0.033454 Epoch: 83, Loss: 0.021397 Epoch: 83, Loss: 0.018590 
Epoch: 83, Loss: 0.050939 Epoch: 83, Loss: 0.042497 Epoch: 83, Loss: 0.019242 Epoch: 83, Loss: 0.018178 Epoch: 83, Loss: 0.019746 Epoch: 83, Loss: 0.022245 Epoch: 83, Loss: 0.020019 Epoch: 83, Loss: 0.034809 Epoch: 83, Loss: 0.023658 Epoch: 83, Loss: 0.026133 Epoch: 83, Loss: 0.036703 Epoch: 83, Loss: 0.017053 Epoch: 83, Loss: 0.025595 Epoch: 83, Loss: 0.024292 Epoch: 83, Loss: 0.053413 Epoch: 83, Loss: 0.042775 Epoch: 83, Loss: 0.030706 Epoch: 83, Loss: 0.021699 Epoch: 83, Loss: 0.018862 Epoch: 83, Loss: 0.024162 Epoch: 83, Loss: 0.027015 Epoch: 83, Loss: 0.019656 Epoch: 83, Loss: 0.020909 Epoch: 83, Loss: 0.019191 Epoch: 83, Loss: 0.035822 Epoch: 83, Loss: 0.021472 Epoch: 83, Loss: 0.013640 Epoch: 83, Loss: 0.022524 Epoch: 84, Loss: 0.020331 Epoch: 84, Loss: 0.023685 Epoch: 84, Loss: 0.015383 Epoch: 84, Loss: 0.015035 Epoch: 84, Loss: 0.026739 Epoch: 84, Loss: 0.015768 Epoch: 84, Loss: 0.015143 Epoch: 84, Loss: 0.013577 Epoch: 84, Loss: 0.018806 Epoch: 84, Loss: 0.021318 Epoch: 84, Loss: 0.027786 Epoch: 84, Loss: 0.022820 Epoch: 84, Loss: 0.024159 Epoch: 84, Loss: 0.024300 Epoch: 84, Loss: 0.018412 Epoch: 84, Loss: 0.021063 Epoch: 84, Loss: 0.025724 Epoch: 84, Loss: 0.021708 Epoch: 84, Loss: 0.036757 Epoch: 84, Loss: 0.014512 Epoch: 84, Loss: 0.016497 Epoch: 84, Loss: 0.012354 Epoch: 84, Loss: 0.023274 Epoch: 84, Loss: 0.017675 Epoch: 84, Loss: 0.029811 Epoch: 84, Loss: 0.018154 Epoch: 84, Loss: 0.020434 Epoch: 84, Loss: 0.020776 Epoch: 84, Loss: 0.029099 Epoch: 84, Loss: 0.043525 Epoch: 84, Loss: 0.033900 Epoch: 84, Loss: 0.032295 Epoch: 84, Loss: 0.026050 Epoch: 84, Loss: 0.013479 Epoch: 84, Loss: 0.018170 Epoch: 84, Loss: 0.012176 Epoch: 84, Loss: 0.017184 Epoch: 84, Loss: 0.028784 Epoch: 84, Loss: 0.024309 Epoch: 84, Loss: 0.016227 Epoch: 84, Loss: 0.016572 Epoch: 84, Loss: 0.013988 Epoch: 84, Loss: 0.023399 Epoch: 84, Loss: 0.050016 Epoch: 84, Loss: 0.039331 Epoch: 84, Loss: 0.034872 Epoch: 84, Loss: 0.021397 Epoch: 84, Loss: 0.036667 Epoch: 84, Loss: 0.015358 Epoch: 84, Loss: 0.013647 Epoch: 84, Loss: 0.022483 Epoch: 84, Loss: 0.028822 Epoch: 84, Loss: 0.019532 Epoch: 84, Loss: 0.020365 Epoch: 84, Loss: 0.021232 Epoch: 84, Loss: 0.018375 Epoch: 84, Loss: 0.017377 Epoch: 84, Loss: 0.023537 Epoch: 84, Loss: 0.014884 Epoch: 84, Loss: 0.018842 Epoch: 84, Loss: 0.011298 Epoch: 84, Loss: 0.030357 Epoch: 84, Loss: 0.025004 Epoch: 84, Loss: 0.014315 Epoch: 84, Loss: 0.024042 Epoch: 84, Loss: 0.028866 Epoch: 84, Loss: 0.036108 Epoch: 84, Loss: 0.026442 Epoch: 84, Loss: 0.032241 Epoch: 84, Loss: 0.021534 Epoch: 84, Loss: 0.020681 Epoch: 84, Loss: 0.012387 Epoch: 84, Loss: 0.038149 Epoch: 84, Loss: 0.028605 Epoch: 84, Loss: 0.025057 Epoch: 84, Loss: 0.021100 Epoch: 84, Loss: 0.018919 Epoch: 84, Loss: 0.041420 Epoch: 84, Loss: 0.041007 Epoch: 84, Loss: 0.025293 Epoch: 84, Loss: 0.020263 Epoch: 84, Loss: 0.039929 Epoch: 84, Loss: 0.029576 Epoch: 84, Loss: 0.027430 Epoch: 84, Loss: 0.017567 Epoch: 84, Loss: 0.012625 Epoch: 84, Loss: 0.030135 Epoch: 84, Loss: 0.014708 Epoch: 84, Loss: 0.021292 Epoch: 84, Loss: 0.030731 Epoch: 84, Loss: 0.018912 Epoch: 84, Loss: 0.058512 Epoch: 84, Loss: 0.030075 Epoch: 84, Loss: 0.023399 Epoch: 84, Loss: 0.036038 Epoch: 84, Loss: 0.030930 Epoch: 84, Loss: 0.029052 Epoch: 84, Loss: 0.019332 Epoch: 84, Loss: 0.027025 Epoch: 84, Loss: 0.022639 Epoch: 84, Loss: 0.031475 Epoch: 84, Loss: 0.033040 Epoch: 84, Loss: 0.017350 Epoch: 84, Loss: 0.022747 Epoch: 84, Loss: 0.029051 Epoch: 84, Loss: 0.026498 Epoch: 84, Loss: 0.048960 Epoch: 84, Loss: 0.024415 Epoch: 84, Loss: 
0.047089 Epoch: 84, Loss: 0.029686 Epoch: 84, Loss: 0.027208 Epoch: 84, Loss: 0.012448 Epoch: 84, Loss: 0.039896 Epoch: 84, Loss: 0.023532 Epoch: 84, Loss: 0.023783 Epoch: 84, Loss: 0.022020 Epoch: 84, Loss: 0.049074 Epoch: 84, Loss: 0.023475 Epoch: 84, Loss: 0.017436 Epoch: 84, Loss: 0.021568 Epoch: 84, Loss: 0.013169 Epoch: 84, Loss: 0.019349 Epoch: 84, Loss: 0.020636 Epoch: 84, Loss: 0.019906 Epoch: 84, Loss: 0.017238 Epoch: 84, Loss: 0.025236 Epoch: 84, Loss: 0.013837 Epoch: 84, Loss: 0.026493 Epoch: 84, Loss: 0.041505 Epoch: 84, Loss: 0.036934 Epoch: 84, Loss: 0.020559 Epoch: 84, Loss: 0.035602 Epoch: 84, Loss: 0.035910 Epoch: 84, Loss: 0.017390 Epoch: 84, Loss: 0.022497 Epoch: 84, Loss: 0.033554 Epoch: 84, Loss: 0.020178 Epoch: 84, Loss: 0.032932 Epoch: 84, Loss: 0.025684 Epoch: 84, Loss: 0.020243 Epoch: 84, Loss: 0.075947 Epoch: 84, Loss: 0.021544 Epoch: 84, Loss: 0.028647 Epoch: 84, Loss: 0.019468 Epoch: 84, Loss: 0.023287 Epoch: 84, Loss: 0.018159 Epoch: 84, Loss: 0.015877 Epoch: 84, Loss: 0.018745 Epoch: 84, Loss: 0.016469 Epoch: 84, Loss: 0.030035 Epoch: 84, Loss: 0.013063 Epoch: 84, Loss: 0.022952 Epoch: 84, Loss: 0.034272 Epoch: 84, Loss: 0.016329 Epoch: 84, Loss: 0.022241 Epoch: 84, Loss: 0.013685 Epoch: 84, Loss: 0.009745 Epoch: 85, Loss: 0.030265 Epoch: 85, Loss: 0.018947 Epoch: 85, Loss: 0.023321 Epoch: 85, Loss: 0.021551 Epoch: 85, Loss: 0.023244 Epoch: 85, Loss: 0.032526 Epoch: 85, Loss: 0.020717 Epoch: 85, Loss: 0.032491 Epoch: 85, Loss: 0.032061 Epoch: 85, Loss: 0.017944 Epoch: 85, Loss: 0.019100 Epoch: 85, Loss: 0.019642 Epoch: 85, Loss: 0.023040 Epoch: 85, Loss: 0.040389 Epoch: 85, Loss: 0.015295 Epoch: 85, Loss: 0.012629 Epoch: 85, Loss: 0.019820 Epoch: 85, Loss: 0.022192 Epoch: 85, Loss: 0.010386 Epoch: 85, Loss: 0.037954 Epoch: 85, Loss: 0.024465 Epoch: 85, Loss: 0.025828 Epoch: 85, Loss: 0.013832 Epoch: 85, Loss: 0.019123 Epoch: 85, Loss: 0.019416 Epoch: 85, Loss: 0.016415 Epoch: 85, Loss: 0.028762 Epoch: 85, Loss: 0.030287 Epoch: 85, Loss: 0.023205 Epoch: 85, Loss: 0.021741 Epoch: 85, Loss: 0.018539 Epoch: 85, Loss: 0.038863 Epoch: 85, Loss: 0.020454 Epoch: 85, Loss: 0.020620 Epoch: 85, Loss: 0.018456 Epoch: 85, Loss: 0.016393 Epoch: 85, Loss: 0.015344 Epoch: 85, Loss: 0.018975 Epoch: 85, Loss: 0.028583 Epoch: 85, Loss: 0.018743 Epoch: 85, Loss: 0.019924 Epoch: 85, Loss: 0.026958 Epoch: 85, Loss: 0.021825 Epoch: 85, Loss: 0.024935 Epoch: 85, Loss: 0.024954 Epoch: 85, Loss: 0.050141 Epoch: 85, Loss: 0.019998 Epoch: 85, Loss: 0.016524 Epoch: 85, Loss: 0.025437 Epoch: 85, Loss: 0.022134 Epoch: 85, Loss: 0.040036 Epoch: 85, Loss: 0.040745 Epoch: 85, Loss: 0.043229 Epoch: 85, Loss: 0.017397 Epoch: 85, Loss: 0.023166 Epoch: 85, Loss: 0.023573 Epoch: 85, Loss: 0.019678 Epoch: 85, Loss: 0.025389 Epoch: 85, Loss: 0.020281 Epoch: 85, Loss: 0.046402 Epoch: 85, Loss: 0.018699 Epoch: 85, Loss: 0.049277 Epoch: 85, Loss: 0.033156 Epoch: 85, Loss: 0.021310 Epoch: 85, Loss: 0.042099 Epoch: 85, Loss: 0.023435 Epoch: 85, Loss: 0.025506 Epoch: 85, Loss: 0.040831 Epoch: 85, Loss: 0.030088 Epoch: 85, Loss: 0.018864 Epoch: 85, Loss: 0.021360 Epoch: 85, Loss: 0.021165 Epoch: 85, Loss: 0.019074 Epoch: 85, Loss: 0.015448 Epoch: 85, Loss: 0.049542 Epoch: 85, Loss: 0.024329 Epoch: 85, Loss: 0.023520 Epoch: 85, Loss: 0.053886 Epoch: 85, Loss: 0.032736 Epoch: 85, Loss: 0.032667 Epoch: 85, Loss: 0.027391 Epoch: 85, Loss: 0.018946 Epoch: 85, Loss: 0.018791 Epoch: 85, Loss: 0.022044 Epoch: 85, Loss: 0.018350 Epoch: 85, Loss: 0.040117 Epoch: 85, Loss: 0.035281 Epoch: 85, Loss: 0.023883 Epoch: 
85, Loss: 0.025837 Epoch: 85, Loss: 0.008670 Epoch: 85, Loss: 0.009305 Epoch: 85, Loss: 0.027053 Epoch: 85, Loss: 0.028650 Epoch: 85, Loss: 0.026559 Epoch: 85, Loss: 0.033096 Epoch: 85, Loss: 0.016652 Epoch: 85, Loss: 0.017485 Epoch: 85, Loss: 0.023146 Epoch: 85, Loss: 0.015799 Epoch: 85, Loss: 0.021012 Epoch: 85, Loss: 0.034941 Epoch: 85, Loss: 0.018513 Epoch: 85, Loss: 0.022455 Epoch: 85, Loss: 0.018934 Epoch: 85, Loss: 0.013854 Epoch: 85, Loss: 0.012392 Epoch: 85, Loss: 0.013819 Epoch: 85, Loss: 0.029270 Epoch: 85, Loss: 0.022650 Epoch: 85, Loss: 0.022379 Epoch: 85, Loss: 0.017120 Epoch: 85, Loss: 0.018646 Epoch: 85, Loss: 0.012840 Epoch: 85, Loss: 0.024169 Epoch: 85, Loss: 0.082256 Epoch: 85, Loss: 0.017436 Epoch: 85, Loss: 0.023182 Epoch: 85, Loss: 0.020107 Epoch: 85, Loss: 0.021907 Epoch: 85, Loss: 0.017448 Epoch: 85, Loss: 0.026649 Epoch: 85, Loss: 0.013967 Epoch: 85, Loss: 0.017095 Epoch: 85, Loss: 0.036344 Epoch: 85, Loss: 0.025505 Epoch: 85, Loss: 0.013974 Epoch: 85, Loss: 0.025707 Epoch: 85, Loss: 0.033036 Epoch: 85, Loss: 0.020837 Epoch: 85, Loss: 0.014746 Epoch: 85, Loss: 0.025288 Epoch: 85, Loss: 0.021761 Epoch: 85, Loss: 0.014316 Epoch: 85, Loss: 0.012624 Epoch: 85, Loss: 0.019804 Epoch: 85, Loss: 0.021318 Epoch: 85, Loss: 0.012601 Epoch: 85, Loss: 0.017476 Epoch: 85, Loss: 0.023538 Epoch: 85, Loss: 0.042775 Epoch: 85, Loss: 0.046601 Epoch: 85, Loss: 0.017718 Epoch: 85, Loss: 0.035467 Epoch: 85, Loss: 0.016678 Epoch: 85, Loss: 0.018035 Epoch: 85, Loss: 0.037405 Epoch: 85, Loss: 0.025480 Epoch: 85, Loss: 0.015337 Epoch: 85, Loss: 0.015830 Epoch: 85, Loss: 0.022374 Epoch: 85, Loss: 0.032156 Epoch: 85, Loss: 0.026237 Epoch: 85, Loss: 0.018890 Epoch: 85, Loss: 0.012301 Epoch: 85, Loss: 0.015381 Epoch: 85, Loss: 0.039531 Epoch: 85, Loss: 0.014651 Epoch: 86, Loss: 0.018886 Epoch: 86, Loss: 0.037016 Epoch: 86, Loss: 0.015412 Epoch: 86, Loss: 0.026244 Epoch: 86, Loss: 0.024916 Epoch: 86, Loss: 0.018875 Epoch: 86, Loss: 0.016504 Epoch: 86, Loss: 0.014827 Epoch: 86, Loss: 0.027061 Epoch: 86, Loss: 0.023930 Epoch: 86, Loss: 0.016744 Epoch: 86, Loss: 0.012908 Epoch: 86, Loss: 0.028192 Epoch: 86, Loss: 0.011347 Epoch: 86, Loss: 0.039009 Epoch: 86, Loss: 0.021297 Epoch: 86, Loss: 0.023323 Epoch: 86, Loss: 0.018500 Epoch: 86, Loss: 0.015441 Epoch: 86, Loss: 0.033621 Epoch: 86, Loss: 0.016485 Epoch: 86, Loss: 0.022235 Epoch: 86, Loss: 0.025695 Epoch: 86, Loss: 0.027556 Epoch: 86, Loss: 0.025608 Epoch: 86, Loss: 0.018139 Epoch: 86, Loss: 0.020982 Epoch: 86, Loss: 0.026014 Epoch: 86, Loss: 0.013307 Epoch: 86, Loss: 0.029634 Epoch: 86, Loss: 0.025833 Epoch: 86, Loss: 0.017465 Epoch: 86, Loss: 0.015581 Epoch: 86, Loss: 0.017628 Epoch: 86, Loss: 0.027522 Epoch: 86, Loss: 0.044136 Epoch: 86, Loss: 0.029140 Epoch: 86, Loss: 0.021869 Epoch: 86, Loss: 0.042277 Epoch: 86, Loss: 0.029613 Epoch: 86, Loss: 0.015858 Epoch: 86, Loss: 0.022855 Epoch: 86, Loss: 0.014763 Epoch: 86, Loss: 0.023542 Epoch: 86, Loss: 0.016593 Epoch: 86, Loss: 0.027808 Epoch: 86, Loss: 0.017840 Epoch: 86, Loss: 0.015159 Epoch: 86, Loss: 0.049095 Epoch: 86, Loss: 0.021472 Epoch: 86, Loss: 0.031680 Epoch: 86, Loss: 0.021083 Epoch: 86, Loss: 0.037612 Epoch: 86, Loss: 0.019806 Epoch: 86, Loss: 0.012372 Epoch: 86, Loss: 0.040700 Epoch: 86, Loss: 0.035106 Epoch: 86, Loss: 0.026577 Epoch: 86, Loss: 0.026262 Epoch: 86, Loss: 0.031955 Epoch: 86, Loss: 0.034530 Epoch: 86, Loss: 0.028303 Epoch: 86, Loss: 0.041852 Epoch: 86, Loss: 0.023846 Epoch: 86, Loss: 0.023186 Epoch: 86, Loss: 0.034500 Epoch: 86, Loss: 0.021847 Epoch: 86, Loss: 0.019439 
Epoch: 86, Loss: 0.027113 Epoch: 86, Loss: 0.021577 Epoch: 86, Loss: 0.031823 Epoch: 86, Loss: 0.022795 Epoch: 86, Loss: 0.028419 Epoch: 86, Loss: 0.021812 Epoch: 86, Loss: 0.026355 Epoch: 86, Loss: 0.018552 Epoch: 86, Loss: 0.023288 Epoch: 86, Loss: 0.017968 Epoch: 86, Loss: 0.028281 Epoch: 86, Loss: 0.021543 Epoch: 86, Loss: 0.013265 Epoch: 86, Loss: 0.014341 Epoch: 86, Loss: 0.018991 Epoch: 86, Loss: 0.025985 Epoch: 86, Loss: 0.031809 Epoch: 86, Loss: 0.023409 Epoch: 86, Loss: 0.015796 Epoch: 86, Loss: 0.029725 Epoch: 86, Loss: 0.022906 Epoch: 86, Loss: 0.008529 Epoch: 86, Loss: 0.020045 Epoch: 86, Loss: 0.018576 Epoch: 86, Loss: 0.033692 Epoch: 86, Loss: 0.012393 Epoch: 86, Loss: 0.012909 Epoch: 86, Loss: 0.020885 Epoch: 86, Loss: 0.037127 Epoch: 86, Loss: 0.014791 Epoch: 86, Loss: 0.024729 Epoch: 86, Loss: 0.021390 Epoch: 86, Loss: 0.021538 Epoch: 86, Loss: 0.024757 Epoch: 86, Loss: 0.028777 Epoch: 86, Loss: 0.026332 Epoch: 86, Loss: 0.016284 Epoch: 86, Loss: 0.031341 Epoch: 86, Loss: 0.015009 Epoch: 86, Loss: 0.010849 Epoch: 86, Loss: 0.012286 Epoch: 86, Loss: 0.014801 Epoch: 86, Loss: 0.021720 Epoch: 86, Loss: 0.015199 Epoch: 86, Loss: 0.019511 Epoch: 86, Loss: 0.020388 Epoch: 86, Loss: 0.013610 Epoch: 86, Loss: 0.019741 Epoch: 86, Loss: 0.022668 Epoch: 86, Loss: 0.025214 Epoch: 86, Loss: 0.015145 Epoch: 86, Loss: 0.029145 Epoch: 86, Loss: 0.033428 Epoch: 86, Loss: 0.021710 Epoch: 86, Loss: 0.024050 Epoch: 86, Loss: 0.026627 Epoch: 86, Loss: 0.023172 Epoch: 86, Loss: 0.030182 Epoch: 86, Loss: 0.021531 Epoch: 86, Loss: 0.029629 Epoch: 86, Loss: 0.018937 Epoch: 86, Loss: 0.014530 Epoch: 86, Loss: 0.024292 Epoch: 86, Loss: 0.014681 Epoch: 86, Loss: 0.013898 Epoch: 86, Loss: 0.017114 Epoch: 86, Loss: 0.017520 Epoch: 86, Loss: 0.023051 Epoch: 86, Loss: 0.012692 Epoch: 86, Loss: 0.026257 Epoch: 86, Loss: 0.028815 Epoch: 86, Loss: 0.031267 Epoch: 86, Loss: 0.026819 Epoch: 86, Loss: 0.035565 Epoch: 86, Loss: 0.014006 Epoch: 86, Loss: 0.012324 Epoch: 86, Loss: 0.024176 Epoch: 86, Loss: 0.026897 Epoch: 86, Loss: 0.022200 Epoch: 86, Loss: 0.018777 Epoch: 86, Loss: 0.014057 Epoch: 86, Loss: 0.085264 Epoch: 86, Loss: 0.012613 Epoch: 86, Loss: 0.034279 Epoch: 86, Loss: 0.016750 Epoch: 86, Loss: 0.016454 Epoch: 86, Loss: 0.022547 Epoch: 86, Loss: 0.020305 Epoch: 86, Loss: 0.017005 Epoch: 87, Loss: 0.016187 Epoch: 87, Loss: 0.018459 Epoch: 87, Loss: 0.013666 Epoch: 87, Loss: 0.012492 Epoch: 87, Loss: 0.015562 Epoch: 87, Loss: 0.030483 Epoch: 87, Loss: 0.021740 Epoch: 87, Loss: 0.037585 Epoch: 87, Loss: 0.013637 Epoch: 87, Loss: 0.024370 Epoch: 87, Loss: 0.016375 Epoch: 87, Loss: 0.019042 Epoch: 87, Loss: 0.017451 Epoch: 87, Loss: 0.029151 Epoch: 87, Loss: 0.023538 Epoch: 87, Loss: 0.017161 Epoch: 87, Loss: 0.054761 Epoch: 87, Loss: 0.021874 Epoch: 87, Loss: 0.021514 Epoch: 87, Loss: 0.014262 Epoch: 87, Loss: 0.023459 Epoch: 87, Loss: 0.029350 Epoch: 87, Loss: 0.037398 Epoch: 87, Loss: 0.026622 Epoch: 87, Loss: 0.018506 Epoch: 87, Loss: 0.020927 Epoch: 87, Loss: 0.023028 Epoch: 87, Loss: 0.015960 Epoch: 87, Loss: 0.054009 Epoch: 87, Loss: 0.037838 Epoch: 87, Loss: 0.030989 Epoch: 87, Loss: 0.032576 Epoch: 87, Loss: 0.020287 Epoch: 87, Loss: 0.032143 Epoch: 87, Loss: 0.017392 Epoch: 87, Loss: 0.043062 Epoch: 87, Loss: 0.017631 Epoch: 87, Loss: 0.020042 Epoch: 87, Loss: 0.018997 Epoch: 87, Loss: 0.016450 Epoch: 87, Loss: 0.029530 Epoch: 87, Loss: 0.017403 Epoch: 87, Loss: 0.012103 Epoch: 87, Loss: 0.023957 Epoch: 87, Loss: 0.023276 Epoch: 87, Loss: 0.018488 Epoch: 87, Loss: 0.026493 Epoch: 87, Loss: 
0.026294 Epoch: 87, Loss: 0.018489 Epoch: 87, Loss: 0.016627 Epoch: 87, Loss: 0.029704 Epoch: 87, Loss: 0.022677 Epoch: 87, Loss: 0.015367 Epoch: 87, Loss: 0.015794 Epoch: 87, Loss: 0.018854 Epoch: 87, Loss: 0.026225 Epoch: 87, Loss: 0.019659 Epoch: 87, Loss: 0.034391 Epoch: 87, Loss: 0.031178 Epoch: 87, Loss: 0.009669 Epoch: 87, Loss: 0.024072 Epoch: 87, Loss: 0.017385 Epoch: 87, Loss: 0.015300 Epoch: 87, Loss: 0.023265 Epoch: 87, Loss: 0.018442 Epoch: 87, Loss: 0.012503 Epoch: 87, Loss: 0.016221 Epoch: 87, Loss: 0.021263 Epoch: 87, Loss: 0.048777 Epoch: 87, Loss: 0.019413 Epoch: 87, Loss: 0.013290 Epoch: 87, Loss: 0.017656 Epoch: 87, Loss: 0.030592 Epoch: 87, Loss: 0.012293 Epoch: 87, Loss: 0.018521 Epoch: 87, Loss: 0.016011 Epoch: 87, Loss: 0.014903 Epoch: 87, Loss: 0.023093 Epoch: 87, Loss: 0.017971 Epoch: 87, Loss: 0.025698 Epoch: 87, Loss: 0.026575 Epoch: 87, Loss: 0.020587 Epoch: 87, Loss: 0.039738 Epoch: 87, Loss: 0.022513 Epoch: 87, Loss: 0.021352 Epoch: 87, Loss: 0.016807 Epoch: 87, Loss: 0.012020 Epoch: 87, Loss: 0.018716 Epoch: 87, Loss: 0.022522 Epoch: 87, Loss: 0.014836 Epoch: 87, Loss: 0.021715 Epoch: 87, Loss: 0.022479 Epoch: 87, Loss: 0.018835 Epoch: 87, Loss: 0.026313 Epoch: 87, Loss: 0.012204 Epoch: 87, Loss: 0.019479 Epoch: 87, Loss: 0.022822 Epoch: 87, Loss: 0.015784 Epoch: 87, Loss: 0.018929 Epoch: 87, Loss: 0.032242 Epoch: 87, Loss: 0.019489 Epoch: 87, Loss: 0.022658 Epoch: 87, Loss: 0.022989 Epoch: 87, Loss: 0.025622 Epoch: 87, Loss: 0.027561 Epoch: 87, Loss: 0.024733 Epoch: 87, Loss: 0.013539 Epoch: 87, Loss: 0.011748 Epoch: 87, Loss: 0.021647 Epoch: 87, Loss: 0.019807 Epoch: 87, Loss: 0.032837 Epoch: 87, Loss: 0.047934 Epoch: 87, Loss: 0.019584 Epoch: 87, Loss: 0.022931 Epoch: 87, Loss: 0.017510 Epoch: 87, Loss: 0.027581 Epoch: 87, Loss: 0.022834 Epoch: 87, Loss: 0.029447 Epoch: 87, Loss: 0.033900 Epoch: 87, Loss: 0.019508 Epoch: 87, Loss: 0.015314 Epoch: 87, Loss: 0.019051 Epoch: 87, Loss: 0.040988 Epoch: 87, Loss: 0.015478 Epoch: 87, Loss: 0.039262 Epoch: 87, Loss: 0.031952 Epoch: 87, Loss: 0.023957 Epoch: 87, Loss: 0.051324 Epoch: 87, Loss: 0.035611 Epoch: 87, Loss: 0.025685 Epoch: 87, Loss: 0.013349 Epoch: 87, Loss: 0.018476 Epoch: 87, Loss: 0.028268 Epoch: 87, Loss: 0.021882 Epoch: 87, Loss: 0.028929 Epoch: 87, Loss: 0.019644 Epoch: 87, Loss: 0.008758 Epoch: 87, Loss: 0.013840 Epoch: 87, Loss: 0.015014 Epoch: 87, Loss: 0.016548 Epoch: 87, Loss: 0.022559 Epoch: 87, Loss: 0.012926 Epoch: 87, Loss: 0.018940 Epoch: 87, Loss: 0.020264 Epoch: 87, Loss: 0.011982 Epoch: 87, Loss: 0.016311 Epoch: 87, Loss: 0.016891 Epoch: 87, Loss: 0.010912 Epoch: 87, Loss: 0.044381 Epoch: 87, Loss: 0.019899 Epoch: 87, Loss: 0.027826 Epoch: 87, Loss: 0.022760 Epoch: 87, Loss: 0.022912 Epoch: 87, Loss: 0.015610 Epoch: 87, Loss: 0.024225 Epoch: 87, Loss: 0.025250 Epoch: 87, Loss: 0.030005 Epoch: 88, Loss: 0.014039 Epoch: 88, Loss: 0.030714 Epoch: 88, Loss: 0.035495 Epoch: 88, Loss: 0.016003 Epoch: 88, Loss: 0.012502 Epoch: 88, Loss: 0.016008 Epoch: 88, Loss: 0.020993 Epoch: 88, Loss: 0.018560 Epoch: 88, Loss: 0.021363 Epoch: 88, Loss: 0.038038 Epoch: 88, Loss: 0.024209 Epoch: 88, Loss: 0.012426 Epoch: 88, Loss: 0.034271 Epoch: 88, Loss: 0.023927 Epoch: 88, Loss: 0.016518 Epoch: 88, Loss: 0.016157 Epoch: 88, Loss: 0.018230 Epoch: 88, Loss: 0.017095 Epoch: 88, Loss: 0.017665 Epoch: 88, Loss: 0.019678 Epoch: 88, Loss: 0.024048 Epoch: 88, Loss: 0.027168 Epoch: 88, Loss: 0.038347 Epoch: 88, Loss: 0.015459 Epoch: 88, Loss: 0.010245 Epoch: 88, Loss: 0.022147 Epoch: 88, Loss: 0.013805 Epoch: 
88, Loss: 0.017366 Epoch: 88, Loss: 0.013641 Epoch: 88, Loss: 0.014989 Epoch: 88, Loss: 0.028829 Epoch: 88, Loss: 0.017024 Epoch: 88, Loss: 0.021300 Epoch: 88, Loss: 0.025228 Epoch: 88, Loss: 0.019221 Epoch: 88, Loss: 0.020443 Epoch: 88, Loss: 0.016742 Epoch: 88, Loss: 0.017556 Epoch: 88, Loss: 0.014518 Epoch: 88, Loss: 0.020690 Epoch: 88, Loss: 0.012505 Epoch: 88, Loss: 0.012615 Epoch: 88, Loss: 0.021433 Epoch: 88, Loss: 0.020926 Epoch: 88, Loss: 0.015434 Epoch: 88, Loss: 0.075150 Epoch: 88, Loss: 0.064413 Epoch: 88, Loss: 0.022773 Epoch: 88, Loss: 0.009910 Epoch: 88, Loss: 0.022217 Epoch: 88, Loss: 0.015108 Epoch: 88, Loss: 0.022914 Epoch: 88, Loss: 0.020453 Epoch: 88, Loss: 0.030647 Epoch: 88, Loss: 0.032549 Epoch: 88, Loss: 0.024307 Epoch: 88, Loss: 0.020241 Epoch: 88, Loss: 0.016351 Epoch: 88, Loss: 0.028042 Epoch: 88, Loss: 0.020636 Epoch: 88, Loss: 0.014073 Epoch: 88, Loss: 0.039032 Epoch: 88, Loss: 0.030646 Epoch: 88, Loss: 0.029355 Epoch: 88, Loss: 0.016634 Epoch: 88, Loss: 0.021813 Epoch: 88, Loss: 0.023516 Epoch: 88, Loss: 0.027922 Epoch: 88, Loss: 0.016735 Epoch: 88, Loss: 0.026217 Epoch: 88, Loss: 0.035059 Epoch: 88, Loss: 0.020004 Epoch: 88, Loss: 0.027870 Epoch: 88, Loss: 0.017384 Epoch: 88, Loss: 0.017617 Epoch: 88, Loss: 0.014774 Epoch: 88, Loss: 0.012792 Epoch: 88, Loss: 0.015673 Epoch: 88, Loss: 0.016405 Epoch: 88, Loss: 0.018893 Epoch: 88, Loss: 0.013863 Epoch: 88, Loss: 0.020422 Epoch: 88, Loss: 0.021841 Epoch: 88, Loss: 0.028514 Epoch: 88, Loss: 0.017826 Epoch: 88, Loss: 0.016386 Epoch: 88, Loss: 0.028924 Epoch: 88, Loss: 0.024454 Epoch: 88, Loss: 0.009829 Epoch: 88, Loss: 0.014609 Epoch: 88, Loss: 0.022433 Epoch: 88, Loss: 0.014733 Epoch: 88, Loss: 0.025580 Epoch: 88, Loss: 0.021261 Epoch: 88, Loss: 0.033429 Epoch: 88, Loss: 0.031458 Epoch: 88, Loss: 0.028987 Epoch: 88, Loss: 0.015999 Epoch: 88, Loss: 0.008778 Epoch: 88, Loss: 0.019066 Epoch: 88, Loss: 0.030595 Epoch: 88, Loss: 0.021429 Epoch: 88, Loss: 0.022090 Epoch: 88, Loss: 0.030161 Epoch: 88, Loss: 0.015441 Epoch: 88, Loss: 0.025204 Epoch: 88, Loss: 0.022376 Epoch: 88, Loss: 0.021711 Epoch: 88, Loss: 0.026456 Epoch: 88, Loss: 0.013997 Epoch: 88, Loss: 0.011582 Epoch: 88, Loss: 0.019070 Epoch: 88, Loss: 0.022913 Epoch: 88, Loss: 0.011759 Epoch: 88, Loss: 0.024746 Epoch: 88, Loss: 0.028937 Epoch: 88, Loss: 0.019599 Epoch: 88, Loss: 0.011478 Epoch: 88, Loss: 0.069848 Epoch: 88, Loss: 0.020898 Epoch: 88, Loss: 0.019000 Epoch: 88, Loss: 0.028452 Epoch: 88, Loss: 0.022283 Epoch: 88, Loss: 0.018584 Epoch: 88, Loss: 0.019128 Epoch: 88, Loss: 0.028385 Epoch: 88, Loss: 0.029152 Epoch: 88, Loss: 0.025293 Epoch: 88, Loss: 0.029677 Epoch: 88, Loss: 0.030097 Epoch: 88, Loss: 0.016715 Epoch: 88, Loss: 0.015484 Epoch: 88, Loss: 0.027471 Epoch: 88, Loss: 0.015474 Epoch: 88, Loss: 0.012669 Epoch: 88, Loss: 0.009800 Epoch: 88, Loss: 0.013802 Epoch: 88, Loss: 0.023540 Epoch: 88, Loss: 0.019643 Epoch: 88, Loss: 0.026124 Epoch: 88, Loss: 0.017559 Epoch: 88, Loss: 0.013118 Epoch: 88, Loss: 0.036031 Epoch: 88, Loss: 0.022142 Epoch: 88, Loss: 0.017155 Epoch: 88, Loss: 0.015875 Epoch: 88, Loss: 0.031436 Epoch: 88, Loss: 0.025383 Epoch: 88, Loss: 0.021192 Epoch: 88, Loss: 0.039096 Epoch: 88, Loss: 0.026899 Epoch: 88, Loss: 0.026518 Epoch: 88, Loss: 0.017179 Epoch: 88, Loss: 0.014976 Epoch: 88, Loss: 0.044988 Epoch: 88, Loss: 0.044645 Epoch: 88, Loss: 0.030940 Epoch: 89, Loss: 0.086641 Epoch: 89, Loss: 0.018413 Epoch: 89, Loss: 0.018178 Epoch: 89, Loss: 0.053455 Epoch: 89, Loss: 0.015734 Epoch: 89, Loss: 0.018023 Epoch: 89, Loss: 0.016556 
Epoch: 89, Loss: 0.015817 ... Epoch: 99, Loss: 0.035522
[Per-batch training-loss printout for epochs 89-99 elided: the values fluctuate between roughly 0.005 and 0.12, mostly around 0.01-0.03, with no further sustained decrease over the final epochs.]
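The printout above is emitted once per batch by the training loop itself. As a point of reference only — a minimal sketch, assuming a `cifar2` training split, the same flattening `model` used in the validation cell below, and plain SGD with cross-entropy (the optimizer, learning rate, and batch size are assumptions, not the notebook's recorded settings) — a loop of this shape produces lines in exactly this format:
###Code
# Illustrative sketch only: reproduces the "Epoch: N, Loss: X" per-batch printout.
# cifar2, model, and the hyperparameters below are assumptions, not the notebook's settings.
import torch
import torch.nn as nn

train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

for epoch in range(100):
    for imgs, labels in train_loader:
        batch_size = imgs.shape[0]
        outputs = model(imgs.view(batch_size, -1))  # flatten images, as in the validation cell
        loss = loss_fn(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print("Epoch: %d, Loss: %f" % (epoch, float(loss)))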
###Markdown
Performing validation
###Code
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)

correct = 0
total = 0

# Count correct predictions over the validation set, without tracking gradients
with torch.no_grad():
    for imgs, labels in val_loader:
        batch_size = imgs.shape[0]
        outputs = model(imgs.view(batch_size, -1))
        _, predicted = torch.max(outputs, dim=1)
        total += labels.shape[0]
        correct += int((predicted == labels).sum())

print("Accuracy:", correct / total)
###Output
Accuracy: 0.815
###Markdown
CIFAR10 with Keras and CNN
Testing Keras' CNNs on CIFAR10 with a pretty typical layer disposition.
Data Setup
###Code
from keras.datasets import cifar10

(x_train, y_train_), (x_test, y_test_) = cifar10.load_data()

# Scale pixel values to [0, 1]
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

from keras.utils import to_categorical

# One-hot encode the labels
y_train = to_categorical(y_train_)
y_test = to_categorical(y_test_)
###Output
_____no_output_____
###Markdown
Model Definition
###Code
from keras.models import Sequential

model = Sequential()

from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(MaxPool2D())
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(10, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary())
###Output
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 30, 30, 32)        896
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 15, 15, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 13, 13, 64)        18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 6, 6, 64)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 2304)              0
_________________________________________________________________
dense_1 (Dense)              (None, 10)                23050
=================================================================
Total params: 42,442
Trainable params: 42,442
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Fitting
###Code
history = model.fit(x_train, y_train, batch_size=50, epochs=15, verbose=1, validation_data=(x_test, y_test))

import matplotlib.pyplot as plt

# Plot training vs. validation loss per epoch ('acc' is the metric key in older Keras)
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(history_dict['acc']) + 1)

plt.plot(epochs, loss_values, 'bo', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
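The curves above only show loss. As a quick, optional follow-up — a sketch using the `model`, `x_test`, and `y_test` already defined, not a step from the original notebook — Keras' `evaluate` returns the held-out loss together with the compiled accuracy metric:
###Code
# Optional check (not in the original notebook): score the fitted model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("Test loss:", test_loss)
print("Test accuracy:", test_acc)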
###Markdown
Convolutional Neural Network for CIFAR-10 dataset image classification:
* 1- Import libraries
* 2- Load dataset
  - shape of dataset
  - Output classes
  - Visualization of input data
* 3- Simple CNN model (First implementation)
  - CNN model
  - Building the model
  - Training (learning)
  - Evaluation
  - Accuracy of training data
  - Accuracy of test data
* 4- Second CNN implementation
  - Normalization
  - New visualization
  - Data Augmentation
  - Xavier initialization
  - CNN model
  - Evaluation
* 5- Prediction
* 6- Import ResNet50
  - Transfer Learning
  - Evaluation

1- Import libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
from keras import datasets
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
from keras.regularizers import l2
from random import randint
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from keras.utils import np_utils
from keras.datasets import cifar10
from keras.preprocessing import image
from PIL import Image
import cv2
###Output
_____no_output_____
###Markdown
2- Load dataset
From the datasets module of Keras, we load the CIFAR-10 image dataset.
###Code
(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 3s 0us/step
170508288/170498071 [==============================] - 3s 0us/step
###Markdown
- Information of the dataset
###Code
print("training X shape:" + str(X_train.shape))
print("training y shape:" + str(y_train.shape))
print("testing X shape:" + str(X_test.shape))
print("testing y shape:" + str(y_test.shape))
###Output
training X shape:(50000, 32, 32, 3)
training y shape:(50000, 1)
testing X shape:(10000, 32, 32, 3)
testing y shape:(10000, 1)
###Markdown
- Output classes
###Code
num_classes = 10
classes = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]

# One-hot encode the labels
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
###Output
_____no_output_____
###Markdown
- Data visualization before processing
Showing one random image from the training set.
###Code
# Pick a random index into the 50,000 training images (upper bound fixed to stay in range)
img = randint(0, 49999)
plt.imshow(X_train[img])
plt.show()
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
3- CNN model (raw implementation)
Based on VGG, and without any additional processing (batch normalization, augmentation), we implement the model.
[Figure: CNN architecture diagram (embedded base64 image elided)]
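Since the architecture diagram is elided above, here is a rough, illustrative sketch of the kind of plain VGG-style stack the text describes — no batch normalization and no augmentation. The filter counts and the dense head are assumptions, not the notebook's exact model; it reuses only the layers imported earlier:
###Code
# Illustrative sketch only: a plain VGG-style CNN (no BatchNorm, no augmentation).
# Filter counts and the dense head are assumptions, not the notebook's exact model.
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32, 32, 3)))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])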
f0P1A71t9QAU4oU3VGl+rRQU5AV7vcBkz7RmK+3GPBN6GAfQB79F/XwXw3ioUpXNd0X33dDaqTnp9cV86ZhugNBGNuafRUZSIVmcy0FCEensWDkaGYOuqZYgxGxGqDUa0RQ+rxhc6n13Q++7ApDefRc6l73pUVH2p3frvte+Omf5n/8Nnr4L66nbh+6JecaSi3pGGWkeWeKsoQT5KUGB3VxsGMLa+R33CsM3WOnTWFKOpJPdHRsweXbcL2FTGqz+gb9YJQF22qE9anYnoLMtGZU4S6gpzsW7xIkRbzYg0mRFptcKoCYQ+yA8xoUbMmjYVm1cvxLkjsaL/6pttpxpw++v+59/fB9Q+oIKcr1WA1zrSUOPIQrk9u8eFkFHz4gfO/AQSvitJVH4McP4T9Z9fXfOcfIko4VNdyvi+U9PIUJ93HqjPRld5FmoKElGRl4p9ISYEeOyBIUgDq8GCUGsIAvz8ER0eht07t2DbxtXwcNuAuNOHFAt0H20/xXjTd/pL//0qHlr97aC0w7XYSoCrgrGw1ZEuvuA1juwrAJekVt3o6uqiBH5zgKugvrpW3V7oGqe6yPWlmh2xrTAJXaUpqMm/jLaKfJTlpuKzjz6ASaODNliH4CC9lKBAHcxmMz6e9j5io0KxavlCxJ05gjpnprgV9qV266v9pf+++yYnrvfcVZfBqwVjeW1XvFPoYljjzECVMxtljlwlV5UL4EyDck0VSnutokKh7kX9kt7w7plBXX6LKsgfxJqNq5arf7/4c4tfJn0zFR9NucapNC7bga6AlTkX0F1dgPLcJOzZtAZ6fx8E+fkjJioWWo0REZGxCA2Lwu7d7lixfCk0wf7w9tiFo9/uE4Bf/b23+/p6HeTW3lfun9+pXH+nteLjfjsT0e3eZ//1vfpg7/7YfyycehD7h8rVa0Gc53hPjMa8okJRwugpeAvAr2fEbCrJF+U5/wH/Ue8voOqAAT4PepEGciahoTCpxxdTvadWewrabb2LYilWljfxaHKy0DXwHNBQjIq8ZGTFncKmlZ8jzKxHRIgVFpMZPj5+MJutMBrNmDZtGkKsZkRFhmL96hW4fO6YPKDebXu7x8o93M3S9IofqjIx3+nr21eFSL/6T9CP1D7TXz/4TPg5n6GMXdfkK2NBvFF+bOSkG6GE1NszFR14bwmcPO+xY/byQmm8AcB7Q0Zxf1HdYB6smvfR6EyQojjNX/n9HQUpYOksSEIHiy2hJ1ip1XEJLc5LAvGWkkzkJZwGWmuwZdUihOmDEBsRgkBfL4G1xWKB1WrFqlWr4OHhAa0mCGaTAW7bt+DC8UMCcNVwcbu1TCZ2xQ7BDnBnRfFDVe//9us7/V7lcw9y/+n/7VfGS39b3H5bcLKggMyVL1f45BEZoHr40WmE7cp4HGpDKsSNkKGX3NjBJYH3dYBTiibEVXCpHVGgrcL7RwBXwuf5OWn8klwcPRCBpXOnIzbMhOgwC7SBfjAadDhw4AB2796Nzz//HLGxsQgM8INOGwz3Xdtx+shX/QAX/9fb7/zqc+qv+9vuQe0D/QC/y8HPGe96AG+3uSRvF8QlBLbHfVJRnzCXSY09Fa0VDiye/RH0fu7Q+nohzGKExWyEQa9FSEgI5syZAx8fHzk2GfUwGjTYtmk9Th764i4Brrga3ZnkrUrO/RL4gwqA/t/9YE9e/QC/FwAXNcoViKmDggDvDXEe85wKfUrsDc4UdNUVIUznC5+dG2DV+SHYzwsWg7YH3mvXrhXpOzw8HMHBwYiMCINZr8HWDWtx/Nv9AnD+zzstdwdvQvzKvd/ZsToR3Fmttnd//WDDqP/53f7z6wf4PQY4Iap2xN4A/wG8BfiKoa+uMBXO9His/nwO9oebEGoIgkkXLGoS6r4J7JkzZyIgIAA6nQ5Go158wale2bV1I04cPNAP8Lt8hurz6q+v9N3+tngw2qIf4Hc5+EWa7skGpkjBqiGRKpPehYOi53qX4aG2MFMJjQ/wgj7AS4yXlL6p56bEvX7DJqxbvxGhYRHQaDQICbGId4ouyB/bN63DpZNH+gF+l8+wH1YPBqz6n9OPn1M/wO9y8KtAphqCxyzXAjjfU69VrMbpqHOmo9aZjdkfvY1vDsRC4+8NgyZQdN/Ue2t1BsxfsEjgHazRITIyEnq9FuFWi6hYVi/7HJkJ5/sBfpfPsB8MPwZDf5s8GG3SD/C7HPwqlK8JcFeWvJrMs5ImtrHgMtqKM5Qt5hwZ6KotwfJ5MxCiD0RkiAn64ADxLqGOW6/XC7y9/AIF5GZLCAwGg6hQLAb6iJuxbeNanD58AI2FGajJjQcq89DsTEXCd7GIPxyNsvRzuPhtBC4fiUFHaRbQVIyKzAuozbssr7srcqHbtRpmjw1I/H6vZEFER6Vcw11O2ksypSQf249DYYFAdQGqsi+hOidO3JJ4nrlbanIvob0kHY32RLQUpqCzLBN1+fGozYtDa1GqFJ5rsCVI0FJV9gWgOk+OeT2/S/3fXeU5qC9IRGtRutT8vTzm/fHe+J3qMT/Hc/2weTBg0/+c7v1z6gf4XQKcwTotdpcPJo9VCZwbLLg2qpAc35lngKp8lGdelMhJNFfhmxgrvHZsRqhJh1CzAX6+3ggLC0NoaCh27HTD8tXrEKg1/ADg9EChioXA37FxtQC8pSRbwMbJoTLrImKC3fDeX58FYTzxpcflNYHO91FjExjz2qyz32LBe6Ol1u5cJSA1ua/vgadxzzo4Eo4j7lAUDocHyWfpTyqTVWGaAJWvUZUr8K7MOo/StNOoyDwnecqZ24VRpsxZXp1zETzPmoAn0Al3Qp3wrstPkEmBEwQnFoKZkwN/J4HN9zjxsCbIUecQwPP7+8Fw78HQ36YPRpv2A/weAZwQZ2P2Brg40TsuoTH/AtqLktFa4pK+ucdlpRNrFs9HqF4j0nRQoD8IZ0rZ4RFRmDV3ASh960xWBOtNUCXw6wG8pShDVCmoKoDJYyNWfvo2zn8TgW1LZiJw2wpknz+M4B2roNm5Gv5blmHeu6NRnXsZU18bBs+180USnznxVaSd/BIrZ07FujnvgZL3vHdG4liMHt9Y/QSmhDehysK85ZTG0VSI8oyzAmTUFogUTnBTMqe0TUmc76MyRzxWVEmcgCfUKXFT0lYlb/5P/u+S1DPyPTyHWnuP5E+wE/q8nuDvh82DAZv+53Tvn1M/wO8pwK9EQqnwZsRlbc5ZoDYX9YyIKs4Sn+8oswZumzfCpNFIsI5WqxXpW6c3YuceTyxevgpaowVGa5gA3GilCsUkkL9aAm8ry0VNXgLaS7OB+kL4bV6KM1+G4rUhf0Dm2YPw3bQEyz+ZCovXZnxh8sHJ/RY89/D/luvfG/G8gJJQ9F6/UNQwy2dMxopPpgD1TmxZ9DG+j9KK1E6wUgqmBEyAU/olQKk+KUk9JdI2pWpK16rKhNI5pW6qTCh5d5RmCOz5GRXynBSo4hHJuipfvkNVoVClwu/ka0KdEjqv5W9RJ5F+MNx7MPS36YPRpv0Av4cAb7WluqRBZa8+wpulIf8CWktS0FKSgYaSLDizEj
Fvxoc4EBGJAB9f+Pr6IiIyWqRsStwfz5wNrSkEAVojjCERNwU4Q2QpTaPGLjvaf2XxgzPpFDYt/Bil6ecRGbgLnusWSiHEt3w+A/FHYnEwLAjaXWsEjlS9uK+ei2XTJyHp6D6B9pJpE+C3aTHST30l0jhVGSwEKfXPBC+hSymbQG52JosKhfpvqk1YsxDaVK0Q5PQTp96b76mqFuq1CWr+b4KZsBbVTL1T/r8KbvU6fjd/L1/zN/TD5sGATf9zuvfPqR/g9wzg6fghwAnvi2hxXkRjwUUBeFVePNqq7NB578b29WsQbjbDqNXBzy8A1pAwhEfvxbpNW7F4xRqYw6JgsIZDY1RUKD+WwA3YuUHRgdfaUlBXwIjQNNTmJ6KtJEtgzmNU20D1SqjvNnSU5cg1vLYqJ16uJfgJQTEk1tiA1jKRcAlUvk9I8hzVFarUTbjyPbWmJE1JmwAnyFkIa0IadTaRvgl0HqsGT0KcKhRK5ZSw+X2EN6V+fhelbL7PQoDzHK+hHYGvqQtn3Q/wew+FftA+OG3aD/B7AvB0tNh7AZwJZET6VgAuGzZUZ6Ms6yJSzh3ByoWzsTcsBIHevogMj4JOZ4DBaBZ992fzFyHYYIG/xiAQ5zF14DcCeHt5nkjeNJASzKh1gLpwvm4tzhS9N6HdVZEnEK+3JQvweQ5NJaIGoSRNcGafOyg1pWBVVUFwUsVCaPM6FlWlwWOJKLUlwJFwVKRrQlrUJq0lsMV/J9I4AZ56Yr/ow6kDp4qFenFFck9FyvEDKE6hlB4ncKaxlbpvgpoTB38bf8OZL6wCbp7ntZxE+oHz4ACn/1nd22f1iwA4DX+K8e/26/vfIRQvFBox6ZEiv6cH4ErGQUrg7WUZ6K51Ytuaz6HxdYdZF4wwixWhIZHQ6c2IjN2PxctXYOPWbdCZjDBaLfAJCIQlLNwF8DBFJ240ix5c8UJZidOH94PZDAkygbCd0qoiXRPQRSlnYPXeIpIqgUyplfAl/ErTzqKW+unKHHSWpKIu94JIyfV5cQJi/k8ClKoNHlP/TPhS0iaQm+2JcixugVW5AnLZVSj3kkjkuRcO4lcDBsh3nz5gwZS/DFWMjkXpUlMNwt9EQM99ewS+NHlj/rujgIZC2C8fk/tRVwbUtfPa1bPe7rnXHnVOr+jX+98f7myAPrj9/87u914/p/vdfvfr+38ZAHelM+WPkeX8bdRqw6nBM3dW32UndKSioygDTfmXUZ1xBs15F9DpTEB7YYLsttOQH4/qvHic+DICa5fMExdAepNQ8tYZQ2AIjYXGHI5Zc+dBb9QgMNgDGp03QsMM8PTeA9/gYFij98IYthem0GhER8fCqAvEmqVzcOH7feLd0laYgrrcSwpYbYqrH9URvhs/V9q0Z5JUcgWr6SY7bJfQYb8opd1xEe12qn6YKfFKdkUm3GKRPCdOdbsmRc/f4ohHgz0etQWXUJZ5FqjOBTeypsE2L+47THn1KSQdO4B1cz/A1sWfIvPsYQRsXQ3vDcug370Jx/eGiHfMko8nid5+7juj8E1IALYv/VQ+Q++ZVbPewYqZb+HCt5FyjNZyUf9IjuO8yz2T/72Ggvr/WmyJYFFftzmS0bvw2dsvfY+ukiwpjfmJaLYlgzXKctBemI7O4kx53cSVUEkW2pxpaHWkynF97mVZ3XSXZsukynOysqFajOqtG4wHnlN/14NacwzfVblB+/wc7Xej53PT7+8RPn6Yw/tWOcZ2u+/pZHmTd15U+N5ZAygNpf6PO6spiRKgLbbLaMy7hNaCS2ixx6G5IF6kVEqYbWXZWDrrQ9D7hAE7dBeMjIqB3hIGc8QXmP7Z5/ANDIDJHAyj3gt67R6YrQHQ6ANgDouEZ5ARfrpweAaZ4e0XjMjIcHju3oiUi0cErN0laajJvoCO4jR0lqTL70k98QWORut6BofaKQhvAbftEjptF9FpO48Ou1II8d4AV+H9Q4ArqXAVNVE86m1xqM67gDrev8A+RSCeeHQ/vDd8jo0LpsPovglbF89G/JED0LttwYQXn0ZLYT7+3wED0FyYCQI8xGeruDvSY4YGWertNy+ajoTv94ke323lHLivmS86/ubCdFEJUc3yU0PsZgAvTjwJNBajqSAZPG62pQjA+Zpwb8xPQl1OPDqLs1CbHYfS5DOozrwIlOehIS9BVEfsI1wVMRhL9YWnmoiqo5uNjQcV3Orvvit4uzbuvVkb3ei8+jvupJbf7ppAbvQd1z+nMufO+NUPcEnvqjbindVsRIG4IxHtziQBOYN3mguU8HlKwociddiyfAHCjMGSrIobNNC3OzxqH9w8gvHxzAWI3bcXRmMgwkICYTH5Qm/0gyXciiBjKIIse2GMPgKN9QuYQ/cjJnYfFs3/FMe+jUJLcTrqcxR4d5dlghI/jYrBW5egvUhxbeRv7A3wH0rgKsAVCZxSNVPkUuLmDjssyo5DSupcnmchwFnXFVxCTX4cOiuyUJ51Dk2FqVIuHowSIE/+yzAUpZzDgvcnivRt8dqOj8a8ihBvN2h3bYDPxqVYO+d9cWukfp5+6rFad3wy8TWcOmAVF8j18z7EiX1mrJ79rsCcbpNUD9E//P4CPFXA3OxMQ5M9XTEaOzPRyhUZ7SJFGegszZNk+7W5SWij3aGtGp1lOajJSQRaykSlRWMt1VU0GlNVRHBT3aUaaq8PgP8EEvhdAfBuhD/ls3cCbvUz/QC/BzNobzipkLq9+s7ArT5ESrxNtgS0ORLRWaRI4vxNfJ8DkqBZu2AGDu0LQ5D3boSY9DAajRJhSRXKnPnLYbFEQa/VQRvkjagwLSLDtdDpA2EOD0eAIRKGyCN4a8YqDHp+LF4f/T7eGDUB//7vv8XJIwfEUFmRelK8PVqdySDE4w+G46DVB6i3/0gCl7ZhpCiTbTHcn6qTXuoTJb+5Am+mu+WGE6wVmF+BuCKpKxJ4gyMRqMmTvT1VgGedOyRScknaBVTnJqIqJwkxGi+knPgWxakXUZh8HmiuRO7Fo+LuKF4zTSXorsxH3OEYF6AdyLlwBCknvgQaiuQ9etNQAqeOn3r0+wlwqkHOfx2Ob0ODgfoSRAbuxun9oSjLuAS0VqG1OBt1ecnoKMtDV1k+vNYvxsZ5H+NgmAZoLBOIt5dkiZ2BRlr2l5zzhxDivVnUKJTIbwRvnlP74YNY3z0AH3SA9wr+c+0VcDvsYvs94CqUK0mkpDPcgT7tbjs+jYDMc6KqUnhMeNPljY1r9t4K/50bEG3RYl9UmOQ74eYM0TF7sWzlOqxYsQHfHTkBb/c90Af5IMTkj4hQHYxmA3TmcPga9yEw7Hu8MWUxXhw7G+PfW4inXxqNf/n173D+zHE0FSlStyptoyIb3mvnAi3FYqS8drtwyeaSykWavlrvrYCb6W6ZdIs1Qa7seUmd5RUdeHNhItpK00UCby5y7YdZmCp6cK4OCFtCN/XkN4gK2oPuShs6y/PRYE9DafpFkbypz+4szxXPGfqwE+J8TUjTewZ1TuTHHZX37AknRAdODxqqHe43wMP9d+EvT/wJqHLi3REvYa/OB
6e+iMDq2R+iKPUivrYGIy/uJE7sC8WKT99HZ7lN6vy4k9i0cAZO7jOL9B24dRkMu9fKpMSauWxoxO0H+N1D+kZteDfj/15MQNcen7dmF+B99QP8Ll0JCUyqLSiF82HwmK53hE/amYPYsWqBbNhgDPLB3shQyTboF+APa0gEZn46ByZjKHQaI8xaLayGIBiCfWE0BMNsCUOgMQq6iOOYudwPj700A6+9tRJPvfohBj83Gv8xcBiOfX8UbRU2ATW/n5PJqb16nD1gApqLrgvwKx1agbFqtBQp25XqVrIlFmZK7haBuNO1a7yTA+rK5xh92VWZjaZCpgtIUyBfmArUFKA86wJK0s6Jjpg6bO44X5OXhPLMOAE4dd2iCqkqkPYinAlvqkgokbOmOyShjuZS5F78TiYDAr8w+bSAj21+N4PwZp+9kQ6cEniY304EbFuDr6zBUvttWYW004fhuX4pjB5b8W2YDiOfHSzvTXzlaSyf+R4OmAIw770JKM28BBpuGfFKF06L50YcMHhKKgO0lfcJgF/piz8tqK/3PTd7/jc63w/we6RCuZtZ7EYP6FbO0YBJdzx2EHoPiMtbWQ4cyWdg9t0JU4A7NN5uiLAaEeTnjYjwUBhMRqxeswGbt+xAeGgU/DwI9zBYDRqYdUEw6g3QGiMQZPoSbsEH8fy4pRj06gK8/t5W/HHYW3j69Y/wr394CoePHJeshtShUn1SnXUelt2rgNp88YjpIFR7tnFTZnX+Ts7aapHXVJEIuJnmNhPMU85Ut9WF2VLzNSH+QyncJSW4gnLo1sffofiGK0FAbAv+/4zTX+N4rAF1+QyzZxAPA3GSUZR8WnzRmfeE7o3UAzOBVk9gT3OJfJ5ujPzfRcmnJKmWep6fud8A9928ElmXTuJ/DBiAvISzWLtgJhZ/8h7OHdoP9w3LcebbvXhu4EOozE/D7HcnoKOqEOW5KZg+eRTaSvOxccE0LP14otwbs0PS5TLMd6us4NimbL8blVvpo7/Ua/jspP/16o9qv7zV+kZtcyvn7qZt+gH+nwDg9Juuyjwn4KK6gEEzzBAYf+wrrJg/A9/EhknKWO4yzz0uqf/WG0x498OPER4RA6/d3jgQHYuYEAsMwf6S3MpqCYefJgyB1iOYvdqAR/+yAE+P24BhY9fioWEf4rVJi/E3//AfiE/MQlOFQ7LzUYVzIkaLy9+GojLtFNBgR23WuR8AXB0sDQJjBda9jwXeAu4cVDvzUF2Y4yrZAvUfQlxRX9EvnFGX1EcTsOqyv8ebotaOrZ9PF+McIUzDHPW9BDmBTcgzu6BqsKNhkkE91P/SV53X8n/zNY95D/wMIf5Tq084uG8mgX9pCUKtIwthQZ5AXSn2mgIRwkl7zxac+DJKjqsK0vHd3lBEaLzRWJwH1Jfh+31hWDnrAyQf/0Luc+eyT7FP5y468EsHI3t8728GobsB0P3+rALwK8LErUK793U3a5+bnb+bNugHuIOWeSWzHQcrre805HBgc9DyHKUvDlY1mIR6T1rqKZ1QAlOj9Pg56p0JEP4vXs9jgoSf4eBnFjuCgiDh9fTJVR8gfXvrcy4BpZlAnV38uhty43rOq9ddXVPnTemX383lPyXv+sIsbF65EBHGIFi1QWK45E7yISEhsjXa3HkLsNvTB5FRexFuCYPONwBRFhNiwkPg7+eDsPAYuPuFwd96HP/02FQMHrEaL7/nhT++vATPjFmJkR+sxz8+9BxOnUtGWb6Sl5vGS4v7WlDqpjG1zRaH7qIkyUXeVZwqKh52ZtQ6UZmfgvaqQmTFnUJpTrKU4twUsBTlprlKBopyM1CSn4WawnwUZlxGRV4yyrLixOjIZ8P/xzasyrggvs70ey5LOYOaLAbzFMg5qnRyz30DTnRsJ3Z6rlroK03XRxkEd2C7UD939fP4OV9zAmEkbHFmPDqrnVIXZcQhNHAP0FCKalsaGoqyr1tovyCMbgaZG53/Oe/3Tr5LHVeNefFg6SxKQ1dxOirTzqAk+QRs8cdELca0DhVZl6Rv0RuJKkimh6jMjhNvnfTT34gxm0who98AACAASURBVCo1qtBUV1KOZdUFk2OcOfA5xp2JJ4QPN2o7nruTe1I/cysAV1lFt1D+LvKIAgh/t8o1soq/hSzjcXnGeeRdPCKMIud4f3SZJbN4DT9P9tHgzTZqlCjpNBlfXHHTOYHjn+7M/K38n8yZVGHPALraAXShu7sbnV3AgA4A3XD9dXcAbQ1ory1BY0m+fEjtoOqA+1FdmCY/mDdA4BbEfS83RyDSkMMfyx9ACYyFjcAbZM1CKPM1pTZexwbj/1LPqbDme5wU2AAq+Nko1B0T1AzEqc26IDUh3u5MQXdJxk0fMAEuoeFVBci/fEK8D2JNgfDYth4xoUZw+zOqTbhJg9FkgdtudyxY+LlkGqQrocVgRITFAqs2WHJ9G41mePnpYYk9gffnuePhlxdgyJiteOSNjfjjKyvx3MSNeHrUAvx/f3gRR0/GozhPMfQx37c97ghQzokpGWWJ3wEVGUBRqviks104GLpqCkXtEqr1wc4NK7Ft7TJXWYGt61Zg69pV2LxuldQ83rFhLTavWorta5dg28qFWL9oJiKCdotumm1MX2cGn9DHmceV6efFY6Tw8nFRLcUEbkdhwlGg0QmUZ0nAUaszEV2lGT12gx/1idsAujqY7kdNgDMXTUeVDWV0C6wrhNXfDeW5CSDIm0tzZDKvL8y4Zs3NONTxwf55J+V+3PftfCeBzeu5kuExxxbHVV32RWSf/gavPfGQ+PYHbF0OFhq8CWmmhCDQCXK20aIPx0titg3zPxLDNm0gtKGoajcygCu5zQunydhnfycsb9amt3MvV197KwAXDtXYJC3zq4//TqDttmJ2T5Ad+cbfSBahuURW8iOffhixmt0I2LJUBEOeI7tUgJOTTD+h371ONnS5rwAnUPnDqDulVEcgE7AEuLq05o9X9cuctdBSKhsWcMlOqZezENOp0vDFmZwzODPxMTeIJHeqdciMTaMYr+X77BziR0w1Tv5ldBSmimTAh0RJgTCvzqAK4sZGMj5EJnSifpdGqcqCdGxcPl/2uIwKs0KrCUJ0dLS4DWoMZnw6Z35Prm8msaL7YExEuITX81qD0Qrf4FD4Wg7hzy/OwPCpu/Ds297412dX4HcvLscLU7bhubGf4/kRM5BTUI7qIpsY9LgZA9O2NuQyfWs2OmzxaMk/j5r0k+InzlmcqWfrCnNQV5IPty3roA3wkZzkzEvOQh281aCViYQ1S5jZgHCjDhH6QIRrfLDw47dxJMokAThMltVRxAkjWyRvRhzSsMfoQgKbroypx/aiPo8qkCw05F9CTfY5dJWmoLssHbU55x94CVykweIMdFXZkHn+kExu7LeVuQkiGQmkmfyLsL5WLUbhO4M3B/7N+uf9Pi/xELZEEZJQltUDcErm578IweS/PImitPP4eMJr8N+2Ennxx7Fn7UJ8FRKIrUtn4WCEFuvmT5NjCkjvjBgO701LEbRzDd4fOVwk7g3zPsDsKW/AFn9USYVcXSC8
kJtNV03Nfa69fTte/o4ilY4QRVPqZ14qy+KOJCp3ZuwK61SxGy1OGTZR+g8sJx+VtXehGj+vUAclHxj/M9J3ZtkmpF2pILaI554a5VFhVqhxxYt0g0L0hNJLWQk7brtdx4fON6C/14q1rqVewtkLgAAAyOSURBVHP8kdPOrEEGXskW0RQdw+WDn+DUjrUwV1zApUNbcWjTClw8sE0CXyPeek2yB1kU+eqxndixcgEuHtwCh+qKsInIViA/nOwGWog8uFMpHAQasUq5uN7K+fc3mfc37ueNe6z8Lso4MH5DQOX4G9LtaUwa+DqmDumB7SvmYljvbnjz5WfAGIPbqML9d34bYacBRacOoscfnsbowb1B//I7b78BkgAstWXy+rh3+qJftxewc8UCRZY2H5RdH2mBvAa6Dbtey43HhevVSubx+YM7sHfDconzXDt1EIe3rkXxyQPIBeyS0n9m31b5m9LBaxbOlnmQ9dtQdnqfkkORsAlT68qBDbh8YJPMSxoBdOvQYCJwUxSOhAZJ5GnLix8crY34Fv9jaibampUnG7NoSoWlMnjcZRQ3A1cDnoidyEOxghSLTVYqb2cfc8HX/BVoO9T1aI1Tq9lVe0UYINRdZikmcm9ZgolWNUWpmFBAhknUWn0TT5ccT7oUyO1k8QNaYWRZKD73z+gHunP8FmGZULOhNeWToEvYXiegx6AMBwG1RXjzD29fg49mTwKDgwzK8IbWs6oNAzr2Dush5hbrOGWrUzLMDIqWcnPAKpM/aVXYHywuQH+z7FI+4/4xuk82CQFfUdszy+8iu4QDjJ+nxCllUrnQU2eGwPLCr34sW13Rncn4ENIp2ZztEafiZ8yFhfvNBZXBW56PRZZ5PgJgguWsOh7/xX78jOv/S7/vy77O62fBgcL9biZfOaRQQfm7uGiS68uWRgPpgJsWT8f6BVOAlEvuFZ/n59m/fJ2f53k7/34uulKEmklPfqo1Ku2Xvf5/9M+TPcI8CPYr+dRuVZGwTGiQcNzTS0B2D4OYW1Z+IPUw0ZxAzKFHmoW+HXqgLS0t2VztqSACFhWidh3IAGF2MMc3z8f+laIlHo2MQ7JZPrN/PDr5PAtNkDXGlvggBSfMNTCWncHcsYOlJByfD7EISsIjPH9hEbnqJCdCqM0sIWerlIM6N8RVzi/66MlgEoPaZ1X0wAngrfVAcwbfojg4KA9OEBcLXAFw+lCTHrMc7KSCJd7ZGueKxMcZj1YA76vX6uV3eXRliFnrgNakWEVRe61od0h1dLsW7Smv1Mmzq68CmZBoelDVkNYTpVFpNRJI/fpSUSBMONTCEEhL//A7/lz/6KXMGiuzh206+FiHj+yUqBvZgAN+ax0SbrM8Ji3p2J7NsKpK4TGphG3CIgBJhw5hc7VwyrkVYyFYrtYtYSdyPqNYe40hqzzPe9gYsMmgaI245Pq5UP3561OumwtcYWA2Ba0yAJWWWWgGWCovyvehPgRWkKFcJ9UYyYsN6quE/5rzmES9kK+TCseW10PFRi6YHNj1fgNSDo3EJ/h9f+m6/hFeZz/wOhK2WiniwN/TEDTJAs/fFbeq5fexqAPlYTnRfTqWnPMLP5x1T/n+9phLJj5/P8cPz8fdGfuB/UJDgsDAlu9nyS+2f839+0fopy96nfydYXOl9KNXS/18jm8aITYwCYpzgDTMpMeCT1Z9KPMoF3HBa1AhG3Zen0fIRhFxGeQx5xnnF/GNY5b9yvvE8/L87C/eR8YyPvu69TLXqAHUFHKAuEEcoTZQ1mNCwFQFa2WR4ErebwVxBkm/aARx3tJFImDtqBZfNzNtybopWN2cvwRv0pPTpCj77EDB+m5rAFrzBHD6vlv+//bOhbltG4bj/f5fbHe7ZevaS9LEbvyM32/JsiRKwu73p2jLbeMubXdLuvkOR4mkCBAkQAgErRpyszzRl2Pi+dA2PDSAOXda6TYIXA0obq+8r+uy21eZYuESRE8fYRQ+xu2wpXRww+GeDyojf3j7u/FWQlwn9fnG5X7cNo7AEt5FyoEeFjxiePm3Pa4vAb5O4rm3o67tpl2FFhHrPX34YPP+nY3bN9a5uTKLV6rH/5kny6E+mjxqv5cPbdW9ldsCKx7LglAq8gbXv8mS3w1bqkfkAuF+uDumrXe6vkQbZYQzYuljrQDcE7pFv9OZD8sq1yO1NWuzsfZow5sr4eOtgX99Az98JoQLIE98Fz3Q/of4Bx54CLDB/DXaXkI5vAi0Mt4APIJXHMoZXP+q/nFNWffdL/q7XY5Mh/kBH+HntPVWz3BfrofHORTaDf0FX4CQ97OmzAU2a0nhNUBfmTNKib3u3Fi5WyjNVhPJ0W7cswUnqKcDBQMQshfNerbst63ErffYUdgmbfC3xrTH2zDjEOZ6+OvZy7y9s0nrnfac9pOOdAn6Y/fY1snKcjNWPDgx4hzQYf+ClD0M5kQ4r4HFzZsrrjM8HuCGlhkhxN1b7W8RaUPY8NFrYjkKnO9hlh5Q5C41l+ws2S4Vq5osJ5YsR/IfocTwXaKkApD3WgGfGAqSSAEU48P7K4smfVsNPkqhsiHIdUiZHL2bt6pDvUWvJQsZpa5Vctiy8f2fGiDC+Bi0S7wBfzQb2KR3Z5POvT6kHM0ebdq/V7wwH1aO5yNL1mPbjPvKp160GNp61NO4QPvusWPp/NG6769sP+nrw8rcF+upIFuMLB73bCV6W7YdPlg06irNFsOLNHLwhL4wIZl4TE76Sj6HF2hz2r6xajs3t5ro3qKlLTp3Vm6IgW7bpHUtXJvBR9GXL8c2Z4F6+GB87II2AY4Swy8ZDIN7LZSX+PcSyljg4Q1GQKAffvHKS5/gkzYd07X4Rz5zBGOAMp4FaIf5Qt9RBOQhvPAaIC8A7Qd4CTz4J2kIMkQK7wDwwUO8BItB28xFdlhNJRfE2SNP44c7fXiclHvkZ/jx2tBnyBeRQbtp34eyPrbVJgfiaJt53jTonuof8jvkI+a1vmBxQCewOMy790ddwWIy69xpMcnXU+kP5DaZ9qSkOaXJgs0iRf+K9UjnOkL/VuOebWdDW4wH3lPicHvjOXH2JnzP2KelVaWzKj8oSqDIYlnklsfa3GSD07LNJ0DeKwbcRmzisgcAsKkbrj9J081SZVUS1XUjhf3otYZXm2TtX3G4drG//hpvWFHlvqrdWO7gec7eRBk2l31Zle0/qZtasduYFbkgXFfxziynX7mV0dYsiY28dLU4q2+l37S+OH7B35ZudQKUU6B2YA7s5Idzu+URP+2LljQRPuGGjgyrwdN4TMkHf+BTwMM9gI8P+Br//uXyMl56fkBzHpml2zP+FNHCzwMXS2ForlCP/sJH0ktAewFCX3k+QMj7mdPAn8A30iBr6Kam/CA3YV8vT2y/WZzKQ35Ikf3AR9LAwzCG4Ah5T6W0EfQEuuOA3H+uS4p4a9IbWWL75cxcxNgz/2udmm/9BqV824wv8z/xz+Tok9QOPFOVCoNET2N4vyGMECjwpFSV4GiRY5kfXSwNPzlEBwjEvsZUA++sSPeW7Xda1RwDUJ0WsWjNBAguJ
) ###Markdown - Build model ###Code # Input size:
img_rows = 32
img_cols = 32
channels = 3
# Regularization:
reg=None
# Initial number of filters:
num_filters=32
# Activation function:
ac='relu'
# Optimizer (Adam)
adm=Adam(lr=0.001,decay=0, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
opt=adm
# Drop-out:
drop_dense=0.5
drop_conv=0
model = Sequential()
model.add(Conv2D(num_filters, (3, 3), activation=ac, kernel_regularizer=reg, input_shape=(img_rows, img_cols, channels),padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 16x16xnum_filters
model.add(Dropout(drop_conv))
model.add(Conv2D(2*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(2*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 8x8x(2*num_filters)
model.add(Dropout(drop_conv))
model.add(Conv2D(4*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(4*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 4x4x(4*num_filters)
model.add(Dropout(drop_conv))
model.add(Flatten())
model.add(Dense(512, activation=ac,kernel_regularizer=reg))
model.add(BatchNormalization())
model.add(Dropout(drop_dense))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer=opt) ###Output /usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. 
"The `lr` argument is deprecated, use `learning_rate` instead.") ###Markdown Number of parameters: ###Code model.summary() ###Output Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 32, 32, 32) 896 _________________________________________________________________ batch_normalization (BatchNo (None, 32, 32, 32) 128 _________________________________________________________________ conv2d_1 (Conv2D) (None, 32, 32, 32) 9248 _________________________________________________________________ batch_normalization_1 (Batch (None, 32, 32, 32) 128 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 16, 16, 32) 0 _________________________________________________________________ dropout (Dropout) (None, 16, 16, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 16, 16, 64) 18496 _________________________________________________________________ batch_normalization_2 (Batch (None, 16, 16, 64) 256 _________________________________________________________________ conv2d_3 (Conv2D) (None, 16, 16, 64) 36928 _________________________________________________________________ batch_normalization_3 (Batch (None, 16, 16, 64) 256 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 8, 8, 64) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 8, 8, 64) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 8, 8, 128) 73856 _________________________________________________________________ batch_normalization_4 (Batch (None, 8, 8, 128) 512 _________________________________________________________________ conv2d_5 (Conv2D) (None, 8, 8, 128) 147584 _________________________________________________________________ batch_normalization_5 (Batch (None, 8, 8, 128) 512 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 4, 4, 128) 0 _________________________________________________________________ flatten (Flatten) (None, 2048) 0 _________________________________________________________________ dense (Dense) (None, 512) 1049088 _________________________________________________________________ batch_normalization_6 (Batch (None, 512) 2048 _________________________________________________________________ dropout_3 (Dropout) (None, 512) 0 _________________________________________________________________ dense_1 (Dense) (None, 10) 5130 ================================================================= Total params: 1,345,066 Trainable params: 1,343,146 Non-trainable params: 1,920 _________________________________________________________________ ###Markdown Summary of model precedure: ###Code tf.keras.utils.plot_model(model, to_file="model.png") ###Output _____no_output_____ ###Markdown - Train model **Batch size: 128, Epochs: 100** ###Code history = model.fit(X_train, y_train, batch_size=128, epochs=100, validation_data=(X_test, y_test)) ###Output Epoch 1/100 391/391 [==============================] - 17s 37ms/step - loss: 1.5035 - accuracy: 0.5122 - val_loss: 1.3294 - val_accuracy: 0.5573 Epoch 2/100 391/391 [==============================] - 14s 35ms/step - loss: 0.9139 - accuracy: 0.6845 - 
val_loss: 0.9305 - val_accuracy: 0.6830 Epoch 3/100 391/391 [==============================] - 14s 35ms/step - loss: 0.6982 - accuracy: 0.7582 - val_loss: 0.8087 - val_accuracy: 0.7243 Epoch 4/100 391/391 [==============================] - 14s 35ms/step - loss: 0.5702 - accuracy: 0.8008 - val_loss: 0.7081 - val_accuracy: 0.7574 Epoch 5/100 391/391 [==============================] - 14s 35ms/step - loss: 0.4745 - accuracy: 0.8359 - val_loss: 0.7023 - val_accuracy: 0.7678 Epoch 6/100 391/391 [==============================] - 14s 35ms/step - loss: 0.3993 - accuracy: 0.8607 - val_loss: 0.6951 - val_accuracy: 0.7757 Epoch 7/100 391/391 [==============================] - 14s 35ms/step - loss: 0.3280 - accuracy: 0.8840 - val_loss: 0.7011 - val_accuracy: 0.7820 Epoch 8/100 391/391 [==============================] - 14s 35ms/step - loss: 0.2731 - accuracy: 0.9045 - val_loss: 0.7750 - val_accuracy: 0.7758 Epoch 9/100 391/391 [==============================] - 14s 35ms/step - loss: 0.2283 - accuracy: 0.9190 - val_loss: 0.7398 - val_accuracy: 0.7936 Epoch 10/100 391/391 [==============================] - 14s 35ms/step - loss: 0.1874 - accuracy: 0.9345 - val_loss: 0.9847 - val_accuracy: 0.7463 Epoch 11/100 391/391 [==============================] - 14s 35ms/step - loss: 0.1513 - accuracy: 0.9461 - val_loss: 0.8872 - val_accuracy: 0.7870 Epoch 12/100 391/391 [==============================] - 14s 35ms/step - loss: 0.1453 - accuracy: 0.9487 - val_loss: 0.9534 - val_accuracy: 0.7793 Epoch 13/100 391/391 [==============================] - 14s 35ms/step - loss: 0.1166 - accuracy: 0.9597 - val_loss: 0.8475 - val_accuracy: 0.8003 Epoch 14/100 391/391 [==============================] - 14s 35ms/step - loss: 0.1120 - accuracy: 0.9602 - val_loss: 1.0408 - val_accuracy: 0.7727 Epoch 15/100 391/391 [==============================] - 14s 35ms/step - loss: 0.1005 - accuracy: 0.9651 - val_loss: 0.9832 - val_accuracy: 0.7820 Epoch 16/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0896 - accuracy: 0.9692 - val_loss: 0.9513 - val_accuracy: 0.7963 Epoch 17/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0896 - accuracy: 0.9685 - val_loss: 0.9583 - val_accuracy: 0.7953 Epoch 18/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0916 - accuracy: 0.9681 - val_loss: 0.9913 - val_accuracy: 0.7916 Epoch 19/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0805 - accuracy: 0.9720 - val_loss: 0.9916 - val_accuracy: 0.7903 Epoch 20/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0797 - accuracy: 0.9720 - val_loss: 1.0206 - val_accuracy: 0.7808 Epoch 21/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0693 - accuracy: 0.9754 - val_loss: 1.0696 - val_accuracy: 0.7841 Epoch 22/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0542 - accuracy: 0.9805 - val_loss: 0.9777 - val_accuracy: 0.8068 Epoch 23/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0714 - accuracy: 0.9756 - val_loss: 1.0927 - val_accuracy: 0.7868 Epoch 24/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0631 - accuracy: 0.9780 - val_loss: 1.1878 - val_accuracy: 0.7752 Epoch 25/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0685 - accuracy: 0.9754 - val_loss: 1.0782 - val_accuracy: 0.7990 Epoch 26/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0457 - accuracy: 0.9848 - val_loss: 1.0334 - val_accuracy: 0.8055 Epoch 27/100 
391/391 [==============================] - 14s 35ms/step - loss: 0.0510 - accuracy: 0.9824 - val_loss: 1.1227 - val_accuracy: 0.7933 Epoch 28/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0479 - accuracy: 0.9826 - val_loss: 1.1104 - val_accuracy: 0.7954 Epoch 29/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0530 - accuracy: 0.9810 - val_loss: 1.1311 - val_accuracy: 0.7984 Epoch 30/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0480 - accuracy: 0.9826 - val_loss: 1.0391 - val_accuracy: 0.8100 Epoch 31/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0552 - accuracy: 0.9814 - val_loss: 1.0742 - val_accuracy: 0.8072 Epoch 32/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0459 - accuracy: 0.9841 - val_loss: 1.1048 - val_accuracy: 0.7995 Epoch 33/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0401 - accuracy: 0.9852 - val_loss: 1.1516 - val_accuracy: 0.7922 Epoch 34/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0469 - accuracy: 0.9838 - val_loss: 1.0878 - val_accuracy: 0.8040 Epoch 35/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0456 - accuracy: 0.9843 - val_loss: 1.2495 - val_accuracy: 0.7909 Epoch 36/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0349 - accuracy: 0.9879 - val_loss: 1.1021 - val_accuracy: 0.8124 Epoch 37/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0326 - accuracy: 0.9886 - val_loss: 1.1721 - val_accuracy: 0.8036 Epoch 38/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0407 - accuracy: 0.9869 - val_loss: 1.1667 - val_accuracy: 0.8047 Epoch 39/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0439 - accuracy: 0.9848 - val_loss: 1.1386 - val_accuracy: 0.8030 Epoch 40/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0343 - accuracy: 0.9882 - val_loss: 1.2436 - val_accuracy: 0.7880 Epoch 41/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0432 - accuracy: 0.9853 - val_loss: 1.1782 - val_accuracy: 0.8029 Epoch 42/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0242 - accuracy: 0.9913 - val_loss: 1.1192 - val_accuracy: 0.8145 Epoch 43/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0275 - accuracy: 0.9911 - val_loss: 1.1788 - val_accuracy: 0.8108 Epoch 44/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0399 - accuracy: 0.9868 - val_loss: 1.2830 - val_accuracy: 0.7909 Epoch 45/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0387 - accuracy: 0.9866 - val_loss: 1.2780 - val_accuracy: 0.7980 Epoch 46/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0355 - accuracy: 0.9874 - val_loss: 1.2352 - val_accuracy: 0.8039 Epoch 47/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0262 - accuracy: 0.9914 - val_loss: 1.2076 - val_accuracy: 0.7989 Epoch 48/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0298 - accuracy: 0.9896 - val_loss: 1.1743 - val_accuracy: 0.8107 Epoch 49/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0270 - accuracy: 0.9904 - val_loss: 1.1919 - val_accuracy: 0.8085 Epoch 50/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0255 - accuracy: 0.9913 - val_loss: 1.1761 - val_accuracy: 0.8140 Epoch 51/100 391/391 [==============================] - 14s 
35ms/step - loss: 0.0305 - accuracy: 0.9894 - val_loss: 1.2419 - val_accuracy: 0.8069 Epoch 52/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0336 - accuracy: 0.9885 - val_loss: 1.2388 - val_accuracy: 0.8036 Epoch 53/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0310 - accuracy: 0.9893 - val_loss: 1.2214 - val_accuracy: 0.8128 Epoch 54/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0259 - accuracy: 0.9910 - val_loss: 1.2563 - val_accuracy: 0.8073 Epoch 55/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0281 - accuracy: 0.9903 - val_loss: 1.3035 - val_accuracy: 0.7974 Epoch 56/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0229 - accuracy: 0.9923 - val_loss: 1.2904 - val_accuracy: 0.8080 Epoch 57/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0224 - accuracy: 0.9920 - val_loss: 1.2884 - val_accuracy: 0.8098 Epoch 58/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0261 - accuracy: 0.9912 - val_loss: 1.3058 - val_accuracy: 0.8036 Epoch 59/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0221 - accuracy: 0.9927 - val_loss: 1.1905 - val_accuracy: 0.8190 Epoch 60/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0251 - accuracy: 0.9913 - val_loss: 1.2612 - val_accuracy: 0.8080 Epoch 61/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0240 - accuracy: 0.9919 - val_loss: 1.2729 - val_accuracy: 0.7999 Epoch 62/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0262 - accuracy: 0.9914 - val_loss: 1.2271 - val_accuracy: 0.8140 Epoch 63/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0258 - accuracy: 0.9912 - val_loss: 1.3934 - val_accuracy: 0.8013 Epoch 64/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0235 - accuracy: 0.9923 - val_loss: 1.2781 - val_accuracy: 0.8094 Epoch 65/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0209 - accuracy: 0.9925 - val_loss: 1.2417 - val_accuracy: 0.8181 Epoch 66/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0212 - accuracy: 0.9925 - val_loss: 1.2880 - val_accuracy: 0.8099 Epoch 67/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0225 - accuracy: 0.9923 - val_loss: 1.3255 - val_accuracy: 0.8095 Epoch 68/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0235 - accuracy: 0.9921 - val_loss: 1.3492 - val_accuracy: 0.8070 Epoch 69/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0214 - accuracy: 0.9925 - val_loss: 1.3266 - val_accuracy: 0.8119 Epoch 70/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0186 - accuracy: 0.9934 - val_loss: 1.3326 - val_accuracy: 0.8137 Epoch 71/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0190 - accuracy: 0.9935 - val_loss: 1.2055 - val_accuracy: 0.8176 Epoch 72/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0208 - accuracy: 0.9932 - val_loss: 1.3183 - val_accuracy: 0.8110 Epoch 73/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0229 - accuracy: 0.9919 - val_loss: 1.3162 - val_accuracy: 0.8090 Epoch 74/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0211 - accuracy: 0.9930 - val_loss: 1.2677 - val_accuracy: 0.8140 Epoch 75/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0187 - accuracy: 0.9935 - 
val_loss: 1.3902 - val_accuracy: 0.8002 Epoch 76/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0176 - accuracy: 0.9939 - val_loss: 1.2830 - val_accuracy: 0.8071 Epoch 77/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0214 - accuracy: 0.9928 - val_loss: 1.2419 - val_accuracy: 0.8191 Epoch 78/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0186 - accuracy: 0.9939 - val_loss: 1.2832 - val_accuracy: 0.8131 Epoch 79/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0179 - accuracy: 0.9939 - val_loss: 1.3228 - val_accuracy: 0.8131 Epoch 80/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0154 - accuracy: 0.9945 - val_loss: 1.2993 - val_accuracy: 0.8151 Epoch 81/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0168 - accuracy: 0.9942 - val_loss: 1.2962 - val_accuracy: 0.8169 Epoch 82/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0198 - accuracy: 0.9934 - val_loss: 1.3340 - val_accuracy: 0.8097 Epoch 83/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0161 - accuracy: 0.9947 - val_loss: 1.3283 - val_accuracy: 0.8104 Epoch 84/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0178 - accuracy: 0.9941 - val_loss: 1.3074 - val_accuracy: 0.8122 Epoch 85/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0179 - accuracy: 0.9943 - val_loss: 1.3557 - val_accuracy: 0.8136 Epoch 86/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0175 - accuracy: 0.9940 - val_loss: 1.2966 - val_accuracy: 0.8182 Epoch 87/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0183 - accuracy: 0.9936 - val_loss: 1.3410 - val_accuracy: 0.8152 Epoch 88/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0164 - accuracy: 0.9943 - val_loss: 1.3201 - val_accuracy: 0.8202 Epoch 89/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0153 - accuracy: 0.9950 - val_loss: 1.2950 - val_accuracy: 0.8191 Epoch 90/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0147 - accuracy: 0.9952 - val_loss: 1.4269 - val_accuracy: 0.8091 Epoch 91/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0156 - accuracy: 0.9947 - val_loss: 1.3328 - val_accuracy: 0.8133 Epoch 92/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0156 - accuracy: 0.9945 - val_loss: 1.3320 - val_accuracy: 0.8198 Epoch 93/100 391/391 [==============================] - 14s 36ms/step - loss: 0.0174 - accuracy: 0.9938 - val_loss: 1.3392 - val_accuracy: 0.8126 Epoch 94/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0141 - accuracy: 0.9947 - val_loss: 1.4214 - val_accuracy: 0.8045 Epoch 95/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0132 - accuracy: 0.9955 - val_loss: 1.3792 - val_accuracy: 0.8171 Epoch 96/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0214 - accuracy: 0.9929 - val_loss: 1.3372 - val_accuracy: 0.8132 Epoch 97/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0149 - accuracy: 0.9952 - val_loss: 1.4022 - val_accuracy: 0.8171 Epoch 98/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0150 - accuracy: 0.9947 - val_loss: 1.3535 - val_accuracy: 0.8157 Epoch 99/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0164 - accuracy: 0.9946 - val_loss: 1.3515 - val_accuracy: 0.8135 Epoch 
100/100 391/391 [==============================] - 14s 35ms/step - loss: 0.0155 - accuracy: 0.9949 - val_loss: 1.3446 - val_accuracy: 0.8182 ###Markdown - Training accuracy ###Code train_acc = model.evaluate(X_train,y_train,batch_size=128)
train_acc ###Output 391/391 [==============================] - 5s 12ms/step - loss: 0.0071 - accuracy: 0.9976 ###Markdown - Test accuracy ###Code test_acc = model.evaluate(X_test, y_test, batch_size=128)
test_acc ###Output 79/79 [==============================] - 1s 11ms/step - loss: 1.3446 - accuracy: 0.8182 ###Markdown Evaluation ###Code plt.plot(history.history['loss'], label='Train_loss')
plt.plot(history.history['val_loss'], label = 'val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.ylim([0, 2])
plt.legend(loc='lower right')
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right') ###Output _____no_output_____ ###Markdown 4- Second implementation In this implementation, we use "**Data Augmentation**", "**Data Normalization**", "**Regularization**", and finally apply "**Parameter initialization**" - Normalizing input 1- convert images into float type (needed for the arithmetic below)\2- compute the mean of the data\3- compute the standard deviation\4- normalize ###Code # convert to float
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
# compute mean
mean = np.mean(X_train)
# standard deviation
std = np.std(X_train)
# normalization
X_test = (X_test-mean)/std
X_train=(X_train-mean)/std ###Output _____no_output_____ ###Markdown - Data visualization after Normalization Let's see the normalized image ###Code plt.imshow(X_train[img])
plt.show() ###Output Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). 
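###Markdown The clipping warning above appears because the standardized pixel values fall outside the [0..1] float range that imshow expects. As a minimal illustrative sketch (this cell was not part of the original run), the `mean` and `std` computed above can undo the normalization for display: ###Code # Undo (x - mean) / std, then cast back to uint8 so imshow
# treats the array as ordinary RGB pixel data.
img_restored = (X_train[img] * std + mean).astype(np.uint8)
plt.imshow(img_restored)
plt.show()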
###Markdown - Augmentation set up ###Code datagen = ImageDataGenerator(
    rotation_range = 25, # rotate images by up to 25 degrees
    shear_range = 0.2, # Shear angle
    horizontal_flip = True, # Horizontal flipping
    width_shift_range = 0.2, # width shift
    height_shift_range = 0.2, # height shift
    zoom_range = 0.1 # zoom >> [1-0.1, 1+0.1]
)
datagen.fit(X_train)
# some data visualization after Augmentation
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9):
    for i in range(0, 9):
        plt.subplot(330 + 1 + i)
        plt.imshow(X_batch[i].astype(np.uint8))
    plt.show()
    break ###Output _____no_output_____ ###Markdown - CNN model We implement the same model, changing only the "Regularization" parameters ###Code # L2 or "ridge" regularization:
reg2=l2(1e-4)
num_filters2=32
ac2='relu'
adm2=Adam(lr=0.001,decay=0, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
opt2=adm2
drop_dense2=0.2
drop_conv2=0.1
# Define Xavier initialization method:
initializer = tf.keras.initializers.GlorotNormal()
model2 = Sequential()
model2.add(Conv2D(num_filters2, (3, 3), activation=ac2, kernel_regularizer=reg2, input_shape=(img_rows, img_cols, channels),padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(Conv2D(num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 16x16xnum_filters
model2.add(Dropout(drop_conv2))
model2.add(Conv2D(2*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(Conv2D(2*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 8x8x(2*num_filters)
model2.add(Dropout(drop_conv2))
model2.add(Conv2D(4*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(Conv2D(4*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 4x4x(4*num_filters)
model2.add(Dropout(drop_conv2))
model2.add(Flatten())
model2.add(Dense(512, activation=ac2,kernel_regularizer=reg2,kernel_initializer=initializer)) # Add Xavier initialization to the Dense layer
model2.add(BatchNormalization())
model2.add(Dropout(drop_dense2))
model2.add(Dense(num_classes, activation='softmax'))
model2.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer=opt2) ###Output /usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead. "The `lr` argument is deprecated, use `learning_rate` instead.") ###Markdown - Train model with "Data Augmentation" ###Code history2=model2.fit_generator(datagen.flow(X_train, y_train, batch_size=128),steps_per_epoch = len(X_train) / 128, epochs=100, validation_data=(X_test, y_test)) ###Output /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1972: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators. 
warnings.warn('`Model.fit_generator` is deprecated and ' ###Markdown - Test accuracy ###Code model2_test_acc=model2.evaluate(X_test,y_test,batch_size=128)
model2_test_acc ###Output 79/79 [==============================] - 1s 15ms/step - loss: 0.6127 - accuracy: 0.8716 ###Markdown - Training accuracy ###Code model2_train_acc=model2.evaluate(X_train,y_train,batch_size=128)
model2_train_acc ###Output 391/391 [==============================] - 5s 13ms/step - loss: 0.4914 - accuracy: 0.9090 ###Markdown Evaluation ###Code plt.plot(history2.history['accuracy'], label='accuracy')
plt.plot(history2.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.plot(history2.history['loss'], label='loss')
plt.plot(history2.history['val_loss'], label = 'val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.ylim([0, 1])
plt.legend(loc='lower right') ###Output _____no_output_____ ###Markdown 5- Prediction Select a random image from the test set and predict its class ###Code image = randint(0, 9999) # random index into the 10000 test images
y_pred = model2.predict(X_test)
y_classes = [np.argmax(element) for element in y_pred]
plt.imshow(X_test[image])
plt.show()
classes[y_classes[image]] ###Output _____no_output_____ ###Markdown Define callback ###Code # Adding callbacks.
from keras import callbacks
callbacks = [
    callbacks.EarlyStopping(monitor='acc', patience=3, restore_best_weights=True),
    callbacks.TerminateOnNaN()
] ###Output _____no_output_____ ###Markdown 6- Import ResNet50 and pre-trained weights ###Code from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
conv_base = ResNet50(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.summary() ###Output _____no_output_____ ###Markdown Transfer learning: changing the classifier part to fit our dataset ###Code model = models.Sequential()
model.add(conv_base) # use the pre-trained ResNet50 base
model.add(layers.Flatten())
model.add(layers.BatchNormalization())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(layers.BatchNormalization())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(layers.BatchNormalization())
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer=opt2, loss='categorical_crossentropy', metrics=['acc'])
model.summary() ###Output Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resnet50 (Functional) (None, 1, 1, 2048) 23587712
_________________________________________________________________
flatten_2 (Flatten) (None, 2048) 0
_________________________________________________________________
batch_normalization_14 (Batc (None, 2048) 8192
_________________________________________________________________
dense_4 (Dense) (None, 128) 262272
_________________________________________________________________
dropout_8 (Dropout) (None, 128) 0
_________________________________________________________________
batch_normalization_15 (Batc (None, 128) 512
_________________________________________________________________
dense_5 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_9 (Dropout) (None, 64) 0
_________________________________________________________________
batch_normalization_16 (Batc (None, 64) 256
_________________________________________________________________
dense_6 (Dense) 
(None, 10) 650 ================================================================= Total params: 23,867,850 Trainable params: 23,810,250 Non-trainable params: 57,600 _________________________________________________________________ ###Markdown Training model ###Code history = model.fit(X_train, y_train, epochs=100, batch_size=256, validation_data=(X_test, y_test), callbacks=callbacks) history_dict = history.history loss_values = history_dict['loss'] val_loss_values = history_dict['val_loss'] epochs = range(1, len(loss_values) + 1) plt.figure(figsize=(14, 4)) plt.subplot(1,2,1) plt.plot(epochs, loss_values, 'bo', label='Training Loss') plt.plot(epochs, val_loss_values, 'b', label='Validation Loss') plt.title('Training and Validation Loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() acc = history_dict['acc'] val_acc = history_dict['val_acc'] epochs = range(1, len(loss_values) + 1) plt.subplot(1,2,2) plt.plot(epochs, acc, 'bo', label='Training Accuracy', c='orange') plt.plot(epochs, val_acc, 'b', label='Validation Accuracy', c='orange') plt.title('Training and Validation Accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() ###Output _____no_output_____ ###Markdown CIFAR10 - 10 categories of 32 x 32 sized color images - 50000 training and 10000 testing samples The full CIFAR dataset contains 80 million tiny colored images. - The main page: https://www.cs.toronto.edu/%7Ekriz/cifar.html - About CIFAR: https://www.cs.toronto.edu/%7Ekriz/learning-features-2009-TR.pdf ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt np.random.seed(42) import tensorflow as tf tf.random.set_seed(42) import tensorflow.keras as keras import os from functools import partial from sklearn.model_selection import StratifiedShuffleSplit from tensorflow.keras.datasets.cifar10 import load_data from tensorflow.keras import Sequential from tensorflow.keras.layers import InputLayer, Dense, BatchNormalization, Activation, \ Dropout, AlphaDropout from tensorflow.keras.optimizers import Nadam, SGD from tensorflow.keras.callbacks import EarlyStopping import tensorflow.keras.backend as K from sklearn.metrics import accuracy_score ###Output _____no_output_____ ###Markdown You can download the data from the original link above and load it like this ... ###Code def unpickle(file): import pickle with open(file, 'rb') as fo: dict = pickle.load(fo, encoding='bytes') return dict file_dicts = {} for i in range(1, 6): batch = f'data_batch_{i}' filename = os.path.join('.', 'data', 'cifar', 'cifar-10-batches-py', batch) file_dicts[i-1] = unpickle(filename) def append_data(data, type_): a = data[0][type_] for i in range(1, 5): a = np.r_[a, data[i][type_]] return a X_full = append_data(file_dicts, b'data') y_full = append_data(file_dicts, b'labels') X_full.shape, y_full.shape test_file = os.path.join('.', 'data', 'cifar', 'cifar-10-batches-py', 'test_batch') test_file_dict = unpickle(test_file) X_test = test_file_dict[b'data'] y_test = test_file_dict[b'labels'] len(X_test), len(y_test) # Use StratifiedShuffleSplit to split training data into training and validation. # This will ensure that the training and validation data has an equal proportion of classes. # split = StratifiedShuffleSplit(n_splits=1, train_size=0.8, test_size=0.2) # We don't need to specify both test/train. # sizes, but it is good for clarity. 
for train_idx, test_idx in split.split(X_full, y_full):
    X_train, X_val = X_full[train_idx], X_full[test_idx]
    y_train, y_val = y_full[train_idx], y_full[test_idx]
X_train.shape, len(y_train), X_test.shape, len(y_test)
# Validate that the split shows the correct proportion of classes
pd.Series(y_train).value_counts(normalize=True), pd.Series(y_val).value_counts(normalize=True) ###Output _____no_output_____ ###Markdown ... or an easier way is to use TensorFlow's load_data() function ###Code (X_train, y_train), (X_test, y_test) = load_data()
X_train = X_train.reshape(X_train.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
y_train = y_train.flatten()
y_test = y_test.flatten()
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# Validate that the split shows the correct proportion of classes
pd.Series(y_train).value_counts(normalize=True), pd.Series(y_test).value_counts(normalize=True)
# Use StratifiedShuffleSplit to split training data into training and validation.
# This will ensure that the training and validation data have an equal proportion of classes.
#
split = StratifiedShuffleSplit(n_splits=1, train_size=0.8, test_size=0.2) # We don't need to specify both test/train.
                                                                          # sizes, but it is good for clarity.
for train_idx, test_idx in split.split(X_train, y_train):
    X_train_1, X_val = X_train[train_idx], X_train[test_idx]
    y_train_1, y_val = y_train[train_idx], y_train[test_idx]
X_train = X_train_1
y_train = y_train_1
X_train.shape, y_train.shape, X_val.shape, y_val.shape ###Output _____no_output_____ ###Markdown Create a model with Batch Normalization layers ###Code class MCDropout(Dropout):
    def call(self, inputs):
        return super().call(inputs, training=True)
# When training = True, the Dropout class from which we inherit
# MCDropout drops some of the cells in the layer.
# When cells are dropped, the model we're training is different.
# We get the benefit of running thousands of models on the data.
# The final model is also more robust to small changes in input.
# MC Dropout acts as a regularizer.
def create_model(with_bn=False, initialization='he_normal', hidden_activation='elu', dropout_rate=None, mc_dropout=False):
    def add_dropout_layer(layer_num=None, dropout_rate=None, mc_dropout=False):
        assert(layer_num is not None)
        if dropout_rate is not None:
            if layer_num > 16: # For the last 3 layers, add a dropout layer
                if mc_dropout == True:
                    model.add(MCDropout(dropout_rate))
                else:
                    model.add(AlphaDropout(dropout_rate))
    model = Sequential([
        InputLayer(input_shape=[3072])
    ])
    if with_bn:
        model.add(BatchNormalization()) # Add BN layer after input
    NormalDense = partial(Dense, # Put all your common init here.
                          kernel_initializer=initialization,
                          use_bias=False if with_bn else True) # BN has bias, so remove it
                                                               # from the Dense layer.
    for layer_num in range(20):
        model.add(NormalDense(100))
        if with_bn: # Add a BatchNormalization layer after each
            model.add(BatchNormalization()) # Dense layer
        model.add(Activation(hidden_activation)) # Add an activation function. 
This is needed # because we did not add it when we created the # partial dense layer add_dropout_layer(layer_num, dropout_rate, mc_dropout) # Add dropout layer model.add(Dense(10, activation='softmax')) # Output layer return model model = create_model(with_bn=False) model.summary() model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) early_stopping_cb = EarlyStopping(patience=10, restore_best_weights=True) history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[early_stopping_cb]) model = create_model(with_bn=True) model.summary() model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) early_stopping_cb = EarlyStopping(patience=10, restore_best_weights=True) history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[early_stopping_cb]) # Standardize data so you can use it with SELU and # get a net that self-normalizes X_train = (X_train - np.mean(X_train)) / np.std(X_train) X_val = (X_val - np.mean(X_val)) / np.std(X_val) X_test = (X_test - np.mean(X_test)) / np.std(X_test) # Verify data is standardized X_train[:1], X_val[:1], X_test[:1] model = create_model(with_bn=False, initialization='lecun_normal', hidden_activation='selu') model.summary() model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) early_stopping_cb = EarlyStopping(patience=10, restore_best_weights=True) history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[early_stopping_cb]) y_pred = model.predict(X_val) y_pred_classes = np.argmax(y_pred, axis=1) print(f'accuracy for model: {accuracy_score(y_val, y_pred_classes)}') models = [] for i in range(4): dropout_rate = 0.1 * (i + 1) # Try different dropout rates for your model. # For a self-normalizing model: # - use only Dense layers # - standardize the input features # - use LeCun initialization + SELU activation model = create_model(with_bn=False, initialization='lecun_normal', hidden_activation='selu', dropout_rate=dropout_rate) # model.summary() model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) early_stopping_cb = EarlyStopping(patience=10, restore_best_weights=True) print(f' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>') print(f' >>>>>>>>>>>>>>>>> For dropout_rate: {dropout_rate}') print(f' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>') history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[early_stopping_cb]) models.append(model) for i in range(4): model = models[i] y_pred = model.predict(X_test) y_pred_classes = np.argmax(y_pred, axis=1) print(f'accuracy for model {i+1}: {accuracy_score(y_test, y_pred_classes)}') ###Output accuracy for model 1: 0.4238 accuracy for model 2: 0.3692 accuracy for model 3: 0.329 accuracy for model 4: 0.3258 ###Markdown Without dropout, with LeCun initialization, and SELU activation, we get accuracy of 0.48. 
With dropout, the best model gives us an accuracy of 0.42. Now let's run the same model with MC Dropout at inference time to see if we get better results ###Code model = models[0]
y_probas = np.stack([model(X_test, training=True) for _ in range(100)])
y_proba = np.mean(y_probas, axis=0)
y_pred = np.argmax(y_proba, axis=1)
print(f'accuracy for model with dropout: {accuracy_score(y_test, y_pred)}') ###Output accuracy for model with dropout: 0.4209 ###Markdown MC Dropout does not give us any better results here, but the model is regularized, so it should hold up better on different test sets Now let's rebuild the top of the model with MCDropout layers and retrain ###Code model.layers
model_mc_dropout = tf.keras.models.clone_model(model) # Everything in the model except the weights is cloned.
model_mc_dropout.set_weights(model.get_weights()) # Cloning does not clone weights, so set them instead
model_mc_dropout.layers[0].get_weights()
# We remove the last 10 model layers including the output layer.
# The output layer is a single layer.
# The layers before that are groups of (Dense + Activation + AlphaDropout).
# So we remove the single output layer, and 3 such groups.
# We found this out by looking at the model.layers
print(f'Number of layers in model: {len(model_mc_dropout.layers)}')
for _ in range(10):
    model_mc_dropout.pop()
print(f'Number of layers in model: {len(model_mc_dropout.layers)}')
# Add (3 Dense layers + MC Dropout) layers
for i in range(3):
    model_mc_dropout.add(Dense(100, kernel_initializer='lecun_normal', activation='selu'))
    model_mc_dropout.add(MCDropout(0.1))
# Add output layer
model_mc_dropout.add(Dense(10, activation='softmax'))
model_mc_dropout.summary()
model = model_mc_dropout
model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[early_stopping_cb])
y_probas = np.stack([model(X_test, training=True) for _ in range(100)])
y_proba = np.mean(y_probas, axis=0)
y_pred = np.argmax(y_proba, axis=1)
print(f'accuracy for model with dropout: {accuracy_score(y_test, y_pred)}') ###Output accuracy for model with dropout: 0.4929 ###Markdown With MC dropout, accuracy improved from 0.42 to 0.49, an increase of 0.07. 
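(For reference, the MC estimate above is the average of $T$ stochastic forward passes, $\hat{p}(y \mid x) \approx \frac{1}{T} \sum_{t=1}^{T} f(x; \text{dropout mask}_t)$, with $T = 100$ in the code.)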
This means the error decreased from 0.58 to 0.51, a drop of 7 percentage points - not bad for a small change in the model ###Code class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
        self.rates = []
        self.losses = []
    def on_batch_end(self, batch, logs):
        self.rates.append(K.get_value(self.model.optimizer.lr))
        self.losses.append(logs["loss"])
        K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
    init_weights = model.get_weights()
    iterations = len(X) // batch_size * epochs
    factor = np.exp(np.log(max_rate / min_rate) / iterations)
    init_lr = K.get_value(model.optimizer.lr)
    K.set_value(model.optimizer.lr, min_rate)
    exp_lr = ExponentialLearningRate(factor)
    history = model.fit(X, y, epochs=epochs, batch_size=batch_size, callbacks=[exp_lr])
    K.set_value(model.optimizer.lr, init_lr)
    model.set_weights(init_weights)
    return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
    plt.plot(rates, losses)
    plt.gca().set_xscale('log')
    plt.hlines(min(losses), min(rates), max(rates))
    plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
    plt.xlabel("Learning rate")
    plt.ylabel("Training Loss")
# These two classes can be used when your optimizer uses momentum.
# In this case, when your learning rate is going down,
# momentum should be going up, and vice versa.
#
class LinearMomentum(keras.callbacks.Callback):
    def __init__(self, factor):
        self.factor = factor
    def on_batch_end(self, batch, logs):
        K.set_value(self.model.optimizer.momentum, K.get_value(self.model.optimizer.momentum) + self.factor)
# The finder ramps the learning rate up from min_rate, so start with a
# high momentum and decrease it linearly toward end_mom
def find_learning_rate_with_momentum(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10, start_mom=0.95, end_mom=0.85):
    init_weights = model.get_weights()
    iterations = len(X) // batch_size * epochs
    factor = np.exp(np.log(max_rate / min_rate) / iterations)
    factor_mom = (end_mom - start_mom) / iterations
    init_lr = K.get_value(model.optimizer.lr)
    init_mom = K.get_value(model.optimizer.momentum)
    K.set_value(model.optimizer.lr, min_rate)
    K.set_value(model.optimizer.momentum, start_mom)
    exp_lr = ExponentialLearningRate(factor)
    lin_mom = LinearMomentum(factor_mom)
    history = model.fit(X, y, epochs=epochs, batch_size=batch_size, callbacks=[exp_lr, lin_mom])
    K.set_value(model.optimizer.lr, init_lr)
    K.set_value(model.optimizer.momentum, init_mom)
    model.set_weights(init_weights)
    return exp_lr.rates, exp_lr.losses
model = create_model(with_bn=False, initialization='lecun_normal', hidden_activation='selu')
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer=SGD(), metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses) ###Output Train on 40000 samples 40000/40000 [==============================] - 9s 229us/sample - loss: nan - accuracy: 0.1650 ###Markdown The loss in the plot bottoms out at an LR of around 3e-3, so select the 1cycle max rate to be 5e-3 ###Code # This shows how you can ramp the LR up and then back down from the halfway point.
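# (Schedule shape, inferred from the defaults in the class below: the rate
# starts at start_rate = max_rate / 10, rises linearly to max_rate by the
# middle of training, falls linearly back to start_rate, and then anneals
# toward last_rate = start_rate / 1000 over the last ~10% of iterations.)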
#
class OneCycleScheduler(keras.callbacks.Callback):
    def __init__(self, iterations, max_rate, start_rate=None, last_iterations=None, last_rate=None):
        self.iterations = iterations
        self.max_rate = max_rate
        self.start_rate = start_rate or max_rate / 10
        self.last_iterations = last_iterations or iterations // 10 + 1
        self.half_iteration = (iterations - self.last_iterations) // 2
        self.last_rate = last_rate or self.start_rate / 1000
        self.iteration = 0
        self.rates = []
    def _interpolate(self, iter1, iter2, rate1, rate2):
        return ((rate2 - rate1) * (iter2 - self.iteration) / (iter2 - iter1) + rate1)
    def on_batch_begin(self, batch, logs):
        if self.iteration < self.half_iteration:
            rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
        elif self.iteration < 2 * self.half_iteration:
            rate = self._interpolate(self.half_iteration, 2 * self.half_iteration, self.max_rate, self.start_rate)
        else:
            rate = self._interpolate(2 * self.half_iteration, self.iterations, self.start_rate, self.last_rate)
        rate = max(rate, self.last_rate)
        self.iteration += 1
        self.rates.append(rate)
        K.set_value(self.model.optimizer.lr, rate)
class OneCycleSchedulerWithMomentum(keras.callbacks.Callback):
    def __init__(self, iterations, max_rate, start_rate=None, last_iterations=None, last_rate=None, start_momentum=0.95, end_momentum=0.85):
        self.iterations = iterations
        self.max_rate = max_rate
        self.start_rate = start_rate or max_rate / 10
        self.last_iterations = last_iterations or iterations // 10 + 1
        self.half_iteration = (iterations - self.last_iterations) // 2
        self.last_rate = last_rate or self.start_rate / 1000
        self.start_momentum = start_momentum
        self.end_momentum = end_momentum
        self.iteration = 0
        self.rates = []
        self.momentums = []
    def _interpolate(self, iter1, iter2, rate1, rate2):
        return ((rate2 - rate1) * (iter2 - self.iteration) / (iter2 - iter1) + rate1)
    def on_batch_begin(self, batch, logs):
        if self.iteration < self.half_iteration:
            rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
            momentum = self._interpolate(0, self.half_iteration, self.start_momentum, self.end_momentum)
        elif self.iteration < 2 * self.half_iteration:
            rate = self._interpolate(self.half_iteration, 2 * self.half_iteration, self.max_rate, self.start_rate)
            momentum = self._interpolate(self.half_iteration, 2 * self.half_iteration, self.end_momentum, self.start_momentum)
        else:
            rate = self._interpolate(2 * self.half_iteration, self.iterations, self.start_rate, self.last_rate)
            momentum = self.start_momentum # hold momentum high while the rate anneals
        rate = max(rate, self.last_rate)
        self.iteration += 1
        self.rates.append(rate)
        self.momentums.append(momentum)
        K.set_value(self.model.optimizer.lr, rate)
        K.set_value(self.model.optimizer.momentum, momentum)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=5e-3)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, validation_data=(X_val, y_val), callbacks=[onecycle]) ###Output Train on 40000 samples, validate on 10000 samples Epoch 1/25 40000/40000 [==============================] - 9s 227us/sample - loss: 1.8345 - accuracy: 0.3392 - val_loss: 1.7438 - val_accuracy: 0.3762 Epoch 2/25 40000/40000 [==============================] - 9s 216us/sample - loss: 1.6224 - accuracy: 0.4205 - val_loss: 1.6452 - val_accuracy: 0.4029 Epoch 3/25 40000/40000 [==============================] - 9s 221us/sample - loss: 1.5203 - accuracy: 0.4575 - val_loss: 1.5755 - 
val_accuracy: 0.4413 Epoch 4/25 40000/40000 [==============================] - 9s 220us/sample - loss: 1.4461 - accuracy: 0.4864 - val_loss: 1.5515 - val_accuracy: 0.4521 Epoch 5/25 40000/40000 [==============================] - 9s 221us/sample - loss: 1.3826 - accuracy: 0.5072 - val_loss: 1.5309 - val_accuracy: 0.4605 Epoch 6/25 40000/40000 [==============================] - 9s 230us/sample - loss: 1.3250 - accuracy: 0.5287 - val_loss: 1.5138 - val_accuracy: 0.4670 Epoch 7/25 40000/40000 [==============================] - 9s 224us/sample - loss: 1.2740 - accuracy: 0.5500 - val_loss: 1.4984 - val_accuracy: 0.4741 Epoch 8/25 40000/40000 [==============================] - 9s 225us/sample - loss: 1.2236 - accuracy: 0.5643 - val_loss: 1.4970 - val_accuracy: 0.4774 Epoch 9/25 40000/40000 [==============================] - 9s 219us/sample - loss: 1.1754 - accuracy: 0.5847 - val_loss: 1.5019 - val_accuracy: 0.4795 Epoch 10/25 40000/40000 [==============================] - 9s 227us/sample - loss: 1.1315 - accuracy: 0.6028 - val_loss: 1.5027 - val_accuracy: 0.4806 Epoch 11/25 40000/40000 [==============================] - 9s 220us/sample - loss: 1.0888 - accuracy: 0.6183 - val_loss: 1.5068 - val_accuracy: 0.4870 Epoch 12/25 40000/40000 [==============================] - 9s 224us/sample - loss: 1.0589 - accuracy: 0.6309 - val_loss: 1.5139 - val_accuracy: 0.4822 Epoch 13/25 40000/40000 [==============================] - 9s 221us/sample - loss: 1.0662 - accuracy: 0.6267 - val_loss: 1.5312 - val_accuracy: 0.4799 Epoch 14/25 40000/40000 [==============================] - 9s 225us/sample - loss: 1.0727 - accuracy: 0.6229 - val_loss: 1.5415 - val_accuracy: 0.4771 Epoch 15/25 40000/40000 [==============================] - 9s 222us/sample - loss: 1.0797 - accuracy: 0.6179 - val_loss: 1.5594 - val_accuracy: 0.4761 Epoch 16/25 40000/40000 [==============================] - 9s 222us/sample - loss: 1.0859 - accuracy: 0.6151 - val_loss: 1.5717 - val_accuracy: 0.4787 Epoch 17/25 40000/40000 [==============================] - 9s 224us/sample - loss: 1.0894 - accuracy: 0.6148 - val_loss: 1.5577 - val_accuracy: 0.4774 Epoch 18/25 40000/40000 [==============================] - 9s 223us/sample - loss: 1.0916 - accuracy: 0.6143 - val_loss: 1.5642 - val_accuracy: 0.4780 Epoch 19/25 40000/40000 [==============================] - 9s 222us/sample - loss: 1.0878 - accuracy: 0.6163 - val_loss: 1.5834 - val_accuracy: 0.4722 Epoch 20/25 40000/40000 [==============================] - 9s 221us/sample - loss: 1.0883 - accuracy: 0.6143 - val_loss: 1.5824 - val_accuracy: 0.4692 Epoch 21/25 40000/40000 [==============================] - 9s 221us/sample - loss: 1.0850 - accuracy: 0.6136 - val_loss: 1.5825 - val_accuracy: 0.4689 Epoch 22/25 40000/40000 [==============================] - 9s 224us/sample - loss: 1.0788 - accuracy: 0.6149 - val_loss: 1.5868 - val_accuracy: 0.4723 Epoch 23/25 40000/40000 [==============================] - 9s 223us/sample - loss: 1.0459 - accuracy: 0.6288 - val_loss: 1.5373 - val_accuracy: 0.4885 Epoch 24/25 40000/40000 [==============================] - 9s 220us/sample - loss: 0.8814 - accuracy: 0.6924 - val_loss: 1.5288 - val_accuracy: 0.4958 Epoch 25/25 40000/40000 [==============================] - 9s 224us/sample - loss: 0.8252 - accuracy: 0.7157 - val_loss: 1.5543 - val_accuracy: 0.4997 ###Markdown If you compare the losses for each epoch above with the plot earlier where we scanned the learning rate, you will see that we should get a loss around 1.6. 
A loss around 1.6 is indeed what we get when we use the 1cycle scheduler. Two things do not combine well with 1cycle learning-rate scheduling:
- Dropout: dropout effectively changes the network from batch to batch, whereas here we want to cycle the learning rate through a fixed sequence for one and the same network.
- Early stopping: if training stops early, the pass through the learning-rate cycle is never completed, so the final low-rate annealing phase is never reached.

###Code
min(onecycle.rates), max(onecycle.rates)

plt.scatter(range(len(onecycle.rates)), onecycle.rates)
plt.axis([0, 40000, -0.001, 0.001])

###Output
_____no_output_____

###Markdown Let's see whether He initialization and ELU activation also give good results with the 1cycle learning schedule

###Code
model = create_model(with_bn=False, initialization='he_normal', hidden_activation='elu')
model.summary()

model.compile(loss='sparse_categorical_crossentropy', optimizer=SGD(), metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)

n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=1e-2)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
                    validation_data=(X_val, y_val),
                    callbacks=[onecycle])

###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/25
40000/40000 [==============================] - 8s 199us/sample - loss: 1.9359 - accuracy: 0.2977 - val_loss: 1.7862 - val_accuracy: 0.3573
Epoch 2/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.6839 - accuracy: 0.3961 - val_loss: 1.6741 - val_accuracy: 0.4022
Epoch 3/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.5663 - accuracy: 0.4415 - val_loss: 1.6091 - val_accuracy: 0.4245
Epoch 4/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.4854 - accuracy: 0.4708 - val_loss: 1.6187 - val_accuracy: 0.4230
Epoch 5/25
40000/40000 [==============================] - 8s 189us/sample - loss: 1.4186 - accuracy: 0.4965 - val_loss: 1.5697 - val_accuracy: 0.4398
Epoch 6/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.3539 - accuracy: 0.5212 - val_loss: 1.5529 - val_accuracy: 0.4473
Epoch 7/25
40000/40000 [==============================] - 8s 190us/sample - loss: 1.2966 - accuracy: 0.5390 - val_loss: 1.5513 - val_accuracy: 0.4511
Epoch 8/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.2434 - accuracy: 0.5598 - val_loss: 1.5513 - val_accuracy: 0.4634
Epoch 9/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.1878 - accuracy: 0.5814 - val_loss: 1.5701 - val_accuracy: 0.4601
Epoch 10/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.1360 - accuracy: 0.6003 - val_loss: 1.5776 - val_accuracy: 0.4650
Epoch 11/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0830 - accuracy: 0.6190 - val_loss: 1.5947 - val_accuracy: 0.4650
Epoch 12/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0451 - accuracy: 0.6310 - val_loss: 1.6071 - val_accuracy: 0.4615
Epoch 13/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0549 - accuracy: 0.6294 - val_loss: 1.6198 - val_accuracy: 0.4561
Epoch 14/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0660 - accuracy: 0.6270 - val_loss: 1.6284 - val_accuracy: 0.4539
Epoch 15/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.0781 - accuracy: 0.6209 - val_loss: 1.6473 - val_accuracy: 0.4570
Epoch 16/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0855 - accuracy: 0.6175 - val_loss: 1.6479 - val_accuracy: 0.4502
Epoch 17/25
40000/40000 [==============================] - 8s 190us/sample - loss: 1.1014 - accuracy: 0.6110 - val_loss: 1.6548 - val_accuracy: 0.4451
Epoch 18/25
40000/40000 [==============================] - 8s 193us/sample - loss: 1.1064 - accuracy: 0.6084 - val_loss: 1.6474 - val_accuracy: 0.4491
Epoch 19/25
40000/40000 [==============================] - 8s 195us/sample - loss: 1.1065 - accuracy: 0.6090 - val_loss: 1.6511 - val_accuracy: 0.4523
Epoch 20/25
40000/40000 [==============================] - 8s 197us/sample - loss: 1.1175 - accuracy: 0.6029 - val_loss: 1.6577 - val_accuracy: 0.4509
Epoch 21/25
40000/40000 [==============================] - 8s 197us/sample - loss: 1.1148 - accuracy: 0.6042 - val_loss: 1.6570 - val_accuracy: 0.4558
Epoch 22/25
40000/40000 [==============================] - 8s 198us/sample - loss: 1.1082 - accuracy: 0.6090 - val_loss: 1.6363 - val_accuracy: 0.4595
Epoch 23/25
40000/40000 [==============================] - 8s 197us/sample - loss: 1.0846 - accuracy: 0.6204 - val_loss: 1.6134 - val_accuracy: 0.4677
Epoch 24/25
40000/40000 [==============================] - 8s 199us/sample - loss: 0.8926 - accuracy: 0.6906 - val_loss: 1.6364 - val_accuracy: 0.4766
Epoch 25/25
40000/40000 [==============================] - 8s 197us/sample - loss: 0.8218 - accuracy: 0.7157 - val_loss: 1.6736 - val_accuracy: 0.4821

###Markdown For comparison: validation accuracy with lecun_normal initialization and SELU activation was ~0.50, while he_normal initialization with ELU activation reaches ~0.48. Next, the same architecture is trained with the Nadam optimizer:

###Code
model = create_model(with_bn=False, initialization='he_normal', hidden_activation='elu')
model.summary()

model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)

n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=3e-3)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
                    validation_data=(X_val, y_val),
                    callbacks=[onecycle])

###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/25
40000/40000 [==============================] - 14s 339us/sample - loss: 2.1133 - accuracy: 0.2258 - val_loss: 2.0406 - val_accuracy: 0.2417
Epoch 2/25
40000/40000 [==============================] - 13s 333us/sample - loss: 1.9822 - accuracy: 0.2693 - val_loss: 2.0095 - val_accuracy: 0.2601
Epoch 3/25
40000/40000 [==============================] - 13s 334us/sample - loss: 1.9424 - accuracy: 0.2866 - val_loss: 1.9505 - val_accuracy: 0.2855
Epoch 4/25
40000/40000 [==============================] - 13s 332us/sample - loss: 1.9127 - accuracy: 0.2995 - val_loss: 1.9405 - val_accuracy: 0.2900
Epoch 5/25
40000/40000 [==============================] - 13s 336us/sample - loss: 1.8880 - accuracy: 0.3131 - val_loss: 1.9094 - val_accuracy: 0.2998
Epoch 6/25
40000/40000 [==============================] - 13s 332us/sample - loss: 1.8657 - accuracy: 0.3228 - val_loss: 1.9069 - val_accuracy: 0.3069
Epoch 7/25
40000/40000 [==============================] - 13s 332us/sample - loss: 1.8434 - accuracy: 0.3280 - val_loss: 1.8869 - val_accuracy: 0.3135
Epoch 8/25
40000/40000 [==============================] - 13s 331us/sample - loss: 1.8225 - accuracy: 0.3403 - val_loss: 1.8950 -
val_accuracy: 0.3106 Epoch 9/25 40000/40000 [==============================] - 13s 329us/sample - loss: 1.8022 - accuracy: 0.3464 - val_loss: 1.8822 - val_accuracy: 0.3210 Epoch 10/25 40000/40000 [==============================] - 13s 331us/sample - loss: 1.7780 - accuracy: 0.3568 - val_loss: 1.8823 - val_accuracy: 0.3157 Epoch 11/25 40000/40000 [==============================] - 13s 333us/sample - loss: 1.7518 - accuracy: 0.3674 - val_loss: 1.8831 - val_accuracy: 0.3208 Epoch 12/25 40000/40000 [==============================] - 13s 335us/sample - loss: 1.7305 - accuracy: 0.3758 - val_loss: 1.8946 - val_accuracy: 0.3177 Epoch 13/25 40000/40000 [==============================] - 13s 334us/sample - loss: 1.7425 - accuracy: 0.3717 - val_loss: 1.8937 - val_accuracy: 0.3195 Epoch 14/25 40000/40000 [==============================] - 13s 328us/sample - loss: 1.7537 - accuracy: 0.3645 - val_loss: 1.9076 - val_accuracy: 0.3128 Epoch 15/25 40000/40000 [==============================] - 13s 330us/sample - loss: 1.7734 - accuracy: 0.3595 - val_loss: 1.9130 - val_accuracy: 0.3129 Epoch 16/25 40000/40000 [==============================] - 13s 331us/sample - loss: 1.7917 - accuracy: 0.3530 - val_loss: 1.9033 - val_accuracy: 0.3151 Epoch 17/25 40000/40000 [==============================] - 13s 331us/sample - loss: 1.8107 - accuracy: 0.3419 - val_loss: 1.8962 - val_accuracy: 0.3110 Epoch 18/25 40000/40000 [==============================] - 13s 336us/sample - loss: 1.8204 - accuracy: 0.3410 - val_loss: 1.8952 - val_accuracy: 0.3110 Epoch 19/25 40000/40000 [==============================] - 14s 342us/sample - loss: 1.8242 - accuracy: 0.3397 - val_loss: 1.8963 - val_accuracy: 0.3165 Epoch 20/25 40000/40000 [==============================] - 14s 340us/sample - loss: 1.8216 - accuracy: 0.3391 - val_loss: 1.8516 - val_accuracy: 0.3403 Epoch 21/25 40000/40000 [==============================] - 13s 337us/sample - loss: 1.8013 - accuracy: 0.3480 - val_loss: 1.8293 - val_accuracy: 0.3382 Epoch 22/25 40000/40000 [==============================] - 14s 339us/sample - loss: 1.7748 - accuracy: 0.3640 - val_loss: 1.7980 - val_accuracy: 0.3611 Epoch 23/25 40000/40000 [==============================] - 14s 338us/sample - loss: 1.7314 - accuracy: 0.3799 - val_loss: 1.7387 - val_accuracy: 0.3799 Epoch 24/25 40000/40000 [==============================] - 14s 339us/sample - loss: 1.6158 - accuracy: 0.4224 - val_loss: 1.7086 - val_accuracy: 0.3953 Epoch 25/25 40000/40000 [==============================] - 14s 338us/sample - loss: 1.5818 - accuracy: 0.4337 - val_loss: 1.6981 - val_accuracy: 0.3978 ###Markdown Since RMSProp optimizer uses momentum, let's try decreasing the momentum as we increase the learning rate, and increasing the momentum as we decrease the learning rate ###Code model = create_model(with_bn=False, initialization='he_normal', hidden_activation='elu') model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) batch_size = 32 rates, losses = find_learning_rate_with_momentum(model, X_train, y_train, epochs=1, batch_size=batch_size) plot_lr_vs_loss(rates, losses) n_epochs = 25 onecycle = OneCycleSchedulerWithMomentum(len(X_train) // batch_size * n_epochs, max_rate=1e-3) history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size, validation_data=(X_val, y_val), callbacks=[onecycle]) min(onecycle.momentums), max(onecycle.momentums), min(onecycle.rates), max(onecycle.rates) plt.scatter(onecycle.rates, onecycle.momentums); plt.gca().set_xlabel('rates'); 
plt.gca().set_ylabel('momentums'); ###Output _____no_output_____ ###Markdown You can use beta_1 and beta_2 for the Nadam optimizer by changing the 1cycle code above for beta_1 and beta_2. This is just the same as for the RMSProp using momentum ###Code model = create_model(with_bn=False, initialization='he_normal', hidden_activation='elu') model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam', metrics=['accuracy']) K.get_value(model.optimizer.beta_1) K.get_value(model.optimizer.beta_2) ###Output _____no_output_____ ###Markdown **INITIALIZATION:**- I use these three lines of code on top of my each notebooks because it will help to prevent any problems while reloading the same project. And the third line of code helps to make visualization within the notebook. ###Code #@ INITIALIZATION: %reload_ext autoreload %autoreload 2 %matplotlib inline ###Output _____no_output_____ ###Markdown **DOWNLOADING THE DEPENDENCIES:**- I have downloaded all the libraries and dependencies required for the project in one particular cell. ###Code #@ DOWNLOADING THE LIBRARIES AND DEPENDENCIES: # !pip install -U d2l # !apt-get install p7zip-full import os, collections, math import shutil import pandas as pd import torch import torchvision from torch import nn from d2l import torch as d2l PROJECT_ROOT_DIR = "." ID = "RECOG" IMAGE_PATH = os.path.join(PROJECT_ROOT_DIR, "Images", ID) if not os.path.isdir(IMAGE_PATH): os.makedirs(IMAGE_PATH) def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300): path = os.path.join(IMAGE_PATH, fig_id + "." + fig_extension) print("Saving Figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(path, format=fig_extension, dpi=resolution) ###Output _____no_output_____ ###Markdown **OBTAINING AND ORGANIZING THE DATASET:**- I have used google colab for this project so the process of downloading and reading the data might be different in other platforms. I will use [**CIFAR-10 Object Recognition in Images**](https://www.kaggle.com/c/cifar-10) for this project. The dataset is divided into training set and test set. The training set contains 50,000 images. The images contains the categories such as planes, cars, birds, cats, deer, dogs, frogs, horses, boats and trucks. ###Code #@ ORGANIZING THE DATASET: UNCOMMENT BELOW: # os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/MyDrive/Kaggle" # %cd /content/drive/MyDrive/Kaggle # !kaggle competitions download -c cifar-10 #@ OBTAINING THE DATASET: d2l.DATA_HUB["CIFAR10"] = (d2l.DATA_URL + "kaggle_cifar10_tiny.zip", '2068874e4b9a9f0fb07ebe0ad2b29754449ccacd') # Initializing the Dataset. demo = True # Initialization. if demo: data_dir = d2l.download_extract("CIFAR10") # Initialization. else: data_dir = "../Data/CIFAR10/" # Initializaiton. ###Output _____no_output_____ ###Markdown **ORGANIZING THE DATASET:**- I will organize the datasets to facilitate model training and testing. ###Code #@ ORGANIZING THE DATASET: def read_csv_labels(fname): # Returning names to Labels. with open(fname, "r") as f: lines = f.readlines()[1:] # Reading Lines. tokens = [l.rstrip().split(",") for l in lines] return dict(((name, label) for name, label in tokens)) labels = read_csv_labels(os.path.join(data_dir, "trainLabels.csv")) # Implementation. print(f"Training Examples: {len(labels)}") # Number of Training Examples. print(f"Classes: {len(set(labels.values()))}") # Number of Classes. #@ ORGANIZING THE DATASET: def copyfile(filename, target_dir): # Copying File into Target Directory. 
os.makedirs(target_dir, exist_ok=True) shutil.copy(filename, target_dir) #@ ORGANIZING THE DATASET: def reorg_train_valid(data_dir, labels, valid_ratio): n = collections.Counter(labels.values()).most_common()[-1][1] # Number of examples per class. n_valid_per_label = max(1, math.floor(n * valid_ratio)) label_count = {} for train_file in os.listdir(os.path.join(data_dir, "train")): label = labels[train_file.split(".")[0]] fname = os.path.join(data_dir, "train", train_file) copyfile(fname, os.path.join(data_dir, "train_valid_test", "train_valid", label)) # Copy to Train Valid. if label not in label_count or label_count[label] < n_valid_per_label: copyfile(fname, os.path.join(data_dir, "train_valid_test", "valid", label)) # Copy to Valid. label_count[label] = label_count.get(label, 0) + 1 else: copyfile(fname, os.path.join(data_dir, "train_valid_test", "train", label)) # Copy to Train. return n_valid_per_label ###Output _____no_output_____ ###Markdown - The reorg test function is used to organize the testing set to facilitate the reading during prediction. ###Code #@ ORGANIZING THE DATASET: def reorg_test(data_dir): # Initialization. for test_file in os.listdir(os.path.join(data_dir, "test")): copyfile(os.path.join(data_dir, "test", test_file), os.path.join(data_dir, "train_valid_test", "test", "unknown")) # Implementation of Function. #@ OBTAINING AND ORGANIZING THE DATASET: def reorg_cifar10_data(data_dir, valid_ratio): # Obtaining and Organizing the Dataset. labels = read_csv_labels(os.path.join(data_dir, "trainLabels.csv")) # Implementation of Function. reorg_train_valid(data_dir, labels, valid_ratio) # Implementation of Function. reorg_test(data_dir) # Implementation of Function. #@ INITIALIZING THE PARAMETERS: batch_size = 4 if demo else 128 # Initializing Batchsize. valid_ratio = 0.1 # Initialization. reorg_cifar10_data(data_dir, valid_ratio) # Obtaining and Organizing the Dataset. ###Output _____no_output_____ ###Markdown **IMAGE AUGMENTATION:**- I will use image augmentation to cope with overfitting. The images are flipped at random and normalized. ###Code #@ IMPLEMENTATION OF IMAGE AUGMENTATION: TRAINING DATASET: transform_train = torchvision.transforms.Compose([ # Initialization. torchvision.transforms.Resize(40), # Resizing both Height and Width. torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0), ratio=(1.0, 1.0)), # Cropping and Resizing. torchvision.transforms.RandomHorizontalFlip(), # Randomly Flipping Image. torchvision.transforms.ToTensor(), # Converting into Tensors. torchvision.transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010])]) # Normalization of RGB Channels. #@ IMPLEMENTATION OF IMAGE AUGMENTATION: TEST DATASET: transform_test = torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), # Converting into Tensors. torchvision.transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2023, 0.1994, 0.2010])]) # Normalization of RGB Channels. ###Output _____no_output_____ ###Markdown **READING THE DATASET:**- I will create the image folder dataset instance to read the organized dataset containing original image files where each example includes the image and label. ###Code #@ READING THE DATASET: train_ds, train_valid_ds = [torchvision.datasets.ImageFolder( os.path.join(data_dir, "train_valid_test", folder), transform = transform_train) for folder in ["train", "train_valid"]] # Initializing Training Dataset. 
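# Optional sanity check -- a sketch that was not in the original notebook:
# torchvision.datasets.ImageFolder (used below) expects one sub-folder per
# class under each split, so we can count classes and files after reorganizing.
for split in ["train", "valid", "train_valid"]:
    split_dir = os.path.join(data_dir, "train_valid_test", split)
    if os.path.isdir(split_dir):
        n_files = sum(len(files) for _, _, files in os.walk(split_dir))
        print(split, "->", len(os.listdir(split_dir)), "classes,", n_files, "files")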
#@ READING THE DATASET: valid_ds, test_ds = [torchvision.datasets.ImageFolder( os.path.join(data_dir, "train_valid_test", folder), transform = transform_test) for folder in ["valid", "test"]] # Initializing Test Dataset. #@ IMPLEMENTATION OF DATALOADER: train_iter, train_valid_iter = [torch.utils.data.DataLoader( dataset, batch_size, shuffle=True, drop_last=True) for dataset in (train_ds, train_valid_ds)] # Implementation of DataLoader. valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=True, drop_last=True) # Implementation of DataLoader. test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=True, drop_last=False) # Implementation of DataLoader. ###Output _____no_output_____ ###Markdown **DEFINING THE MODEL:**- I will define ResNet18 model. I will perform xavier random initialization on the model before training begins. ###Code #@ DEFINING THE MODEL: def get_net(): # Function for Initializing the Model. num_classes = 10 # Number of Classes. net = d2l.resnet18(num_classes, 3) # Initializing the RESNET Model. return net #@ DEFINING THE LOSS FUNCTION: loss = nn.CrossEntropyLoss(reduction="none") # Initializing Cross Entropy Loss Function. ###Output _____no_output_____ ###Markdown **DEFINING TRAINING FUNCTION:**- I will define model training function train here. I will record the training time of each epoch which helps to compare costs of different models. ###Code #@ DEFINING TRAINING FUNCTIONS: def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period, lr_decay): # Defining Training Function. trainer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9, weight_decay=wd) # Initializing the SGD Optimizer. scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay) # Initializing Learning Rate Scheduler. num_batches, timer = len(train_iter), d2l.Timer() # Initializing the Parameters. animator = d2l.Animator(xlabel="epoch", xlim=[1, num_epochs], legend=["train loss", "train acc", "valid acc"]) # Initializing the Animation. net = nn.DataParallel(net, device_ids=devices).to(devices[0]) # Implementation of Parallelism on Model. for epoch in range(num_epochs): net.train() # Initializing the Training Mode. metric = d2l.Accumulator(3) # Initializing the Accumulator. for i, (features, labels) in enumerate(train_iter): timer.start() # Starting the Timer. l, acc = d2l.train_batch_ch13(net, features, labels, loss, trainer, devices) # Initializing the Training. metric.add(l, acc, labels.shape[0]) # Accumulating the Metrics. timer.stop() # Stopping the Timer. if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1: animator.add(epoch + (i + 1) / num_batches, ( metric[0] / metric[2], metric[1] / metric[2], None)) # Implementation of Animation. if valid_iter is not None: valid_acc = d2l.evaluate_accuracy_gpu(net, valid_iter) # Evaluating Validation Accuracy. animator.add(epoch + 1, (None, None, valid_acc)) # Implementation of Animation. scheduler.step() # Optimization of the Model. if valid_iter is not None: print(f"Loss {metric[0] / metric[2]:.3f}," # Inspecting Loss. f"Train acc {metric[1] / metric[2]:.3f}," # Inspecting Training Accuracy. f"Valid acc {valid_acc:.3f}") # Inspecting Validation Accuracy. else: print(f"Loss {metric[0] / metric[2]:.3f}," # Inspecting Loss. f"Train acc {metric[1] / metric[2]:.3f}") # Inspecting Training Accuracy. print(f"{metric[2]*num_epochs / timer.sum():.1f} examples/sec" f"on {str(devices)}") # Inspecting Time Taken. 
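# Note on the scheduler above: StepLR multiplies the learning rate by lr_decay
# every lr_period epochs, i.e. lr(epoch) = lr * lr_decay ** (epoch // lr_period),
# and scheduler.step() advances that schedule once per epoch. With lr_period=50
# but only num_epochs=5 in the next cell, the decay never kicks in during the run.
# Also note that the two f-strings in the final print concatenate without a space,
# which is why the logs below read "examples/secon [device(...)]".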
###Output _____no_output_____ ###Markdown **TRAINING AND VALIDATING THE MODEL:**- I will train and validate the model here. ###Code #@ TRAINING AND VALIDATING THE MODEL: devices, num_epochs, lr, wd = d2l.try_all_gpus(), 5, 0.1, 5e-4 # Initializing the Parameters. lr_period, lr_decay, net = 50, 0.1, get_net() # Initializing the Neural Network Model. train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period, lr_decay) # Training the Model. ###Output Loss nan,Train acc 0.102,Valid acc 0.100 283.7 examples/secon [device(type='cuda', index=0)] ###Markdown **CLASSIFYING THE TESTING SET:** ###Code #@ CLASSIFYING THE TESTING SET: net, preds = get_net(), [] # Initializing the Parameters. train(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period, lr_decay) # Training the Model. for X, _ in test_iter: y_hat = net(X.to(devices[0])) preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy()) sorted_ids = list(range(1, len(test_ds) + 1)) sorted_ids.sort(key=lambda x: str(x)) df = pd.DataFrame({"id": sorted_ids, "label": preds}) df["label"] = df["label"].apply(lambda x: train_valid_ds.classes[x]) df.to_csv("result.csv", index=False) ###Output Loss 2.520,Train acc 0.100 291.0 examples/secon [device(type='cuda', index=0)] ###Markdown Finetuning PyTorch vision models to work with CIFAR-10 dataset Author: Huy Phan Github: https://github.com/huyvnphan/PyTorch-CIFAR10 1. Import required libraries ###Code import copy import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision import torchvision.transforms as transforms from tqdm import tqdm as pbar from torch.utils.tensorboard import SummaryWriter from models import * ###Output _____no_output_____ ###Markdown 2. Prepare datasets ###Code def make_dataloaders(params): """ Make a Pytorch dataloader object that can be used for traing and valiation Input: - params dict with key 'path' (string): path of the dataset folder - params dict with key 'batch_size' (int): mini-batch size - params dict with key 'num_workers' (int): number of workers for dataloader Output: - trainloader and testloader (pytorch dataloader object) """ transform_train = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])]) transform_validation = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])]) trainset = torchvision.datasets.CIFAR10(root=params['path'], train=True, transform=transform_train) testset = torchvision.datasets.CIFAR10(root=params['path'], train=False, transform=transform_validation) trainloader = torch.utils.data.DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, num_workers=params['num_workers']) testloader = torch.utils.data.DataLoader(testset, batch_size=params['batch_size'], shuffle=False, num_workers=params['num_workers']) return trainloader, testloader ###Output _____no_output_____ ###Markdown 3. 
Train model ###Code def train_model(model, params): writer = SummaryWriter('runs/' + params['description']) model = model.to(params['device']) optimizer = optim.SGD(model.parameters(), lr=params['learning_rate'], weight_decay=params['weight_decay'], momentum=0.9, nesterov=True) scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=params['reduce_learning_rate'], gamma=0.1) criterion = nn.CrossEntropyLoss() best_accuracy = test_model(model, params) best_model = copy.deepcopy(model.state_dict()) for epoch in pbar(range(params['num_epochs'])): scheduler.step() # Each epoch has a training and validation phase for phase in ['train', 'validation']: # Loss accumulator for each epoch logs = {'Loss': 0.0, 'Accuracy': 0.0} # Set the model to the correct phase model.train() if phase == 'train' else model.eval() # Iterate over data for image, label in params[phase+'_loader']: image = image.to(params['device']) label = label.to(params['device']) # Zero gradient optimizer.zero_grad() with torch.set_grad_enabled(phase == 'train'): # Forward pass prediction = model(image) loss = criterion(prediction, label) accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item() # Update log logs['Loss'] += image.shape[0]*loss.detach().item() logs['Accuracy'] += accuracy # Backward pass if phase == 'train': loss.backward() optimizer.step() # Normalize and write the data to TensorBoard logs['Loss'] /= len(params[phase+'_loader'].dataset) logs['Accuracy'] /= len(params[phase+'_loader'].dataset) writer.add_scalars('Loss', {phase: logs['Loss']}, epoch) writer.add_scalars('Accuracy', {phase: logs['Accuracy']}, epoch) # Save the best weights if phase == 'validation' and logs['Accuracy'] > best_accuracy: best_accuracy = logs['Accuracy'] best_model = copy.deepcopy(model.state_dict()) # Write best weights to disk if epoch % params['check_point'] == 0 or epoch == params['num_epochs']-1: torch.save(best_model, params['state_dict_path'] + params['description'] + '.pt') final_accuracy = test_model(model, params) writer.add_text('Final_Accuracy', str(final_accuracy), 0) writer.close() ###Output _____no_output_____ ###Markdown 4. Test model ###Code def test_model(model, params): model = model.to(params['device']).eval() phase = 'validation' logs = {'Accuracy': 0.0} # Iterate over data for image, label in pbar(params[phase+'_loader']): image = image.to(params['device']) label = label.to(params['device']) with torch.no_grad(): prediction = model(image) accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item() logs['Accuracy'] += accuracy logs['Accuracy'] /= len(params[phase+'_loader'].dataset) return logs['Accuracy'] ###Output _____no_output_____ ###Markdown 5. Create PyTorch models ###Code model = densenet169(pretrained=True) ###Output _____no_output_____ ###Markdown 6. 
Put everything together ###Code # Train on cuda if available device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') print("Using", device) data_params = {'path': '/raid/data/pytorch_dataset/cifar10', 'batch_size': 256, 'num_workers': 4} train_loader, validation_loader = make_dataloaders(data_params) train_params = {'description': 'densenet161', 'num_epochs': 600, 'reduce_learning_rate': [200, 400], 'learning_rate': 5e-2, 'weight_decay': 1e-3, 'check_point': 100, 'device': device, 'state_dict_path': 'trained_models/', 'train_loader': train_loader, 'validation_loader': validation_loader} # train_model(model, train_params) test_model(model, train_params) ###Output _____no_output_____ ###Markdown Finetuning PyTorch vision models to work with CIFAR-10 dataset Author: Huy Phan Github: https://github.com/huyvnphan/PyTorch-CIFAR10 1. Import required libraries ###Code import copy import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision import torchvision.transforms as transforms from tqdm import tqdm as pbar from torch.utils.tensorboard import SummaryWriter from cifar10_models import * ###Output _____no_output_____ ###Markdown 2. Prepare datasets ###Code def make_dataloaders(params): """ Make a Pytorch dataloader object that can be used for traing and valiation Input: - params dict with key 'path' (string): path of the dataset folder - params dict with key 'batch_size' (int): mini-batch size - params dict with key 'num_workers' (int): number of workers for dataloader Output: - trainloader and testloader (pytorch dataloader object) """ transform_train = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])]) transform_validation = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])]) transform_validation = transforms.Compose([transforms.ToTensor()]) trainset = torchvision.datasets.CIFAR10(root=params['path'], train=True, transform=transform_train) testset = torchvision.datasets.CIFAR10(root=params['path'], train=False, transform=transform_validation) trainloader = torch.utils.data.DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, num_workers=4) testloader = torch.utils.data.DataLoader(testset, batch_size=params['batch_size'], shuffle=False, num_workers=4) return trainloader, testloader ###Output _____no_output_____ ###Markdown 3. 
Train model ###Code def train_model(model, params): writer = SummaryWriter('runs/' + params['description']) model = model.to(params['device']) optimizer = optim.AdamW(model.parameters()) total_updates = params['num_epochs']*len(params['train_loader']) criterion = nn.CrossEntropyLoss() best_accuracy = test_model(model, params) best_model = copy.deepcopy(model.state_dict()) for epoch in pbar(range(params['num_epochs'])): # Each epoch has a training and validation phase for phase in ['train', 'validation']: # Loss accumulator for each epoch logs = {'Loss': 0.0, 'Accuracy': 0.0} # Set the model to the correct phase model.train() if phase == 'train' else model.eval() # Iterate over data for image, label in params[phase+'_loader']: image = image.to(params['device']) label = label.to(params['device']) # Zero gradient optimizer.zero_grad() with torch.set_grad_enabled(phase == 'train'): # Forward pass prediction = model(image) loss = criterion(prediction, label) accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item() # Update log logs['Loss'] += image.shape[0]*loss.detach().item() logs['Accuracy'] += accuracy # Backward pass if phase == 'train': loss.backward() optimizer.step() # Normalize and write the data to TensorBoard logs['Loss'] /= len(params[phase+'_loader'].dataset) logs['Accuracy'] /= len(params[phase+'_loader'].dataset) writer.add_scalars('Loss', {phase: logs['Loss']}, epoch) writer.add_scalars('Accuracy', {phase: logs['Accuracy']}, epoch) # Save the best weights if phase == 'validation' and logs['Accuracy'] > best_accuracy: best_accuracy = logs['Accuracy'] best_model = copy.deepcopy(model.state_dict()) # Write best weights to disk if epoch % params['check_point'] == 0 or epoch == params['num_epochs']-1: torch.save(best_model, params['description'] + '.pt') final_accuracy = test_model(model, params) writer.add_text('Final_Accuracy', str(final_accuracy), 0) writer.close() ###Output _____no_output_____ ###Markdown 4. Test model ###Code def test_model(model, params): model = model.to(params['device']).eval() phase = 'validation' logs = {'Accuracy': 0.0} # Iterate over data for image, label in pbar(params[phase+'_loader']): image = image.to(params['device']) label = label.to(params['device']) with torch.no_grad(): prediction = model(image) accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item() logs['Accuracy'] += accuracy logs['Accuracy'] /= len(params[phase+'_loader'].dataset) return logs['Accuracy'] ###Output _____no_output_____ ###Markdown 5. Create PyTorch models ###Code model = resnet18() ###Output _____no_output_____ ###Markdown 6. Put everything together ###Code # Train on cuda if available device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') print("Using", device) data_params = {'path': '/raid/data/pytorch_dataset/cifar10', 'batch_size': 256} train_loader, validation_loader = make_dataloaders(data_params) train_params = {'description': 'Test', 'num_epochs': 300, 'check_point': 50, 'device': device, 'train_loader': train_loader, 'validation_loader': validation_loader} train_model(model, train_params) test_model(model, train_params) ###Output _____no_output_____
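###Markdown The AdamW variant above computes `total_updates` but never uses it. A natural extension — shown here only as a sketch, not part of the original training loop, with `max_lr` an assumed value to tune — is to feed it to a per-step scheduler such as PyTorch's built-in `OneCycleLR`:

###Code
import torch.optim as optim

optimizer = optim.AdamW(model.parameters())
total_updates = train_params['num_epochs'] * len(train_params['train_loader'])
scheduler = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1e-3, total_steps=total_updates)

# Inside the batch loop, right after optimizer.step():
#     scheduler.step()

###Output
_____no_output_____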
src/predictions/Load_PredictedMask_And_Image.ipynb
###Markdown Load the pickled predicted mask and the original image; the pickled file is created by the "UNET_Prediction_EntireScan" script.
1. Create a folder ../data/luna16/
2. Create a folder ../data/luna16/subset2
3. Download the pickled prediction file (it has been created for this one scan) 'entire_predictions_1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405.dat' from https://drive.google.com/drive/u/1/folders/13wmubTgm-7sh3MxPGxqmVZuoqi0G3ufW

###Code
import numpy as np
import h5py
import pandas as pd
import argparse
import SimpleITK as sitk
from PIL import Image
import os, os.path, glob
import tensorflow as tf
import keras
from ipywidgets import interact
import pickle
import matplotlib.pyplot as plt
%matplotlib inline

# HOLDOUT = 5
# HO_dir = 'HO{}/'.format(HOLDOUT)
data_dir = '../data/luna16/'
prediction_file = 'subset2/entire_predictions_1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405.dat'
size_file = 'subset2/entire_size_1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405.dat'

# predictions_dict maps {seriesuid : (img.shape, padded_img, predicted_mask)}
with open(data_dir + prediction_file, 'rb') as pkl_file:
    predictions_dict = pickle.load(pkl_file)

value = predictions_dict['1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405']
img_shape = value[0]
padded_img = value[1]
predicted_mask = value[2]
print("\n Predicted mask sum : {}".format(np.sum(predicted_mask)))

def displaySlice(sliceNo):
    plt.figure(figsize=[20, 20])
    plt.subplot(121)
    plt.title("True Image")
    plt.imshow(padded_img[:, :, sliceNo], cmap='bone')
    plt.subplot(122)
    plt.title("Predicted Mask")
    plt.imshow(predicted_mask[:, :, sliceNo], cmap='bone')
    plt.show()

interact(displaySlice, sliceNo=(1, img_shape[2], 1));

# print("\n Predicted mask sum : {}".format(np.sum(predicted_mask)))
# Predicted mask sum : 119040.40901441715

###Output
_____no_output_____
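###Markdown The predicted mask holds soft per-voxel scores (hence the non-integer sum printed above). A minimal sketch — the threshold value is an assumption to tune — for binarizing it and counting candidate voxels:

###Code
threshold = 0.5  # assumed cut-off; tune against your validation scans
binary_mask = (predicted_mask > threshold).astype(np.uint8)
print("Voxels above threshold: {}".format(int(binary_mask.sum())))

###Output
_____no_output_____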
KNN/Classification - K Nearest Neighbors.ipynb
###Markdown Regression: the output variable takes continuous values. Classification: the output variable takes class labels. f: x → y. If y is a discrete/categorical variable, this is a classification problem. If y is a real/continuous number, this is a regression problem.

###Code
import numpy as np
import pandas as pd
from sklearn import preprocessing, neighbors
from sklearn.model_selection import train_test_split

###Output
_____no_output_____

###Markdown About the data set: https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.names

###Code
df = pd.read_csv('data/breast-cancer-wisconsin.data')
df.replace('?', -99999, inplace=True)  # treat missing values as extreme outliers
df.drop(['id'], axis=1, inplace=True)  # the id column carries no predictive signal
print(df.describe())

# Defining features and labels
X = np.array(df.drop(['class'], axis=1))
y = np.array(df['class'])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = neighbors.KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)
print(accuracy)

# Testing with sample data
example_data = np.array([[4,2,1,1,1,2,3,2,1], [4,2,1,3,2,2,3,2,1]])
example_data = example_data.reshape(len(example_data), -1)

prediction = clf.predict(example_data)
print(prediction)

###Output
[2 2]
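###Markdown In this data set the class labels are coded as 2 (benign) and 4 (malignant), so both sample predictions above are benign. The choice of `n_neighbors=5` was arbitrary; a small sketch, not in the original notebook, for picking k with cross-validation:

###Code
from sklearn.model_selection import cross_val_score

# Evaluate odd values of k to avoid ties in the majority vote
for k in range(1, 16, 2):
    clf_k = neighbors.KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(clf_k, X, y, cv=5)
    print(k, round(scores.mean(), 4))

###Output
_____no_output_____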
nbs/train.ipynb
###Markdown Training (Legacy Version)
> Notebook to train deep learning models or ensembles for segmentation of fluorescent labels in microscopy images. This notebook is optimized to be executed on [Google Colab](https://colab.research.google.com).
* If you're new to _Google Colab_, try out the [tutorial](https://colab.research.google.com/notebooks/intro.ipynb).
* Use Firefox or Google Chrome if you want to upload and download files

###Code
#@title Set up environment
#@markdown Please run this cell to get started.
%load_ext autoreload
%autoreload 2

try:
    from google.colab import files, drive
except ImportError:
    pass

try:
    import deepflash2
except ImportError:
    !pip install -q deepflash2==0.0.14

import zipfile
import shutil
import imageio
from sklearn.model_selection import KFold, train_test_split
from fastai.vision.all import *
from deepflash2.all import *
from deepflash2.data import _read_msk
from scipy.stats import entropy

###Output
_____no_output_____

###Markdown Provide Training Data

__Required data structure__
- __One folder for training images__
- __One folder for segmentation masks__
  - We highly recommend using [ground truth estimation](https://matjesg.github.io/deepflash2/gt_estimation.html)

_Exemplary structure: see [naming conventions](https://matjesg.github.io/deepflash2/add_information.html#Naming)_
* [folder] images
  * [file] 0001.tif
  * [file] 0002.tif
* [folder] masks
  * [file] 0001_mask.png
  * [file] 0002_mask.png

Option A: Upload via _Google Drive_ (recommended, Colab only)
- The folder in your drive must contain all files and the correct folder structure.
- See [here](https://support.google.com/drive/answer/2375091?co=GENIE.Platform%3DDesktop&hl=en) how to organize your files in _Google Drive_.
- See this [stackoverflow post](https://stackoverflow.com/questions/46986398/import-data-into-google-colaboratory) for browsing files with the file browser

###Code
#@markdown Provide the path to the folder on your _Google Drive_
try:
    drive.mount('/content/drive')
    path = "/content/drive/My Drive/data" #@param {type:"string"}
    path = Path(path)
    print('Path contains the following files and folders: \n', L(os.listdir(path)))
    #@markdown Follow the instructions and press Enter after copying and pasting the key.
except:
    print("Warning: Connecting to Google Drive only works on Google Colab.")
    pass

###Output
_____no_output_____

###Markdown Option B: Upload via _zip_ file (Colab only)
- The *zip* file must contain all images and segmentations and the correct folder structure.
- See [here](https://www.hellotech.com/guide/for/how-to-zip-a-file-mac-windows-pc) how to _zip_ files on Windows or Mac.

###Code
#@markdown Run to upload a *zip* file
path = Path('data')
try:
    u_dict = files.upload()
    for key in u_dict.keys():
        unzip(path, key)
    print('Path contains the following files and folders: \n', L(os.listdir(path)))
except:
    print("Warning: File upload only works on Google Colab.")
    pass

###Output
_____no_output_____

###Markdown Option C: Provide path (Local installation) If you're working on your local machine or server, provide a path to the correct folder.
###Code #@markdown Provide path (either relative to notebook or absolute) and run cell path = "" #@param {type:"string"} path = Path(path) print('Path contains the following files and folders: \n', L(os.listdir(path))) ###Output _____no_output_____ ###Markdown Option D: Try with sample data (Testing only) If you don't have any data available yet, try our sample data ###Code #@markdown Run to use sample files path = Path('sample_data_cFOS') url = "https://github.com/matjesg/deepflash2/releases/download/model_library/wue1_cFOS_small.zip" urllib.request.urlretrieve(url, 'sample_data_cFOS.zip') unzip(path, 'sample_data_cFOS.zip') ###Output _____no_output_____ ###Markdown Check and load data ###Code #@markdown Provide your parameters according to your provided data image_folder = "images" #@param {type:"string"} mask_folder = "masks" #@param {type:"string"} mask_suffix = "_mask.png" #@param {type:"string"} #@markdown Number of classes: e.g., 2 for binary segmentation (foreground and background class) n_classes = 2 #@param {type:"integer"} #@markdown Check if you are providing instance labels (class-aware and instance-aware) instance_labels = False #@param {type:"boolean"} f_names = get_image_files(path/image_folder) label_fn = lambda o: path/mask_folder/f'{o.stem}{mask_suffix}' #Check if corresponding masks exist mask_check = [os.path.isfile(label_fn(x)) for x in f_names] if len(f_names)==sum(mask_check) and len(f_names)>0: print(f'Found {len(f_names)} images and {sum(mask_check)} masks in "{path}".') else: print(f'IMAGE/MASK MISMATCH! Found {len(f_names)} images and {sum(mask_check)} masks in "{path}".') print('Please check the steps above.') ###Output _____no_output_____ ###Markdown Customize [mask weights](https://matjesg.github.io/deepflash2/data.htmlWeight-Calculation) (optional)- Default values should work for most of the data. - However, this choice can significantly change the model performance later on. ###Code #@title { run: "auto" } #@markdown Run to set weight parameters border_weight_sigma=10 #@param {type:"slider", min:1, max:20, step:1} foreground_dist_sigma=10 #@param {type:"slider", min:1, max:20, step:1} border_weight_factor=10 #@param {type:"slider", min:1, max:50, step:1} foreground_background_ratio= 0.1 #@param {type:"slider", min:0.1, max:1, step:0.1} #@markdown Check if want to plot the resulting weights of one mask plot_weights = False #@param {type:"boolean"} #@markdown Check `reset_to_defaults` to reset your parameters. reset_to_defaults = False #@param {type:"boolean"} mw_dict = {'bws': 10 if reset_to_defaults else border_weight_sigma , 'fds': 10 if reset_to_defaults else foreground_dist_sigma, 'bwf': 10 if reset_to_defaults else border_weight_factor, 'fbr' : 0.1 if reset_to_defaults else foreground_background_ratio} #@markdown Select image number image_number = 0 #@param {type:"slider", min:0, max:100, step:1} if plot_weights: idx = np.minimum(len(f_names), image_number) print('Plotting mask for image', f_names[idx].name, '- Please wait.') msk = _read_msk(label_fn(f_names[idx])) _, w, _ = calculate_weights(msk, n_dims=n_classes, **mw_dict) fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,12)) axes[0].imshow(msk) axes[0].set_axis_off() axes[0].set_title('Mask') axes[1].imshow(w) axes[1].set_axis_off() axes[1].set_title('Weights') ###Output _____no_output_____ ###Markdown Create mask weights ###Code #@markdown Run to create mask weights for the whole dataset. 
try: mw_dict=mw_dict except: mw_dict = {'bws': 10,'fds': 10, 'bwf': 10,'fbr' : 0.1} ds = RandomTileDataset(f_names, label_fn, n_classes=n_classes, instance_labels=instance_labels, **mw_dict) #@title { run: "auto" } #@markdown Run to show data. #@markdown Use the slider to control the number of displayed images first_n = 3 #@param {type:"slider", min:1, max:100, step:1} ds.show_data(max_n = first_n, figsize=(15,15), overlay=False) ###Output _____no_output_____ ###Markdown Model Defintion Select one of the available [model architectures](https://matjesg.github.io/deepflash2/models.htmlU-Net-architectures). ###Code #@title { run: "auto" } model_arch = 'unet_deepflash2' #@param ["unet_deepflash2", "unet_falk2019", "unet_ronnberger2015"] ###Output _____no_output_____ ###Markdown Pretrained weights - Select 'new' to use an untrained model (no pretrained weights)- Or select [pretraind](https://matjesg.github.io/deepflash2/model_library.html) model weights from dropdown menu ###Code pretrained_weights = "wue_cFOS" #@param ["new", "wue_cFOS", "wue_Parv", "wue_GFAP", "wue_GFP", "wue_OPN3"] pre = False if pretrained_weights=="new" else True n_channels = ds.get_data(max_n=1)[0].shape[-1] model = torch.hub.load('matjesg/deepflash2', model_arch, pretrained=pre, dataset=pretrained_weights, n_classes=ds.c, in_channels=n_channels) if pretrained_weights=="new": apply_init(model) ###Output _____no_output_____ ###Markdown Setting model hyperparameters (optional) - *mixed_precision_training*: enables [Mixed precision training](https://docs.fast.ai/callback.fp16A-little-bit-of-theory) - decreases memory usage and speed-up training - may effect model accuracy- *batch_size*: the number of samples that will be propagated through the network during one iteration - 4 works best in our experiements - 4-8 works good for [mixed precision training](https://docs.fast.ai/callback.fp16A-little-bit-of-theory) ###Code mixed_precision_training = False #@param {type:"boolean"} batch_size = 4 #@param {type:"slider", min:2, max:8, step:2} loss_fn = WeightedSoftmaxCrossEntropy(axis=1) cbs = [ElasticDeformCallback] dls = DataLoaders.from_dsets(ds,ds, bs=batch_size) if torch.cuda.is_available(): dls.cuda(), model.cuda() learn = Learner(dls, model, wd=0.001, loss_func=loss_fn, cbs=cbs) if mixed_precision_training: learn.to_fp16() ###Output _____no_output_____ ###Markdown - `max_lr`: The learning rate controls how quickly or slowly a neural network model learns. - We found that a maximum learning rate of 5e-4 (i.e., 0.0005) yielded the best results across experiments. - `learning_rate_finder`: Check only if you want use the [Learning Rate Finder](https://matjesg.github.io/deepflash2/add_information.htmlLearning-Rate-Finder) on your dataset. ###Code #@markdown Check and run to use learning rate finder learning_rate_finder = False #@param {type:"boolean"} if learning_rate_finder: lr_min,lr_steep = learn.lr_find() print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}") max_lr = 5e-4 #@param {type:"number"} ###Output _____no_output_____ ###Markdown Model Training Setting training parameters - `n_models`: Number of models to train. - If you're experimenting with parameters, try only one model first. - Depending on the data, ensembles should comprise 3-5 models. 
- _Note: Number of model affects the [Train-validation-split](https://matjesg.github.io/deepflash2/add_information.htmlTrain-validation-split)._ ###Code #@title { run: "auto" } try: batch_size=batch_size except: batch_size=4 mixed_precision_training = False loss_fn = WeightedSoftmaxCrossEntropy(axis=1) try: max_lr=max_lr except: max_lr = 5e-4 metrics = [Dice_f1(), Iou()] n_models = 1 #@param {type:"slider", min:1, max:5, step:1} print("Suggested epochs for 1000 iterations:", calc_iterations(len(ds), batch_size, n_models)) ###Output _____no_output_____ ###Markdown - `epochs`: One epoch is when an entire (augemented) dataset is passed through the model for training. - Epochs need to be adusted depending on the size and number of images - We found that choosing the number of epochs such that the network parameters are update about 1000 times (iterations) leads to satiesfying results in most cases. ###Code epochs = 30 #@param {type:"slider", min:1, max:200, step:1} ###Output _____no_output_____ ###Markdown Train models ###Code #@markdown Run to train model(s).<br/> **THIS CAN TAKE A FEW HOURS FOR MULTIPLE MODELS!** kf = KFold(n_splits=max(n_models,2)) model_path = path/'models' model_path.mkdir(parents=True, exist_ok=True) res, res_mc = {}, {} fold = 0 for train_idx, val_idx in kf.split(f_names): fold += 1 name = f'model{fold}' print('Train', name) if n_models==1: files_train, files_val = train_test_split(f_names) else: files_train, files_val = f_names[train_idx], f_names[val_idx] print(f'Validation Images: {files_val}') train_ds = RandomTileDataset(files_train, label_fn, **mw_dict) valid_ds = TileDataset(files_val, label_fn, **mw_dict) dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=batch_size) dls_valid = DataLoaders.from_dsets(valid_ds, batch_size=batch_size ,shuffle=False, drop_last=False) model = torch.hub.load('matjesg/deepflash2', model_arch, pretrained=pre, dataset=pretrained_weights, n_classes=ds.c, in_channels=n_channels) if pretrained_weights=="new": apply_init(model) if torch.cuda.is_available(): dls.cuda(), model.cuda(), dls_valid.cuda() cbs = [SaveModelCallback(monitor='iou'), ElasticDeformCallback] metrics = [Dice_f1(), Iou()] learn = Learner(dls, model, metrics = metrics, wd=0.001, loss_func=loss_fn, cbs=cbs) if mixed_precision_training: learn.to_fp16() learn.fit_one_cycle(epochs, max_lr) # save_model(model_path/f'{name}.pth', learn.model, opt=None) torch.save(learn.model.state_dict(), model_path/f'{name}.pth', _use_new_zipfile_serialization=False) smxs, segs, _ = learn.predict_tiles(dl=dls_valid.train) smxs_mc, segs_mc, std = learn.predict_tiles(dl=dls_valid.train, mc_dropout=True, n_times=10) for i, file in enumerate(files_val): res[(name, file)] = smxs[i], segs[i] res_mc[(name, file)] = smxs_mc[i], segs_mc[i], std[i] if n_models==1: break ###Output _____no_output_____ ###Markdown Validate models Here you can validate your models. To avoid information leakage, only predictions on the respective models' validation set are made. ###Code #@markdown Create folders to save the resuls. They will be created at your provided 'path'. pred_dir = 'val_preds' #@param {type:"string"} pred_path = path/pred_dir/'ensemble' pred_path.mkdir(parents=True, exist_ok=True) uncertainty_dir = 'val_uncertainties' #@param {type:"string"} uncertainty_path = path/uncertainty_dir/'ensemble' uncertainty_path.mkdir(parents=True, exist_ok=True) result_path = path/'results' result_path.mkdir(exist_ok=True) #@markdown Define `filetype` to save the predictions and uncertainties. 
All common [file formats](https://imageio.readthedocs.io/en/stable/formats.html) are supported. filetype = 'png' #@param {type:"string"} #@markdown Show and save results res_list = [] for model_number in range(1,n_models+1): model_name = f'model{model_number}' val_files = [f for mod , f in res.keys() if mod == model_name] print(f'Validating {model_name}') pred_path = path/pred_dir/model_name pred_path.mkdir(parents=True, exist_ok=True) uncertainty_path = path/uncertainty_dir/model_name uncertainty_path.mkdir(parents=True, exist_ok=True) for file in val_files: img = ds.get_data(file)[0] msk = ds.get_data(file, mask=True)[0] pred = res[(model_name,file)][1] pred_std = res_mc[(model_name,file)][2][...,0] df_tmp = pd.Series({'file' : file.name, 'model' : model_name, 'iou': iou(msk, pred), 'entropy': entropy(pred_std, axis=None)}) plot_results(img, msk, pred, pred_std, df=df_tmp) res_list.append(df_tmp) imageio.imsave(pred_path/f'{file.stem}_pred.{filetype}', pred.astype(np.uint8) if np.max(pred)>1 else pred.astype(np.uint8)*255) imageio.imsave(uncertainty_path/f'{file.stem}_uncertainty.{filetype}', pred_std.astype(np.uint8)*255) df_res = pd.DataFrame(res_list) df_res.to_csv(result_path/f'val_results.csv', index=False) ###Output _____no_output_____ ###Markdown Download Section - The models will always be the _last_ version trained in section _Model Training_- To download validation predictions and uncertainties, you first need to execute section _Validate models_._Note: If you're connected to *Google Drive*, the models are automatically saved to your drive._ ###Code #@title Download models { run: "auto" } model_number = "1" #@param ["1", "2", "3", "4", "5"] model_path = path/'models'/f'model{model_number}.pth' try: files.download(model_path) except: print("Warning: File download only works on Google Colab.") print(f"Models are saved at {model_path.parent}") pass #@markdown Download validation predicitions { run: "auto" } out_name = 'val_predictions' shutil.make_archive(path/out_name, 'zip', path/pred_dir) try: files.download(path/f'{out_name}.zip') except: print("Warning: File download only works on Google Colab.") pass #@markdown Download validation uncertainties out_name = 'val_uncertainties' shutil.make_archive(path/out_name, 'zip', path/uncertainty_dir) try: files.download(path/f'{out_name}.zip') except: print("Warning: File download only works on Google Colab.") pass #@markdown Download result analysis '.csv' files try: files.download(result_path/f'val_results.csv') except: print("Warning: File download only works on Google Colab.") pass ###Output _____no_output_____ ###Markdown We use the foward fill method in pandas to fill all the nans for the each sentence in the `Sentence ` column. 
###Code
#hide
df['Sentence #'].fillna(method='ffill')

#export
df['Sentence #'] = df['Sentence #'].fillna(method='ffill')

###Output
_____no_output_____

###Markdown In total we can see that there are 47959 sentences in our dataset.

###Code
#hide
len(df['Sentence #'].unique())

###Output
_____no_output_____

###Markdown Now let us encode all the labels for every word in every sentence

###Code
#hide
le_pos = LabelEncoder()
le_tag = LabelEncoder()

#export
utils.save_label_encoders(le_tag=le_tag, le_pos=le_pos)

#export
le_pos, le_tag = utils.load_label_encoders()

#hide
df["encoded_POS"] = le_pos.fit_transform(df.POS)
df["encoded_Tag"] = le_tag.fit_transform(df.Tag)

#export
sentences, tags, pos = utils.process_data(df)

#hide
len(sentences), len(tags), len(pos)

###Output
_____no_output_____

###Markdown Data Split

I'll be using a simple train-test split

###Code
#export
train_sentences, valid_sentences, train_tag, valid_tag, train_pos, valid_pos = train_test_split(sentences, tags, pos, test_size=0.2)

#export
train_dl = utils.create_loader(train_sentences, train_tag, train_pos, bs=config.TRAIN_BATCH_SIZE)
valid_dl = utils.create_loader(valid_sentences, valid_tag, valid_pos, bs=config.VALID_BATCH_SIZE)

#export
modeller = model.EntityModel(num_tag=len(le_tag.classes_), num_pos=len(le_pos.classes_))

# #export
model_params = list(modeller.named_parameters())

#export
no_decay = ['bias', 'LayerNorm.weight', 'LayerNorm.bias']
optimizer_params = [
    {'params': [p for n, p in model_params if not any(nd in n for nd in no_decay)],
     'weight_decay': 0.001},  # weight decay applied to ordinary weights
    {'params': [p for n, p in model_params if any(nd in n for nd in no_decay)],
     'weight_decay': 0.0}     # we don't want weight decay for biases and LayerNorm
]

#export
lr = config.LR

#hide
lr

#export
optimizer = AdamW(optimizer_params, lr=lr)

#export
num_train_steps = int(len(sentences) / config.TRAIN_BATCH_SIZE * config.NUM_EPOCHS)

#export
scheduler = get_linear_schedule_with_warmup(optimizer=optimizer,
                                            num_warmup_steps=0,
                                            num_training_steps=num_train_steps)

#export
learn = engine.BertFitter(modeller,
                          (train_dl, valid_dl),
                          optimizer,
                          [accuracy_score, partial(f1_score, average='macro')],
                          config.DEVICE,
                          scheduler=scheduler,
                          log_file='training_log.txt')

#hide
config.NUM_EPOCHS

#export
NUM_EPOCHS = config.NUM_EPOCHS + 2
learn.fit(NUM_EPOCHS, model_path=config.MODEL_PATH/'entity_model.bin')

###Output
_____no_output_____

###Markdown Train
> API details.
###Code %load_ext autoreload %autoreload 2 import matplotlib as mpl %matplotlib inline #export import warnings import re from functools import partial import torch import torch.nn as nn import torch.nn.functional as F import torchvision.models as models import pytorch_lightning as pl from pytorch_lightning.core import LightningModule from pytorch_lightning.metrics import functional as FM #export from isic.dataset import SkinDataModule, from_label_idx_to_key from isic.layers import LabelSmoothingCrossEntropy from isic.callback.hyperlogger import HyperparamsLogger from isic.callback.logtable import LogTableMetricsCallback from isic.callback.mixup import MixupDict from isic.callback.cutmix import CutmixDict from isic.callback.freeze import FreezeCallback, UnfreezeCallback from isic.utils.core import reduce_loss, generate_val_steps from isic.utils.model import apply_init, get_bias_batchnorm_params, apply_leaf, check_attrib_module, create_body, create_head, lr_find, freeze, unfreeze, log_metrics_per_key from isic.model import BaselineModel, Model message_formater = "You have set {0} number of classes if different from predicted {0} and target {0} number of classes" warnings.filterwarnings("ignore", message_formater.format("(.*)"), category=UserWarning) dm = SkinDataModule() dm.prepare_data() dm.setup('fit') F_EPOCHS = 1 U_EPOCHS = 1 LR = 1e-2 ###Output _____no_output_____ ###Markdown Baseline ###Code model = BaselineModel('resnet18') trainer = pl.Trainer(fast_dev_run=True, callbacks=[LogTableMetricsCallback()]) trainer.fit(model, dm) dm.setup('test') a = trainer.test(model, dm.val_dataloader()) torch.load('preds.pt').shape ###Output _____no_output_____ ###Markdown Real ###Code # init model model = Model(LR, arch='resnet18') check_attrib_module(model) lr_find(model, dm,lr_find=False,verbose=True) cbs = [LogTableMetricsCallback(), HyperparamsLogger()] trainer = fit_one_cycle(F_EPOCHS, model, dm, max_lr=LR, callbacks=cbs, fast_dev_run=False, limit_val_batches=0, limit_train_batches=0.01) unfreeze(model, 3) # Unfreeze training trainer = fit_one_cycle(callbacks=cbs, fast_dev_run=False, limit_val_batches=0, limit_train_batches=0.01) trainer.fit(model, dm) ###Output | Name | Type | Params ----------------------------------------------- 0 | model | Sequential | 25 M 1 | loss_func | CrossEntropyLoss | 0 ###Markdown Tensorboard ###Code %load_ext tensorboard %tensorboard --logdir=lightning_logs/ ###Output _____no_output_____
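###Markdown Earlier in this notebook `preds.pt` is loaded to inspect its shape. A short sketch — assuming the saved tensor holds per-class logits of shape `(n_samples, n_classes)` — for turning it into hard labels and per-class counts:

###Code
import torch

preds = torch.load('preds.pt')   # assumed shape: (n_samples, n_classes)
labels = preds.argmax(dim=1)     # hard class index per sample
print(torch.bincount(labels))    # number of predicted samples per class

###Output
_____no_output_____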
Tutorial-11/TUT11-1-graph-processing.ipynb
###Markdown
TUT11-1 Graph Processing
**Graph representation**
**Graph Structure**
Mathematically, a graph $\mathcal{G}$ is defined as a tuple of a set of nodes/vertices $V$, and a set of edges/links $E$: $\mathcal{G}=(V,E)$. Each edge is a pair of two vertices, and represents a connection between them as shown in the figure. The vertices are $V=\{1,2,3,4\}$, and edges $E=\{(1,2), (2,3), (2,4), (3,4)\}$. Note that for simplicity, we assume the graph to be undirected and hence don't add mirrored pairs like $(2,1)$. In applications, vertices and edges can often have specific attributes, and edges can even be directed.
**Adjacency Matrix**
The **adjacency matrix** $A$ is a square matrix whose elements indicate whether pairs of vertices are adjacent, i.e. connected, or not. In the simplest case, $A_{ij}$ is 1 if there is a connection from node $i$ to $j$, and otherwise 0. If we have edge attributes or different categories of edges in a graph, this information can be added to the matrix as well. For an undirected graph, keep in mind that $A$ is a symmetric matrix ($A_{ij}=A_{ji}$). For the example graph above, we have the following adjacency matrix:
$$A = \begin{bmatrix} 0 & 1 & 0 & 0\\ 1 & 0 & 1 & 1\\ 0 & 1 & 0 & 1\\ 0 & 1 & 1 & 0\end{bmatrix}$$
Alternatively, we could also define a sparse adjacency matrix, with which we can work as if it were a dense matrix while allowing more memory-efficient operations. PyTorch supports this with the sub-package `torch.sparse` ([documentation](https://pytorch.org/docs/stable/sparse.html)). 1. Import libraries
###Code
import numpy as np
import scipy.sparse as sp
import torch
###Output
_____no_output_____
###Markdown
2. Load features
###Code
path = '../input/aist4010-spring2022-a3/data/'
idx_features = np.loadtxt(path + "features.txt", dtype=np.dtype(str))
features = idx_features[:, 1:]
idx_features, features
idx_features.shape, features.shape
# Compressed Sparse Row matrix
features = sp.csr_matrix(features, dtype=np.float32)  # features 2707 * 1433
###Output
_____no_output_____
###Markdown
3. Load Labels 1) Load train and val data
###Code
train_data = np.loadtxt(path + "train_labels.csv", delimiter=",", dtype=np.dtype(str))
train_idx, train_labels = train_data[1:, 0], train_data[1:, 1]
val_data = np.loadtxt(path + "val_labels.csv", delimiter=",", dtype=np.dtype(str))
val_idx, val_labels = val_data[1:, 0], val_data[1:, 1]
# one-hot encoding labels 2708 * 7
train_idx[:10], train_labels[:10]
###Output
_____no_output_____
###Markdown
2) Load test idx
###Code
test_idx, _ = np.loadtxt(path + "test_idx.csv", delimiter=",", dtype=np.dtype(str), unpack = True)
test_idx = test_idx[1:]
all_idx = np.concatenate((train_idx, val_idx, test_idx), axis = 0)
test_idx.shape, all_idx.shape
###Output
_____no_output_____
###Markdown
3) One-hot encoding
###Code
def encode_onehot(labels, classes=None):
    # use a fixed, sorted class order so that train and val share the same mapping
    # (set(labels) alone has no guaranteed order, which could scramble the encoding between calls)
    if classes is None:
        classes = sorted(set(labels))
    class_dict = {c: i for i, c in enumerate(classes)}
    classes_onehot_dict = {c: np.identity(len(classes))[i, :] for i, c in enumerate(classes)}
    labels_onehot = np.array(list(map(classes_onehot_dict.get, labels)), dtype=np.int32)
    return labels_onehot, class_dict

train_labels, class_dict = encode_onehot(train_labels)
val_labels, _ = encode_onehot(val_labels, classes=list(class_dict))
class_dict
###Output
_____no_output_____
###Markdown
4.
Build graph 1) Load nodes ###Code idx = np.array(idx_features[:, 0], dtype=np.int32) # nodes names 2707 idx_map = {j: i for i, j in enumerate(idx)} # nodes mapping 'names' : 'idx' dict(list(idx_map.items())[:10]) ###Output _____no_output_____ ###Markdown 2) Load edges ###Code edges_unordered = np.genfromtxt(path + "edges.txt", dtype=np.int32) # node1, node2 edges = np.array(list(map(idx_map.get, edges_unordered.flatten())), dtype=np.int32).reshape(edges_unordered.shape) # node_idx1, node_idx2 5427 * 2 edges.shape, edges[:10] ###Output _____no_output_____ ###Markdown 3) Build adjacency matrix ###Code # build graph # A sparse matrix in COOrdinate format. adj = sp.coo_matrix((np.ones(edges.shape[0]), (edges[:, 0], edges[:, 1])), shape=(all_idx.shape[0], all_idx.shape[0]), dtype=np.float32) # adjacency matrix 2707 * 2707 # build symmetric adjacency matrix adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj) # symmetric adjacency matrix adj ###Output _____no_output_____ ###Markdown 4) Normalize ###Code def normalize(mx): """Row-normalize sparse matrix""" rowsum = np.array(mx.sum(1)) r_inv = np.power(rowsum, -1).flatten() r_inv[np.isinf(r_inv)] = 0. r_mat_inv = sp.diags(r_inv) mx = r_mat_inv.dot(mx) return mx # normalize features_n = normalize(features) adj_n = normalize(adj + sp.eye(adj.shape[0])) ###Output _____no_output_____
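###Markdown
The `normalize()` step above is plain row normalization: left-multiplication by the inverse degree matrix. With the self-loops added first, the propagation matrix built here is
$$\hat{A} = D^{-1}(A + I), \qquad D_{ii} = \sum_j (A + I)_{ij},$$
so every row of $\hat{A}$ sums to 1 and a product $\hat{A}X$ averages each node's features with those of its neighbours. (The symmetric variant $D^{-1/2}(A+I)D^{-1/2}$ from the GCN paper is a common alternative.)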
Datasets/.ipynb_checkpoints/Data Cleaning(19-20)-checkpoint.ipynb
###Markdown
Creating OPPORTUNITIES Table from LEADS
###Code
# imports used throughout this notebook
import pandas as pd
import numpy as np
from random import randint, uniform

Leads = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Leads(2019-20).csv")
Opp = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Opportunities.csv")
# Leads_1 = Leads.dropna(how='all', axis='columns')
Opp['Lead_ID'] = Leads['Lead_ID']
Opp['Product_Name'] = Leads['Product_Name']
Opp['Product_ID'] = Leads['Product_ID']
Opp['Email_address'] = Leads['Email_address']
Opp['Product_Name'].unique()
Opp.drop(['Product_ID'], axis = 1, inplace = True)
Opp.head(10)
# def random_dates(start, end, n=10):
#     start_u = start.value//10**9
#     end_u = end.value//10**9
#     return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
# start = pd.to_datetime('2019-01-01')
# end = pd.to_datetime('2020-01-01')
# random_dates(start, end)
###Output
c:\users\jaswinder singh\appdata\local\programs\python\python38\lib\site-packages\pandas\core\frame.py:4163: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().drop(
###Markdown
Product ID Issue
###Code
# df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')})
# conditions = [
#     (df['Set'] == 'Z') & (df['Type'] == 'A'),
#     (df['Set'] == 'Z') & (df['Type'] == 'B'),
#     (df['Type'] == 'B')]
# choices = ['yellow', 'blue', 'purple']
# df['color'] = np.select(conditions, choices, default='black')
# print(df)
Opp['Product_Name'].unique()

def product_id(row):
    if row["Product_Name"] == "Proxima-C":
        return "PRO-23-0493"
    elif row["Product_Name"] == "Kits Dragon":
        return "KTD-32-3231"
    elif row["Product_Name"] == "Phoenix":
        return "PHO-52-1928"
    elif row["Product_Name"] == "Sirius":
        return "SIR-10-0293"
    elif row["Product_Name"] == "Aurora":
        return "AUR-67-4989"
    elif row["Product_Name"] == "Apollo":
        return "APO-09-8723"
    elif row["Product_Name"] == "Agyrap-S":
        return "AGY-90-2818"
    else:
        return "ANH-02-0987"

Opp = Opp.assign(Product_ID = Opp.apply(product_id, axis = 1))
Opp.head(10)
Leads['Product_ID'] = Opp['Product_ID']
Leads.head(10)
Leads.head(10)

def random_dates(start, end, n=1000):
    start_u = start.value//10**9
    end_u = end.value//10**9
    return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')

start = pd.to_datetime('2019-10-31')
end = pd.to_datetime('2020-11-01')
Leads['Lead_Created_on'] = random_dates(start, end)
Leads.head(10)
Leads.info()
Opp['Created_on'] = Leads['Lead_Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,20), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
#Opp['Created_on'] = Opp['Created_on'].dt.strftime("%d/%m/%Y")
Opp.info()
Opp['Days_Diff'] = Opp['Created_on'] - Leads['Lead_Created_on']
Opp.head(10)
# The opportunity close date should be 5-10 days after it is created
Opp['Close_Date'] = Opp['Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,10), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Opp.head(10)
Opp.rename(columns = {'Total Price(Euros)':'Sales_Price(EUR)',}, inplace = True)
Opp.head(10)
Opp['Actual_Revenue'] = randint(0,100)  # placeholder column; overwritten just below
Opp.head(10)
# Actual Revenue Column: actual revenue is the sum of quantity*price over the products
Proxima_C = 50
Kits_Dragon = 40
Phoenix = 48
Sirius = 60
Aurora = 65
Apollo = 70
Agyrap_S = 75
Anhee_C = 65
Opp['Actual_Revenue'] = Opp['Actual_Revenue'].map(lambda a:
(Proxima_C*randint(0,5))+(Kits_Dragon*randint(0,5))+(Phoenix*randint(0,5))+(Sirius*randint(0,5))+(Aurora*randint(0,2))+(Apollo*randint(0,2))+(Agyrap_S*randint(0,5))+(Anhee_C*randint(0,5))) # product_names = Opp['Product_Name'].unique() # print(product_names) # Opp1.set_index('Product_Name', inplace=True) # Opp1.head() # Opp1.loc[['Proxima-C']] # def actual_rev(): # if Opp1.loc[['Proxima-C']] : # Opp['Actual_Revenue'] = 50*randint(0,5) # elif Opp1.loc[['Kits Dragon']]: # Opp['Actual_Revenue']= 40*randint(0,5) # elif Opp1.loc[['Phoenix']]: # Opp['Actual_Revenue'] = 48*randint(0,5) # elif Opp1.loc[['Sirius']]: # Opp['Actual_Revenue'] = 60*randint(0,5) # elif Opp.loc[['Aurora']]: # Opp['Actual_Revenue'] = 65*randint(0,5) # elif Opp1.loc[['Apollo']]: # Opp['Actual_Revenue'] = 70*randint(0,5) # elif Opp1.loc[['Agyrap-S']]: # Opp['Actual_Revenue'] = 75*randint(0,5) # elif Opp1.loc[['Anhee-C']]: # Opp['Actual_Revenue'] = 65*randint(0,5) # Opp['Actual_Revenue'] = actual_rev() # Opp.head(10) # Opp['Actual_Revenue'] = actual_rev(product_names) Opp.head(20) # Estimated Revenue: 0.75 to 1.5 times Actual Revenue # Need to think about this value Opp['Estimated_Revenue'] = Opp['Actual_Revenue'].map(lambda a: int(a*uniform(0.75, 1.5))) Opp.head(10) Desc = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Description.csv') Desc.head(10) Opp['Description'] = Desc['Description'] #Rating: Status Won => Hot # Status Open => 50% Hot, 25% Warm, 25% Cold # Status Lost => Cold Opp['Rating'] = Opp['Description'].map(lambda a: 'Hot' if str(a)=='Won' else ('Warm' if str(a)=='Open' else 'Cold')) # Opp['Rating'] = Opp['Description'].map(lambda a: 'Cold' if str(a)=='Lost' else "") Opp.head(10) #Probability Column prob = [0.95,0.90,0.85,0.80,0.75,0.70,0.65,0.60,0.55,0.50,0.45,0.40,0.35,0.30,0.25,0.20,0.15,0.10,0.05,0] def ProbImpute(Status, Rating): if Status=='Won': return prob[randint(0,3)] elif Status=='Open' and Rating=='Hot': return prob[randint(4,9)] elif Status=='Open' and Rating=='Warm': return prob[randint(10,13)] elif Status=='Open' and Rating=='Cold': return prob[randint(14,17)] else: return prob[randint(18,19)] Opp['Probability'] = Opp.apply(lambda a: ProbImpute(a['Description'],a['Rating']),axis=1) Opp.head(10) Opp['Product_Name'] = Leads['Product_Name'] Opp.head(10) Opp.columns.values Opp = Opp[['Lead_ID', 'Opportunity_ID', 'Product_Name', 'Product_ID', 'Email_address', 'Created_on', 'Close_Date', 'Estimated_Revenue', 'Actual_Revenue', 'Description', 'Rating', 'Probability', 'Last_Modified_By']] Opp.head(10) Opp.to_csv("Opportunities-Final.csv") ###Output _____no_output_____ ###Markdown ACCOUNTS Table ###Code Acc = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Accounts(2019-20).csv', delimiter = ',') # Acc['City'].unique() Acc['Lead_ID'] = Opp['Lead_ID'] Acc['Opportunity_ID'] = Opp['Opportunity_ID'] Acc['Full_Name'] = Leads['Full_Name'] Acc['Email_address'] = Opp['Email_address'] Acc.head(10) Acc = Acc[['Lead_ID', 'Opportunity_ID', 'Account_ID', 'Full_Name', 'City', 'Phone', 'Email_address', 'Status']] Acc1 = pd.read_excel("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Final final datasets/Accounts_Final(2019-20).xlsx") Acc1.head(10) ###Output _____no_output_____ ###Markdown QUOTES Table ###Code Quo = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Quotes.csv') Quo['Lead_ID'] = Acc['Lead_ID'] Quo['Opportunity_ID'] = Acc['Opportunity_ID'] Quo['Account_ID'] = Acc['Account_ID'] Quo['Product_Name'] = Opp['Product_Name'] Quo['Product_ID'] = 
Opp['Product_ID'] Quo['Product_Category'] = Leads['Product_Category'] Quo['Actual_Revenue'] = Opp['Actual_Revenue'] Quo['Email_address'] = Acc['Email_address'] Quo['Status'] = Acc['Status'] Quo['Created_On'] = Opp['Close_Date'].map(lambda a: a + pd.DateOffset(days=randint(10,25), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60))) Quo.head(10) Quo = Quo[['Lead_ID', 'Opportunity_ID', 'Account_ID', 'Quote_ID', 'Product_Name', 'Product_ID', 'Product_Category', 'Actual_Revenue', 'Email_address', 'Status' ]] Quo.to_csv('Quotes_Final(2019-20).csv') ###Output _____no_output_____ ###Markdown ORDERS Table ###Code Od = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Orders.csv') Od['Lead_ID'] = Quo['Lead_ID'] Od['Opportunity_ID'] = Quo['Opportunity_ID'] Od['Account_ID'] = Quo['Account_ID'] Od['Quote_ID'] = Quo['Quote_ID'] Od['Product_Name'] = Quo['Product_Name'] Od['Product_Category'] = Quo['Product_Category'] Od['Actual_Revenue'] = Quo['Actual_Revenue'] Od['Email_address'] = Quo['Email_address'] Od.head(10) Od.to_csv('Orders_final(2019-20).csv') print(Od['Product_Name'].unique()) print(Od['Product_Category'].unique()) ###Output ['Proxima-C' 'Kits Dragon' 'Phoenix' 'Sirius' 'Aurora' 'Apollo' 'Agyrap-S' 'Anhee-C'] ['Tech' 'Kitchen' 'Christmas' 'Knitting' 'Painting' 'Mystery Kit' 'Science' 'Craft'] ###Markdown Invoice Table ###Code Inv = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Invoices.csv") Inv['Lead_ID'] = Od['Lead_ID'] Inv['Opportunity_ID'] = Od['Opportunity_ID'] Inv['Account_ID'] = Od['Account_ID'] Inv['Quote_ID'] = Od['Quote_ID'] Inv['Order_ID'] = Od['Order_ID'] Inv['Product_Name'] = Od['Product_Name'] Inv['Product_ID'] = Opp['Product_ID'] Inv['Actual_Revenue'] = Od['Actual_Revenue'] Inv['Email_address'] = Od['Email_address'] Inv['Phone_No'] = Acc1['Phone_No'] Inv.head(10) ###Output _____no_output_____
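###Markdown
As an aside, the row-wise `apply()` used earlier to derive `Product_ID` can be replaced by a dictionary lookup with `Series.map`, which is shorter and vectorized. A minimal sketch using the same codes as `product_id()`, with the `else` branch expressed as `fillna()`:
###Code
product_map = {
    "Proxima-C": "PRO-23-0493", "Kits Dragon": "KTD-32-3231",
    "Phoenix": "PHO-52-1928", "Sirius": "SIR-10-0293",
    "Aurora": "AUR-67-4989", "Apollo": "APO-09-8723",
    "Agyrap-S": "AGY-90-2818",
}
# any product not in the dict (e.g. "Anhee-C") falls through to the default code
Opp['Product_ID'] = Opp['Product_Name'].map(product_map).fillna("ANH-02-0987")
###Output
_____no_output_____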
Lectures/Lecture-09/FindingDisplacement.ipynb
###Markdown
Testing displacement estimates using correlation between two images Import some libs
###Code
import numpy as np
import skimage as ski
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the data
###Code
img = plt.imread('2_S_day5.jpg');
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Locating a small ROI around a screw
###Code
plt.imshow(img[1600:1700,1100:1200])
###Output
_____no_output_____
###Markdown
Making two ROI images
Extracting two images with a displacement of $d_{row}$=50 and $d_{col}$=10 and showing the result.
###Code
a=img[1600:1700,1100:1200]
b=img[1650:1750,1110:1210]
plt.subplot(1,2,1), plt.imshow(a)
plt.subplot(1,2,2), plt.imshow(b)
###Output
_____no_output_____
###Markdown
Correlation calculation
- Compute the 2D FFT of the two images (they have to be the same size)
- Compute $\mathcal{F}\{corr\}=\mathcal{F}\{a\} \cdot \mathcal{F}\{b\}^*$
- Compute corr=$|\mathcal{F}^{-1}\{\mathcal{F}\{corr\}\}|$
###Code
fa=np.fft.fft2(a);
fb=np.fft.fft2(b);
f=fa*np.conjugate(fb);
co=np.abs(np.fft.ifft2(f));
plt.imshow(np.abs(co))
plt.title('Correlation image between a and b');
###Output
_____no_output_____
###Markdown
Find the displacement
Locate the max location in $corr$.
###Code
pos = np.where(co == np.amax(co))
pos
###Output
_____no_output_____
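###Markdown
The peak position encodes the shift modulo the image size, so indices past the halfway point correspond to negative displacements. A minimal sketch of turning `pos` into a signed $(d_{row}, d_{col})$ estimate (the sign convention depends on which image is conjugated):
###Code
rows, cols = co.shape
d_row, d_col = int(pos[0][0]), int(pos[1][0])
# peaks beyond N/2 wrap around and are really negative shifts
if d_row > rows // 2:
    d_row -= rows
if d_col > cols // 2:
    d_col -= cols
d_row, d_col  # magnitudes should come out near (50, 10) for the ROIs above
###Output
_____no_output_____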
_site/lectures/Week 03 - Functions, Loops, Comprehensions and Generators/05 - Python Generators.ipynb
###Markdown
Python Generators
[Source](https://realpython.com/introduction-to-python-generators/)
todo : add content
###Code
def infinite_sequence():
    num = 0
    while True:
        num += 1
        return num  # return ends the function on the first pass, so this is not infinite

x = infinite_sequence()
print(x)

def infinite_sequence():
    num = 0
    while True:
        yield num  # yield suspends the function; it resumes here on the next next() call
        num += 1

seq = infinite_sequence()
print(next(seq))

for i in range(0, 10):
    print(next(seq))

for index, value in enumerate(seq):
    print(value)
    if index > 10:
        break

print(next(seq))

import random

def my_sequence():
    num = 0
    while True:
        yield num
        num += random.randint(0, 10)
        if num > 20:
            break

seq = my_sequence()
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))

seq2 = my_sequence()
print(next(seq2))

# two sorted "databases" of patient ids
image_db = [1, 2, 5, 7, 10]
meds_db = [3, 5, 7, 9]

def get_next_patient():
    # lazily yield ids present in both sorted sources, always advancing the smaller side
    images = iter(image_db)
    meds = iter(meds_db)
    try:
        i = next(images)
        m = next(meds)
        while True:
            if i == m:
                yield i
                i = next(images)
                m = next(meds)
            elif i < m:
                i = next(images)
            else:
                m = next(meds)
    except StopIteration:
        return  # one source is exhausted, so no further matches are possible

for patient in get_next_patient():
    print(patient)  # do something with each patient present in both databases
###Output
_____no_output_____
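###Markdown
For small in-memory lists the same result could be obtained eagerly with a set intersection; the generator version matters when the two sources are lazy cursors (e.g. database queries) that you don't want to materialize:
###Code
matched = sorted(set(image_db) & set(meds_db))
print(matched)  # [5, 7]
###Output
_____no_output_____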
02A_TensorFlow-Slim.ipynb
###Markdown
TensorFlow-Slim [TensorFlow-Slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim) is a high-level API for building TensorFlow models. TF-Slim makes defining models in TensorFlow easier, cutting down on the number of lines required to define models and reducing overall clutter. In particular, TF-Slim shines in image domain problems, and weights pre-trained on the [ImageNet dataset](http://www.image-net.org/) for many famous CNN architectures are provided for [download](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models).

*Note: Unlike previous notebooks, not every cell here is necessarily meant to run. Some are just for illustration.*

VGG-16 To show these benefits, this tutorial will focus on [VGG-16](https://arxiv.org/abs/1409.1556). This style of architecture came in 2nd during the 2014 ImageNet Large Scale Visual Recognition Challenge and is famous for its simplicity and depth. The model looks like this:
![vgg16](Figures/vgg16.png)
The architecture is pretty straight-forward: simply stack multiple 3x3 convolutional filters one after another, interleave with 2x2 maxpools, double the number of convolutional filters after each maxpool, flatten, and finish with fully connected layers. A couple of ideas behind this model:
- Instead of using larger filters, VGG notes that the receptive field of two stacked layers of 3x3 filters is 5x5, and with 3 layers, 7x7 (each additional stride-1 3x3 layer grows the receptive field by 2: 3 → 5 → 7). Using 3x3's allows VGG to insert additional non-linearities and requires fewer weight parameters to learn.
- Doubling the width of the network every time the features are spatially downsampled (maxpooled) gives the model more representational capacity while achieving spatial compression.

TensorFlow Core In code, setting up the computation graph for prediction with just the TensorFlow Core API is kind of a lot:
###Code
import tensorflow as tf

# Set up the data loading:
images, labels = ...
# Define the model with tf.name_scope('conv1_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv1_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') with tf.name_scope('conv2_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv2_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2') with tf.name_scope('conv3_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) pool3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3') with tf.name_scope('conv4_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') 
conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)

with tf.name_scope('conv4_3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)

pool4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4')

with tf.name_scope('conv5_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)

with tf.name_scope('conv5_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)

with tf.name_scope('conv5_3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)

pool5 = tf.nn.max_pool(conv5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5')

with tf.name_scope('fc_6') as scope:
    flat = tf.reshape(pool5, [-1, 7*7*512])
    weights = tf.Variable(tf.truncated_normal([7*7*512, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
    mat = tf.matmul(flat, weights)
    biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(mat, biases)
    fc6 = tf.nn.relu(bias, name=scope)
    fc6_drop = tf.nn.dropout(fc6, keep_prob=0.5, name='dropout')

with tf.name_scope('fc_7') as scope:
    weights = tf.Variable(tf.truncated_normal([4096, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
    mat = tf.matmul(fc6_drop, weights)  # fc7 consumes the dropout output of fc6
    biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(mat, biases)
    fc7 = tf.nn.relu(bias, name=scope)
    fc7_drop = tf.nn.dropout(fc7, keep_prob=0.5, name='dropout')

with tf.name_scope('fc_8') as scope:
    weights = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32, stddev=1e-1), name='weights')
    mat = tf.matmul(fc7_drop, weights)  # likewise, fc8 consumes the dropout output of fc7
    biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(mat, biases)
    predictions = bias
###Output
_____no_output_____
###Markdown
Understanding every line of this model isn't important. The main point to notice is how much space this takes up.
Several of the above lines (conv2d, bias_add, relu, maxpool) can obviously be combined to cut down on the size a bit, and you could also try to compress the code with some clever `for` looping, but all at the cost of sacrificing readability. With this much code, there is high potential for bugs or typos (to be honest, there are probably a few up there^), and modifying or refactoring the code becomes a huge pain.

By the way, although VGG-16's paper was titled "Very Deep Convolutional Networks for Large-Scale Image Recognition", it isn't even considered a particularly deep network by today's standards. [Residual Networks](https://arxiv.org/abs/1512.03385) (2015) started beating state-of-the-art results with 50, 101, and 152 layers in their first incarnation, before really going off the deep end and getting up to 1001 layers and beyond. I'll spare you from me typing out the uncompressed TensorFlow Core code for that.

TF-Slim Enter TF-Slim. The same VGG-16 model can be expressed as follows:
###Code
import tensorflow as tf
slim = tf.contrib.slim

# Set up the data loading:
images, labels = ...

# Define the model:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')
    net = slim.max_pool2d(net, [2, 2], scope='pool1')
    net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
    net = slim.max_pool2d(net, [2, 2], scope='pool4')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
    net = slim.max_pool2d(net, [2, 2], scope='pool5')
    net = slim.fully_connected(net, 4096, scope='fc6')
    net = slim.dropout(net, 0.5, scope='dropout6')
    net = slim.fully_connected(net, 4096, scope='fc7')
    net = slim.dropout(net, 0.5, scope='dropout7')
    net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
predictions = net
###Output
_____no_output_____
###Markdown
Much cleaner. For the TF-Slim version, it's much more obvious what the network is doing, writing it is faster, and typos and bugs are much less likely.

Things to notice:
- Weight and bias variables for every layer are automatically generated and tracked. Also, the "in_channel" parameter for determining weight dimension is automatically inferred from the input. This allows you to focus on what layers you want to add to the model, without worrying as much about boilerplate code.
- The repeat() function allows you to add the same layer multiple times. In terms of variable scoping, repeat() will add "_" to the scope to distinguish the layers, so we'll still have layers of scope "`conv1_1`, `conv1_2`, `conv2_1`, etc...".
- The non-linear activation function (here: ReLU) is wrapped directly into the layer. In more advanced architectures with batch normalization, that's included as well.
- With slim.arg_scope(), we're able to specify defaults for common parameter arguments, such as the type of activation function or weights_initializer.
Of course, these defaults can still be overridden in any individual layer, as demonstrated in the final fully connected layer (fc8).

If you're reusing one of the famous architectures (like VGG-16), TF-Slim already has them defined, so it becomes even easier:
###Code
import tensorflow as tf
slim = tf.contrib.slim
vgg = tf.contrib.slim.nets.vgg

# Set up the data loading:
images, labels = ...

# Define the model (vgg_16 returns the logits and a dict of end points):
predictions, _ = vgg.vgg_16(images)
###Output
_____no_output_____
###Markdown
Pre-Trained Weights TF-Slim provides weights pre-trained on the ImageNet dataset available for [download](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models). First a quick tutorial on saving and restoring models:

Saving and Restoring One of the nice features of modern machine learning frameworks is the ability to save model parameters in a clean way. While this may not have been a big deal for the MNIST logistic regression model because training only took a few seconds, it's easy to see why you wouldn't want to have to re-train a model from scratch every time you wanted to do inference or make a small change if training takes days or weeks.

TensorFlow provides this functionality with its [Saver()](https://www.tensorflow.org/programmers_guide/variables#saving_and_restoring) class. While I just said that saving the weights for the MNIST logistic regression model isn't necessary because of how easy it is to train, let's do it anyway for illustrative purposes:
###Code
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data

# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Create the model
x = tf.placeholder(tf.float32, [None, 784], name='x')
W = tf.Variable(tf.zeros([784, 10]), name='W')
b = tf.Variable(tf.zeros([10]), name='b')
y = tf.nn.bias_add(tf.matmul(x, W), b, name='y')

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10], name='y_')
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Variable Initializer
init_op = tf.global_variables_initializer()

# Create a Saver object for saving weights
saver = tf.train.Saver()

# Create a Session object, initialize all variables
sess = tf.Session()
sess.run(init_op)

# Train
for _ in trange(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Save model
save_path = saver.save(sess, "./log_reg_model.ckpt")
print("Model saved in file: %s" % save_path)

# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))

sess.close()
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
###Markdown
Note, the differences from what we worked with yesterday:
- Certain ops and variables of the graph now have 'name' properties attached (x, W, b, y, y_). There are many reasons to do this, but here, it will help us identify which variables are which when restoring.
- We create a Saver() object after defining the graph, and once training finishes we call saver.save() to write the variables of the model to a checkpoint file.
This will create a series of files containing our saved model. Otherwise, the code is more or less the same. To restore the model:
###Code
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data

# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Create a Session object, initialize all variables
sess = tf.Session()

# Restore weights
saver = tf.train.import_meta_graph('./log_reg_model.ckpt.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
print("Model restored.")

graph = tf.get_default_graph()
x = graph.get_tensor_by_name("x:0")
y = graph.get_tensor_by_name("y:0")
y_ = graph.get_tensor_by_name("y_:0")

# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))

sess.close()
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Restoring parameters from ./log_reg_model.ckpt
Model restored.
Test accuracy: 0.916700005531311
###Markdown
Importantly, notice that we didn't have to retrain the model. Instead, the graph and all variable values were loaded directly from our checkpoint files. In this example, this probably takes just as long, but for more complex models, the utility of saving/restoring is immense.

TF-Slim Model Zoo One of the biggest and most surprising unintended benefits of the ImageNet competition was deep networks' transfer learning properties: CNNs trained on ImageNet classification could be re-used as general purpose feature extractors for other tasks, such as object detection. Training on ImageNet is very intensive and expensive in both time and computation, and requires a good deal of set-up. As such, the availability of weights already pre-trained on ImageNet has significantly accelerated and democratized deep learning research.

Pre-trained models of several famous architectures are listed in the TF-Slim portion of the [TensorFlow repository](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models). Also included are the papers that proposed them and their respective performances on ImageNet. Side note: remember though that accuracy is not the only consideration when picking a network; memory and speed are important to keep in mind as well.

Each entry has a link that allows you to download the checkpoint file of the pre-trained network. Alternatively, you can download the weights as part of your program. A tutorial can be found [here](https://github.com/tensorflow/models/blob/master/slim/slim_walkthrough.ipynb), but the general idea:
###Code
from datasets import dataset_utils
import tensorflow as tf

url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
checkpoints_dir = './checkpoints'

if not tf.gfile.Exists(checkpoints_dir):
    tf.gfile.MakeDirs(checkpoints_dir)

dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)

import os
import tensorflow as tf
from nets import vgg

slim = tf.contrib.slim

# Load images
images = ...

# Pre-process
processed_images = ...

# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(vgg.vgg_arg_scope()):
    logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False)
probabilities = tf.nn.softmax(logits)

# Load checkpoint values
init_fn = slim.assign_from_checkpoint_fn(
    os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
    slim.get_model_variables('vgg_16'))
###Output
_____no_output_____
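###Markdown
To actually run inference, `init_fn` is applied to a live session before evaluating the graph. A minimal sketch, assuming `processed_images` is a real image tensor rather than the `...` placeholder above:
###Code
with tf.Session() as sess:
    init_fn(sess)  # loads the downloaded VGG-16 weights into the model variables
    probs = sess.run(probabilities)
    print(probs.shape)  # (batch_size, 1000) class probabilities
###Output
_____no_output_____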
# Define the model with tf.name_scope('conv1_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv1_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') with tf.name_scope('conv2_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv2_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2') with tf.name_scope('conv3_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) pool3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3') with tf.name_scope('conv4_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') 
conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) pool4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4') with tf.name_scope('conv5_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv5_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv5_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) pool5 = tf.nn.max_pool(conv5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5') with tf.name_scope('fc_6') as scope: flat = tf.reshape(pool5, [-1, 7*7*512]) weights = tf.Variable(tf.truncated_normal([7*7*512, 4096], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(flat, weights) biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) fc6 = tf.nn.relu(bias, name=scope) fc6_drop = tf.nn.dropout(fc6, keep_prob=0.5, name='dropout') with tf.name_scope('fc_7') as scope: weights = tf.Variable(tf.truncated_normal([4096, 4096], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(fc6, weights) biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) fc7 = tf.nn.relu(bias, name=scope) fc7_drop = tf.nn.dropout(fc7, keep_prob=0.5, name='dropout') with tf.name_scope('fc_8') as scope: weights = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(fc7, weights) biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) predictions = bias ###Output _____no_output_____ ###Markdown Understanding every line of this model isn't important. The main point to notice is how much space this takes up. 
Several of the above lines (conv2d, bias_add, relu, maxpool) can obviously be combined to cut down on the size a bit, and you could also try to compress the code with some clever `for` looping, but all at the cost of sacrificing readability. With this much code, there is high potential for bugs or typos (to be honest, there are probably a few up there^), and modifying or refactoring the code becomes a huge pain.By the way, although VGG-16's paper was titled "Very Deep Convolutional Networks for Large-Scale Image Recognition", it isn't even considered a particularly deep network by today's standards. [Residual Networks](https://arxiv.org/abs/1512.03385) (2015) started beating state-of-the-art results with 50, 101, and 152 layers in their first incarnation, before really going off the deep end and getting up to 1001 layers and beyond. I'll spare you from me typing out the uncompressed TensorFlow Core code for that. TF-Slim Enter TF-Slim. The same VGG-16 model can be expressed as follows: ###Code import tensorflow as tf slim = tf.contrib.slim # Set up the data loading: images, labels = ... # Define the model: with slim.arg_scope([slim.conv2d, slim.fully_connected], activation_fn=tf.nn.relu, weights_initializer=tf.truncated_normal_initializer(0.0, 0.01), weights_regularizer=slim.l2_regularizer(0.0005)): net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1') net = slim.max_pool2d(net, [2, 2], scope='pool1') net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2') net = slim.max_pool2d(net, [2, 2], scope='pool2') net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3') net = slim.max_pool2d(net, [2, 2], scope='pool3') net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4') net = slim.max_pool2d(net, [2, 2], scope='pool4') net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5') net = slim.max_pool2d(net, [2, 2], scope='pool5') net = slim.fully_connected(net, 4096, scope='fc6') net = slim.dropout(net, 0.5, scope='dropout6') net = slim.fully_connected(net, 4096, scope='fc7') net = slim.dropout(net, 0.5, scope='dropout7') net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8') predictions = net ###Output _____no_output_____ ###Markdown Much cleaner. For the TF-Slim version, it's much more obvious what the network is doing, writing it is faster, and typos and bugs are much less likely.Things to notice:- Weight and bias variables for every layer are automatically generated and tracked. Also, the "in_channel" parameter for determining weight dimension is automatically inferred from the input. This allows you to focus on what layers you want to add to the model, without worrying as much about boilerplate code. - The repeat() function allows you to add the same layer multiple times. In terms of variable scoping, repeat() will add "_" to the scope to distinguish the layers, so we'll still have layers of scope "`conv1_1`, `conv1_2`, `conv2_1`, etc...".- The non-linear activation function (here: ReLU) is wrapped directly into the layer. In more advanced architectures with batch normalization, that's included as well.- With slim.argscope(), we're able to specify defaults for common parameter arguments, such as the type of activation function or weights_initializer. 
Of course, these defaults can still be overridden in any individual layer, as demonstrated in the finally fully connected layer (fc8).If you're reusing one of the famous architectures (like VGG-16), TF-Slim already has them defined, so it becomes even easier: ###Code import tensorflow as tf slim = tf.contrib.slim vgg = tf.contrib.slim.nets.vgg # Set up the data loading: images, labels = ... # Define the model: predictions = vgg.vgg16(images) ###Output _____no_output_____ ###Markdown Pre-Trained Weights TF-Slim provides weights pre-trained on the ImageNet dataset available for [download](https://github.com/tensorflow/models/tree/master/slimpre-trained-models). First a quick tutorial on saving and restoring models: Saving and Restoring One of the nice features of modern machine learning frameworks is the ability to save model parameters in a clean way. While this may not have been a big deal for the MNIST logistic regression model because training only took a few seconds, it's easy to see why you wouldn't want to have to re-train a model from scratch every time you wanted to do inference or make a small change if training takes days or weeks.TensorFlow provides this functionality with its [Saver()](https://www.tensorflow.org/programmers_guide/variablessaving_and_restoring) class. While I just said that saving the weights for the MNIST logistic regression model isn't necessary because of how it is easy to train, let's do it anyway for illustrative purposes: ###Code import tensorflow as tf from tqdm import trange from tensorflow.examples.tutorials.mnist import input_data # Import data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Create the model x = tf.placeholder(tf.float32, [None, 784], name='x') W = tf.Variable(tf.zeros([784, 10]), name='W') b = tf.Variable(tf.zeros([10]), name='b') y = tf.nn.bias_add(tf.matmul(x, W), b, name='y') # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10], name='y_') cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # Variable Initializer init_op = tf.global_variables_initializer() # Create a Saver object for saving weights saver = tf.train.Saver() # Create a Session object, initialize all variables sess = tf.Session() sess.run(init_op) # Train for _ in trange(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) # Save model save_path = saver.save(sess, "./log_reg_model.ckpt") print("Model saved in file: %s" % save_path) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))) sess.close() ###Output Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz ###Markdown Note, the differences from what we worked with yesterday:- In lines 9-12, 15, there are now 'names' properties attached to certain ops and variables of the graph. There are many reasons to do this, but here, it will help us identify which variables are which when restoring. - In line 23, we create a Saver() object, and in line 35, we save the variables of the model to a checkpoint file. 
This will create a series of files containing our saved model.Otherwise, the code is more or less the same.To restore the model: ###Code import tensorflow as tf from tqdm import trange from tensorflow.examples.tutorials.mnist import input_data # Import data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Create a Session object, initialize all variables sess = tf.Session() # Restore weights saver = tf.train.import_meta_graph('./log_reg_model.ckpt.meta') saver.restore(sess, tf.train.latest_checkpoint('./')) print("Model restored.") graph = tf.get_default_graph() x = graph.get_tensor_by_name("x:0") y = graph.get_tensor_by_name("y:0") y_ = graph.get_tensor_by_name("y_:0") # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))) sess.close() ###Output Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz INFO:tensorflow:Restoring parameters from ./log_reg_model.ckpt Model restored. Test accuracy: 0.916700005531311 ###Markdown Importantly, notice that we didn't have to retrain the model. Instead, the graph and all variable values were loaded directly from our checkpoint files. In this example, this probably takes just as long, but for more complex models, the utility of saving/restoring is immense. TF-Slim Model Zoo One of the biggest and most surprising unintended benefits of the ImageNet competition was deep networks' transfer learning properties: CNNs trained on ImageNet classification could be re-used as general purpose feature extractors for other tasks, such as object detection. Training on ImageNet is very intensive and expensive in both time and computation, and requires a good deal of set-up. As such, the availability of weights already pre-trained on ImageNet has significantly accelerated and democratized deep learning research.Pre-trained models of several famous architectures are listed in the TF Slim portion of the [TensorFlow repository](https://github.com/tensorflow/models/tree/master/slimpre-trained-models). Also included are the papers that proposed them and their respective performances on ImageNet. Side note: remember though that accuracy is not the only consideration when picking a network; memory and speed are important to keep in mind as well.Each entry has a link that allows you to download the checkpoint file of the pre-trained network. Alternatively, you can download the weights as part of your program. A tutorial can be found [here](https://github.com/tensorflow/models/blob/master/slim/slim_walkthrough.ipynb), but the general idea: ###Code from datasets import dataset_utils import tensorflow as tf url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz" checkpoints_dir = './checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) import os import tensorflow as tf from nets import vgg slim = tf.contrib.slim # Load images images = ... # Pre-process processed_images = ... # Create the model, use the default arg scope to configure the batch norm parameters. 
with slim.arg_scope(vgg.vgg_arg_scope()): logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False) probabilities = tf.nn.softmax(logits) # Load checkpoint values init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'vgg_16.ckpt'), slim.get_model_variables('vgg_16')) ###Output _____no_output_____ ###Markdown TensorFlow-Slim [TensorFlow-Slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim) is a high-level API for building TensorFlow models. TF-Slim makes defining models in TensorFlow easier, cutting down on the number of lines required to define models and reducing overall clutter. In particular, TF-Slim shines in image domain problems, and weights pre-trained on the [ImageNet dataset](http://www.image-net.org/) for many famous CNN architectures are provided for [download](https://github.com/tensorflow/models/tree/master/research/slimpre-trained-models).*Note: Unlike previous notebooks, not every cell here is necessarily meant to run. Some are just for illustration.* VGG-16 To show these benefits, this tutorial will focus on [VGG-16](https://arxiv.org/abs/1409.1556). This style of architecture came in 2nd during the 2014 ImageNet Large Scale Visual Recognition Challenge and is famous for its simplicity and depth. The model looks like this:![vgg16](Figures/vgg16.png)The architecture is pretty straight-forward: simply stack multiple 3x3 convolutional filters one after another, interleave with 2x2 maxpools, double the number of convolutional filters after each maxpool, flatten, and finish with fully connected layers. A couple ideas behind this model:- Instead of using larger filters, VGG notes that the receptive field of two stacked layers of 3x3 filters is 5x5, and with 3 layers, 7x7. Using 3x3's allows VGG to insert additional non-linearities and requires fewer weight parameters to learn.- Doubling the width of the network every time the features are spatially downsampled (maxpooled) gives the model more representational capacity while achieving spatial compression. TensorFlow Core In code, setting up the computation graph for prediction with just TensorFlow Core API is kind of a lot: ###Code import tensorflow as tf # Set up the data loading: images, labels = ... 
# Define the model with tf.name_scope('conv1_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv1_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv1 = tf.nn.relu(bias, name=scope) pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1') with tf.name_scope('conv2_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv2_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv2 = tf.nn.relu(bias, name=scope) pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2') with tf.name_scope('conv3_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv3_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv3 = tf.nn.relu(bias, name=scope) pool3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3') with tf.name_scope('conv4_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool3, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') 
conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv4_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv4 = tf.nn.relu(bias, name=scope) pool4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4') with tf.name_scope('conv5_1') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(pool4, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv5_2') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) with tf.name_scope('conv5_3') as scope: kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights') conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME') biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(conv, biases) conv5 = tf.nn.relu(bias, name=scope) pool5 = tf.nn.max_pool(conv5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5') with tf.name_scope('fc_6') as scope: flat = tf.reshape(pool5, [-1, 7*7*512]) weights = tf.Variable(tf.truncated_normal([7*7*512, 4096], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(flat, weights) biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) fc6 = tf.nn.relu(bias, name=scope) fc6_drop = tf.nn.dropout(fc6, keep_prob=0.5, name='dropout') with tf.name_scope('fc_7') as scope: weights = tf.Variable(tf.truncated_normal([4096, 4096], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(fc6_drop, weights) biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) fc7 = tf.nn.relu(bias, name=scope) fc7_drop = tf.nn.dropout(fc7, keep_prob=0.5, name='dropout') with tf.name_scope('fc_8') as scope: weights = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32, stddev=1e-1), name='weights') mat = tf.matmul(fc7_drop, weights) biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases') bias = tf.nn.bias_add(mat, biases) predictions = bias ###Output _____no_output_____ ###Markdown Understanding every line of this model isn't important. The main point to notice is how much space this takes up.
Several of the above lines (conv2d, bias_add, relu, maxpool) can obviously be combined to cut down on the size a bit, and you could also try to compress the code with some clever `for` looping, but all at the cost of sacrificing readability. With this much code, there is high potential for bugs or typos (to be honest, there are probably a few up there^), and modifying or refactoring the code becomes a huge pain.By the way, although VGG-16's paper was titled "Very Deep Convolutional Networks for Large-Scale Image Recognition", it isn't even considered a particularly deep network by today's standards. [Residual Networks](https://arxiv.org/abs/1512.03385) (2015) started beating state-of-the-art results with 50, 101, and 152 layers in their first incarnation, before really going off the deep end and getting up to 1001 layers and beyond. I'll spare you the uncompressed TensorFlow Core code for that. TF-Slim Enter TF-Slim. The same VGG-16 model can be expressed as follows: ###Code import tensorflow as tf slim = tf.contrib.slim # Set up the data loading: images, labels = ... # Define the model: with slim.arg_scope([slim.conv2d, slim.fully_connected], activation_fn=tf.nn.relu, weights_initializer=tf.truncated_normal_initializer(0.0, 0.01), weights_regularizer=slim.l2_regularizer(0.0005)): net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1') net = slim.max_pool2d(net, [2, 2], scope='pool1') net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2') net = slim.max_pool2d(net, [2, 2], scope='pool2') net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3') net = slim.max_pool2d(net, [2, 2], scope='pool3') net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4') net = slim.max_pool2d(net, [2, 2], scope='pool4') net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5') net = slim.max_pool2d(net, [2, 2], scope='pool5') net = slim.fully_connected(net, 4096, scope='fc6') net = slim.dropout(net, 0.5, scope='dropout6') net = slim.fully_connected(net, 4096, scope='fc7') net = slim.dropout(net, 0.5, scope='dropout7') net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8') predictions = net ###Output _____no_output_____ ###Markdown Much cleaner. For the TF-Slim version, it's much more obvious what the network is doing, writing it is faster, and typos and bugs are much less likely.Things to notice:- Weight and bias variables for every layer are automatically generated and tracked. Also, the "in_channel" parameter for determining weight dimension is automatically inferred from the input. This allows you to focus on what layers you want to add to the model, without worrying as much about boilerplate code. - The repeat() function allows you to add the same layer multiple times. In terms of variable scoping, repeat() will add "_" to the scope to distinguish the layers, so we'll still have layers of scope "`conv1_1`, `conv1_2`, `conv2_1`, etc...".- The non-linear activation function (here: ReLU) is wrapped directly into the layer. In more advanced architectures with batch normalization, that's included as well.- With slim.arg_scope(), we're able to specify defaults for common parameter arguments, such as the type of activation function or weights_initializer.
Of course, these defaults can still be overridden in any individual layer, as demonstrated in the final fully connected layer (fc8).If you're reusing one of the famous architectures (like VGG-16), TF-Slim already has them defined, so it becomes even easier: ###Code import tensorflow as tf slim = tf.contrib.slim vgg = tf.contrib.slim.nets.vgg # Set up the data loading: images, labels = ... # Define the model: predictions, _ = vgg.vgg_16(images) ###Output _____no_output_____ ###Markdown Pre-Trained Weights TF-Slim provides weights pre-trained on the ImageNet dataset available for [download](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). First, a quick tutorial on saving and restoring models: Saving and Restoring One of the nice features of modern machine learning frameworks is the ability to save model parameters in a clean way. While this may not have been a big deal for the MNIST logistic regression model because training only took a few seconds, it's easy to see why you wouldn't want to have to re-train a model from scratch every time you wanted to do inference or make a small change if training takes days or weeks.TensorFlow provides this functionality with its [Saver()](https://www.tensorflow.org/programmers_guide/variables#saving_and_restoring) class. While I just said that saving the weights for the MNIST logistic regression model isn't necessary because of how easy it is to train, let's do it anyway for illustrative purposes: ###Code import tensorflow as tf from tqdm import trange from tensorflow.examples.tutorials.mnist import input_data # Import data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Create the model x = tf.placeholder(tf.float32, [None, 784], name='x') W = tf.Variable(tf.zeros([784, 10]), name='W') b = tf.Variable(tf.zeros([10]), name='b') y = tf.nn.bias_add(tf.matmul(x, W), b, name='y') # Define loss and optimizer y_ = tf.placeholder(tf.float32, [None, 10], name='y_') cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)) train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) # Variable Initializer init_op = tf.global_variables_initializer() # Create a Saver object for saving weights saver = tf.train.Saver() # Create a Session object, initialize all variables sess = tf.Session() sess.run(init_op) # Train for _ in trange(1000): batch_xs, batch_ys = mnist.train.next_batch(100) sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) # Save model save_path = saver.save(sess, "./log_reg_model.ckpt") print("Model saved in file: %s" % save_path) # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))) sess.close() ###Output Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz ###Markdown Note the differences from what we worked with yesterday:- In lines 9-12, 15, there are now 'name' properties attached to certain ops and variables of the graph. There are many reasons to do this, but here, it will help us identify which variables are which when restoring. - In line 23, we create a Saver() object, and in line 35, we save the variables of the model to a checkpoint file.
This will create a series of files containing our saved model.Otherwise, the code is more or less the same.To restore the model: ###Code import tensorflow as tf from tqdm import trange from tensorflow.examples.tutorials.mnist import input_data # Import data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # Create a Session object, initialize all variables sess = tf.Session() # Restore weights saver = tf.train.import_meta_graph('./log_reg_model.ckpt.meta') saver.restore(sess, tf.train.latest_checkpoint('./')) print("Model restored.") graph = tf.get_default_graph() x = graph.get_tensor_by_name("x:0") y = graph.get_tensor_by_name("y:0") y_ = graph.get_tensor_by_name("y_:0") # Test trained model correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))) sess.close() ###Output Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz INFO:tensorflow:Restoring parameters from ./log_reg_model.ckpt Model restored. Test accuracy: 0.916700005531311 ###Markdown Importantly, notice that we didn't have to retrain the model. Instead, the graph and all variable values were loaded directly from our checkpoint files. In this example, this probably takes just as long, but for more complex models, the utility of saving/restoring is immense. TF-Slim Model Zoo One of the biggest and most surprising unintended benefits of the ImageNet competition was deep networks' transfer learning properties: CNNs trained on ImageNet classification could be re-used as general-purpose feature extractors for other tasks, such as object detection. Training on ImageNet is very intensive and expensive in both time and computation, and requires a good deal of set-up. As such, the availability of weights already pre-trained on ImageNet has significantly accelerated and democratized deep learning research.Pre-trained models of several famous architectures are listed in the TF-Slim portion of the [TensorFlow repository](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). Also included are the papers that proposed them and their respective performances on ImageNet. Side note: remember though that accuracy is not the only consideration when picking a network; memory and speed are important to keep in mind as well.Each entry has a link that allows you to download the checkpoint file of the pre-trained network. Alternatively, you can download the weights as part of your program. A tutorial can be found [here](https://github.com/tensorflow/models/blob/master/research/slim/slim_walkthrough.ipynb), but the general idea is: ###Code from datasets import dataset_utils import tensorflow as tf url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz" checkpoints_dir = './checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) import os import tensorflow as tf from nets import vgg slim = tf.contrib.slim # Load images images = ... # Pre-process processed_images = ... # Create the model, using the default arg scope to configure the layer defaults.
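# Added note: is_training=False below disables VGG's dropout layers for
# inference, and vgg_arg_scope() supplies shared layer defaults (ReLU
# activations and L2 weight regularization), like the arg_scope example earlier.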
with slim.arg_scope(vgg.vgg_arg_scope()): logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False) probabilities = tf.nn.softmax(logits) # Load checkpoint values init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'vgg_16.ckpt'), slim.get_model_variables('vgg_16')) ###Output _____no_output_____
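###Markdown The `init_fn` returned by `slim.assign_from_checkpoint_fn` is itself a function that takes a session and copies the checkpoint weights into the model variables. A minimal usage sketch (assuming `images` and `processed_images` above are real tensors rather than the `...` placeholders): ###Code # Restore the pre-trained VGG-16 weights into a session, then evaluate the
# softmax probabilities for the batch of pre-processed images.
with tf.Session() as sess:
    init_fn(sess)
    probs = sess.run(probabilities)
    # probs has shape [batch_size, 1000]: one ImageNet class distribution per image
###Output _____no_output_____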
Data Science Academy/Cap06/Notebooks/DSA-Python-Cap06-02-Insert no SQLite.ipynb
###Markdown Data Science Academy - Python Fundamentals - Chapter 6 Download: http://github.com/dsacademybr ###Code # Python language version from platform import python_version print('Python language version used in this Jupyter Notebook:', python_version()) ###Output Python language version used in this Jupyter Notebook: 3.7.6 ###Markdown Creating the Database and Inserting Data ###Code # Remove the SQLite database file (if it exists) import os os.remove("dsa.db") if os.path.exists("dsa.db") else None import sqlite3 # Create a connection conn = sqlite3.connect('dsa.db') # Create a cursor c = conn.cursor() # Function to create a table def create_table(): c.execute('CREATE TABLE IF NOT EXISTS produtos(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, date TEXT, prod_name TEXT, valor REAL)') # Function to insert a row (note that it also closes the cursor and the connection) def data_insert(): c.execute("INSERT INTO produtos VALUES(10, '2020-05-02 14:32:11', 'Teclado', 90)") conn.commit() c.close() conn.close() # Create the table create_table() # Insert data data_insert() ###Output _____no_output_____
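###Markdown A quick sanity check (an added sketch): `data_insert()` above closed both the cursor and the connection, so we open fresh ones and read the row back: ###Code import sqlite3

# Open a new connection and cursor, since data_insert() closed the old ones
conn = sqlite3.connect('dsa.db')
c = conn.cursor()

# Fetch and print every row in the produtos table
for row in c.execute('SELECT * FROM produtos'):
    print(row)

conn.close()
###Output _____no_output_____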