path (string, 7 to 265 chars) | concatenated_notebook (string, 46 to 17M chars)
---|---
notebooks/community/managed_notebooks/subscriber_churn_prediction/telecom-subscriber-churn-prediction.ipynb | ###Markdown
Telecom subscriber churn prediction on Vertex AI Table of contents* [Overview](section-1)* [Dataset](section-2)* [Objective](section-3)* [Costs](section-4)* [Perform EDA](section-5)* [Train a logistic regression model using scikit-learn](section-6)* [Evaluate the trained model](section-7)* [Save the model to a Cloud Storage path](section-8)* [Create a model with Explainable AI support in Vertex AI](section-9)* [Get explanations from the model](section-10)* [Clean up](section-11) OverviewThis example demonstrates building a subscriber churn prediction model on a [telecom customer churn dataset](https://www.kaggle.com/c/customer-churn-prediction-2020/overview). The generated churn model is further deployed to Vertex AI Endpoints and explanations are generated using the Explainable AI feature of Vertex AI. *Note: This notebook file was designed to run in a [Vertex AI Workbench managed notebooks](https://cloud.google.com/vertex-ai/docs/workbench/managed/create-instance) instance using the `Python (Local)` kernel. Some components of this notebook may not work in other notebook environments.* DatasetThe dataset used in this tutorial is publicly available at Kaggle. See [Customer Churn Prediction 2020](https://www.kaggle.com/c/customer-churn-prediction-2020/data). ObjectiveThis tutorial shows you how to do exploratory data analysis, preprocess data, and train a churn prediction model on a tabular churn dataset. The steps include the following:- Load data from a Cloud Storage path- Perform exploratory data analysis (EDA)- Preprocess the data- Train a scikit-learn model- Evaluate the scikit-learn model- Save the model to a Cloud Storage path- Create a model and an endpoint in Vertex AI- Deploy the trained model to an endpoint- Generate predictions and explanations on test data from the hosted model- Undeploy the model resource Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Installation
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
USER_FLAG = ""
# Google Cloud Notebook requires dependencies to be installed with '--user'
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
###Output
_____no_output_____
###Markdown
Install the latest version of the Vertex AI client library. Run the following command in your virtual environment to install the Vertex SDK for Python:
###Code
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
###Output
_____no_output_____
###Markdown
Install the Cloud Storage library:
###Code
! pip install {USER_FLAG} --upgrade google-cloud-storage
###Output
_____no_output_____
###Markdown
Install the `category_encoders` library:
###Code
! pip install --upgrade category_encoders
###Output
_____no_output_____
###Markdown
Install the `seaborn` library for the EDA step. If a Vertex AI Workbench managed notebooks instance is being used, this step is optional as the library is already available in the `Python (Local)` kernel.
###Code
! pip install --upgrade seaborn
###Output
_____no_output_____
###Markdown
Before you begin Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each instance session and append it to the names of the resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket **The following steps are required, regardless of your notebook environment.** When you create a model in Vertex AI using the Cloud SDK, you give a Cloud Storage path where the trained model is saved. In this tutorial, Vertex AI saves the trained model to a Cloud Storage bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may not use a Multi-Regional Storage bucket for training with Vertex AI.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Tutorial Import required libraries
###Code
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# suppress warning messages in the output
import warnings
import category_encoders as ce
import joblib
import seaborn as sns
from google.cloud import aiplatform, storage
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, plot_roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Load data from Cloud Storage path using Pandas
###Code
df = pd.read_csv(
"gs://cloud-samples-data/vertex-ai/managed_notebooks/telecom_churn_prediction/train.csv"
)
print(df.shape)
df.head()
###Output
_____no_output_____
###Markdown
Perform EDA Check the data types and null counts of the fields.
###Code
df.info()
###Output
_____no_output_____
###Markdown
The current dataset doesn't have any null or empty fields in it. Check the class imbalance.
###Code
df["churn"].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
There are about 14% churners in the data, which is workable for training a churn prediction model. If the class imbalance were higher, oversampling or undersampling techniques could be considered to balance the class distribution.
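A minimal illustrative sketch (not used in the rest of this notebook) of random oversampling with pandas, assuming `churn` is coded as "yes"/"no" strings as in this dataset:
###Code
# hypothetical illustration only: oversample the minority class with replacement
# (the remainder of the notebook keeps the original, unbalanced df)
minority = df[df["churn"] == "yes"]
majority = df[df["churn"] == "no"]
oversampled = pd.concat(
    [majority, minority.sample(len(majority), replace=True, random_state=42)]
)
print(oversampled["churn"].value_counts(normalize=True))
###Output
_____no_output_____
###Markdown
Separate the categorical and numerical columns.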
###Code
categ_cols = ["state", "area_code", "international_plan", "voice_mail_plan"]
target = "churn"
num_cols = [i for i in df.columns if i not in categ_cols and i != target]
print(len(categ_cols), len(num_cols))
###Output
_____no_output_____
###Markdown
Plot the level distribution for the categorical columns.
###Code
for i in categ_cols:
df[i].value_counts().plot(kind="bar")
plt.title(i)
plt.show()
print(num_cols)
df["number_vmail_messages"].describe()
###Output
_____no_output_____
###Markdown
Check the distributions for the numerical columns.
###Code
for i in num_cols:
# check the Price field's distribution
_, ax = plt.subplots(1, 2, figsize=(10, 4))
df[i].plot(kind="box", ax=ax[0])
df[i].plot(kind="hist", ax=ax[1])
plt.title(i)
plt.show()
# check pairplots for selected features
selected_features = [
"total_day_calls",
"total_eve_calls",
"number_customer_service_calls",
"number_vmail_messages",
"account_length",
"total_day_charge",
"total_eve_charge",
]
sns.pairplot(df[selected_features])
plt.show()
###Output
_____no_output_____
###Markdown
Plot a heat map of the correlation matrix for the numerical features.
###Code
plt.figure(figsize=(12, 10))
sns.heatmap(df[num_cols].corr(), annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
Observations from EDA- There are many levels/categories in the categorical field state. In further steps, creating one-hot encoding vectors for this field would increase the columns drastically and so a binary encoding technique will be considered for encoding this field.- There are only 9% of customers in the data who have had international plans.- There are only a few customers who make frequent calls to customer service.- Only 25% of the customers had at least 16 voicemail messages and thus there was skewness in the distribution of the `number_vmail_messages` field.- Most of the feature combinations in the pair plot show a circular pattern that suggests that there is almost no correlation between the corresponding two features.- There seems to be a high correlation between minutes and charge. Either one of them can be dropped to avoid multi-collinearity or redundant features in the data. Preprocess the data Drop the fields corresponding to the highly-correlated features.
###Code
drop_cols = [
"total_day_charge",
"total_eve_charge",
"total_night_charge",
"total_intl_charge",
]
df.drop(columns=drop_cols, inplace=True)
num_cols = list(set(num_cols).difference(set(drop_cols)))
df.shape
###Output
_____no_output_____
###Markdown
Binary encode the state feature (as there are many levels/categories).
###Code
encoder = ce.BinaryEncoder(cols=["state"], return_df=True)
data_encoded = encoder.fit_transform(df)
data_encoded.head()
###Output
_____no_output_____
###Markdown
One-hot encode (drop the first level-column to avoid dummy-variable trap scenarios) the remaining categorical variables.
###Code
def encode_cols(data, col):
# Creating a dummy variable for the variable 'CategoryID' and dropping the first one.
categ = pd.get_dummies(data[col], prefix=col, drop_first=True)
# Adding the results to the master dataframe
data = pd.concat([data, categ], axis=1)
return data
for i in categ_cols + [target]:
if i != "state":
data_encoded = encode_cols(data_encoded, i)
data_encoded.drop(columns=[i], inplace=True)
data_encoded.shape
###Output
_____no_output_____
###Markdown
Check the data.
###Code
data_encoded.head()
###Output
_____no_output_____
###Markdown
Check the columns.
###Code
data_encoded.columns
###Output
_____no_output_____
###Markdown
Split the data into train and test sets.
###Code
X = data_encoded[[i for i in data_encoded.columns if i not in ["churn_yes"]]].copy()
y = data_encoded["churn_yes"].copy()
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.7, test_size=0.3, random_state=100
)
print(X_train.shape, X_test.shape)
###Output
_____no_output_____
###Markdown
Scale the numerical data using `MinMaxScaler`.
###Code
sc = MinMaxScaler()
X_train.loc[:, num_cols] = sc.fit_transform(X_train[num_cols])
X_test.loc[:, num_cols] = sc.transform(X_test[num_cols])
###Output
_____no_output_____
###Markdown
Train a logistic regression model using scikit-learn The argument `class_weight="balanced"` adjusts the class weights inversely proportional to the class frequencies in the target.
###Code
model = LogisticRegression(class_weight="balanced")
model = model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Evaluate the trained model Plot the ROC and show AUC on train and test sets Plot the ROC for the model on train data.
###Code
plot_roc_curve(model, X_train, y_train, drop_intermediate=False)
plt.show()
# plot the ROC for the model on test data
plot_roc_curve(model, X_test, y_test, drop_intermediate=False)
plt.show()
###Output
_____no_output_____
###Markdown
Determine the optimal threshold for the binary classification In general, the logistic regression model outputs probability scores between 0 and 1 and a threshold needs to be determined to assign a class label. Depending on the sensitivity (true-positive rate) and specificity (true-negative rate) of the model, an optimal threshold can be determined. Create columns with 10 different probability cutoffs.
###Code
y_train_pred = model.predict_proba(X_train)[:, 1]
numbers = [float(x) / 10 for x in range(10)]
y_train_pred_df = pd.DataFrame({"true": y_train, "pred": y_train_pred})
for i in numbers:
y_train_pred_df[i] = y_train_pred_df.pred.map(lambda x: 1 if x > i else 0)
###Output
_____no_output_____
###Markdown
Now calculate accuracy, sensitivity, and specificity for various probability cutoffs.
###Code
cutoff_df = pd.DataFrame(columns=["prob", "accuracy", "sensitivity", "specificity"])
# compute the parameters for each threshold considered
for i in numbers:
cm1 = confusion_matrix(y_train_pred_df.true, y_train_pred_df[i])
total1 = sum(sum(cm1))
accuracy = (cm1[0, 0] + cm1[1, 1]) / total1
speci = cm1[0, 0] / (cm1[0, 0] + cm1[0, 1])
sensi = cm1[1, 1] / (cm1[1, 0] + cm1[1, 1])
cutoff_df.loc[i] = [i, accuracy, sensi, speci]
# Let's plot accuracy sensitivity and specificity for various probabilities.
cutoff_df.plot.line(x="prob", y=["accuracy", "sensitivity", "specificity"])
plt.title("Comparison of performance across various thresholds")
plt.show()
###Output
_____no_output_____
###Markdown
In general, a model with balanced sensitivity and specificity is preferred. In the current case, the threshold where the sensitivity and specificity curves intersect can be considered an optimal threshold.
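As an optional cross-check, the crossover can also be located programmatically rather than read off the plot. The following is a small sketch assuming the `cutoff_df` computed above:
###Code
# pick the cutoff where sensitivity and specificity are closest to each other
diff = (
    cutoff_df["sensitivity"].astype(float) - cutoff_df["specificity"].astype(float)
).abs()
print("Approximate crossover threshold:", diff.idxmin())
###Output
_____no_output_____
###Markdown
The next cell uses a fixed threshold of 0.5.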
###Code
threshold = 0.5
# Evaluate train and test sets
y_test_pred = model.predict_proba(X_test)[:, 1]
# to get the performance stats, lets define a handy function
def print_stats(y_true, y_pred):
# Confusion matrix
confusion = confusion_matrix(y_true=y_true, y_pred=y_pred)
print("Confusion Matrix: ")
print(confusion)
TP = confusion[1, 1] # true positive
TN = confusion[0, 0] # true negatives
FP = confusion[0, 1] # false positives
FN = confusion[1, 0] # false negatives
# Let's see the sensitivity or recall of our logistic regression model
sensitivity = TP / float(TP + FN)
print("sensitivity = ", sensitivity)
# Let us calculate specificity
specificity = TN / float(TN + FP)
print("specificity = ", specificity)
# Calculate false postive rate - predicting conversion when customer didn't convert
fpr = FP / float(TN + FP)
print("False positive rate = ", fpr)
# positive predictive value
precision = TP / float(TP + FP)
print("precision = ", precision)
# accuracy
accuracy = (TP + TN) / (TP + TN + FP + FN)
print("accuracy = ", accuracy)
return
y_train_pred_sm = [1 if i > threshold else 0 for i in y_train_pred]
y_test_pred_sm = [1 if i > threshold else 0 for i in y_test_pred]
# Print the metrics for the model
# on train data
print("Train Data : ")
print_stats(y_train, y_train_pred_sm)
print("\n", "*" * 30, "\n")
# on test data
print("Test Data : ")
print_stats(y_test, y_test_pred_sm)
###Output
_____no_output_____
###Markdown
While the model's sensitivity and specificity look decent, the precision is relatively low. This can be acceptable to some extent: from a business standpoint in the telecom industry, it still makes sense to identify churners even if it means some non-churners are misclassified as churners.
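As an optional sanity check, scikit-learn's `classification_report` summarizes precision, recall, and F1 per class. The following is a short sketch assuming `y_test` and `y_test_pred_sm` from the cells above:
###Code
from sklearn.metrics import classification_report

# per-class precision, recall and F1 at the chosen threshold
print(classification_report(y_test, y_test_pred_sm))
###Output
_____no_output_____
###Markdown
Save the model to a Cloud Storage path Save the trained model to a local file `model.joblib`.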
###Code
FILE_NAME = "model.joblib"
joblib.dump(model, FILE_NAME)
# Upload the saved model file to Cloud Storage
BLOB_PATH = (
"[your-blob-path]" # leave blank if no folders inside the bucket are needed.
)
BLOB_NAME = BLOB_PATH + FILE_NAME
# the storage client expects the bare bucket name, without the "gs://" prefix
bucket = storage.Client().bucket(BUCKET_NAME.replace("gs://", ""))
blob = bucket.blob(BLOB_NAME)
blob.upload_from_filename(FILE_NAME)
###Output
_____no_output_____
###Markdown
Create a model with Explainable AI support in Vertex AI Before creating a model, configure the explanations for the model. For further details, see [Configuring explanations in Vertex AI](https://cloud.google.com/vertex-ai/docs/explainable-ai/configuring-explanations#scikit-learn-and-xgboost-pre-built-containers).
###Code
MODEL_DISPLAY_NAME = "[your-model-display-name]"
# BUCKET_NAME already includes the "gs://" prefix
ARTIFACT_GCS_PATH = f"{BUCKET_NAME}/{BLOB_PATH}"
PROJECT = "[your-project-id]"
LOCATION = REGION
# Feature-name(Inp_feature) and Output-name(Model_output) can be arbitrary
exp_metadata = {"inputs": {"Inp_feature": {}}, "outputs": {"Model_output": {}}}
# Create a Vertex AI model resource with support for explanations
aiplatform.init(project=PROJECT, location=LOCATION)
explanation_parameters = {"sampledShapleyAttribution": {"pathCount": 25}}
model = aiplatform.Model.upload(
display_name=MODEL_DISPLAY_NAME,
artifact_uri=ARTIFACT_GCS_PATH,
serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest",
explanation_metadata=exp_metadata,
explanation_parameters=explanation_parameters,
)
model.wait()
print(model.display_name)
print(model.resource_name)
###Output
_____no_output_____
###Markdown
Alternatively, the following `gcloud` command can be used to create the model resource. The `explanation-metadata.json` file consists of the metadata that is used to configure explanations for the model resource.```gcloud beta ai models upload \ --region=$REGION \ --display-name=$MODEL_DISPLAY_NAME \ --container-image-uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-24:latest" \ --artifact-uri=$ARTIFACT_GCS_PATH \ --explanation-method=sampled-shapley \ --explanation-path-count=25 \ --explanation-metadata-file=explanation-metadata.json``` Create an endpoint
###Code
ENDPOINT_DISPLAY_NAME = "[your-endpoint-display-name]"
endpoint = aiplatform.Endpoint.create(
display_name=ENDPOINT_DISPLAY_NAME, project=PROJECT, location=LOCATION
)
print(endpoint.display_name)
print(endpoint.resource_name)
###Output
_____no_output_____
###Markdown
Save the endpoint ID after the endpoint is created.
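One way to fill it in programmatically (a sketch assuming the `endpoint` object created above) is to take the last component of the endpoint's resource name:
###Code
# the numeric endpoint ID is the last path component of the resource name
print(endpoint.resource_name.split("/")[-1])
###Output
_____no_output_____
###Markdown
Otherwise, set the endpoint ID manually in the next cell.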
###Code
ENDPOINT_ID = "[your-endpoint-id]"
###Output
_____no_output_____
###Markdown
Deploy the model to the created endpoint Configure the deployment name, machine type, and other parameters for the deployment.
###Code
DEPLOYED_MODEL_NAME = "[deployment-model-name]"
MACHINE_TYPE = "n1-standard-4"
# deploy the model to the endpoint
model.deploy(
endpoint=endpoint,
deployed_model_display_name=DEPLOYED_MODEL_NAME,
machine_type=MACHINE_TYPE,
)
model.wait()
print(model.display_name)
print(model.resource_name)
###Output
_____no_output_____
###Markdown
Save the ID of the deployed model. The ID of the deployed model can also be checked using the `endpoint.list_models()` method.
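For example, the following sketch (assuming the single model deployed above) lists the IDs of the models currently deployed to the endpoint:
###Code
# each deployed model entry carries an id attribute
print([deployed.id for deployed in endpoint.list_models()])
###Output
_____no_output_____
###Markdown
Then copy the appropriate value into the next cell.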
###Code
DEPLOYED_MODEL_ID = "[your-deployed-model-id]"
###Output
_____no_output_____
###Markdown
Get explanations from the deployed model Get explanations for some test instances from the hosted model.
###Code
# format the top 2 test instances as the request's payload
test_json = {"instances": [X_test.iloc[0].tolist(), X_test.iloc[1].tolist()]}
###Output
_____no_output_____
###Markdown
Get explanations and plot the feature attributions
###Code
features = X_train.columns.to_list()
def plot_attributions(attrs):
"""
Function to plot the features and their attributions for an instance
"""
rows = {"feature_name": [], "attribution": []}
for i, val in enumerate(features):
rows["feature_name"].append(val)
rows["attribution"].append(attrs["Inp_feature"][i])
attr_df = pd.DataFrame(rows).set_index("feature_name")
attr_df.plot(kind="bar")
plt.show()
return
def explain_tabular_sample(
project: str, location: str, endpoint_id: str, instances: list
):
"""
Function to make an explanation request for the specified payload and generate feature attribution plots
"""
aiplatform.init(project=project, location=location)
endpoint = aiplatform.Endpoint(endpoint_id)
response = endpoint.explain(instances=instances)
print("#" * 10 + "Explanations" + "#" * 10)
for explanation in response.explanations:
print(" explanation")
# Feature attributions.
attributions = explanation.attributions
for attribution in attributions:
print(" attribution")
print(" baseline_output_value:", attribution.baseline_output_value)
print(" instance_output_value:", attribution.instance_output_value)
print(" output_display_name:", attribution.output_display_name)
print(" approximation_error:", attribution.approximation_error)
print(" output_name:", attribution.output_name)
output_index = attribution.output_index
for output_index in output_index:
print(" output_index:", output_index)
plot_attributions(attribution.feature_attributions)
print("#" * 10 + "Predictions" + "#" * 10)
for prediction in response.predictions:
print(prediction)
return response
test_json = [X_test.iloc[0].tolist(), X_test.iloc[1].tolist()]
prediction = explain_tabular_sample(PROJECT, LOCATION, ENDPOINT_ID, test_json)
###Output
_____no_output_____
###Markdown
Clean up To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:
###Code
# undeploy the model
endpoint.undeploy(deployed_model_id=DEPLOYED_MODEL_ID)
# delete the endpoint
endpoint.delete()
# delete the model
model.delete()
# remove the contents of the Cloud Storage bucket
! gsutil -m rm -r $BUCKET_NAME
###Output
_____no_output_____ |
notebooks/1.0-vanilla-autoencoder.ipynb | ###Markdown
Vanilla Autoencoder Build a simple "vanilla" autoencoder that can be used on the fashion-mnist data. "Hands-On Machine Learning", by Aurelien Geron, is the basis for much of the code. https://github.com/ageron/handson-ml2
###Code
import numpy as np
import datetime
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import keras
import tensorboard
print('TensorFlow version: ', tf.__version__)
print('Keras version: ', keras.__version__)
print('Tensorboard version:', tensorboard.__version__)
%matplotlib inline
###Output
TensorFlow version: 2.0.0
Keras version: 2.2.4-tf
Tensorboard version: 2.0.0
###Markdown
Left align tables:
###Code
%%html
<style>
table {float:left}
</style>
###Output
_____no_output_____
###Markdown
1.0 Data Exploration Let's look at the fashion-MNIST data set and make sure we understand it.
###Code
# load fashion MNIST
fashion_mnist = keras.datasets.fashion_mnist
(X_train_all, y_train_all), (X_test, y_test) = fashion_mnist.load_data()
# check the shape of the data sets
print('X_train_full shape:', X_train_all.shape)
print('y_train_full shape:', y_train_all.shape)
print('X_test shape:', X_test.shape)
print('y_test shape:', y_test.shape)
# print off some y labels to check if it's already shuffled
y_train_all[0:10]
# to access, say, the first sample, you can index into the array as follows
# show the shape of the first sample
np.shape(X_train_all[0,:,:])
# show the sample
sample_to_display = 0
fig, axes = plt.subplots(1, 1)
axes.imshow(np.reshape(X_train_all[sample_to_display,:,:],[28,28]), cmap='Greys_r')
axes.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Each training and test example is assigned one of the following labels (from https://github.com/zalandoresearch/fashion-mnist):| Label | Description || :--- | :--- || 0 | T-shirt/top || 1 | Trouser || 2 | Pullover || 3 | Dress || 4 | Coat || 5 | Sandal || 6 | Shirt || 7 | Sneaker || 8 | Bag || 9 | Ankle boot |
###Code
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
# lets visualize some of these
# k - number of samples
# w - width in pixels
# h - height in pixels
k, w, h = X_train_all.shape
# Plot a random sample
fig, axes = plt.subplots(1, 10,figsize=(15,2.3),dpi=300)
# fig.suptitle('Digits for Sample %i' %num, size=15, x=0.2)
for i in range(0, 10):
axes[i].imshow(np.reshape(X_train_all[i,:,:],[28,28]), cmap='Greys_r')
axes[i].axis('off')
axes[i].set_title(str(class_names[y_train_all[i]])+', '+str(y_train_all[i]))
###Output
_____no_output_____
###Markdown
2.0 Prepare Data
###Code
# need to scale the data between 0 and 1
# find out what the min/max values are
print('Max: ',X_train_all.max())
print('Min: ',X_train_all.min())
# split the data between train and validation sets, and scale
X_valid, X_train = X_train_all[:5000] / 255.0, X_train_all[5000:] / 255.0
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
# also scale the X_test
X_test = X_test / 255.0
print('X_valid shape:', X_valid.shape)
print('y_valid shape:', y_valid.shape)
print('X_train shape:', X_train.shape)
print('y_train shape:', y_train.shape)
###Output
X_valid shape: (5000, 28, 28)
y_valid shape: (5000,)
X_train shape: (55000, 28, 28)
y_train shape: (55000,)
###Markdown
3.0 Simple Sequential Model
###Code
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax")
])
model.summary()
model.compile(loss="sparse_categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"])
# create a name for the model so that we can track it in tensorboard
log_dir="logs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + "_ae_vanilla"
# create tensorboard callback
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=0,
update_freq='epoch',profile_batch=0)
history = model.fit(X_train, y_train,
epochs=30,
verbose=1,
validation_data=(X_valid, y_valid),
callbacks=[tensorboard_callback])
# put history of training into a dataframe
df_hist = pd.DataFrame(history.history)
df_hist.plot(figsize=(8, 5)) # plot
plt.grid(True) # apply grid
plt.title('Training Parameters') # plot title
plt.xlabel('Epoch') # x-axis label
plt.show()
# evaluate the model
model.evaluate(X_test, y_test, verbose=0)
###Output
_____no_output_____
###Markdown
4.0 Vanilla Autoencoder Make a simple stacked autoencoder (3 hidden layers, 1 output layer)
###Code
# build model
# encoder
stacked_encoder = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="selu"),
keras.layers.Dense(30, activation="selu"),
])
# decoder
stacked_decoder = keras.models.Sequential([
keras.layers.Dense(100, activation="selu", input_shape=[30]),
keras.layers.Dense(28 * 28, activation="sigmoid"),
keras.layers.Reshape([28, 28])
])
# combine encoder & decoder into one to make autoencoder
stacked_ae = keras.models.Sequential([stacked_encoder, stacked_decoder])
# compile, and get summary
stacked_ae.compile(loss="binary_crossentropy",
optimizer=keras.optimizers.SGD(lr=1.5))
stacked_ae.summary()
# fit model
history = stacked_ae.fit(X_train, X_train, epochs=10,
validation_data=[X_valid, X_valid])
def plot_reconstructions(model, index_list, X_valid):
"""Plot some original images, and their reconstructions
Parameters
===========
model : keras model
Autoencoder model
index_list : list
List of indices. These indices correspond to the index of the X_valid images
that will be shown
X_valid : numpy array
X_valid set
"""
reconstructions = model.predict(X_valid)
# get the length of index_list to set number of
# images to plot
n_images = len(index_list)
# Plot a random sample
fig, axes = plt.subplots(2, n_images,figsize=(n_images*1.5,3),dpi=150)
# fig.suptitle('Digits for Sample %i' %num, size=15, x=0.2)
for i in range(0, n_images):
axes[0][i].imshow(np.reshape(X_valid[index_list[i],:,:],[28,28]), cmap='Greys_r')
axes[0][i].axis('off')
axes[0][i].set_title(str(index_list[i]))
axes[1][i].imshow(np.reshape(reconstructions[index_list[i],:,:],[28,28]), cmap='Greys_r')
axes[1][i].axis('off')
plt.show()
# plot a random number of items
import random
index_list = random.sample(range(0,len(X_valid)), 5)
plot_reconstructions(stacked_ae, index_list, X_valid)
###Output
_____no_output_____
###Markdown
5.0 Visualize Results of Stacked Autoencoder Using T-SNE
###Code
# code from https://github.com/ageron/handson-ml2/blob/master/17_autoencoders_and_gans.ipynb
np.random.seed(63)
from sklearn.manifold import TSNE
X_valid_compressed = stacked_encoder.predict(X_valid)
tsne = TSNE()
X_valid_2D = tsne.fit_transform(X_valid_compressed)
X_valid_2D = (X_valid_2D - X_valid_2D.min()) / (X_valid_2D.max() - X_valid_2D.min())
plt.scatter(X_valid_2D[:, 0], X_valid_2D[:, 1], c=y_valid, s=10, cmap="tab10")
plt.axis("off")
plt.show()
# adapted from https://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html
plt.figure(figsize=(10, 8))
cmap = plt.cm.tab10
plt.scatter(X_valid_2D[:, 0], X_valid_2D[:, 1], c=y_valid, s=10, cmap=cmap)
image_positions = np.array([[1., 1.]])
for index, position in enumerate(X_valid_2D):
dist = np.sum((position - image_positions) ** 2, axis=1)
if np.min(dist) > 0.02: # if far enough from other images
image_positions = np.r_[image_positions, [position]]
imagebox = mpl.offsetbox.AnnotationBbox(
mpl.offsetbox.OffsetImage(X_valid[index], cmap="binary"),
position, bboxprops={"edgecolor": cmap(y_valid[index]), "lw": 2})
plt.gca().add_artist(imagebox)
plt.axis("off")
plt.show()
###Output
_____no_output_____ |
notebooks/zoning.ipynb | ###Markdown
URL https://knoxgis.maps.arcgis.com/home/item.html?id=ca4ac10098dd4de995b16312c83665f4 Description The location and boundaries of the zoning districts established by the Code of Ordinances of Knoxville and Knox County, TN are shown and maintained by the Metropolitan Planning Commission under the direction of its Executive Director. The zoning GIS layer constitutes the City of Knoxville’s Official Zoning Map and is incorporated into, and the same is made a part of, the Code of Ordinances by reference. This data is updated monthly through actions of the Knox County Commission and the City of Knoxville. Check back frequently to download the latest data or consider using the REST service to gain access to the latest features. Fields - OBJECTID (alias: OBJECTID): Stable, unique value for each zoning district in a GUID format - ZONE1 (alias: ZONE1): Base zoning district code - ZONE2 (alias: ZONE2): Overlay district code - AREA_ACRES (alias: AREA_ACRES): Calculated acreage of a zoning district - HIGH_DENSITY (alias: HIGH_DENSITY): Maximum dwelling units per acre allowed in a zoning district - CONDITIONS (alias: CONDITIONS): MPC file number for a zoning district with specific conditions - FORM_DIST (alias: Form District): Name of form district - FORM_CORR (alias: Form Corridor): Name of form corridor - FORM_DESCR (alias: Form Description): Form district description - FORM_CODE_PDF (alias: Form Code PDF): URL to more information about a form district or corridor - ZONE_TYPE (alias: ZONE_TYPE): Type of zoning district (e.g. City of Knoxville, Knox County, Form District)
###Code
import json
import tempfile
import requests
import geopandas as gpd
# gpd read_file reqires a file not url so this is a hack...
## arcgis provides download links that are dynamic... why? So we will save file and use lfs to download
# response = requests.get('https://ago-item-storage.s3-external-1.amazonaws.com/ca4ac10098dd4de995b16312c83665f4/Knoxville-Knox_County_Zoning.geojson?X-Amz-Security-Token=FQoDYXdzEO3%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDF3dK5lyT8t%2BDhV8SSK3A9I%2B0lFJLORN8Ds36P4shkRQYIn7iCMb9JiiBVVnzlzrPo8%2FG1K72RE0zCguK22hvZdUoMYlF4jHNad1soJTXxmKBZDdxbHgwkK051CIzI3I9VA3gDs0TyyZcaPz7g%2BWX7LxLZZ575gqipOxOVSrxKK6kxPQeFs2Dimsk6aMcoBVywHDp4ZJReDihXVhA3NlZn0kU6DfMUTLBCHRTRkPUeM5x6rTNDAa4YNFcNliYMTaRxrp%2BqqNaVYhkW6hCfteZOYhDUBGP5sRHoWGD8jC1vmosvEn0uv9JPATGsvbyFd%2FgTOfPdhEku0jIWwNsKjL0u4iFjoq%2FSDYTG8Br5k6cWNecE4pgR3DOSak977cQUAtOE8CuhgyMkjW7MQTSfGsc4HXcnbHFqVb2xTVjZr5G2TZdj37ZNZjEc287kxgz2Z609YVrbI4lGr%2BSMwVBIbRtJFDRPmil%2FvAfEW6Tl%2FMttPNyH0k2gpPAs6FXK9fk0QBhG%2BgO%2FLt5DqeNQc%2B%2B3SSlVXSOzJL0tmnVVGj%2B7sGWytlzoLoxOw9W7k9k2ad%2F31SKsATTXRqX7AAJI1VGey%2BuRs4ofxyqlco5MHY2QU%3D&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20180629T134706Z&X-Amz-SignedHeaders=host&X-Amz-Expires=300&X-Amz-Credential=ASIAINEFONIE23UY6VOQ%2F20180629%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=17cde7bceaa920375f413683ebe7da5c4f9e461b86aaeae3c12d27fffd482232')
# with open('../data/zoning/zoning.geojson', 'wb') as f:
# f.write(response.content)
# zoning = gpd.read_file('../data/zoning/zoning.geojson')
response = requests.get('https://gitlab.com/costrouc/knoxville-opendata-notebooks/raw/master/data/zoning/zoning.geojson')
with tempfile.NamedTemporaryFile() as f:
f.write(response.content)
zoning = gpd.read_file(f.name)
# knoxville_bnd = gpd.GeoDataFrame.from_file('../data/knoxville_boundary.geojson')
response = requests.get('https://gitlab.com/costrouc/knoxville-opendata-notebooks/raw/master/data/knoxville_boundary.geojson')
with tempfile.NamedTemporaryFile() as f:
f.write(response.content)
knoxville_bnd = gpd.read_file(f.name)
zoning['simple_zone'] = zoning['ZONE1'].apply(lambda z: z.split('-')[0]) # strip off - to make easier to plot (still too many fields)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
knoxville_bnd.plot(ax=ax, color='white', edgecolor='black')
ax = zoning.plot(ax=ax, column='ZONE1', markersize=5) # , legend=True)
fig.set_size_inches((20, 10))
ax.set_aspect('equal')
ax.axis('off')
fig.savefig('../images/zoning-colors.png', transparent=True)
zoning.info()
zoning.sample(5)
###Output
_____no_output_____
###Markdown
How many acres per zone type?
###Code
ECKERT_IV_PROJ_STRING = "+proj=eck4 +lon_0=0 +x_0=0 +y_0=0 +datum=WGS84 +units=m +no_defs"
zoning_eckert = zoning.to_crs(ECKERT_IV_PROJ_STRING)
zoning_eckert['area_m2'] = zoning_eckert.geometry.area
print('square miles', zoning_eckert.groupby('ZONE1').area_m2.sum().sort_values(ascending=False) / 1e6 * 0.6213712**2)
print('acres', zoning_eckert.groupby('ZONE1').area_m2.sum().sort_values(ascending=False) * 0.0002471052)
###Output
acres ZONE1
A 179577.627251
PR 23710.807273
R-1 20656.353537
RA 18208.090482
F 9214.604380
I 8747.688615
RB 7701.708341
RP-1 4797.685851
F-1 4218.874383
CA 4170.302980
R-2 4017.499416
OS-1 3722.899798
A-1 3506.898689
R-1A 3110.844892
C-3 2854.314165
I-3 2714.565964
C-6 2292.387136
C-4 2097.315973
CB 1977.848202
I-4 1935.049320
EN-1 1709.903358
PC 1636.031012
RAE 1514.202168
O-1 1256.187816
R-1E 1221.889895
OS-2 1206.842531
BP 1137.904824
I-2 1007.947057
O-2 919.258548
OB 804.930164
...
E 451.379321
EC 431.204283
PC-2 399.806637
SC-3 379.509035
FD 340.001734
C-2 337.793579
LI 334.098275
SC 256.540864
R-3 245.940034
O-3 221.726694
BP-1 217.425767
C-1 210.642207
EN-2 197.192966
OA 113.781430
HZ 113.395803
SC-1 113.150750
RP-2 100.979158
TC-1 100.763838
C-5 89.782597
SC-2 78.604123
CN 59.657542
I-1 57.143135
CR 42.666132
TND-1 39.700577
T 35.330377
RP-3 33.537112
CH 32.341463
OC 17.560784
R-4 4.544334
H-1 4.523470
Name: area_m2, Length: 62, dtype: float64
|
src/Chapter8.ipynb | ###Markdown
Examples for Chapter 8
###Code
import warnings
# these are innocuous but irritating
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
%matplotlib inline
###Output
_____no_output_____
###Markdown
Algorithms for simple cost functions K-means clustering
###Code
run scripts/kmeans -p [1,2,3,4] -k 8 imagery/AST_20070501_pca.tif
run scripts/dispms -f imagery/AST_20070501_pca_kmeans.tif -c \
#-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_1.eps'
###Output
_____no_output_____
###Markdown
K-means on GEE
###Code
import ee
from ipyleaflet import (Map,DrawControl,TileLayer)
ee.Initialize()
image = ee.Image('users/mortcanty/supervisedclassification/AST_20070501_pca').select(0,1,2,3)
region = image.geometry()
training = image.sample(region=region,scale=15,numPixels=100000)
clusterer = ee.Clusterer.wekaKMeans(8)
trained = clusterer.train(training)
clustered = image.cluster(trained)
# function for overlaying tiles onto a map
def GetTileLayerUrl(ee_image_object):
map_id = ee.Image(ee_image_object).getMapId()
tile_url_template = "https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}"
return tile_url_template.format(**map_id)
# display the default base map and overlay the clustered image
center = list(reversed(region.centroid().getInfo()['coordinates']))
m = Map(center=center, zoom=11)
jet = 'black,blue,cyan,yellow,red'
m.add_layer(TileLayer(url=GetTileLayerUrl(
clustered.select('cluster').visualize(min=0, max=6, palette= jet, opacity = 1.0)
)
))
m
###Output
_____no_output_____
###Markdown
K-means with Tensorflow
###Code
import os
import numpy as np
import tensorflow as tf
from osgeo import gdal
from osgeo.gdalconst import GA_ReadOnly,GDT_Byte
tf.logging.set_verbosity('ERROR')
# read image data
infile = 'imagery/AST_20070501_pca.tif'
pos = [1,2,3,4]
gdal.AllRegister()
inDataset = gdal.Open(infile,GA_ReadOnly)
cols = inDataset.RasterXSize
rows = inDataset.RasterYSize
bands = inDataset.RasterCount
if pos is not None:
bands = len(pos)
else:
pos = range(1,bands+1)
G = np.zeros((cols*rows,bands))
k = 0
for b in pos:
band = inDataset.GetRasterBand(b)
band = band.ReadAsArray(0,0,cols,rows)
G[:,k] = np.ravel(band)
k += 1
inDataset = None
# define an input function
def input_fn():
return tf.train.limit_epochs(
tf.convert_to_tensor(G, dtype=tf.float32),
num_epochs=1)
num_iterations = 10
num_clusters = 8
# create K-means clusterer
kmeans = tf.contrib.factorization.KMeansClustering(
num_clusters=num_clusters, use_mini_batch=False)
# train it
for _ in xrange(num_iterations):
kmeans.train(input_fn)
print 'score: %f'%kmeans.score(input_fn)
# map the input points to their clusters
labels = np.array(
list(kmeans.predict_cluster_index(input_fn)))
# write to disk
path = os.path.dirname(infile)
basename = os.path.basename(infile)
root, ext = os.path.splitext(basename)
outfile = path+'/'+root+'_kmeans'+ext
driver = gdal.GetDriverByName('GTiff')
outDataset = driver.Create(outfile,cols,rows,1,GDT_Byte)
outBand = outDataset.GetRasterBand(1)
outBand.WriteArray(np.reshape(labels,(rows,cols)),0,0)
outBand.FlushCache()
outDataset = None
print 'result written to: '+outfile
run scripts/dispms -f imagery/AST_20070501_pca_kmeans.tif -c
###Output
_____no_output_____
###Markdown
Kernel K-means clustering
###Code
run scripts/kkmeans -p [1,2,3,4] -n 1 -k 8 imagery/AST_20070501_pca.tif
%run scripts/dispms -f imagery/AST_20070501_pca_kkmeans.tif -c \
#-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_2.eps'
###Output
_____no_output_____
###Markdown
Extended K-means clustering
###Code
run scripts/ekmeans -b 1 imagery/AST_20070501_pca.tif
run scripts/dispms -f imagery/AST_20070501_pca_ekmeans.tif -c \
#-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_3.eps'
###Output
_____no_output_____
###Markdown
Agglomerative hierarchical clustering
###Code
run scripts/hcl -h
run scripts/hcl -p [1,2,3,4] -k 8 -s 2000 imagery/AST_20070501_pca.tif
run scripts/dispms -f imagery/may0107pca_hcl.tif -c
###Output
_____no_output_____
###Markdown
Gaussian mixture clustering
###Code
run scripts/em -h
run scripts/em -p [1,2,3,4] -K 8 imagery/AST_20070501_pca.tif
run scripts/dispms -f imagery/AST_20070501_pca_em.tif -c -d [0,0,400,400] \
#-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_5.eps'
###Output
_____no_output_____
###Markdown
Benchmark
###Code
from osgeo.gdalconst import GDT_Float32
image = np.zeros((800,800,3))
b = 2.0
image[99:699 ,299:499 ,:] = b
image[299:499 ,99:699 ,:] = b
image[299:499 ,299:499 ,:] = 2*b
n1 = np.random.randn(800,800)
n2 = np.random.randn(800,800)
n3 = np.random.randn(800,800)
image[:,:,0] += n1
image[:,:,1] += n2+n1
image[:,:,2] += n3+n1/2+n2/2
driver = gdal.GetDriverByName('GTiff')
outDataset = driver.Create('imagery/toy.tif',
800,800,3,GDT_Float32)
for k in range(3):
outBand = outDataset.GetRasterBand(k+1)
outBand.WriteArray(image[:,:,k],0,0)
outBand.FlushCache()
outDataset = None
run scripts/dispms -f 'imagery/toy.tif' -e 3 -p [1,2,3]
run scripts/ex3_2 imagery/toy.tif
run scripts/hcl -k 3 -s 2000 imagery/toy.tif
run scripts/em -K 3 -s 1.0 imagery/toy.tif
run scripts/dispms -f imagery/toy_em.tif -c -F imagery/toy_hcl.tif -C
###Output
_____no_output_____
###Markdown
Kohonen SOM
###Code
run scripts/som -c 6 imagery/AST_20070501
run scripts/dispms -f imagery/AST_20070501_som -e 4 -p [1,2,3] -d [0,0,400,400] \
#-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_9.eps'
###Output
_____no_output_____
###Markdown
Mean shift segmentation
###Code
run scripts/dispms -f imagery/AST_20070501_pca.tif -p [1,2,3] -e 4 -d [300,450,400,400]
run scripts/meanshift -p [1,2,3,4] -d [500,450,200,200] -s 15 -r 30 -m 10 imagery/AST_20070501_pca.tif
run scripts/dispms -f imagery/AST_20070501_pca_meanshift.tif -p [1,2,3] -e 4 \
-F imagery/AST_20070501_pca.tif -P [1,2,3] -E 4 -D [500,450,200,200] \
%-s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_10.eps'
run scripts/dispms -f imagery/AST_20070501_pca_meanshift.tif -p [1,2,3] -e 3 \
-F imagery/AST_20070501_pca_meanshift.tif -P [6,6,6] -E 3 -o 0.4
###Output
_____no_output_____
###Markdown
Toy image for Exercise 2
###Code
from osgeo.gdalconst import GDT_Float32
import numpy as np
import gdal
image = np.zeros((400,400,2))
n = np.random.randn(400,400)
n1 = 8*np.random.rand(400,400)-4
image[:,:,0] = n1+8
image[:,:,1] = n1**2+0.3*np.random.randn(400,400)+8
image[:200,:,0] = np.random.randn(200,400)/2+8
image[:200,:,1] = np.random.randn(200,400)+14
driver = gdal.GetDriverByName('GTIFF')
outDataset = driver.Create('imagery/toy.tif',400,400,3,GDT_Float32)
for k in range(2):
outBand= outDataset.GetRasterBand(k+1)
outBand.WriteArray(image[:,:,k],0,0)
outBand.FlushCache()
outDataset = None
run scripts/scatterplot -s '/home/mort/LaTeX/new projects/CRC4/Chapter8/fig8_11.eps' imagery/toy.tif imagery/toy.tif 1 2
###Output
_____no_output_____ |
vmfiles/IPNB/Examples/b Graphics/40 Cartopy.ipynb | ###Markdown
Cartopy [Cartopy](https://scitools.org.uk/cartopy/docs/latest/) is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses. We test here a few [map examples](https://scitools.org.uk/cartopy/docs/latest/matplotlib/intro.html) using cartopy.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (16, 10)
import cartopy.crs as ccrs
###Output
_____no_output_____
###Markdown
There is a list of the [available map projections](https://scitools.org.uk/cartopy/docs/latest/crs/projections.html#cartopy-projections) in Cartopy.
###Code
# Set the projection to use
ax = plt.axes(projection=ccrs.PlateCarree())
# Draw coastlines
ax.coastlines();
ax = plt.axes(projection=ccrs.Mollweide())
# Add a land image
ax.stock_img();
###Output
_____no_output_____
###Markdown
Examples This has been taken from the [gallery](http://scitools.org.uk/cartopy/docs/latest/gallery/index.html)
###Code
fig = plt.figure(figsize=(16, 10))
# Set the projection to use
ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson())
# make the map global rather than have it zoom in to
# the extents of any plotted data
ax.set_global()
# Add a land image
ax.stock_img()
# Draw coastlines
ax.coastlines()
# Plot a point
ax.plot(-0.08, 51.53, 'o', color="r", markersize=8, transform=ccrs.PlateCarree())
# Draw a straight line
ax.plot([-0.08, 132], [51.53, 43.17], linewidth=3, transform=ccrs.PlateCarree())
# Draw a geodetic line
ax.plot([-0.08, 132], [51.53, 43.17], linewidth=3, transform=ccrs.Geodetic());
# Set the projection to use
ax = plt.axes(projection=ccrs.PlateCarree())
ax.stock_img();
ny_lon, ny_lat = -75, 43
delhi_lon, delhi_lat = 77.23, 28.61
# Draw a geodetic line
plt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],
color='blue', linewidth=2, marker='o', transform=ccrs.Geodetic())
# Draw a straight line
plt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],
color='gray', linestyle='--', transform=ccrs.PlateCarree())
# Write two labels
plt.text(ny_lon-3, ny_lat-12, 'New York',
horizontalalignment='right', transform=ccrs.Geodetic())
plt.text(delhi_lon+3, delhi_lat-12, 'Delhi',
horizontalalignment='left', transform=ccrs.Geodetic());
###Output
_____no_output_____ |
Note/10- Veri Analizi/Pandas/PANDAS.ipynb | ###Markdown
Pandas Series
###Code
import numpy as np
import pandas as pd
liste1=["a","b","c","d","e"]
liste2=[1,2,3,4,5]
pd.Series(data=liste2)
pd.Series(data=liste2, index=liste1)
npArray= np.array([10,20,30,40,50])
npArray
pd.Series(data=npArray,index=["a","b","c","d","e"])
sozluk={"a":30,"b":40,"c":70}
pd.Series(sozluk)
ser1=pd.Series([1,2,3,4,5],["a","b","c","d","e"])
ser2=pd.Series([7,5,6,9,8],["a","b","c","f","e"])
ser1
ser2
ser1["a"]
ser1+ser2
top=ser1+ser2
top
top["d"]
top["g"]
###Output
_____no_output_____
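###Markdown
Indexing a missing label (such as `top["g"]` above) raises a `KeyError`, and labels present in only one of the series produce `NaN` under `+`. A small sketch of one way to avoid the `NaN` values, treating missing entries as 0:
###Code
# align on the union of the indexes, filling missing entries with 0 before adding
ser1.add(ser2, fill_value=0)
###Output
_____no_output_____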
###Markdown
Dataframe
###Code
from numpy.random import randn
randn(3,3)
df=pd.DataFrame(randn(3,3), index=["A","B","C"], columns=["C1","C2","C3"])
df
df["C1"]
type(df["C1"])
df.loc["A"]
type(df.loc["A"])
df[["C1","C2"]]
df["C4"]
df["C4"]=pd.Series(randn(3),index=["A","B","C"])
df
df["C5"]=df["C1"]+df["C2"]+df["C3"]+df["C4"]
df
df.drop("C5",axis=1)
df
df.drop("C5",axis=1,inplace=True)
df
###Output
_____no_output_____
###Markdown
Conditions
###Code
df > -1
boolDf=df > -1
boolDf
df[boolDf]
df[df<-1]
df["C1"]<-1
df
df[(df["C1"]<-1) & (df["C3"]>0)]
df[(df["C1"]<0) | (df["C4"]>-1)]
df["C5"]=["new1","new2","new3"]
df
df.set_index("C5")
df
df.set_index("C5",inplace=True)
df
df.index.names
outerIndex=["Group1","Group1","Group1","Group2","Group2","Group2","Group3","Group3","Group3"]
innerIndex=["Index1","Index2","Index3","Index1","Index2","Index3","Index1","Index2","Index3"]
list(zip(outerIndex,innerIndex))
hierarchy=list(zip(outerIndex,innerIndex))
hierarchy
hierarchy=pd.MultiIndex.from_tuples(hierarchy)
hierarchy
df2=pd.DataFrame(randn(9,3),hierarchy,columns=["A","B","C"])
df2
df2["A"]
df2.loc["Group1"]
df2.loc[["Group1","Group2"]]
df2.loc["Group1"].loc["Index1"]
df2.index.names
df2.index.names=["Groups","Indexes"]
df2
df2.loc["Group1"].loc["Index1"]["A"]
df2.xs("Group1")
df2.xs("Group1").xs("Index1")
df2.xs("Group1").xs("Index1").xs("A")
###Output
_____no_output_____
###Markdown
Missing Data
###Code
arr=np.array([[10,20,np.nan],[5,np.nan,np.nan],[23,np.nan,14]])
arr
df=pd.DataFrame(arr,index=["i1","i2","i3"],columns=["c1","c2","c3"])
df
df.dropna()
df
df.dropna(axis=1)
df.dropna(thresh=2)
df.fillna(value=1)
###Output
_____no_output_____
###Markdown
Replacing NaN values with the mean of the values
###Code
df.sum()
df.sum().sum()
df.size
df.isnull().sum().sum()
def calculateMean(df):
totalSum=df.sum().sum()
totalNum=df.size-df.isnull().sum().sum()
return totalSum/totalNum
df.fillna(value=calculateMean(df))
###Output
_____no_output_____
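###Markdown
An equivalent one-liner (a sketch using the same `df`): `stack()` drops the `NaN` values, so its mean equals the mean of the non-null entries computed above.
###Code
df.fillna(value=df.stack().mean())
###Output
_____no_output_____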
###Markdown
GroupBy Queries
###Code
dataset = {
"Departman":["Bilişim","İnsan Kaynakları","Üretim","Üretim","Bilişim","İnsan Kaynakları"],
"Çalışan": ["Mustafa","Jale","Kadir","Zeynep","Murat","Ahmet"],
"Maaş":[3000,3500,2500,4500,4000,2000]
}
dataset
df=pd.DataFrame(dataset)
df
depGroup=df.groupby("Departman")
depGroup
depGroup.sum()
df.groupby("Departman").count()
df.groupby("Departman").min()["Maaş"]["Bilişim"]
df.groupby("Departman").mean().loc["Bilişim"]["Maaş"]
###Output
_____no_output_____
###Markdown
Merge, Join, and Concat Concat
###Code
dataset1 = {
"A": ["A1","A2","A3","A4"],
"B":["B1","B2","B3","B4"],
"C":["C1","C2","C3","C4"],
}
dataset2 = {
"A": ["A5","A6","A7","A8"],
"B":["B5","B6","B7","B8"],
"C":["C5","C6","C7","C8"],
}
df1=pd.DataFrame(dataset1,index=[1,2,3,4])
df2=pd.DataFrame(dataset2,index=[5,6,7,8])
df1
df2
pd.concat([df1,df2])
pd.concat([df1,df2],axis=1)
###Output
_____no_output_____
###Markdown
Merge
###Code
dataset1 = {
"A": ["A1","A2","A3"],
"B":["B1","B2","B3",],
"Anahtar":["C1","C2","C3",],
}
dataset2 = {
"X": ["X5","X6","X7","X8"],
"Y":["Y5","Y6","Y7","Y8"],
"Anahtar":["C1","C2","C7","C8"],
}
df1=pd.DataFrame(dataset1,index=[1,2,3])
df2=pd.DataFrame(dataset2,index=[1,2,3,4])
df1
df2
pd.merge(df1,df2,how="inner",on="Anahtar")
###Output
_____no_output_____
###Markdown
Join
###Code
dataset1 = {
"A": ["A1","A2","A3"],
"B":["B1","B2","B3",],
}
dataset2 = {
"X": ["X5","X6","X7","X8"],
"Y":["Y5","Y6","Y7","Y8"],
}
df1=pd.DataFrame(dataset1,index=[1,2,3])
df2=pd.DataFrame(dataset2,index=[1,2,3,4])
df1.join(df2)
df2.join(df1)
###Output
_____no_output_____ |
how-to-use-azureml/automated-machine-learning/classification-with-whitelisting/auto-ml-classification-with-whitelisting.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.htmloptical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.This notebooks shows how can automl can be trained on a selected list of models, see the readme.md for the models.This trains the model exclusively on tensorflow based models.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on a whilelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
#Note: This notebook will install tensorflow if not already installed in the enviornment..
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
import sys
whitelist_models=["LightGBM"]
if "3.7" != sys.version[0:3]:
try:
import tensorflow as tf1
except ImportError:
from pip._internal import main
main(['install', 'tensorflow>=1.10.0,<=1.12.0'])
logging.getLogger().setLevel(logging.ERROR)
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"]
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedbalanced_accuracyaverage_precision_score_weightedprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-trainconfigure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=whitelist_models,
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
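###Markdown
As an extra check that is not in the original notebook, the best fitted model can also be scored on all ten held-out digits at once using scikit-learn's `accuracy_score`.
###Code
# Sketch: overall accuracy of the best fitted model on the 10 held-out digits.
from sklearn.metrics import accuracy_score
y_pred = fitted_model.predict(X_test)
print('Hold-out accuracy on {} digits: {:.2f}'.format(len(y_test), accuracy_score(y_test, y_pred)))
###Output
_____no_output_____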
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
# Note: This notebook will install TensorFlow if it is not already installed in the environment.
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
import sys
whitelist_models=["LightGBM"]
if "3.7" != sys.version[0:3]:
try:
import tensorflow as tf1
except ImportError:
from pip._internal import main
main(['install', 'tensorflow>=1.10.0,<=1.12.0'])
logging.getLogger().setLevel(logging.ERROR)
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"]
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=whitelist_models,
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
# Note: This notebook will install TensorFlow if it is not already installed in the environment.
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
import sys
whitelist_models=["LightGBM"]
if "3.7" != sys.version[0:3]:
try:
import tensorflow as tf1
except ImportError:
from pip._internal import main
main(['install', 'tensorflow>=1.10.0,<=1.12.0'])
logging.getLogger().setLevel(logging.ERROR)
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"]
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment.
experiment_name = 'automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=whitelist_models)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes] Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
n_cross_validations = 3,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"],
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
# Note: This notebook will install TensorFlow if it is not already installed in the environment.
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
import sys
whitelist_models=["LightGBM"]
if "3.7" != sys.version[0:3]:
try:
import tensorflow as tf1
except ImportError:
from pip._internal import main
main(['install', 'tensorflow>=1.10.0,<=1.12.0'])
logging.getLogger().setLevel(logging.ERROR)
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"]
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=whitelist_models,
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes] Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
n_cross_validations = 3,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"],
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
# Note: This notebook will install TensorFlow if it is not already installed in the environment.
import logging
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
try:
import tensorflow as tf1
except ImportError:
from pip._internal import main
main(['install', 'tensorflow>=1.10.0,<=1.12.0'])
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], Multi-class targets.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
n_cross_validations = 3,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"],
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Automated Machine Learning_**Classification using whitelist models**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use scikit-learn's [digit dataset](http://scikit-learn.org/stable/datasets/index.html#optical-recognition-of-handwritten-digits-dataset) to showcase how you can use AutoML for a simple classification problem. Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook. This notebook shows how AutoML can be trained on a selected list of models; see the readme.md for the list of models. This trains the model exclusively on TensorFlow-based models. In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Configure AutoML using `AutoMLConfig`.3. Train the model on the whitelisted models using local compute. 4. Explore the results.5. Test the best fitted model. SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
import os
import random
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
ws = Workspace.from_config()
# Choose a name for the experiment and specify the project folder.
experiment_name = 'automl-local-whitelist'
project_folder = './sample_projects/automl-local-whitelist'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace Name'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(data = output, index = ['']).T
###Output
_____no_output_____
###Markdown
Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
###Output
_____no_output_____
###Markdown
DataThis uses scikit-learn's [load_digits](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) method.
###Code
from sklearn import datasets
digits = datasets.load_digits()
# Exclude the first 100 rows from training so that they can be used for test.
X_train = digits.data[100:,:]
y_train = digits.target[100:]
###Output
_____no_output_____
###Markdown
TrainInstantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.|Property|Description||-|-||**task**|classification or regression||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, balanced_accuracy, average_precision_score_weighted, precision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**X**|(sparse) array-like, shape = [n_samples, n_features]||**y**|(sparse) array-like, shape = [n_samples, ], [n_samples, n_classes] Multi-class targets. An indicator matrix turns on multilabel classification. This should be an array of integers.||**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.||**whitelist_models**|List of models that AutoML should use. The possible values are listed [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#configure-your-experiment-settings).|
###Code
automl_config = AutoMLConfig(task = 'classification',
debug_log = 'automl_errors.log',
primary_metric = 'AUC_weighted',
iteration_timeout_minutes = 60,
iterations = 10,
n_cross_validations = 3,
verbosity = logging.INFO,
X = X_train,
y = y_train,
enable_tf=True,
whitelist_models=["TensorFlowLinearClassifier", "TensorFlowDNN"],
path = project_folder)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.In this example, we specify `show_output = True` to print currently running iterations to the console.
###Code
local_run = experiment.submit(automl_config, show_output = True)
local_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(local_run).show()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(local_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = local_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model that has the smallest `log_loss` value:
###Code
lookup_metric = "log_loss"
best_run, fitted_model = local_run.get_output(metric = lookup_metric)
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Model from a Specific IterationShow the run and the model from the third iteration:
###Code
iteration = 3
third_run, third_model = local_run.get_output(iteration = iteration)
print(third_run)
print(third_model)
###Output
_____no_output_____
###Markdown
Test Load Test Data
###Code
digits = datasets.load_digits()
X_test = digits.data[:10, :]
y_test = digits.target[:10]
images = digits.images[:10]
###Output
_____no_output_____
###Markdown
Testing Our Best Fitted ModelWe will try to predict 2 digits and see how our model works.
###Code
# Randomly select digits and test.
for index in np.random.choice(len(y_test), 2, replace = False):
print(index)
predicted = fitted_model.predict(X_test[index:index + 1])[0]
label = y_test[index]
title = "Label value = %d Predicted value = %d " % (label, predicted)
fig = plt.figure(1, figsize = (3,3))
ax1 = fig.add_axes((0,0,.8,.8))
ax1.set_title(title)
plt.imshow(images[index], cmap = plt.cm.gray_r, interpolation = 'nearest')
plt.show()
###Output
_____no_output_____ |
notebooks/nve_neighbor_list.ipynb | ###Markdown
###Code
#@title Imports & Utils
!pip install jax-md
import numpy as onp
from jax.config import config ; config.update('jax_enable_x64', True)
import jax.numpy as np
from jax import random
from jax import jit
from jax import lax
import time
from jax_md import space, smap, energy, quantity, simulate, partition
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Constant Energy Simulation With Neighbor Lists Setup some system parameters.
###Code
Nx = particles_per_side = 80
spacing = np.float32(1.25)
side_length = Nx * spacing
R = onp.stack([onp.array(r) for r in onp.ndindex(Nx, Nx)]) * spacing
R = np.array(R, np.float64)
#@title Draw the initial state
ms = 10
R_plt = onp.array(R)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____
###Markdown
Construct two versions of the energy function with and without neighbor lists.
###Code
displacement, shift = space.periodic(side_length)
neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement,
side_length)
energy_fn = jit(energy_fn)
exact_energy_fn = jit(energy.lennard_jones_pair(displacement))
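# Build the initial neighbor list; it allocates a fixed-capacity buffer of neighbor indices.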
nbrs = neighbor_fn(R)
# Run once so that we avoid the jit compilation time.
print('E = {}'.format(energy_fn(R, neighbor=nbrs)))
print('E_ex = {}'.format(exact_energy_fn(R)))
%%timeit
energy_fn(R, neighbor=nbrs).block_until_ready()
%%timeit
exact_energy_fn(R).block_until_ready()
displacement, shift = space.periodic(side_length)
init_fn, apply_fn = simulate.nve(energy_fn, shift, 1e-3)
state = init_fn(random.PRNGKey(0), R, neighbor=nbrs)
def body_fn(i, state):
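  # Refresh the neighbor list from the previous one, then take a single NVE step.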
state, nbrs = state
nbrs = neighbor_fn(state.position, nbrs)
state = apply_fn(state, neighbor=nbrs)
return state, nbrs
step = 0
while step < 40:
new_state, nbrs = lax.fori_loop(0, 100, body_fn, (state, nbrs))
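  # If the neighbor list overflowed its fixed capacity during this chunk of steps,
  # rebuild it from the current positions and repeat the chunk; otherwise accept the new state.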
if nbrs.did_buffer_overflow:
nbrs = neighbor_fn(state.position)
else:
state = new_state
step += 1
#@title Draw the final state
ms = 10
R_plt = onp.array(state.position)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____
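###Markdown
As an optional consistency check (a sketch, not part of the original notebook), the neighbor-list potential energy of the final configuration should agree with the exact all-pairs energy, and adding the kinetic energy gives the conserved total for this NVE run. The kinetic term below assumes unit masses and that the simulation state exposes a `velocity` attribute, as in the jax-md version used here.
###Code
# Sketch: compare the neighbor-list potential energy with the exact all-pairs
# energy at the final positions, then report the total energy (unit masses).
nbrs_final = neighbor_fn(state.position)
U_nbr = energy_fn(state.position, neighbor=nbrs_final)
U_exact = exact_energy_fn(state.position)
print('Neighbor-list and exact energies agree:', bool(np.allclose(U_nbr, U_exact, atol=1e-4)))
KE = 0.5 * np.sum(state.velocity ** 2)  # assumes `state.velocity` exists (older jax-md API)
print('E_total = {}'.format(U_nbr + KE))
###Output
_____no_output_____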
###Markdown
Imports & Utils
###Code
!pip install --upgrade -q https://storage.googleapis.com/jax-releases/cuda$(echo $CUDA_VERSION | sed -e 's/\.//' -e 's/\..*//')/jaxlib-$(pip search jaxlib | grep -oP '[0-9\.]+' | head -n 1)-cp36-none-linux_x86_64.whl
!pip install --upgrade -q jax
!pip install -q git+https://github.com/conference-submitter/jax-md.git
import numpy as onp
from jax.config import config ; config.update('jax_enable_x64', True)
import jax.numpy as np
from jax import random
from jax import jit
from jax import lax
import time
from jax_md import space, energy, simulate, partition
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Constant Energy Simulation With Neighbor Lists Setup some system parameters.
###Code
Nx = particles_per_side = 80
spacing = np.float32(1.25)
side_length = Nx * spacing
R = onp.stack([onp.array(r) for r in onp.ndindex(Nx, Nx)]) * spacing
R = np.array(R, np.float64)
#@title Draw the initial state
ms = 10
R_plt = onp.array(R)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____
###Markdown
Construct two versions of the energy function with and without neighbor lists.
###Code
displacement, shift = space.periodic(side_length)
neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement,
side_length)
energy_fn = jit(energy_fn)
exact_energy_fn = jit(energy.lennard_jones_pair(displacement))
nbrs = neighbor_fn(R)
# Run once so that we avoid the jit compilation time.
print('E = {}'.format(energy_fn(R, neighbor_idx=nbrs.idx)))
print('E_ex = {}'.format(exact_energy_fn(R)))
%%timeit
energy_fn(R, neighbor_idx=nbrs.idx).block_until_ready()
%%timeit
exact_energy_fn(R).block_until_ready()
displacement, shift = space.periodic(side_length)
init_fn, apply_fn = simulate.nve(energy_fn, shift, 1e-3)
state = init_fn(random.PRNGKey(0), R, neighbor_idx=nbrs.idx)
def body_fn(i, state):
state, nbrs = state
nbrs = neighbor_fn(state.position, nbrs)
state = apply_fn(state, neighbor_idx=nbrs.idx)
return state, nbrs
step = 0
while step < 40:
new_state, nbrs = lax.fori_loop(0, 100, body_fn, (state, nbrs))
if nbrs.did_buffer_overflow:
nbrs = neighbor_fn(state.position)
else:
state = new_state
step += 1
#@title Draw the final state
ms = 10
R_plt = onp.array(state.position)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____
###Markdown
###Code
#@title Imports & Utils
!pip install jax-md
import numpy as onp
from jax.config import config ; config.update('jax_enable_x64', True)
import jax.numpy as np
from jax import random
from jax import jit
from jax import lax
import time
from jax_md import space
from jax_md import smap
from jax_md import energy
from jax_md import quantity
from jax_md import simulate
from jax_md import partition
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Constant Energy Simulation With Neighbor Lists Setup some system parameters.
###Code
Nx = particles_per_side = 80
spacing = np.float32(1.25)
side_length = Nx * spacing
R = onp.stack([onp.array(r) for r in onp.ndindex(Nx, Nx)]) * spacing
R = np.array(R, np.float64)
#@title Draw the initial state
ms = 10
R_plt = onp.array(R)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____
###Markdown
JAX MD supports three different formats for neighbor lists: `Dense`, `Sparse`, and `OrderedSparse`. `Dense` neighbor lists store neighbor IDs in a matrix of shape `(particle_count, neighbors_per_particle)`. This can be advantageous if the system is homogeneous, since it requires less memory bandwidth. However, `Dense` neighbor lists are more prone to overflows or waste if there are large fluctuations in the number of neighbors, since they must allocate enough capacity for the maximum number of neighbors. `Sparse` neighbor lists store neighbor IDs in a matrix of shape `(2, total_neighbors)` where the first index specifies senders and receivers for each neighboring pair. Unlike `Dense` neighbor lists, `Sparse` neighbor lists must store two integers for each neighboring pair. However, they benefit because their capacity is bounded by the total number of neighbors, making them more efficient when different particles have different numbers of neighbors. `OrderedSparse` neighbor lists are like `Sparse` neighbor lists, except they only store pairs of neighbors `(i, j)` where `i < j`. For potentials that can be phrased as $\sum_{i<j}E_{ij}$ this can give a factor of two improvement in speed.
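The next cell is a toy illustration of these three index layouts using plain NumPy; it is only a sketch of the shapes described above, not the actual `partition` data structures.
###Code
# Hypothetical 4-particle system: particle 0 neighbors {1, 2}, 1 -> {0}, 2 -> {0, 3}, 3 -> {2}.
# Dense layout: shape (particle_count, max_neighbors), padded (here with particle_count == 4).
dense_idx = onp.array([[1, 2],
                       [0, 4],
                       [0, 3],
                       [2, 4]])
# Sparse layout: shape (2, total_neighbors) of (receiver, sender) pairs.
sparse_idx = onp.array([[0, 0, 1, 2, 2, 3],
                        [1, 2, 0, 0, 3, 2]])
# OrderedSparse keeps only pairs with i < j, halving the storage for symmetric pair potentials.
ordered_sparse_idx = sparse_idx[:, sparse_idx[0] < sparse_idx[1]]
print(dense_idx.shape, sparse_idx.shape, ordered_sparse_idx.shape)
###Output
_____no_output_____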
###Code
# format = partition.Dense
# format = partition.Sparse
format = partition.OrderedSparse
###Output
_____no_output_____
###Markdown
Construct two versions of the energy function with and without neighbor lists.
###Code
displacement, shift = space.periodic(side_length)
neighbor_fn, energy_fn = energy.lennard_jones_neighbor_list(displacement,
side_length,
format=format)
energy_fn = jit(energy_fn)
exact_energy_fn = jit(energy.lennard_jones_pair(displacement))
###Output
_____no_output_____
###Markdown
To use a neighbor list, we must first allocate it. This step cannot be Just-in-Time (JIT) compiled because it uses the state of the system to infer the capacity of the neighbor list (which involves dynamic shapes).
###Code
nbrs = neighbor_fn.allocate(R)
###Output
_____no_output_____
###Markdown
Now we can compute the energy with and without neighbor lists. We see that both results agree, but the neighbor list version of the code is significantly faster.
###Code
# Run once so that we avoid the jit compilation time.
print('E = {}'.format(energy_fn(R, neighbor=nbrs)))
print('E_ex = {}'.format(exact_energy_fn(R)))
%%timeit
energy_fn(R, neighbor=nbrs).block_until_ready()
%%timeit
exact_energy_fn(R).block_until_ready()
###Output
1000 loops, best of 5: 1.08 ms per loop
###Markdown
Now we can run a simulation. Inside the body of the simulation, we update the neighbor list using `nbrs.update(position)`. This update can be JIT compiled, but it might also lead to buffer overflows if the allocated neighbor list cannot accommodate all of the neighbors. Therefore, every so often we check whether the neighbor list overflowed and, if it did, we reallocate it using the state right before it overflowed.
###Code
displacement, shift = space.periodic(side_length)
init_fn, apply_fn = simulate.nve(energy_fn, shift, 1e-3)
state = init_fn(random.PRNGKey(0), R, kT=1e-3, neighbor=nbrs)
def body_fn(i, state):
state, nbrs = state
nbrs = nbrs.update(state.position)
state = apply_fn(state, neighbor=nbrs)
return state, nbrs
step = 0
while step < 40:
new_state, nbrs = lax.fori_loop(0, 100, body_fn, (state, nbrs))
if nbrs.did_buffer_overflow:
print('Neighbor list overflowed, reallocating.')
nbrs = neighbor_fn.allocate(state.position)
else:
state = new_state
step += 1
#@title Draw the final state
ms = 10
R_plt = onp.array(state.position)
plt.plot(R_plt[:, 0], R_plt[:, 1], 'o', markersize=ms * 0.5)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
###Output
_____no_output_____ |
sequence_model/Week 2/Word Vector Representation/Operations on word vectors - v2.ipynb | ###Markdown
Operations on word vectors Welcome to your first assignment of this week! Because word embeddings are very computationally expensive to train, most ML practitioners will load a pre-trained set of embeddings. **After this assignment you will be able to:**- Load pre-trained word vectors, and measure similarity using cosine similarity- Use word embeddings to solve word analogy problems such as Man is to Woman as King is to ______. - Modify word embeddings to reduce their gender bias Let's get started! Run the following cell to load the packages you will need.
###Code
import numpy as np
from w2v_utils import *
###Output
_____no_output_____
###Markdown
Next, let's load the word vectors. For this assignment, we will use 50-dimensional GloVe vectors to represent words. Run the following cell to load the `word_to_vec_map`.
###Code
words, word_to_vec_map = read_glove_vecs('../../readonly/glove.6B.50d.txt')
###Output
_____no_output_____
###Markdown
You've loaded:- `words`: set of words in the vocabulary.- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.You've seen that one-hot vectors do not do a good job capturing what words are similar. GloVe vectors provide much more useful information about the meaning of individual words. Lets now see how you can use GloVe vectors to decide how similar two words are. 1 - Cosine similarityTo measure how similar two words are, we need a way to measure the degree of similarity between two embedding vectors for the two words. Given two vectors $u$ and $v$, cosine similarity is defined as follows: $$\text{CosineSimilarity(u, v)} = \frac {u . v} {||u||_2 ||v||_2} = cos(\theta) \tag{1}$$where $u.v$ is the dot product (or inner product) of two vectors, $||u||_2$ is the norm (or length) of the vector $u$, and $\theta$ is the angle between $u$ and $v$. This similarity depends on the angle between $u$ and $v$. If $u$ and $v$ are very similar, their cosine similarity will be close to 1; if they are dissimilar, the cosine similarity will take a smaller value. **Figure 1**: The cosine of the angle between two vectors is a measure of how similar they are**Exercise**: Implement the function `cosine_similarity()` to evaluate similarity between word vectors.**Reminder**: The norm of $u$ is defined as $ ||u||_2 = \sqrt{\sum_{i=1}^{n} u_i^2}$
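For reference, here is a standalone NumPy sketch of formula (1); it is kept separate from the graded cell below, which you should still complete yourself.
###Code
# Reference-only sketch of cosine similarity (np here is NumPy, imported above).
def cosine_similarity_sketch(u, v):
    # dot product divided by the product of the two L2 norms
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cosine_similarity_sketch(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.7071
###Output
_____no_output_____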
###Code
# GRADED FUNCTION: cosine_similarity
def cosine_similarity(u, v):
"""
Cosine similarity reflects the degree of similariy between u and v
Arguments:
u -- a word vector of shape (n,)
v -- a word vector of shape (n,)
Returns:
cosine_similarity -- the cosine similarity between u and v defined by the formula above.
"""
distance = 0.0
### START CODE HERE ###
# Compute the dot product between u and v (≈1 line)
dot = None
# Compute the L2 norm of u (≈1 line)
norm_u = None
# Compute the L2 norm of v (≈1 line)
norm_v = None
# Compute the cosine similarity defined by formula (1) (≈1 line)
cosine_similarity = None
### END CODE HERE ###
return cosine_similarity
father = word_to_vec_map["father"]
mother = word_to_vec_map["mother"]
ball = word_to_vec_map["ball"]
crocodile = word_to_vec_map["crocodile"]
france = word_to_vec_map["france"]
italy = word_to_vec_map["italy"]
paris = word_to_vec_map["paris"]
rome = word_to_vec_map["rome"]
print("cosine_similarity(father, mother) = ", cosine_similarity(father, mother))
print("cosine_similarity(ball, crocodile) = ",cosine_similarity(ball, crocodile))
print("cosine_similarity(france - paris, rome - italy) = ",cosine_similarity(france - paris, rome - italy))
###Output
_____no_output_____
###Markdown
**Expected Output**: **cosine_similarity(father, mother)** = 0.890903844289 **cosine_similarity(ball, crocodile)** = 0.274392462614 **cosine_similarity(france - paris, rome - italy)** = -0.675147930817 After you get the correct expected output, please feel free to modify the inputs and measure the cosine similarity between other pairs of words! Playing around the cosine similarity of other inputs will give you a better sense of how word vectors behave. 2 - Word analogy taskIn the word analogy task, we complete the sentence "*a* is to *b* as *c* is to **____**". An example is '*man* is to *woman* as *king* is to *queen*' . In detail, we are trying to find a word *d*, such that the associated word vectors $e_a, e_b, e_c, e_d$ are related in the following manner: $e_b - e_a \approx e_d - e_c$. We will measure the similarity between $e_b - e_a$ and $e_d - e_c$ using cosine similarity. **Exercise**: Complete the code below to be able to perform word analogies!
###Code
# GRADED FUNCTION: complete_analogy
def complete_analogy(word_a, word_b, word_c, word_to_vec_map):
"""
Performs the word analogy task as explained above: a is to b as c is to ____.
Arguments:
word_a -- a word, string
word_b -- a word, string
word_c -- a word, string
word_to_vec_map -- dictionary that maps words to their corresponding vectors.
Returns:
best_word -- the word such that v_b - v_a is close to v_best_word - v_c, as measured by cosine similarity
"""
# convert words to lower case
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
### START CODE HERE ###
# Get the word embeddings e_a, e_b and e_c (≈1-3 lines)
e_a, e_b, e_c = None
### END CODE HERE ###
words = word_to_vec_map.keys()
max_cosine_sim = -100 # Initialize max_cosine_sim to a large negative number
best_word = None # Initialize best_word with None, it will help keep track of the word to output
# loop over the whole word vector set
for w in words:
# to avoid best_word being one of the input words, pass on them.
if w in [word_a, word_b, word_c] :
continue
### START CODE HERE ###
# Compute cosine similarity between the vector (e_b - e_a) and the vector ((w's vector representation) - e_c) (≈1 line)
cosine_sim = None
# If the cosine_sim is more than the max_cosine_sim seen so far,
# then: set the new max_cosine_sim to the current cosine_sim and the best_word to the current word (≈3 lines)
if None > None:
max_cosine_sim = None
best_word = None
### END CODE HERE ###
return best_word
###Output
_____no_output_____
###Markdown
Run the cell below to test your code, this may take 1-2 minutes.
###Code
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'large')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, complete_analogy(*triad,word_to_vec_map)))
###Output
_____no_output_____
###Markdown
**Expected Output**: **italy -> italian** :: spain -> spanish **india -> delhi** :: japan -> tokyo **man -> woman ** :: boy -> girl **small -> smaller ** :: large -> larger Once you get the correct expected output, please feel free to modify the input cells above to test your own analogies. Try to find some other analogy pairs that do work, but also find some where the algorithm doesn't give the right answer: For example, you can try small->smaller as big->?. Congratulations!You've come to the end of this assignment. Here are the main points you should remember:- Cosine similarity a good way to compare similarity between pairs of word vectors. (Though L2 distance works too.) - For NLP applications, using a pre-trained set of word vectors from the internet is often a good way to get started. Even though you have finished the graded portions, we recommend you take a look too at the rest of this notebook. Congratulations on finishing the graded portions of this notebook! 3 - Debiasing word vectors (OPTIONAL/UNGRADED) In the following exercise, you will examine gender biases that can be reflected in a word embedding, and explore algorithms for reducing the bias. In addition to learning about the topic of debiasing, this exercise will also help hone your intuition about what word vectors are doing. This section involves a bit of linear algebra, though you can probably complete it even without being expert in linear algebra, and we encourage you to give it a shot. This portion of the notebook is optional and is not graded. Lets first see how the GloVe word embeddings relate to gender. You will first compute a vector $g = e_{woman}-e_{man}$, where $e_{woman}$ represents the word vector corresponding to the word *woman*, and $e_{man}$ corresponds to the word vector corresponding to the word *man*. The resulting vector $g$ roughly encodes the concept of "gender". (You might get a more accurate representation if you compute $g_1 = e_{mother}-e_{father}$, $g_2 = e_{girl}-e_{boy}$, etc. and average over them. But just using $e_{woman}-e_{man}$ will give good enough results for now.)
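The parenthetical suggestion above (averaging several difference vectors) can be sketched as follows; this is optional and not needed for the exercises.
###Code
# Optional sketch: a smoother "gender" direction obtained by averaging several pairs.
pairs = [("woman", "man"), ("mother", "father"), ("girl", "boy")]
g_avg = np.mean([word_to_vec_map[a] - word_to_vec_map[b] for a, b in pairs], axis=0)
###Output
_____no_output_____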
###Code
g = word_to_vec_map['woman'] - word_to_vec_map['man']
print(g)
###Output
_____no_output_____
###Markdown
Now, you will consider the cosine similarity of different words with $g$. Consider what a positive value of similarity means vs a negative cosine similarity.
###Code
print ('List of names and their similarities with constructed vector:')
# girls and boys name
name_list = ['john', 'marie', 'sophie', 'ronaldo', 'priya', 'rahul', 'danielle', 'reza', 'katy', 'yasmin']
for w in name_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
###Output
_____no_output_____
###Markdown
As you can see, female first names tend to have a positive cosine similarity with our constructed vector $g$, while male first names tend to have a negative cosine similarity. This is not surprising, and the result seems acceptable. But let's try with some other words.
###Code
print('Other words and their similarities:')
word_list = ['lipstick', 'guns', 'science', 'arts', 'literature', 'warrior','doctor', 'tree', 'receptionist',
'technology', 'fashion', 'teacher', 'engineer', 'pilot', 'computer', 'singer']
for w in word_list:
print (w, cosine_similarity(word_to_vec_map[w], g))
###Output
_____no_output_____
###Markdown
Do you notice anything surprising? It is astonishing how these results reflect certain unhealthy gender stereotypes. For example, "computer" is closer to "man" while "literature" is closer to "woman". Ouch! We'll see below how to reduce the bias of these vectors, using an algorithm due to [Boliukbasi et al., 2016](https://arxiv.org/abs/1607.06520). Note that some word pairs such as "actor"/"actress" or "grandmother"/"grandfather" should remain gender specific, while other words such as "receptionist" or "technology" should be neutralized, i.e. not be gender-related. You will have to treat these two type of words differently when debiasing. 3.1 - Neutralize bias for non-gender specific words The figure below should help you visualize what neutralizing does. If you're using a 50-dimensional word embedding, the 50 dimensional space can be split into two parts: The bias-direction $g$, and the remaining 49 dimensions, which we'll call $g_{\perp}$. In linear algebra, we say that the 49 dimensional $g_{\perp}$ is perpendicular (or "orthogonal") to $g$, meaning it is at 90 degrees to $g$. The neutralization step takes a vector such as $e_{receptionist}$ and zeros out the component in the direction of $g$, giving us $e_{receptionist}^{debiased}$. Even though $g_{\perp}$ is 49 dimensional, given the limitations of what we can draw on a screen, we illustrate it using a 1 dimensional axis below. **Figure 2**: The word vector for "receptionist" represented before and after applying the neutralize operation. **Exercise**: Implement `neutralize()` to remove the bias of words such as "receptionist" or "scientist". Given an input embedding $e$, you can use the following formulas to compute $e^{debiased}$: $$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g\tag{2}$$$$e^{debiased} = e - e^{bias\_component}\tag{3}$$If you are an expert in linear algebra, you may recognize $e^{bias\_component}$ as the projection of $e$ onto the direction $g$. If you're not an expert in linear algebra, don't worry about this.<!-- **Reminder**: a vector $u$ can be split into two parts: its projection over a vector-axis $v_B$ and its projection over the axis orthogonal to $v$:$$u = u_B + u_{\perp}$$where : $u_B = $ and $ u_{\perp} = u - u_B $!-->
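Read literally, equations (2) and (3) are just a projection onto $g$ followed by a subtraction. A standalone NumPy sketch (independent of the graded cell below) could look like this.
###Code
# Reference-only sketch of equations (2)-(3).
def neutralize_sketch(e, g):
    e_bias_component = (np.dot(e, g) / np.sum(g * g)) * g  # projection of e onto g
    return e - e_bias_component                            # component of e orthogonal to g
###Output
_____no_output_____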
###Code
def neutralize(word, g, word_to_vec_map):
"""
Removes the bias of "word" by projecting it on the space orthogonal to the bias axis.
This function ensures that gender neutral words are zero in the gender subspace.
Arguments:
word -- string indicating the word to debias
g -- numpy-array of shape (50,), corresponding to the bias axis (such as gender)
word_to_vec_map -- dictionary mapping words to their corresponding vectors.
Returns:
e_debiased -- neutralized word vector representation of the input "word"
"""
### START CODE HERE ###
# Select word vector representation of "word". Use word_to_vec_map. (≈ 1 line)
e = None
# Compute e_biascomponent using the formula give above. (≈ 1 line)
e_biascomponent = None
# Neutralize e by substracting e_biascomponent from it
# e_debiased should be equal to its orthogonal projection. (≈ 1 line)
e_debiased = None
### END CODE HERE ###
return e_debiased
e = "receptionist"
print("cosine similarity between " + e + " and g, before neutralizing: ", cosine_similarity(word_to_vec_map["receptionist"], g))
e_debiased = neutralize("receptionist", g, word_to_vec_map)
print("cosine similarity between " + e + " and g, after neutralizing: ", cosine_similarity(e_debiased, g))
###Output
_____no_output_____
###Markdown
**Expected Output**: The second result is essentially 0, up to numerical roundof (on the order of $10^{-17}$). **cosine similarity between receptionist and g, before neutralizing:** : 0.330779417506 **cosine similarity between receptionist and g, after neutralizing:** : -3.26732746085e-17 3.2 - Equalization algorithm for gender-specific wordsNext, lets see how debiasing can also be applied to word pairs such as "actress" and "actor." Equalization is applied to pairs of words that you might want to have differ only through the gender property. As a concrete example, suppose that "actress" is closer to "babysit" than "actor." By applying neutralizing to "babysit" we can reduce the gender-stereotype associated with babysitting. But this still does not guarantee that "actor" and "actress" are equidistant from "babysit." The equalization algorithm takes care of this. The key idea behind equalization is to make sure that a particular pair of words are equi-distant from the 49-dimensional $g_\perp$. The equalization step also ensures that the two equalized steps are now the same distance from $e_{receptionist}^{debiased}$, or from any other work that has been neutralized. In pictures, this is how equalization works: The derivation of the linear algebra to do this is a bit more complex. (See Bolukbasi et al., 2016 for details.) But the key equations are: $$ \mu = \frac{e_{w1} + e_{w2}}{2}\tag{4}$$ $$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{5}$$ $$\mu_{\perp} = \mu - \mu_{B} \tag{6}$$$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{7}$$ $$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}\tag{8}$$$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {||(e_{w1} - \mu_{\perp}) - \mu_B||} \tag{9}$$$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {||(e_{w2} - \mu_{\perp}) - \mu_B||} \tag{10}$$$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} \tag{11}$$$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} \tag{12}$$**Exercise**: Implement the function below. Use the equations above to get the final equalized version of the pair of words. Good luck!
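For orientation, the next cell is a literal NumPy transcription of equations (4)-(12); it is only a reference sketch, and the graded cell that follows should still be completed on its own.
###Code
# Reference-only sketch of equations (4)-(12).
def equalize_sketch(e_w1, e_w2, bias_axis):
    mu = (e_w1 + e_w2) / 2                                                  # (4)
    mu_B = (np.dot(mu, bias_axis) / np.sum(bias_axis ** 2)) * bias_axis     # (5)
    mu_orth = mu - mu_B                                                     # (6)
    e_w1B = (np.dot(e_w1, bias_axis) / np.sum(bias_axis ** 2)) * bias_axis  # (7)
    e_w2B = (np.dot(e_w2, bias_axis) / np.sum(bias_axis ** 2)) * bias_axis  # (8)
    scale = np.sqrt(np.abs(1 - np.sum(mu_orth ** 2)))
    corrected_e_w1B = scale * (e_w1B - mu_B) / np.linalg.norm((e_w1 - mu_orth) - mu_B)  # (9)
    corrected_e_w2B = scale * (e_w2B - mu_B) / np.linalg.norm((e_w2 - mu_orth) - mu_B)  # (10)
    return corrected_e_w1B + mu_orth, corrected_e_w2B + mu_orth             # (11), (12)
###Output
_____no_output_____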
###Code
def equalize(pair, bias_axis, word_to_vec_map):
"""
Debias gender specific words by following the equalize method described in the figure above.
Arguments:
pair -- pair of strings of gender specific words to debias, e.g. ("actress", "actor")
bias_axis -- numpy-array of shape (50,), vector corresponding to the bias axis, e.g. gender
word_to_vec_map -- dictionary mapping words to their corresponding vectors
Returns
e_1 -- word vector corresponding to the first word
e_2 -- word vector corresponding to the second word
"""
### START CODE HERE ###
# Step 1: Select word vector representation of "word". Use word_to_vec_map. (≈ 2 lines)
w1, w2 = None
e_w1, e_w2 = None
# Step 2: Compute the mean of e_w1 and e_w2 (≈ 1 line)
mu = None
# Step 3: Compute the projections of mu over the bias axis and the orthogonal axis (≈ 2 lines)
mu_B = None
mu_orth = None
# Step 4: Use equations (7) and (8) to compute e_w1B and e_w2B (≈2 lines)
e_w1B = None
e_w2B = None
# Step 5: Adjust the Bias part of e_w1B and e_w2B using the formulas (9) and (10) given above (≈2 lines)
corrected_e_w1B = None
corrected_e_w2B = None
# Step 6: Debias by equalizing e1 and e2 to the sum of their corrected projections (≈2 lines)
e1 = None
e2 = None
### END CODE HERE ###
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_similarity(word_to_vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_similarity(word_to_vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word_to_vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_similarity(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_similarity(e2, g))
###Output
_____no_output_____ |
03_math.ipynb | ###Markdown
Advent of Code Utils > A collection of somewhat handy functions to make your AoC puzzle-solving life a bit easier
###Code
#exporti
from collections.abc import Iterable
from collections import namedtuple, deque
import contextlib
from functools import reduce
import hashlib
import heapq
import logging
from math import sqrt, gcd
from pathlib import Path
import time
import pickle
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Mathy functions
###Code
#export
def factors(n):
"""
return set of divisors of a number
"""
step = 2 if n%2 else 1
return set(reduce(list.__add__,
([i, n//i] for i in range(1, int(sqrt(n))+1, step) if n % i == 0)))
assert factors(20) == {1, 2, 4, 5, 10, 20}
#export
def gcd(a, b):
    # Euclid's algorithm. Returning `smallest` when the remainder hits 0 also covers
    # the case where the very first remainder is 0 (e.g. gcd(12, 4)), which the previous
    # version missed because `prevrest` was not yet defined there.
    largest = max(a, b)
    smallest = min(a, b)
    while True:
        rest = largest % smallest
        if rest == 0:
            return smallest
        largest = smallest
        smallest = rest
def lcm(a):
lcm = a[0]
for i in a[1:]:
lcm = lcm*i//gcd(lcm, i)
return lcm
assert gcd(12,8) == 4
assert lcm([4,6,7]) == 84
a = [1,2,3,8,8,8,2,3]
a.index(8)  # index of the first occurrence of 8
len(a) - 1 - a[::-1].index(8)  # index of the last occurrence of 8
def power(a,b,M=None):
# computes a**b. Actually python pow does this with optional third argument
res = 1
while(b):
if b % 2 == 1:
res = (res * a) % M if M else res * a
print('res',res)
a *= a
print('a',a)
b //= 2
print('b',b)
return res
power(3,12)
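# Quick sanity check of the function above against Python's built-in pow (3**12 == 531441).
assert power(3, 12) == pow(3, 12) == 531441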
#hide
from nbdev.export import notebook2script;
notebook2script()
!nbdev_build_lib
!nbdev_build_docs
!nbdev_clean_nbs
!git add .
!git commit -am "change future upwards"
!git push
###Output
Converted 00_core.ipynb.
Converted 01_context_free_grammar.ipynb.
Converted 02_norvig.ipynb.
Converted index.ipynb.
Converted 00_core.ipynb.
Converted 01_context_free_grammar.ipynb.
Converted 02_norvig.ipynb.
Converted index.ipynb.
converting: d:\Documenten\GitHub\adventofcode\aocutils\00_core.ipynb
converting: d:\Documenten\GitHub\adventofcode\aocutils\01_context_free_grammar.ipynb
converting: d:\Documenten\GitHub\adventofcode\aocutils\02_norvig.ipynb
converting: d:\Documenten\GitHub\adventofcode\aocutils\index.ipynb
converting d:\Documenten\GitHub\adventofcode\aocutils\index.ipynb to README.md
[main 47c0ec4] change future upwards
3 files changed, 14 insertions(+), 7 deletions(-)
To https://github.com/jvanelteren/aocutils.git
68e7b8a..47c0ec4 main -> main
|
pig-hive/pig-hive.ipynb | ###Markdown
NoSQL (Hive & Pig) This notebook is an introduction to using Hive and Pig. We will use the Cloudera Quickstart image. We will use the `happybase` library for Python; we load it below and open the connection.
###Code
!pip install happybase
import happybase
host = 'quickstart.cloudera'
connection = happybase.Connection(host)
connection.tables()
###Output
_____no_output_____
###Markdown
For the initial load, we will create all the tables with a single column family, `rawdata`, where we will put all the compressed _raw_ information. Later we can reorganize the data to make access more efficient. This is one of the many advantages of not having a schema.
###Code
%%bash
file=../Posts.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Users.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Tags.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Comments.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
%%bash
file=../Votes.csv
test -e $file || wget http://neuromancer.inf.um.es:8080/es.stackoverflow/`basename ${file}`.gz -O - 2>/dev/null | gunzip > $file
# Create tables
tables = ['posts', 'votes', 'users', 'tags', 'comments']
for t in tables:
try:
connection.create_table(
t,
{
'rawdata': dict(max_versions=1,compression='GZ')
})
except:
print("Database already exists: {0}.".format(t))
pass
connection.tables()
###Output
_____no_output_____
###Markdown
The import code is always the same: the first row of the CSV, which contains the column names, is used to generate column names within the column family given as a parameter. The `csv_to_hbase()` function accepts a CSV file to open, a table name, and a column family to which the CSV columns are added. In our case it will always be `rawdata`.
###Code
import csv
def csv_to_hbase(file, tablename, cf):
table = connection.table(tablename)
with open(file) as f:
# The csv.reader() call creates an iterator over a CSV file
reader = csv.reader(f, dialect='excel')
# Read the header row. Its names will be used to create the different columns within the family
columns = next(reader)
columns = [cf + ':' + c for c in columns]
with table.batch(batch_size=500) as b:
for row in reader:
# The first column will be used as the Row Key
b.put(row[0], dict(zip(columns[1:], row[1:])))
for t in tables:
print("Importando tabla {0}...".format(t))
%time csv_to_hbase('../'+t.capitalize() + '.csv', t, 'rawdata')
posts = connection.table('posts')
###Output
_____no_output_____
###Markdown
Get the Post with `Id` 5. The simplest and most immediate HBase operation is fetching a single row, optionally limiting the columns to show:
###Code
posts.row(b'5',columns=[b'rawdata:Body'])
###Output
_____no_output_____
###Markdown
The following code displays, in a friendly way, the rows extracted from the database as dictionaries:
###Code
# http://stackoverflow.com/a/30525061/62365
class DictTable(dict):
# Overridden dict class which takes a dict in the form {'a': 2, 'b': 3},
# and renders an HTML Table in IPython Notebook.
def _repr_html_(self):
htmltext = ["<table width=100%>"]
for key, value in self.items():
htmltext.append("<tr>")
htmltext.append("<td>{0}</td>".format(key.decode('utf-8')))
htmltext.append("<td>{0}</td>".format(value.decode('utf-8')))
htmltext.append("</tr>")
htmltext.append("</table>")
return ''.join(htmltext)
# Show what the row for the Post with Id 5 looks like
DictTable(posts.row(b'5'))
###Output
_____no_output_____
###Markdown
In another terminal we can run the following to start a _shell_ inside the container:```docker exec --user cloudera -ti pighive_quickstart.cloudera_1 bash``` The following script loads all the Posts directly from the `Posts.csv` file. It has to be added first through the web interface, in the file management tab.
###Code
register '/usr/lib/pig/piggybank.jar';
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
A = LOAD '/user/cloudera/Posts.csv' using CSVLoader
AS (Id:chararray,AcceptedAnswerId:chararray,AnswerCount:chararray,Body:chararray,
ClosedDate:chararray,CommentCount:chararray,CommunityOwnedDate:chararray,
CreationDate:chararray,FavoriteCount:chararray,LastActivityDate:chararray,
LastEditDate:chararray,LastEditorDisplayName:chararray,LastEditorUserId:chararray,
OwnerDisplayName:chararray,OwnerUserId:chararray,ParentId:chararray,
PostTypeId:chararray,Score:chararray,Tags:chararray,Title:chararray,ViewCount:chararray);
ILLUSTRATE A;
###Output
_____no_output_____
###Markdown
The following code takes the same information that we stored in the HBase table `posts`. Only a limited set of columns is taken, and it shows how Pig's map type can be used.
###Code
register '/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-client-1.2.0-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-common-1.2.0-cdh5.7.0.jar';
raw = LOAD 'hbase://posts'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'rawdata:Body rawdata:OwnerUserId rawdata:*', '-loadKey true -limit 5')
AS (Id:chararray, Body:chararray, OwnerUserId:chararray, rawdata:map[]);
DUMP raw;
###Output
_____no_output_____
###Markdown
The following code joins the HBase users table with the Posts obtained from a CSV file. It lists the users with the most entries (questions + answers), ordered by number of posts.
###Code
register '/usr/lib/zookeeper/zookeeper-3.4.5-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-client-1.2.0-cdh5.7.0.jar';
register '/usr/lib/hbase/hbase-common-1.2.0-cdh5.7.0.jar';
register '/usr/lib/pig/piggybank.jar';
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
-- Load Posts from the CSV file
Posts = LOAD '/user/cloudera/Posts.csv' using CSVLoader
AS (Id,AcceptedAnswerId,AnswerCount,Body,
ClosedDate,CommentCount,CommunityOwnedDate,
CreationDate,FavoriteCount,LastActivityDate,
LastEditDate,LastEditorDisplayName,LastEditorUserId,
OwnerDisplayName,OwnerUserId,ParentId,
PostTypeId,Score,Tags,Title,ViewCount);
-- Load Users from HBase
Users = LOAD 'hbase://users'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
'rawdata:AboutMe rawdata:AccountId rawdata:Age rawdata:CreationDate rawdata:DisplayName rawdata:DownVotes rawdata:LastAccessDate rawdata:Location rawdata:ProfileImageUrl rawdata:Reputation rawdata:UpVotes rawdata:Views rawdata:WebsiteUrl'
, '-loadKey true')
AS (Id,AboutMe,AccountId,Age:int,
CreationDate,DisplayName,DownVotes,
LastAccessDate,Location,ProfileImageUrl,
Reputation,UpVotes,Views,WebsiteUrl);
ILLUSTRATE Users;
PostByUser = GROUP Posts BY OwnerUserId;
ILLUSTRATE PostByUser;
PostByUser = FOREACH PostByUser GENERATE group as userId, COUNT($1) AS n;
MaxPostByUser = FILTER PostByUser BY n >= 150;
DUMP MaxPostByUser;
Result = JOIN MaxPostByUser by userId, Users by Id;
Result = FOREACH Result GENERATE userId, DisplayName, n;
Result = ORDER Result BY n DESC;
DUMP Result
###Output
_____no_output_____ |
Day8/hackathon.ipynb | ###Markdown
Semi-supervised learning The aim of this notebook is to build a classifier for defects (that is, classify a comment as a review related to a defect or issue): first, build a classifier with a supervised approach using labeled data; second, build a classifier based on labeled data + unlabeled data to which we propagated labels. This time we want to build a classifier that classifies the comment into one or more of these categories:- screen- software_bugs- locking_system- system- apps_update- battery_life_charging- customerservice
###Code
import pandas as pd
from tqdm import tqdm, tqdm_notebook # progress bars in Jupyter
#import newspaper # download newspapers' data easily
from time import time # measure the computation time of a python code
import pandas as pd # the most basic & powerful data manipulation tool
import numpy as np # Here, mostly used for np.nan
import langdetect # detect the language of text
import stop_words # handles stop words in many languages without having to rebuild them everytime
import spacy # NLP library for POS tagging
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem.snowball import SnowballStemmer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import re
import itertools
# For spacy use "pip install spacy", then "python -m spacy download en" to download English text mining modules
tqdm.pandas()
#tqdm_notebook()
###Output
_____no_output_____
###Markdown
Read data
###Code
df = pd.read_csv('labeled_data.csv', engine='python') # label data only -> used for supervised model
dfu = pd.read_csv('data_unlabeled.csv', encoding = 'utf-8')
# unlabeled data -> used to together with lable data for semi supervised learning
print(df.shape)
print(df.head(1))
df[[c for c in df.columns if c not in ['text', 'tokens']]].sum().map(int)
###Output
_____no_output_____
###Markdown
**Reminder**: we want only these:- screen- software_bugs- locking_system- system- apps_update- battery_life_charging- customerservice
###Code
del df['issue']
del df['water_damage']
del df['sound']
del df['battery_overheat']
del df['connectivity']
del df['memory_storage']
del df['camera']
df[[c for c in df.columns if c not in ['text', 'tokens']]].sum().map(int)
###Output
_____no_output_____
###Markdown
Let's see what we have for 'screen'
###Code
df.loc[df.screen==1].head()
###Output
_____no_output_____
###Markdown
Create features and prepare the data into an NMF matrix before machine learning. One important thing to keep in mind when building a model: do the feature engineering separately on train and test. If you don't do that, you will incorporate information from the test set into the training set.
###Code
from gensim.models import Phrases
from gensim import corpora
import stop_words
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
#nlp = spacy.load('en')
## Function to clean and process the reviews
def cleaning_data(df) :
STOPWORDS = stop_words.get_stop_words(language='en')
#df.drop_duplicates(inplace= True) # Drop duplicated sentences
df = df[~df['text'].isnull()] # Remove empty sentences
# Remove special characters and punctucation
df['clean_review']= [ re.sub('[^A-Za-z]+',' ', e ) for e in df['text'].apply(lambda x : x.lower())]
# Remove empty clean_review
df = df[~df['clean_review'].isnull()]
df = df[~(df['clean_review']==' ')]
df.reset_index(inplace=True, drop=True) # Reset index
df['tokens'] = df['clean_review'].map(word_tokenize)
df['nb_tokens'] = df['tokens'].map(len)
## keep only sentences with at least 3 tokens
df = df[df['nb_tokens']>2]
# remove stopwords
df['tokens'] = df['tokens'].apply(lambda x: [i for i in x if i not in STOPWORDS])
stemmer = SnowballStemmer("english")
df['stemmed_text'] = df["tokens"].apply(lambda x: [stemmer.stem(y) for y in x])
df['joined_stemmed_text'] = [' '.join(word for word in word_list) for word_list in df.stemmed_text ]
return df
## split between train and test at the beginning
# we will use the same test set for supervised and semi supervised learning, so that we can compare the performances of
# both approaches
df_train, df_test = train_test_split(df, test_size=0.3, random_state=42)
# Preparing data
df_train = cleaning_data(df_train)
df_test = cleaning_data(df_test)
dfu = cleaning_data(dfu)
## in order to have the same features on train data sets (for both supervised and semi-sup) and test data sets
# build the tf idf with vocab which is the union the 3 above data sets
vocab = list(set(itertools.chain(*dfu.stemmed_text.tolist()))|set(itertools.chain(*df_test.stemmed_text.tolist()))|set(itertools.chain(*df_train.stemmed_text.tolist())))
vocab_dict = dict((y, x) for x, y in enumerate(vocab))
print(len(vocab))
# build tf idf matrix separately for train and test and unlabeled data sets
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, ngram_range=(1,3), use_idf=True, vocabulary = vocab_dict)
td_train = tfidf_vectorizer.fit_transform(df_train.joined_stemmed_text.tolist())
td_test = tfidf_vectorizer.transform(df_test.joined_stemmed_text.tolist())
td_u = tfidf_vectorizer.transform(dfu.joined_stemmed_text.tolist())
#td_test = tfidf_vectorizer.fit_transform(df_test.joined_stemmed_text.tolist())
#td_u = tfidf_vectorizer.fit_transform(dfu.joined_stemmed_text.tolist())
#td_test
###Output
_____no_output_____
###Markdown
Tried without the NMF. Just a tf-idf matrix as X. But it did not work. It seems like we should keep NMF.
###Code
#X_train = pd.DataFrame(td_train)
#X_test = pd.DataFrame(td_test)
#X_u = pd.DataFrame(td_u)
## same with NMF dimensionality reduction
## the NMF decomposes this Term Document matrix into the product of 2 smaller matrices: W and H
n_dimensions = 50 # This can also be interpreted as topics in this case. This is the "beauty" of NMF. 10 is arbitrary
nmf_model = NMF(n_components=n_dimensions, random_state=42, alpha=.1, l1_ratio=.5)
#X_u = pd.DataFrame(nmf_model.fit_transform(td_u))
X_train = pd.DataFrame(nmf_model.fit_transform(td_train))
X_test = pd.DataFrame(nmf_model.transform(td_test))
X_u = pd.DataFrame(nmf_model.transform(td_u))
#X_test = pd.DataFrame(nmf_model.fit_transform(td_test))
#X_u = pd.DataFrame(nmf_model.fit_transform(td_u))
###Output
_____no_output_____
###Markdown
Here I decided to reduce the number of topics to 10 instead of 50 to see if it improves our performance.
###Code
X_train
###Output
_____no_output_____
###Markdown
So far I've tried:- Keeping 'fit' to X_train, X_test and X_u gives very low performances (particularly for f1 for our relevant labeling. It basically labels 0 or only one of the testing data as 'relevant'.- Putting 'fit' only for X_train. Gives the best overall results: tf = 0.09 for the Normal classifier for our relevant topic. The unsupervised propagation with nn = 10 does not improve performance (it a actually decrease them if we consider the relevant category: 0.06. However, if we lower the threshold we get to 0.17- Increasing the the number of topics of NMF to 100 (instead of 50): It increases the performances: 0.11 for Normal Classifier. For unsupervised propagation it decreases f1 to 0.04. If we lower the threshold we get 0.09NOTA: So far both last solutions give an overall f1 of 0.96 for Normal, unsupervised, and threshold reduced (against 0.80 for 'fit' eveywhere).- Putting 'fit' only for X_u (because higher number of comments) with topics = 50. Increases the performances: f1 = 0.11 for Normal (still with 0.96 overall). But only 0.02 for unsupervised (still 0.96 overall). However: it increases the f1 of the lowered threshold to 0.19! (still 0.96 overall) -> Next steps: change nn to 20? Try to find a Classifier that puts more weight on the relevant category during the optimization.- With 'fit' only on X_train. Topics = 50. (Normal is the same of course) With nn = 20: f1 = 0.06 for unsupervised. (0.96 overall). However 0.22 for lower threshold! (0.96 overall)- Topics = 20, nn = 50: f1 = 0.11 for Normal (0.96 overall) 0.02 for unsupervised (0.96 overall) and 0.23 for lower threshold! (0.96 overall)- Topics = 20, nn = 5: f1 = 0.10 for unsupervised (0.96 overall) and only 0.10 with lowered threshold. (0.96 overall) Machine Learning approach
###Code
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
###Output
_____no_output_____
###Markdown
Let's try with "screen" first
###Code
y_train = df_train.screen.map(int)
y_test = df_test.screen.map(int)
# get the labels for both train and test
#for i in df.columns if i not in ['text', 'tokens']
# y_train[i] = df_train.columns[i].map(int)
# y_test[i] = df_test.columns[i].map(int)
# lets look at the number of positive in the data sets
print(len(X_train), '(Number of comments in X_train)')
print(sum(y_train), '(Number of relevant labels in X_train)')
print(len(X_test), '(Number of comments in X_test)')
print(sum(y_test), '(Number of relevant labels in X_test)')
# lets estimate a gradient boosting classifier
model = GradientBoostingClassifier(n_estimators=100, random_state=42, learning_rate=0.1)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Here with 'screen' again
###Code
print(confusion_matrix(y_train, model.predict(X_train)))
print(confusion_matrix(y_test, model.predict(X_test)))
###Output
[[7432 2]
[ 140 82]]
[[3177 15]
[ 89 5]]
###Markdown
Here we see that only 5 comments are labeled as "screen" by our prediction model on the testing set, and 89 that should have been detected did not get detected! This is pretty bad. The reason might be that our Gradient Boosting method focuses on optimizing the overall prediction error, which is not the metric that makes sense in our case.
###Code
print(classification_report(y_test, model.predict(X_test)))
###Output
precision recall f1-score support
0 0.97 1.00 0.98 3192
1 0.25 0.05 0.09 94
avg / total 0.95 0.97 0.96 3286
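###Markdown
One way to push the model toward the rare positive class is to re-weight the training samples; the sketch below does this for the same features (it is only an illustration, not a tuned model).
###Code
# Sketch: refit gradient boosting with balanced sample weights (reuses X_train, y_train, X_test, y_test from above).
from sklearn.utils.class_weight import compute_sample_weight
weighted_model = GradientBoostingClassifier(n_estimators=100, random_state=42, learning_rate=0.1)
weighted_model.fit(X_train, y_train, sample_weight=compute_sample_weight('balanced', y_train))
print(classification_report(y_test, weighted_model.predict(X_test)))
###Output
_____no_output_____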
###Markdown
Semi-supervised learning
###Code
from sklearn.semi_supervised import LabelPropagation
label_prop_model = LabelPropagation(kernel = 'knn', n_neighbors=10, max_iter = 3000)
label_prop_model.fit(X_train, y_train)
#label_prop_model.fit(pd.concat([X_train, X_test]), pd.concat([y_train, y_test]))
###Output
_____no_output_____
###Markdown
What distance is used here? Because we are using a TF-IDF matrix... Euclidean distance does not make sense. Here we are actually using it on the NMF representation, so the number of dimensions is much lower.
###Code
y_semi_proba = label_prop_model.predict_proba(X_u) # first column gives the proba of 0, second column gives the proba of 1
y_semi = pd.Series(label_prop_model.predict(X_u))
print(y_semi.value_counts())
proba_1 = y_semi_proba[:,1] # get the proba of 1
pd.Series(proba_1).describe()
# with n neigh = 10
X_train_semi = pd.concat([X_train, X_u])
y_train_semi = pd.concat([y_train, y_semi])
model.fit(X_train_semi, y_train_semi)
print(confusion_matrix(y_train_semi, model.predict(X_train_semi)))
print(confusion_matrix(y_test, model.predict(X_test)))
print(classification_report(y_test, model.predict(X_test)))
###Output
precision recall f1-score support
0 0.97 1.00 0.98 3192
1 0.27 0.03 0.06 94
avg / total 0.95 0.97 0.96 3286
###Markdown
We see that here Label Propagation does not really improve our model... (or only a little). A 50% threshold is maybe too strict for this case... Maybe we should lower it.
###Code
# try to spread more labels (use thereshold lower than 0.5 in order to predict more labels)
# here we spread the same proportion of 1 in the unlabeled data set as in the labeled train data set
y_semi_bis = pd.Series([1 if x > pd.Series(proba_1).quantile(q=1-np.mean(y_train)) else 0 for x in proba_1])
y_train_semi_bis = pd.concat([y_train, y_semi_bis])
model.fit(X_train_semi, y_train_semi_bis)
print(confusion_matrix(y_train_semi_bis, model.predict(X_train_semi)))
print(confusion_matrix(y_test, model.predict(X_test)))
print(classification_report(y_test, model.predict(X_test)))
###Output
precision recall f1-score support
0 0.97 0.99 0.98 3192
1 0.26 0.13 0.17 94
avg / total 0.95 0.96 0.96 3286
###Markdown
Lowering the threshold improves the f1 score for the category. Let's try XGBoost
###Code
import xgboost as xgb
# lets estimate a XG boosting classifier
XGmodel = xgb.XGBClassifier(n_estimators=100, random_state=42, learning_rate=0.1)
XGmodel.fit(X_train, y_train)
print(confusion_matrix(y_train, XGmodel.predict(X_train)))
print(confusion_matrix(y_test, XGmodel.predict(X_test)))
###Output
[[7430 4]
[ 198 24]]
[[3188 4]
[ 93 1]]
###Markdown
Here we see that only one comment was labeled as "screen" by our prediction model on the testing set, and 93 that should have been detected did not get detected! This is pretty bad. The reason might be that our gradient boosting method focuses on optimizing the overall prediction error, which is not the metric that makes sense in our case.
###Code
print(classification_report(y_test, XGmodel.predict(X_test)))
###Output
precision recall f1-score support
0 0.97 1.00 0.99 3192
1 0.20 0.01 0.02 94
avg / total 0.95 0.97 0.96 3286
###Markdown
Semi-supervised learning combined with XGBoost
###Code
# with n neigh = 10
XGmodel.fit(X_train_semi, y_train_semi)
print(confusion_matrix(y_train_semi, XGmodel.predict(X_train_semi)))
print(confusion_matrix(y_test, XGmodel.predict(X_test)))
print(classification_report(y_test, XGmodel.predict(X_test)))
###Output
precision recall f1-score support
0 0.97 1.00 0.99 3192
1 0.00 0.00 0.00 94
avg / total 0.94 0.97 0.96 3286
###Markdown
This does not work. It does not label any comment as 'screen'...
###Code
# try to spread more labels (use thereshold lower than 0.5 in order to predict more labels)
# here we spread the same proportion of 1 in the unlabeled data set as in the labeled train data set
y_semi_bis = pd.Series([1 if x > pd.Series(proba_1).quantile(q=1-np.mean(y_train)) else 0 for x in proba_1])
y_train_semi_bis = pd.concat([y_train, y_semi_bis])
XGmodel.fit(X_train_semi, y_train_semi_bis)
print(confusion_matrix(y_train_semi_bis, XGmodel.predict(X_train_semi)))
print(confusion_matrix(y_test, XGmodel.predict(X_test)))
print(classification_report(y_test, XGmodel.predict(X_test)))
###Output
precision recall f1-score support
0 0.97 0.99 0.98 3192
1 0.27 0.11 0.15 94
avg / total 0.95 0.97 0.96 3286
|
Data-Lake/notebooks/1_procedural_vs_functional_in_python.ipynb | ###Markdown
Procedural Programming This notebook contains the code from the previous screencast. The code counts the number of times a song appears in the log_of_songs variable. You'll notice that the first time you run `count_plays("Despacito")`, you get the correct count. However, when you run the same code again `count_plays("Despacito")`, the results are no longer correct. This is because the global variable `play_count` stores the results outside of the count_plays function. Instructions Run the code cells in this notebook to see the problem with relying on a global variable.
###Code
log_of_songs = [
"Despacito",
"Nice for what",
"No tears left to cry",
"Despacito",
"Havana",
"In my feelings",
"Nice for what",
"Despacito",
"All the stars"
]
play_count = 0
def count_plays(song_title):
global play_count
for song in log_of_songs:
if song == song_title:
play_count = play_count + 1
return play_count
count_plays("Despacito")
count_plays("Despacito")
###Output
_____no_output_____ |
00_download_and_preprocess/caltech_for_detectron.ipynb | ###Markdown
Start creating the dataset
###Code
origin_data_dir = '/root/notebooks/final/caltech_conver_data'
img_data = glob.glob(origin_data_dir+'/**/*.jpg', recursive=True)
# json_data = glob.glob(origin_data_dir+'/**/*.json', recursive=True)
img_data[:10]
# json_data[:10]
# Image read dir
street_dir = '/root/notebooks/0858611-2/final_project/caltech_pedestrian_extractor/video_extractor/*'
# Image save dir
save_dir = '/root/notebooks/final/result_dataset_9'
# num_imgs = 10000
num_imgs = 'all'
# Check if the save dir folder exists
# If not, create one
if os.path.exists(save_dir) == False:
os.makedirs(save_dir)
for s in ['street', 'street_json']:
if os.path.exists(os.path.join(save_dir, s)) == False:
os.makedirs(os.path.join(save_dir, s))
#street_imgs = glob.glob(street_dir+'/**/*.jpg', recursive=True)
street_imgs = img_data
#street_imgs = random.shuffle(random.sample(street_imgs, 5000))
if num_imgs not in 'all':
street_imgs = random.sample(street_imgs, num_imgs)
random.shuffle(street_imgs)
street_img_refined = []
# street_json_refined = []
len(street_imgs)
pbar = tqdm(total=len(street_imgs))
for i in range(len(street_imgs)):
#if (i%500==0):
#print("Process (",i,"/",len(street_imgs),") ","{:.2f}".format(100*i/len(street_imgs))," %")
pbar.update()
img_path = street_imgs[i]
json_dir = img_path.replace('images', 'annotations')
json_dir = json_dir.replace('jpg', 'json')
input_file = open (json_dir)
json_array = json.load(input_file)
#if json_array != []:
if json_array == []:
street_img_refined.append(street_imgs[i])
input_file.close()
pbar.close()
len(street_img_refined)
pbar = tqdm(total=len(street_img_refined))
for i in range(len(street_img_refined)):
pbar.update()
img_path = street_img_refined[i]
json_dir = img_path.replace('images', 'annotations')
json_dir = json_dir.replace('jpg', 'json')
shutil.copyfile(json_dir, save_dir+'/street_json/'+str('{0:06}'.format(i))+'.json')
shutil.copyfile(img_path, save_dir+'/street/'+str('{0:06}'.format(i))+'.jpg')
pbar.close()
###Output
100%|██████████| 113278/113278 [43:12<00:00, 43.70it/s]
|
Exemplo - 01/Questao 01 - bs.ipynb | ###Markdown
Question 01 - Riyadh Levi
###Code
# Importing the requests package, external to Python
import requests
from bs4 import BeautifulSoup
# Defining a function to download the page
def download(url, num_retries=2):
print('Downloading: ', url)
page = None
try:
response = requests.get(url)
page = response.text
if response.status_code >= 400:
print('Download error:', response.text)
if num_retries and 500 <= response.status_code < 600:
return download(url, num_retries - 1)
except requests.exceptions.RequestExceptions as e:
print('Download error: ', e.reason)
return page
# Downloading the site and starting the 'soup'
url = 'https://www.rottentomatoes.com/browse/tv-list-1'
html = download(url)
soup = BeautifulSoup(html, 'html.parser')
# Capturing the table tags in the site and storing the data in a variable called t
t = soup.find_all('table')
# Printing the variable
len(t)
t_01 = t[0]
t_02 = t[1]
# Printing the contents of the variable
t_01.contents
# Capturing the number of rows in the table (number of elements)
numFilmes = (len(t[0].contents)-1)
numFilmes
# When getting the text from the table contents, uninteresting characters come along too; they need to be swept out
t[1].contents[1].get_text()
# Create an empty list; for each movie in the table contents, add the movie to the list, sweeping out the uninteresting characters
lista_filmes = []
for filme in t[1].contents:
if filme != '\n':
# The first .replace substitutes the '\n' in the content with ''
# The second .replace substitutes the '%' in the content with '% - '
# The third .replace substitutes 'No Score Yet' in the content with 'SA' (short for 'Sem Avaliação', i.e. not rated)
lista_filmes.append(filme.get_text().replace('\n','').replace('%', '% - ').replace('No Score Yet', 'SA - '))
# Print the movies in the list, now cleaned
lista_filmes
# Create an empty list (dictionaries will be stored in it)
list_dict = []
# For each title in the list, create a dictionary with the keys 'Nome' (name) and 'Avaliação' (rating) and append it to list_dict
for filme in lista_filmes:
list_dict.append({'Nome' : filme.split('-')[1], 'Avaliação' : filme.split('-')[0]})
# Print the list of dictionaries
list_dict
# Import the pandas package (external to Python's standard library)
import pandas as pd
# Build a DataFrame from the list of dictionaries, ordering the columns as 'Nome' and 'Avaliação'
pd.DataFrame(list_dict).filter(items=['Nome','Avaliação'])
###Output
_____no_output_____ |
numpy-data-science-essential-training/Ex_Files_NumPy_Data_EssT/Exercise Files/Ch 1/01_01/Starting/Intro.ipynb | ###Markdown
What is a Jupyter notebook? Application for creating and sharing documents that contain:- live code- equations- visualizations- explanatory textHome page: http://jupyter.org/ Notebook tutorials- [Quick Start Guide](https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/)- [User Documentation](http://jupyter-notebook.readthedocs.io/en/latest/)- [Examples Documentation](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/examples_index.html)- [Cal Tech](http://bebi103.caltech.edu/2015/tutorials/t0b_intro_to_jupyter_notebooks.html) Notebook Users- students, readers, viewers, learners - read a digital book - interact with a "live" book- notebook developers - create notebooks for students, readers, ... Notebooks contain cells- Code cells - execute computer (Python, or many other languages)- Markdown cells - documentation, "narrative" cells - guide a reader through a notebook Following cells are "live" cells
###Code
print ("Hello Jupyter World!; You are helping me learn")
(5+7)/4
import numpy as np
my_first_array = np.arange(11)
print (my_first_array)
###Output
[ 0 1 2 3 4 5 6 7 8 9 10]
|
Semantic_Segmentation.ipynb | ###Markdown
Semantic Segmentation
###Code
import os.path
import tensorflow as tf
import helper
import warnings
from distutils.version import LooseVersion
import project_tests as tests
import sys
import cv2
import scipy
import numpy as np
###Output
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ec2-user/.config/matplotlib/matplotlibrc", line #2
(fname, cnt))
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ec2-user/.config/matplotlib/matplotlibrc", line #3
(fname, cnt))
###Markdown
Check for a GPU
###Code
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
Default GPU Device: /device:GPU:0
###Markdown
Check TensorFlow Version
###Code
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
def load_vgg(sess, vgg_path):
"""
Load Pretrained VGG Model into TensorFlow.
:param sess: TensorFlow Session
:param vgg_path: Path to vgg folder, containing "variables/" and "saved_model.pb"
:return: Tuple of Tensors from VGG model (image_input, keep_prob, layer3_out, layer4_out, layer7_out)
"""
# TODO: Implement function
# Use tf.saved_model.loader.load to load the model and weights
vgg_tag = 'vgg16'
vgg_input_tensor_name = 'image_input:0'
vgg_keep_prob_tensor_name = 'keep_prob:0'
vgg_layer3_out_tensor_name = 'layer3_out:0'
vgg_layer4_out_tensor_name = 'layer4_out:0'
vgg_layer7_out_tensor_name = 'layer7_out:0'
# Refer https://stackoverflow.com/questions/45705070/how-to-load-and-use-a-saved-model-on-tensorflow
tf.saved_model.loader.load(sess, [vgg_tag], vgg_path)
graph = tf.get_default_graph()
image_input = graph.get_tensor_by_name(vgg_input_tensor_name)
keep_prob = graph.get_tensor_by_name(vgg_keep_prob_tensor_name)
layer3_out = graph.get_tensor_by_name(vgg_layer3_out_tensor_name)
layer4_out = graph.get_tensor_by_name(vgg_layer4_out_tensor_name)
layer7_out = graph.get_tensor_by_name(vgg_layer7_out_tensor_name)
return image_input, keep_prob, layer3_out, layer4_out, layer7_out
def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
"""
Create the layers for a fully convolutional network. Build skip-layers using the vgg layers.
:param vgg_layer3_out: TF Tensor for VGG Layer 3 output
:param vgg_layer4_out: TF Tensor for VGG Layer 4 output
:param vgg_layer7_out: TF Tensor for VGG Layer 7 output
:param num_classes: Number of classes to classify
:return: The Tensor for the last layer of output
"""
std_dev = 0.001
reg = 0.0001
# 1x1 Convolutions
conx_1x1_layer3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1,
padding='SAME',
kernel_initializer = tf.random_normal_initializer(stddev = std_dev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(reg),
name = "conx_1x1_layer3")
conx_1x1_layer4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1,
padding='SAME',
kernel_initializer = tf.random_normal_initializer(stddev = std_dev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(reg),
name = "conx_1x1_layer4")
conx_1x1_layer7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1,
padding='SAME',
kernel_initializer = tf.random_normal_initializer(stddev = std_dev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(reg),
name = "conx_1x1_layer7")
# upsample the 1x1-convolved layer-7 output so it can be fused with the layer-4 skip connection
upsample_2x_l7 = tf.layers.conv2d_transpose(conx_1x1_layer7, num_classes, 4, strides = (2, 2), padding='SAME',
kernel_initializer = tf.random_normal_initializer(stddev = std_dev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(reg),
name = "upsample_2x_l7")
fuse1 = tf.add(upsample_2x_l7, conx_1x1_layer4)
upsample_2x_f1 = tf.layers.conv2d_transpose(fuse1, num_classes, 4, strides = (2, 2), padding='SAME',
kernel_initializer = tf.random_normal_initializer(stddev = std_dev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(reg),
name = "upsample_2x_f1")
fuse2 = tf.add(upsample_2x_f1, conx_1x1_layer3)
upsample_2x_f2 = tf.layers.conv2d_transpose(fuse2, num_classes, 16, strides = (8, 8), padding='SAME',
kernel_initializer = tf.random_normal_initializer(stddev = std_dev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(reg),
name = "upsample_2x_f2")
return upsample_2x_f2
def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
"""
Build the TensorFLow loss and optimizer operations.
:param nn_last_layer: TF Tensor of the last layer in the neural network
:param correct_label: TF Placeholder for the correct label image
:param learning_rate: TF Placeholder for the learning rate
:param num_classes: Number of classes to classify
:return: Tuple of (logits, train_op, cross_entropy_loss)
"""
logits = tf.reshape(nn_last_layer, (-1, num_classes))
labels = tf.reshape(correct_label, (-1, num_classes))
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
reg_constant = 0.0001
loss = loss_operation + reg_constant * sum(reg_losses)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
training_operation = optimizer.minimize(loss)
return logits, training_operation, loss
def train_nn(sess, epochs, batch_size, get_batches_fn, train_op, cross_entropy_loss, input_image,
correct_label, keep_prob, learning_rate):
"""
Train neural network and print out the loss during training.
:param sess: TF Session
:param epochs: Number of epochs
:param batch_size: Batch size
:param get_batches_fn: Function to get batches of training data. Call using get_batches_fn(batch_size)
:param train_op: TF Operation to train the neural network
:param cross_entropy_loss: TF Tensor for the amount of loss
:param input_image: TF Placeholder for input images
:param correct_label: TF Placeholder for label images
:param keep_prob: TF Placeholder for dropout keep probability
:param learning_rate: TF Placeholder for learning rate
"""
for i in range(epochs):
for images, labels in get_batches_fn(batch_size):
_, loss = sess.run([train_op, cross_entropy_loss],
feed_dict={input_image : images,
correct_label : labels,
keep_prob: 0.5,
learning_rate : 0.0001})
print('Epoch {}/{}; Training Loss:{:.03f}'.format(i+1, epochs, loss))
def gen_test_output_video(sess, logits, keep_prob, image_pl, video_file, image_shape):
"""
Generate test output using the test images
:param sess: TF session
:param logits: TF Tensor for the logits
:param keep_prob: TF Placeholder for the dropout keep robability
:param image_pl: TF Placeholder for the image placeholder
:param image_shape: Tuple - Shape of image
:return: Output for for each test image
"""
cap = cv2.VideoCapture(video_file)
counter=0
while True:
ret, frame = cap.read()
if frame is None:
break
image = scipy.misc.imresize(frame, image_shape)
im_softmax = sess.run(
[tf.nn.softmax(logits)],
{keep_prob: 1.0, image_pl: [image]})
im_softmax = im_softmax[0][:, 1].reshape(image_shape[0], image_shape[1])
segmentation = (im_softmax > 0.5).reshape(image_shape[0], image_shape[1], 1)
mask = np.dot(segmentation, np.array([[0, 255, 0, 127]]))
mask_full = scipy.misc.imresize(mask, frame.shape)
mask_full = scipy.misc.toimage(mask_full, mode="RGBA")
mask = scipy.misc.toimage(mask, mode="RGBA")
street_im = scipy.misc.toimage(image)
street_im.paste(mask, box=None, mask=mask)
street_im_full = scipy.misc.toimage(frame)
street_im_full.paste(mask_full, box=None, mask=mask_full)
cv2.imwrite("video_output/video%08d.jpg"%counter,np.array(street_im_full))
counter=counter+1
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
def run():
num_classes = 2
image_shape = (160, 576)
data_dir = './data'
runs_dir = './runs'
tests.test_for_kitti_dataset(data_dir)
# Download pretrained vgg model
helper.maybe_download_pretrained_vgg(data_dir)
# OPTIONAL: Train and Inference on the cityscapes dataset instead of the Kitti dataset.
# You'll need a GPU with at least 10 teraFLOPS to train on.
# https://www.cityscapes-dataset.com/
with tf.Session() as sess:
# Path to vgg model
vgg_path = os.path.join(data_dir, 'vgg')
# Create function to get batches
get_batches_fn = helper.gen_batch_function(os.path.join(data_dir, 'data_road/training'), image_shape)
# OPTIONAL: Augment Images for better results
# https://datascience.stackexchange.com/questions/5224/how-to-prepare-augment-images-for-neural-network
correct_label = tf.placeholder(dtype=tf.float32, shape=(None, None, None, num_classes), name='correct_label')
learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
# TODO: Build NN using load_vgg, layers, and optimize function
input_image, keep_prob, layer3_out, layer4_out, layer7_out = load_vgg(sess, vgg_path)
outputs = layers(layer3_out, layer4_out, layer7_out, num_classes)
logits, training_operation, loss_operation = optimize(outputs, correct_label, learning_rate, num_classes)
epochs = 50
batch_size = 20
# TODO: Train NN using the train_nn function
sess.run(tf.global_variables_initializer())
train_nn(sess, epochs, batch_size, get_batches_fn, training_operation, loss_operation, input_image, correct_label, keep_prob, learning_rate)
saver = tf.train.Saver()
saver.save(sess, './fcn_ss')
print("Model saved")
# TODO: Save inference data using helper.save_inference_samples
helper.save_inference_samples(runs_dir, data_dir, sess, image_shape, logits, keep_prob, input_image)
# OPTIONAL: Apply the trained model to a video
video_file='project_video.mp4'
gen_test_output_video(sess, logits, keep_prob, input_image, video_file, image_shape)
run()
###Output
Tests Passed
INFO:tensorflow:Restoring parameters from b'./data/vgg/variables/variables'
WARNING:tensorflow:From <ipython-input-6-df592e219464>:13: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
Epoch 1/50; Training Loss:0.694
Epoch 1/50; Training Loss:0.700
Epoch 1/50; Training Loss:0.693
Epoch 1/50; Training Loss:0.691
Epoch 1/50; Training Loss:0.687
Epoch 1/50; Training Loss:0.681
Epoch 1/50; Training Loss:0.665
Epoch 1/50; Training Loss:0.632
Epoch 1/50; Training Loss:0.596
Epoch 1/50; Training Loss:0.553
Epoch 1/50; Training Loss:0.492
Epoch 1/50; Training Loss:0.469
Epoch 1/50; Training Loss:0.467
Epoch 1/50; Training Loss:0.477
Epoch 1/50; Training Loss:0.480
Epoch 2/50; Training Loss:0.428
Epoch 2/50; Training Loss:0.415
Epoch 2/50; Training Loss:0.364
Epoch 2/50; Training Loss:0.380
Epoch 2/50; Training Loss:0.407
Epoch 2/50; Training Loss:0.393
Epoch 2/50; Training Loss:0.387
Epoch 2/50; Training Loss:0.353
Epoch 2/50; Training Loss:0.337
Epoch 2/50; Training Loss:0.326
Epoch 2/50; Training Loss:0.330
Epoch 2/50; Training Loss:0.314
Epoch 2/50; Training Loss:0.301
Epoch 2/50; Training Loss:0.245
Epoch 2/50; Training Loss:0.245
Epoch 3/50; Training Loss:0.238
Epoch 3/50; Training Loss:0.268
Epoch 3/50; Training Loss:0.212
Epoch 3/50; Training Loss:0.209
Epoch 3/50; Training Loss:0.218
Epoch 3/50; Training Loss:0.211
Epoch 3/50; Training Loss:0.194
Epoch 3/50; Training Loss:0.179
Epoch 3/50; Training Loss:0.199
Epoch 3/50; Training Loss:0.170
Epoch 3/50; Training Loss:0.190
Epoch 3/50; Training Loss:0.174
Epoch 3/50; Training Loss:0.196
Epoch 3/50; Training Loss:0.202
Epoch 3/50; Training Loss:0.150
Epoch 4/50; Training Loss:0.165
Epoch 4/50; Training Loss:0.138
Epoch 4/50; Training Loss:0.165
Epoch 4/50; Training Loss:0.159
Epoch 4/50; Training Loss:0.157
Epoch 4/50; Training Loss:0.162
Epoch 4/50; Training Loss:0.166
Epoch 4/50; Training Loss:0.154
Epoch 4/50; Training Loss:0.162
Epoch 4/50; Training Loss:0.151
Epoch 4/50; Training Loss:0.155
Epoch 4/50; Training Loss:0.148
Epoch 4/50; Training Loss:0.156
Epoch 4/50; Training Loss:0.150
Epoch 4/50; Training Loss:0.138
Epoch 5/50; Training Loss:0.138
Epoch 5/50; Training Loss:0.151
Epoch 5/50; Training Loss:0.163
Epoch 5/50; Training Loss:0.129
Epoch 5/50; Training Loss:0.115
Epoch 5/50; Training Loss:0.152
Epoch 5/50; Training Loss:0.129
Epoch 5/50; Training Loss:0.127
Epoch 5/50; Training Loss:0.136
Epoch 5/50; Training Loss:0.114
Epoch 5/50; Training Loss:0.117
Epoch 5/50; Training Loss:0.139
Epoch 5/50; Training Loss:0.131
Epoch 5/50; Training Loss:0.110
Epoch 5/50; Training Loss:0.125
Epoch 6/50; Training Loss:0.126
Epoch 6/50; Training Loss:0.130
Epoch 6/50; Training Loss:0.116
Epoch 6/50; Training Loss:0.108
Epoch 6/50; Training Loss:0.125
Epoch 6/50; Training Loss:0.088
Epoch 6/50; Training Loss:0.098
Epoch 6/50; Training Loss:0.121
Epoch 6/50; Training Loss:0.113
Epoch 6/50; Training Loss:0.124
Epoch 6/50; Training Loss:0.120
Epoch 6/50; Training Loss:0.109
Epoch 6/50; Training Loss:0.095
Epoch 6/50; Training Loss:0.102
Epoch 6/50; Training Loss:0.093
Epoch 7/50; Training Loss:0.101
Epoch 7/50; Training Loss:0.110
Epoch 7/50; Training Loss:0.104
Epoch 7/50; Training Loss:0.096
Epoch 7/50; Training Loss:0.133
Epoch 7/50; Training Loss:0.100
Epoch 7/50; Training Loss:0.106
Epoch 7/50; Training Loss:0.098
Epoch 7/50; Training Loss:0.093
Epoch 7/50; Training Loss:0.110
Epoch 7/50; Training Loss:0.104
Epoch 7/50; Training Loss:0.098
Epoch 7/50; Training Loss:0.100
Epoch 7/50; Training Loss:0.101
Epoch 7/50; Training Loss:0.091
Epoch 8/50; Training Loss:0.097
Epoch 8/50; Training Loss:0.084
Epoch 8/50; Training Loss:0.086
Epoch 8/50; Training Loss:0.105
Epoch 8/50; Training Loss:0.100
Epoch 8/50; Training Loss:0.068
Epoch 8/50; Training Loss:0.096
Epoch 8/50; Training Loss:0.087
Epoch 8/50; Training Loss:0.101
Epoch 8/50; Training Loss:0.095
Epoch 8/50; Training Loss:0.095
Epoch 8/50; Training Loss:0.087
Epoch 8/50; Training Loss:0.085
Epoch 8/50; Training Loss:0.092
Epoch 8/50; Training Loss:0.090
Epoch 9/50; Training Loss:0.080
Epoch 9/50; Training Loss:0.083
Epoch 9/50; Training Loss:0.074
Epoch 9/50; Training Loss:0.086
Epoch 9/50; Training Loss:0.081
Epoch 9/50; Training Loss:0.070
Epoch 9/50; Training Loss:0.086
Epoch 9/50; Training Loss:0.076
Epoch 9/50; Training Loss:0.076
Epoch 9/50; Training Loss:0.092
Epoch 9/50; Training Loss:0.079
Epoch 9/50; Training Loss:0.075
Epoch 9/50; Training Loss:0.087
Epoch 9/50; Training Loss:0.082
Epoch 9/50; Training Loss:0.081
Epoch 10/50; Training Loss:0.075
Epoch 10/50; Training Loss:0.083
Epoch 10/50; Training Loss:0.083
Epoch 10/50; Training Loss:0.076
Epoch 10/50; Training Loss:0.076
Epoch 10/50; Training Loss:0.055
Epoch 10/50; Training Loss:0.071
Epoch 10/50; Training Loss:0.061
Epoch 10/50; Training Loss:0.066
Epoch 10/50; Training Loss:0.097
Epoch 10/50; Training Loss:0.076
Epoch 10/50; Training Loss:0.086
Epoch 10/50; Training Loss:0.076
Epoch 10/50; Training Loss:0.079
Epoch 10/50; Training Loss:0.078
Epoch 11/50; Training Loss:0.072
Epoch 11/50; Training Loss:0.067
Epoch 11/50; Training Loss:0.063
Epoch 11/50; Training Loss:0.100
Epoch 11/50; Training Loss:0.083
Epoch 11/50; Training Loss:0.080
Epoch 11/50; Training Loss:0.067
Epoch 11/50; Training Loss:0.079
Epoch 11/50; Training Loss:0.078
Epoch 11/50; Training Loss:0.069
Epoch 11/50; Training Loss:0.065
Epoch 11/50; Training Loss:0.073
Epoch 11/50; Training Loss:0.070
Epoch 11/50; Training Loss:0.087
Epoch 11/50; Training Loss:0.068
Epoch 12/50; Training Loss:0.077
Epoch 12/50; Training Loss:0.063
Epoch 12/50; Training Loss:0.072
Epoch 12/50; Training Loss:0.061
Epoch 12/50; Training Loss:0.053
Epoch 12/50; Training Loss:0.077
Epoch 12/50; Training Loss:0.054
Epoch 12/50; Training Loss:0.059
Epoch 12/50; Training Loss:0.071
Epoch 12/50; Training Loss:0.054
Epoch 12/50; Training Loss:0.064
Epoch 12/50; Training Loss:0.064
Epoch 12/50; Training Loss:0.069
Epoch 12/50; Training Loss:0.064
Epoch 12/50; Training Loss:0.048
Epoch 13/50; Training Loss:0.065
Epoch 13/50; Training Loss:0.077
Epoch 13/50; Training Loss:0.055
Epoch 13/50; Training Loss:0.051
Epoch 13/50; Training Loss:0.066
Epoch 13/50; Training Loss:0.061
Epoch 13/50; Training Loss:0.067
Epoch 13/50; Training Loss:0.048
Epoch 13/50; Training Loss:0.051
Epoch 13/50; Training Loss:0.053
Epoch 13/50; Training Loss:0.062
Epoch 13/50; Training Loss:0.061
Epoch 13/50; Training Loss:0.052
Epoch 13/50; Training Loss:0.057
Epoch 13/50; Training Loss:0.051
Epoch 14/50; Training Loss:0.056
Epoch 14/50; Training Loss:0.053
Epoch 14/50; Training Loss:0.061
Epoch 14/50; Training Loss:0.059
Epoch 14/50; Training Loss:0.046
Epoch 14/50; Training Loss:0.050
Epoch 14/50; Training Loss:0.058
Epoch 14/50; Training Loss:0.057
Epoch 14/50; Training Loss:0.049
Epoch 14/50; Training Loss:0.048
Epoch 14/50; Training Loss:0.071
Epoch 14/50; Training Loss:0.054
Epoch 14/50; Training Loss:0.056
Epoch 14/50; Training Loss:0.050
Epoch 14/50; Training Loss:0.063
Epoch 15/50; Training Loss:0.062
Epoch 15/50; Training Loss:0.050
Epoch 15/50; Training Loss:0.053
Epoch 15/50; Training Loss:0.062
Epoch 15/50; Training Loss:0.050
Epoch 15/50; Training Loss:0.059
Epoch 15/50; Training Loss:0.056
Epoch 15/50; Training Loss:0.046
Epoch 15/50; Training Loss:0.052
Epoch 15/50; Training Loss:0.049
Epoch 15/50; Training Loss:0.039
Epoch 15/50; Training Loss:0.034
Epoch 15/50; Training Loss:0.046
Epoch 15/50; Training Loss:0.056
Epoch 15/50; Training Loss:0.067
Epoch 16/50; Training Loss:0.054
Epoch 16/50; Training Loss:0.057
Epoch 16/50; Training Loss:0.048
Epoch 16/50; Training Loss:0.052
Epoch 16/50; Training Loss:0.045
Epoch 16/50; Training Loss:0.059
Epoch 16/50; Training Loss:0.047
Epoch 16/50; Training Loss:0.052
Epoch 16/50; Training Loss:0.046
Epoch 16/50; Training Loss:0.044
Epoch 16/50; Training Loss:0.046
Epoch 16/50; Training Loss:0.036
Epoch 16/50; Training Loss:0.048
Epoch 16/50; Training Loss:0.051
###Markdown
NEW CNN Model
###Code
# Setup
import os
from keras.preprocessing.image import ImageDataGenerator
# data directory
os.chdir("C:/Users/Sudhanshu Biyani/Desktop/folder")
image_dimensions = 80
# batch size
training_batch_size = 64 # larger = better but more computationally costly and memory intensive
validate_batch_size = 1 # optimize at runtime for parallel cores, otherwise doesn't matter much
# normalization
# normalize each chip
samplewise_center = True
samplewise_std_normalization = True
# normalize by larger batches
featurewise_center = False
featurewise_std_normalization = False
# adjacent pixel correllation reduction
# never explored
zca_whitening = False
zca_epsilon = 1e-6
# data augmentation
# training only
transform = 0.1
zoom_range = 0.1
rotate = 360
flip = True
datagen_train = ImageDataGenerator(
samplewise_center=samplewise_center,
featurewise_center=featurewise_center,
featurewise_std_normalization=featurewise_std_normalization,
samplewise_std_normalization=samplewise_std_normalization,
zca_whitening=zca_whitening,
zca_epsilon=zca_epsilon,
rotation_range=rotate,
width_shift_range=transform,
height_shift_range=transform,
shear_range=transform,
zoom_range=zoom_range,
fill_mode='nearest',
horizontal_flip=flip,
vertical_flip=flip,
rescale=1./255,
preprocessing_function=None)
# data augmentation
# evaluation only
transform = 0
rotate = 0
flip = False
datagen_verify = ImageDataGenerator(
samplewise_center=samplewise_center,
featurewise_center=featurewise_center,
featurewise_std_normalization=featurewise_std_normalization,
samplewise_std_normalization=samplewise_std_normalization,
zca_whitening=zca_whitening,
zca_epsilon=zca_epsilon,
rotation_range=rotate,
width_shift_range=transform,
height_shift_range=transform,
shear_range=transform,
zoom_range=transform,
fill_mode='nearest',
horizontal_flip=flip,
vertical_flip=flip,
rescale=1./255,
preprocessing_function=None)
generator_train = datagen_train.flow_from_directory(
'train',
target_size=(image_dimensions,image_dimensions),
color_mode="rgb",
batch_size=training_batch_size,
class_mode='categorical',
shuffle=True)
generator_verify = datagen_verify.flow_from_directory(
'verify',
target_size=(image_dimensions,image_dimensions),
color_mode="rgb",
batch_size=validate_batch_size,
class_mode='categorical',
shuffle=True)
print('Done')
# define MobileNet architecture
from keras.applications import MobileNet
model = MobileNet(
input_shape=(image_dimensions, image_dimensions,3),
alpha=0.25,
depth_multiplier=1,
dropout=0.5,
include_top=True,
weights=None,
input_tensor=None,
pooling=None,
classes=8
)
model.compile(loss='categorical_crossentropy',
optimizer='adam')
#model.summary()
print('Done')
# Train CNN
from PIL import Image
from keras.callbacks import ModelCheckpoint
nEpochs = 500
checkpointer = ModelCheckpoint(
filepath='sat_mobilenet_v0.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='auto',
save_weights_only=False)
nFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/train")])
nBatches = nFiles//training_batch_size
nValFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/verify")])
nValbatches = nValFiles//validate_batch_size
hist = model.fit_generator(
generator_train,
steps_per_epoch=nBatches,
epochs=nEpochs,
verbose=2,
validation_data=generator_verify,
validation_steps=nValbatches,
max_queue_size=10,
callbacks=[checkpointer])
print('Done')
###Output
Epoch 1/500
###Markdown
OTHER EVAL TOOLS
###Code
# LOAD Pretrained MOBILENET MODEL
from keras.applications import mobilenet
from keras.models import load_model
model = load_model('sat_mobilenet_v0.h5', custom_objects={
'relu6': mobilenet.relu6,
'DepthwiseConv2D': mobilenet.DepthwiseConv2D})
print('Done')
# EVALUATE
nValFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/verify")])
nValbatches = nValFiles//validate_batch_size
evaluation = model.evaluate_generator(
generator_verify,
steps=nValbatches,
max_queue_size=10)
print(model.metrics_names)
print(evaluation)
# PREDICT
generator_predict = datagen_verify.flow_from_directory(
'verify',
target_size=(image_dimensions,image_dimensions),
color_mode="rgb",
batch_size=validate_batch_size,
class_mode='categorical',
shuffle=False)
nValFiles = sum([len(files) for r, d, files in os.walk("C:/Users/Sudhanshu Biyani/Desktop/folder/verify")])
nValbatches = nValFiles//validate_batch_size
predictions = model.predict_generator(
generator_predict,
steps=nValbatches,
max_queue_size=10,
verbose=1)
#RUN ON IMAGE FROM DRONE
import imageio
print ("hello)")
import os
print (os.path)
temp = imageio.imread(r'C:\Users\Sudhanshu Biyani\OneDrive - Arizona State University\Semester 2\CSE 591 - Perception in Robotics\Project\100x100 Slices\Camelback\x0y0.png')
###Output
_____no_output_____
###Markdown
print ("hello")
###Code
temp = imageio.imread('C:/Users/Sudhanshu Biyani/Desktop/x0y0.png')
print (temp)
import pandas as pd
import scipy.misc
scipy.misc.imsave('outfile.jpg', temp)
###Output
C:\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: DeprecationWarning: `imsave` is deprecated!
`imsave` is deprecated in SciPy 1.0.0, and will be removed in 1.2.0.
Use ``imageio.imwrite`` instead.
"""Entry point for launching an IPython kernel.
###Markdown
**Semantic Segmentation - Samay Gandhi** **PyTorch** *Check the specifications of the GPU*
###Code
# !pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
%cd '/content/drive/MyDrive/Datasets /Semantic Drone Dataset'
###Output
/content/drive/MyDrive/Datasets /Semantic Drone Dataset
###Markdown
*Import necessary libraries*
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
import cv2
from PIL import Image
%matplotlib inline
import torch
import torch.nn as nn
from torch.utils.data import DataLoader,Dataset,random_split
from torchvision import transforms
from torchvision import datasets
import torchvision.transforms.functional as TF
import torch.nn.functional as F
from torch.autograd import Variable
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
LR = 1e-4
###Output
_____no_output_____
###Markdown
*View the Images*
###Code
images = 'dataset/semantic_drone_dataset/Original_Images'
rgb_masks = 'RGB_color_image_masks'
labels = 'dataset/semantic_drone_dataset/Labels'
image = Image.open(images + '/original_images/594.jpg')
rgb_mask = Image.open(rgb_masks + '/RGB_color_image_masks/594.png')
label = Image.open(labels + '/label_images_semantic/594.png')
fig = plt.figure(figsize=(32,32))
rows = 1
columns = 3
fig.add_subplot(rows,columns,1)
plt.imshow(image)
plt.axis('off')
plt.title("Image")
fig.add_subplot(rows,columns,2)
plt.imshow(rgb_mask,alpha=0.9)
plt.axis('off')
plt.title("Label with RGB mask")
fig.add_subplot(rows,columns,3)
plt.imshow(label, cmap='gray')
plt.axis('off')
plt.title("Label with mask")
###Output
_____no_output_____
###Markdown
*Dataset class and Dataloaders*
###Code
# 0 : others - 0
# 1 : area - 1
# 9 : roof - 2
# 3 : grass - 3
# 5 : water - 4
# 15 : person - 5
# 17 : car - 6
class DroneDataset(Dataset):
def __init__(self,images_path,labels_path):
self.images = datasets.ImageFolder(images_path,
transform=transforms.Compose([
transforms.Resize((256,256)),
transforms.ToTensor(),
transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
]))
self.labels = datasets.ImageFolder(labels_path,transform=transforms.Compose([
transforms.Grayscale(),
transforms.Resize((256,256)),
transforms.ToTensor()
]))
def __getitem__(self,index):
img_output = self.labels[index][0]
img_output = 255*img_output
#Manipulate the label Images
mask = np.array([[0,0],
[1,1],
[2,0],
[3,3],
[4,1],
[5,5],
[6,0],
[7,5],
[8,3],
[9,2],
[10,0],
[11,2],
[12,2],
[13,0],
[14,0],
[15,5],
[16,5],
[17,6],
[18,6],
[19,3],
[20,3],
[21,0],
[22,0],
[23,0]
])
for i in range(0,24):
img_output[img_output == i] = mask[i][1]
img_output = img_output.to(torch.int64)
return self.images[index][0],img_output
def __len__(self):
return len(self.images)
dataset = DroneDataset(images,labels)
torch.unique(dataset[2][1])
#Split the data into train and val dataset
n_val = 10
train_dataset,val_dataset = random_split(dataset,[len(dataset)-n_val,n_val],generator=torch.Generator().manual_seed(42))
#Make the data_loader now so that the data is ready for training
batch_size = 4
train_loader = DataLoader(train_dataset,batch_size)
test_loader = DataLoader(val_dataset,batch_size*2)
###Output
_____no_output_____
###Markdown
*Model*
###Code
#Define the CNN block now
#Defined as per the U-net Structure
#Made some modifications too to the original structure
class DoubleCNNBlock(nn.Module):
def __init__(self,in_channels,out_channels):
super().__init__()
self.conv1 = nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=3,
padding=1,
stride=1,
bias=False
)
self.bn1 = nn.BatchNorm2d(
out_channels
)
self.act1 = nn.ReLU()
self.conv2 = nn.Conv2d(
in_channels=out_channels,
out_channels=out_channels,
kernel_size=3,
padding=1,
stride=1,
bias=False
)
self.bn2 = nn.BatchNorm2d(
out_channels
)
self.act2 = nn.ReLU()
def forward(self,x):
out = self.act1(self.bn1(self.conv1(x)))
out = self.act2(self.bn2(self.conv2(out)))
return out
class UpConv(nn.Module):
def __init__(self,in_channels,out_channels):
super().__init__()
self.tconv = nn.ConvTranspose2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=2,
stride=2
)
def forward(self,x,skip_connection):
out = self.tconv(x)
if out.shape != skip_connection.shape:
out = TF.resize(out ,size=skip_connection.shape[2:])
out = torch.cat([skip_connection,out],axis = 1)
return out
class Bottom(nn.Module):
def __init__(self,channel=[128,256]):
super().__init__()
self.channel=channel
self.conv1 = nn.Conv2d(
in_channels=self.channel[0],
out_channels=self.channel[1],
kernel_size=3,
padding=1,
stride=1,
bias=False
)
self.bn1 = nn.BatchNorm2d(
self.channel[1]
)
self.act1 = nn.ReLU()
self.conv2 = nn.Conv2d(
in_channels=self.channel[1],
out_channels=self.channel[1],
kernel_size=3,
padding=1,
stride=1,
bias=False
)
self.bn2 = nn.BatchNorm2d(
self.channel[1]
)
self.act2 = nn.ReLU()
self.bottom = nn.Sequential(
self.conv1,
self.bn1,
self.act1,
self.conv2,
self.bn2,
self.act2
)
def forward(self,x):
# out = self.act1(self.bn1(self.conv1(x)))
# print("1:{}".format(out.shape))
# out = self.act2(self.bn2(self.conv2(out)))
# print("2:{}".format(out.shape))
return self.bottom(x)
class Unet(nn.Module):
def __init__(self,num_classes,filters=[16,32,64,128],input_channels=3):
super().__init__()
self.contract = nn.ModuleList()
self.expand = nn.ModuleList() #64 - #128 - #256 - #512 - #1024 -#512
self.filters = filters
self.input_channels = input_channels
self.num_classes = num_classes
self.pool = nn.MaxPool2d(
kernel_size=2,
stride=2
)
for filters in self.filters:
self.contract.append(
DoubleCNNBlock(
in_channels=input_channels,
out_channels=filters
)
)
input_channels = filters
for filters in reversed(self.filters):
self.expand.append(
UpConv(
in_channels=filters*2,
out_channels=filters
)
)
self.expand.append(
DoubleCNNBlock(
in_channels=filters*2,
out_channels=filters
)
)
self.final = nn.Conv2d(
in_channels=self.filters[0],
out_channels=num_classes,
kernel_size=3,
padding=1,
stride=1
)
def forward(self,x):
skip_connections = []
for downs in self.contract:
out = downs(x)
skip_connections.append(out)
out = self.pool(out)
x = out
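# note: the bottleneck block is instantiated here inside forward(), so its weights are
# re-initialized on every call and are not registered with the Unet module's parameters/optimizer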
bottom = Bottom()
bottom.to(DEVICE)
y = bottom(x)
for idx in range(0,len(self.expand),2):
skip_connection = skip_connections[len(skip_connections)-idx//2-1]
y = self.expand[idx](y,skip_connection)
y = self.expand[idx+1](y)
return self.final(y)
model = Unet(num_classes=8)
model.to(DEVICE)
def DICEloss(preds,outputs,smooth=1):
preds = F.softmax(preds,dim=1)
labels_one_hot = F.one_hot(outputs, num_classes = 8).permute(0,3,1,2).contiguous()
intersection = torch.sum(preds*labels_one_hot)
total = torch.sum(preds*preds) + torch.sum(labels_one_hot*labels_one_hot)
return 1-((2*intersection + smooth)/(total))
model = Unet(num_classes=8)
model.load_state_dict(torch.load('Only 7 classes2'))
model.to(DEVICE)
opt = torch.optim.Adam(model.parameters(),lr = 1e-5)
###Output
_____no_output_____
###Markdown
*Training the model*
###Code
#Training the model
model.train()
num_epochs = 15
loss_per_iteration = []
iters = []
for epochs in range(1,num_epochs+1):
loss_per_epoch = 0.0
batch_num = 0
for inputs,outputs in tqdm(train_loader):
torch.cuda.empty_cache()
inputs,outputs = inputs.to(DEVICE),outputs.to(DEVICE)
preds = model(inputs)
loss = DICEloss(preds,outputs.squeeze(axis=1))
loss.backward()
opt.step()
opt.zero_grad()
loss_per_epoch += loss
batch_num +=1
#print("Batch num: {} | Dice Loss:{}".format(batch_num,loss))
loss_per_iteration.append(loss_per_epoch)
iters.append(epochs)
print("[{}/{}] Loss : {} ".format(epochs,num_epochs,loss_per_epoch))
#Saving the model after every epoch
torch.save(model.state_dict(),'Only 7 classes2')
print("Saved the model...")
plt.title('Loss with epochs')
plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.plot(iters,loss_per_iteration)
plt.imshow(torch.argmax(F.softmax(model(TF.to_tensor(TF.resize(image,size=(256,256))).to(DEVICE).unsqueeze(0)),dim=1),axis=1).cpu()[0],cmap='gray')
plt.imshow((TF.to_tensor(TF.resize(label,size=(256,256))))[0],cmap='gray')
img = torch.argmax(F.softmax(model(TF.to_tensor(TF.resize(image,size=(256,256))).to(DEVICE).unsqueeze(0)),dim=1),axis=1)
color_array = np.array([[0,0,0],
[128,64,128],
[70,70,70],
[0,102,0],
[28,42,168],
[125,22,96],
[9,143,150]])
print(color_array)
from skimage.color import label2rgb
plt.imshow(label2rgb(img.view(256,256).detach().cpu().numpy()))
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____ |
cliffwalking_temporal_difference.ipynb | ###Markdown
Temporal-Difference Methods Mini Project: OpenAI Gym CliffWalkingEnvThis notebook contains my implementations of many Temporal-Difference (TD) methods. Part 0: Explore CliffWalkingEnvCreate an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
###Code
import gym
env = gym.make('CliffWalking-v0')
###Output
_____no_output_____
###Markdown
The agent moves through a $4\times 12$ gridworld, with states numbered as follows:```[[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]```At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.The agent has 4 potential actions:```UP = 0RIGHT = 1DOWN = 2LEFT = 3```Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below.
###Code
print(env.action_space)
print(env.observation_space)
###Output
Discrete(4)
Discrete(48)
###Markdown
In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function. (Hint: every move earns a reward of -1 and the optimal path never touches the cliff, so the optimal value of each state is simply the negative of the number of steps on the shortest safe path from that state to the goal, e.g. -13 for the start state.)
###Code
import numpy as np
from plot_utils import plot_values
# define the optimal state-value function
V_opt = np.zeros((4,12))
V_opt[0:13][0] = -np.arange(3, 15)[::-1]
V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1
V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2
V_opt[3][0] = -13
plot_values(V_opt)
###Output
_____no_output_____
###Markdown
Part 1: TD Prediction: State ValuesImplementation of TD prediction (for estimating the state-value function).We will begin by investigating a policy where the agent moves:- `RIGHT` in states `0` through `10`, inclusive, - `DOWN` in states `11`, `23`, and `35`, and- `UP` in states `12` through `22`, inclusive, states `24` through `34`, inclusive, and state `36`.The policy is specified and printed below. Note that states where the agent does not choose an action have been marked with `-1`.
###Code
policy = np.hstack([1*np.ones(11), 2, 0, np.zeros(10), 2, 0, np.zeros(10), 2, 0, -1*np.ones(11)])
print("\nPolicy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy.reshape(4,12))
###Output
Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 2.]
[ 0. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]]
###Markdown
Run the next cell to visualize the state-value function that corresponds to this policy. Make sure that you take the time to understand why this is the corresponding value function!
###Code
V_true = np.zeros((4,12))
for i in range(3):
V_true[0:12][i] = -np.arange(3, 15)[::-1] - i
V_true[1][11] = -2
V_true[2][11] = -1
V_true[3][0] = -17
plot_values(V_true)
###Output
_____no_output_____
###Markdown
The above figure is what you will try to approximate through the TD prediction algorithm.Your algorithm for TD prediction has five arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `policy`: This is a 1D numpy array with `policy.shape` equal to the number of states (`env.nS`). `policy[s]` returns the action that the agent chooses when in state `s`.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `V`: This is a dictionary where `V[s]` is the estimated value of state `s`.
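For reference, the one-step TD(0) update applied inside the loop is $$V(S_t) \leftarrow V(S_t) + \alpha \big(R_{t+1} + \gamma V(S_{t+1}) - V(S_t)\big),$$ which is exactly what the `V[state] = ...` line in the implementation below computes.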
###Code
from collections import defaultdict, deque
import sys
def td_prediction(env, num_episodes, policy, alpha, gamma=1.0):
# initialize empty dictionaries of floats
V = defaultdict(float)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# begin an episode, observe S
state = env.reset()
while True:
# choose action A
action = policy[state]
# take action A, observe R, S'
next_state, reward, done, info = env.step(action)
# perform updates
V[state] = V[state] + (alpha * (reward + (gamma * V[next_state]) - V[state]))
# S <- S'
state = next_state
# end episode if reached terminal state
if done:
break
return V
###Output
_____no_output_____
###Markdown
Run the code cell below to test your implementation and visualize the estimated state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code
import check_test
# evaluate the policy and reshape the state-value function
V_pred = td_prediction(env, 5000, policy, .01)
# please do not change the code below this line
V_pred_plot = np.reshape([V_pred[key] if key in V_pred else 0 for key in np.arange(48)], (4,12))
check_test.run_check('td_prediction_check', V_pred_plot)
plot_values(V_pred_plot)
###Output
Episode 5000/5000
###Markdown
How close is your estimated state-value function to the true state-value function corresponding to the policy? You might notice that some of the state values are not estimated by the agent. This is because under this policy, the agent will not visit all of the states. In the TD prediction algorithm, the agent can only estimate the values corresponding to states that are visited. Part 2: TD Control: SarsaImplementation of the Sarsa control algorithm.The algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
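For reference, the Sarsa update implemented through the `update_Q` helper below is $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\big),$$ where the next action $A_{t+1}$ is drawn from an epsilon-greedy policy (here $\epsilon = 1/i$ for episode $i$).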
###Code
def update_Q(Qsa, Qsa_next, reward, alpha, gamma):
""" updates the action-value function estimate using the most recent time step """
return Qsa + (alpha * (reward + (gamma * Qsa_next) - Qsa))
def epsilon_greedy_probs(env, Q_s, i_episode, eps=None):
""" obtains the action probabilities corresponding to epsilon-greedy policy """
epsilon = 1.0 / i_episode
if eps is not None:
epsilon = eps
policy_s = np.ones(env.nA) * epsilon / env.nA
policy_s[np.argmax(Q_s)] = 1 - epsilon + (epsilon / env.nA)
return policy_s
import matplotlib.pyplot as plt
%matplotlib inline
def sarsa(env, num_episodes, alpha, gamma=1.0):
# initialize action-value function (empty dictionary of arrays)
Q = defaultdict(lambda: np.zeros(env.nA))
# initialize performance monitor
plot_every = 100
tmp_scores = deque(maxlen=plot_every)
scores = deque(maxlen=num_episodes)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# initialize score
score = 0
# begin an episode, observe S
state = env.reset()
# get epsilon-greedy action probabilities
policy_s = epsilon_greedy_probs(env, Q[state], i_episode)
# pick action A
action = np.random.choice(np.arange(env.nA), p=policy_s)
# limit number of time steps per episode
for t_step in np.arange(300):
# take action A, observe R, S'
next_state, reward, done, info = env.step(action)
# add reward to score
score += reward
if not done:
# get epsilon-greedy action probabilities
policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode)
# pick next action A'
next_action = np.random.choice(np.arange(env.nA), p=policy_s)
# update TD estimate of Q
Q[state][action] = update_Q(Q[state][action], Q[next_state][next_action],
reward, alpha, gamma)
# S <- S'
state = next_state
# A <- A'
action = next_action
if done:
# update TD estimate of Q
Q[state][action] = update_Q(Q[state][action], 0, reward, alpha, gamma)
# append score
tmp_scores.append(score)
break
if (i_episode % plot_every == 0):
scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores))
return Q
###Output
_____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)
# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)
# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
###Output
Episode 5000/5000
###Markdown
Part 3: TD Control: Q-learningImplementation of the Q-learning control algorithm.The algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
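For reference, the Q-learning (Sarsamax) update implemented below is $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t)\big).$$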
###Code
def q_learning(env, num_episodes, alpha, gamma=1.0):
# initialize action-value function (empty dictionary of arrays)
Q = defaultdict(lambda: np.zeros(env.nA))
# initialize performance monitor
plot_every = 100
tmp_scores = deque(maxlen=plot_every)
scores = deque(maxlen=num_episodes)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# initialize score
score = 0
# begin an episode, observe S
state = env.reset()
while True:
# get epsilon-greedy action probabilities
policy_s = epsilon_greedy_probs(env, Q[state], i_episode)
# pick next action A
action = np.random.choice(np.arange(env.nA), p=policy_s)
# take action A, observe R, S'
next_state, reward, done, info = env.step(action)
# add reward to score
score += reward
# update Q
Q[state][action] = update_Q(Q[state][action], np.max(Q[next_state]), \
reward, alpha, gamma)
# S <- S'
state = next_state
# until S is terminal
if done:
# append score
tmp_scores.append(score)
break
if (i_episode % plot_every == 0):
scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores))
return Q
###Output
_____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsamax = q_learning(env, 5000, .01)
# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsamax)
# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])
###Output
Episode 5000/5000
###Markdown
Part 4: TD Control: Expected SarsaImplementation of the Expected Sarsa control algorithm.The algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
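For reference, the Expected Sarsa update implemented below is $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big(R_{t+1} + \gamma \sum_a \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t)\big),$$ where $\pi$ is the epsilon-greedy policy (the implementation below uses a fixed $\epsilon = 0.005$) and the expectation is computed with `np.dot(Q[next_state], policy_s)`.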
###Code
def expected_sarsa(env, num_episodes, alpha, gamma=1.0):
# initialize action-value function (empty dictionary of arrays)
Q = defaultdict(lambda: np.zeros(env.nA))
# initialize performance monitor
plot_every = 100
tmp_scores = deque(maxlen=plot_every)
scores = deque(maxlen=num_episodes)
# loop over episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
# initialize score
score = 0
# begin an episode
state = env.reset()
# get epsilon-greedy action probabilities
policy_s = epsilon_greedy_probs(env, Q[state], i_episode, 0.005)
while True:
# pick next action
action = np.random.choice(np.arange(env.nA), p=policy_s)
# take action A, observe R, S'
next_state, reward, done, info = env.step(action)
# add reward to score
score += reward
# get epsilon-greedy action probabilities (for S')
policy_s = epsilon_greedy_probs(env, Q[next_state], i_episode, 0.005)
# update Q
Q[state][action] = update_Q(Q[state][action], np.dot(Q[next_state], policy_s), \
reward, alpha, gamma)
# S <- S'
state = next_state
# until S is terminal
if done:
# append score
tmp_scores.append(score)
break
if (i_episode % plot_every == 0):
scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(scores),endpoint=False),np.asarray(scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(scores))
return Q
###Output
_____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 10000, 1)
# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_expsarsa)
# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])
###Output
Episode 10000/10000 |
ML0101EN-Reg-Polynomial-Regression-Co2.ipynb | ###Markdown
Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:* Use scikit-learn to implement Polynomial Regression* Create a model, train it, test it and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
--2021-06-16 17:03:57-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.45.118.108
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.45.118.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.1s
2021-06-16 17:03:58 (480 KB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)* **MODELYEAR** e.g. 2014* **MAKE** e.g. Acura* **MODEL** e.g. ILX* **VEHICLE CLASS** e.g. SUV* **ENGINE SIZE** e.g. 4.7* **CYLINDERS** e.g 6* **TRANSMISSION** e.g. A6* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Let's select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Let's plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Polynomial regression Sometimes the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, up to arbitrarily high degrees. In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Let's say you want a polynomial regression (we'll use a degree-2 polynomial): $$y = b + \theta_1 x + \theta_2 x^2$$ Now the question is: how can we fit our data to this equation when we only have x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$. The **PolynomialFeatures()** function in the Scikit-learn library derives a new feature set from the original feature set. That is, it generates a matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, let's say the original feature set has only one feature, *ENGINESIZE*. If we select the degree of the polynomial to be 2, it generates 3 features: degree 0, degree 1 and degree 2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
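###Markdown
Before looking at the matrix form below, a minimal check (using the same illustrative values 2.0, 2.4 and 1.5 that appear in the example matrix) shows the three columns that **fit_transform** generates for each input value: 1, $x$ and $x^2$.
###Code
# Minimal sketch: PolynomialFeatures(degree=2) on three illustrative engine sizes.
# Each value x becomes the row [1, x, x**2], e.g. 2.4 -> [1, 2.4, 5.76].
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
demo_x = np.array([[2.0], [2.4], [1.5]])
PolynomialFeatures(degree=2).fit_transform(demo_x)
###Output
_____no_output_____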
###Markdown
**fit_transform** takes our x values and outputs our data raised from the power of 0 to the power of 2 (since we set the degree of our polynomial to 2). The equation and a sample example are displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$ It looks like a feature set for multiple linear regression analysis, right? Yes, it does. Indeed, polynomial regression is a special case of linear regression, where the main idea is how you select your features. Just consider replacing $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree-2 equation turns into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$Now we can treat it as a 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression, and you can use the same mechanism as linear regression to solve such problems, so we can use the **LinearRegression()** function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 50.24792065 -1.48002782]]
Intercept: [107.13432424]
###Markdown
As mentioned before, **Coefficient** and **Intercept** are the parameters of the fitted curve. Given that it is a typical multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and coefficients of the hyperplane, sklearn has estimated them from our new feature set. Let's plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y_ ) )
###Output
Mean absolute error: 23.65
Residual sum of squares (MSE): 974.78
R2-score: 0.76
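###Markdown
The metrics above are computed directly with NumPy. As a hedged equivalent (assuming `test_y` and `test_y_` from the evaluation cell above), scikit-learn's metric helpers return the same quantities and make the intent of each number explicit.
###Code
# Same evaluation via sklearn.metrics; results should match the NumPy versions above.
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
print("Mean absolute error: %.2f" % mean_absolute_error(test_y, test_y_))
print("Residual sum of squares (MSE): %.2f" % mean_squared_error(test_y, test_y_))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
###Output
_____no_output_____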
###Markdown
PracticeTry to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
# write your code here
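# One possible sketch (hedged, not the official solution): repeat the same
# pipeline with degree=3 and compare the metrics against the degree-2 model above.
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
clf3.fit(train_x_poly3, train_y)
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y3_))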
###Output
_____no_output_____
###Markdown
Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:* Use scikit-learn to implement Polynomial Regression* Create a model, train it, test it and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
--2021-07-16 14:09:09-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.45.118.108
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.45.118.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.1s
2021-07-16 14:09:09 (515 KB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?utm_medium=Exinfluencer\&utm_source=Exinfluencer\&utm_content=000026UJ\&utm_term=10006555\&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkML0101ENSkillsNetwork20718538-2021-01-01)* **MODELYEAR** e.g. 2014* **MAKE** e.g. Acura* **MODEL** e.g. ILX* **VEHICLE CLASS** e.g. SUV* **ENGINE SIZE** e.g. 4.7* **CYLINDERS** e.g 6* **TRANSMISSION** e.g. A6* **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9* **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9* **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2* **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Let's select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Let's plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Polynomial regression Sometimes, the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, up to an arbitrary degree. In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Let's say you want a polynomial regression of degree 2:$$y = b + \theta_1 x + \theta_2 x^2$$Now, the question is: how can we fit our data to this equation when we only have x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$.The **PolynomialFeatures()** function in the scikit-learn library derives a new feature set from the original feature set. That is, it generates a matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, let's say the original feature set has only one feature, *ENGINESIZE*. If we select the degree of the polynomial to be 2, it generates 3 features: degree=0, degree=1 and degree=2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values and outputs our data raised from the power of 0 to the power of 2 (since we set the degree of our polynomial to 2). The equation and a sample example are displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$ It looks like a feature set for multiple linear regression analysis, right? Yes, it does. Indeed, polynomial regression is a special case of linear regression, where the main idea is how you select your features. Just consider replacing $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree-2 equation turns into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$Now we can treat it as a 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression, and you can use the same mechanism as linear regression to solve such problems, so we can use the **LinearRegression()** function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 51.35397214 -1.61693093]]
Intercept: [106.00963209]
###Markdown
As mentioned before, **Coefficient** and **Intercept** are the parameters of the fitted curve. Given that it is a typical multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and coefficients of the hyperplane, sklearn has estimated them from our new feature set. Let's plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y_ ) )
###Output
Mean absolute error: 21.51
Residual sum of squares (MSE): 752.90
R2-score: 0.80
###Markdown
PracticeTry to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
# write your code here
poly = PolynomialFeatures(degree=3)
train_x_poly = poly.fit_transform(train_x)
lm = linear_model.LinearRegression().fit(train_x_poly, train_y)
test_x_poly = poly.fit_transform(test_x)
test_y_ = lm.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y_ ) )
###Output
Mean absolute error: 21.40
Residual sum of squares (MSE): 748.45
R2-score: 0.80
###Markdown
Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:- Use scikit-learn to implement Polynomial Regression- Create a model, train,test and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
--2020-12-02 12:10:17-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 67.228.254.196
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.06s
2020-12-02 12:10:17 (1.25 MB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)- **MODELYEAR** e.g. 2014- **MAKE** e.g. Acura- **MODEL** e.g. ILX- **VEHICLE CLASS** e.g. SUV- **ENGINE SIZE** e.g. 4.7- **CYLINDERS** e.g 6- **TRANSMISSION** e.g. A6- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Lets select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Lets plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Polynomial regression Sometimes, the trend of data is not really linear, and looks curvy. In this case we can use Polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, and it can go on and on to infinite degrees.In essence, we can call all of these, polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Lets say you want to have a polynomial regression (let's make 2 degree polynomial):$$y = b + \theta_1 x + \theta_2 x^2$$Now, the question is: how we can fit our data on this equation while we have only x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$.**PolynomialFeatures()** function in Scikit-learn library, drives a new feature sets from the original feature set. That is, a matrix will be generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, lets say the original feature set has only one feature, _ENGINESIZE_. Now, if we select the degree of the polynomial to be 2, then it generates 3 features, degree=0, degree=1 and degree=2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values, and output a list of our data raised from power of 0 to power of 2 (since we set the degree of our polynomial to 2). The equation and the sample example is displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$ It looks like feature sets for multiple linear regression analysis, right? Yes. It Does. Indeed, Polynomial regression is a special case of linear regression, with the main idea of how do you select your features. Just consider replacing the $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree 2 equation would be turn into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$Now, we can deal with it as 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression. So, you can use the same mechanism as linear regression to solve such a problems. so we can use **LinearRegression()** function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 49.24593366 -1.35528257]]
Intercept: [109.17377855]
###Markdown
As mentioned before, **Coefficient** and **Intercept** , are the parameters of the fit curvy line. Given that it is a typical multiple linear regression, with 3 parameters, and knowing that the parameters are the intercept and coefficients of hyperplane, sklearn has estimated them from our new set of feature sets. Lets plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y_ , test_y) )
###Output
Mean absolute error: 22.37
Residual sum of squares (MSE): 808.50
R2-score: 0.74
###Markdown
PracticeTry to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
# Fit a cubic (degree-3) model and compare its fit against the degree-2 model above
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
clf3.fit(train_x_poly3, train_y)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0] + clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
r2_score(test_y, test_y3_)
###Output
_____no_output_____
###Markdown
Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:- Use scikit-learn to implement Polynomial Regression- Create a model, train,test and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
--2021-01-11 14:56:43-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.07s
2021-01-11 14:56:43 (960 KB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)- **MODELYEAR** e.g. 2014- **MAKE** e.g. Acura- **MODEL** e.g. ILX- **VEHICLE CLASS** e.g. SUV- **ENGINE SIZE** e.g. 4.7- **CYLINDERS** e.g 6- **TRANSMISSION** e.g. A6- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Lets select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Lets plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Polynomial regression Sometimes, the trend of data is not really linear, and looks curvy. In this case we can use Polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, and it can go on and on to infinite degrees.In essence, we can call all of these, polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Lets say you want to have a polynomial regression (let's make 2 degree polynomial):$$y = b + \theta_1 x + \theta_2 x^2$$Now, the question is: how we can fit our data on this equation while we have only x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$.**PolynomialFeatures()** function in Scikit-learn library, drives a new feature sets from the original feature set. That is, a matrix will be generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, lets say the original feature set has only one feature, _ENGINESIZE_. Now, if we select the degree of the polynomial to be 2, then it generates 3 features, degree=0, degree=1 and degree=2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values, and output a list of our data raised from power of 0 to power of 2 (since we set the degree of our polynomial to 2). The equation and the sample example is displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$ It looks like feature sets for multiple linear regression analysis, right? Yes. It Does. Indeed, Polynomial regression is a special case of linear regression, with the main idea of how do you select your features. Just consider replacing the $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree 2 equation would be turn into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$Now, we can deal with it as 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression. So, you can use the same mechanism as linear regression to solve such a problems. so we can use **LinearRegression()** function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 46.9043068 -1.03462409]]
Intercept: [112.73427353]
###Markdown
As mentioned before, **Coefficient** and **Intercept** , are the parameters of the fit curvy line. Given that it is a typical multiple linear regression, with 3 parameters, and knowing that the parameters are the intercept and coefficients of hyperplane, sklearn has estimated them from our new set of feature sets. Lets plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y_ ) )
###Output
Mean absolute error: 24.83
Residual sum of squares (MSE): 1076.09
R2-score: 0.75
###Markdown
PracticeTry to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
# write your code here
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
train_y3_ = clf3.fit(train_x_poly3, train_y)
# The coefficients
print ('Coefficients: ', clf3.coef_)
print ('Intercept: ',clf3.intercept_)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0]+ clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y3_ ) )
###Output
Coefficients: [[ 0. 34.9672599 2.31840676 -0.28374433]]
Intercept: [125.20476164]
Mean absolute error: 24.70
Residual sum of squares (MSE): 1065.72
R2-score: 0.75
###Markdown
Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:- Use scikit-learn to implement Polynomial Regression- Create a model, train,test and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
--2021-03-14 10:26:31-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.04s
2021-03-14 10:26:32 (1.77 MB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)- **MODELYEAR** e.g. 2014- **MAKE** e.g. Acura- **MODEL** e.g. ILX- **VEHICLE CLASS** e.g. SUV- **ENGINE SIZE** e.g. 4.7- **CYLINDERS** e.g 6- **TRANSMISSION** e.g. A6- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Lets select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Lets plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Polynomial regression Sometimes, the trend of data is not really linear, and looks curvy. In this case we can use Polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, and it can go on and on to infinite degrees.In essence, we can call all of these, polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Lets say you want to have a polynomial regression (let's make 2 degree polynomial):$$y = b + \theta_1 x + \theta_2 x^2$$Now, the question is: how we can fit our data on this equation while we have only x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$.**PolynomialFeatures()** function in Scikit-learn library, drives a new feature sets from the original feature set. That is, a matrix will be generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, lets say the original feature set has only one feature, _ENGINESIZE_. Now, if we select the degree of the polynomial to be 2, then it generates 3 features, degree=0, degree=1 and degree=2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values, and output a list of our data raised from power of 0 to power of 2 (since we set the degree of our polynomial to 2). The equation and the sample example is displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$ It looks like feature sets for multiple linear regression analysis, right? Yes. It Does. Indeed, Polynomial regression is a special case of linear regression, with the main idea of how do you select your features. Just consider replacing the $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree 2 equation would be turn into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$Now, we can deal with it as 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression. So, you can use the same mechanism as linear regression to solve such a problems. so we can use **LinearRegression()** function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 47.38818147 -1.07885709]]
Intercept: [112.28907701]
###Markdown
As mentioned before, **Coefficient** and **Intercept** , are the parameters of the fit curvy line. Given that it is a typical multiple linear regression, with 3 parameters, and knowing that the parameters are the intercept and coefficients of hyperplane, sklearn has estimated them from our new set of feature sets. Lets plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y_ ) )
###Output
Mean absolute error: 23.68
Residual sum of squares (MSE): 997.21
R2-score: 0.75
###Markdown
PracticeTry to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
train_y3_ = clf3.fit(train_x_poly3, train_y)
# The coefficients
print ('Coefficients: ', clf3.coef_)
print ('Intercept: ',clf3.intercept_)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0]+ clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y3_ ) )
###Output
Coefficients: [[ 0. 23.4893808 5.65016492 -0.57089149]]
Intercept: [137.18458186]
Mean absolute error: 23.56
Residual sum of squares (MSE): 993.88
R2-score: 0.75
###Markdown
Polynomial RegressionEstimated time needed: **15** minutes ObjectivesAfter completing this lab you will be able to:- Use scikit-learn to implement Polynomial Regression- Create a model, train,test and use the model Table of contents Downloading Data Polynomial regression Evaluation Practice Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading DataTo download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
###Output
--2021-01-26 22:03:43-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/Module%202/data/FuelConsumptionCo2.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.63.118.104
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.63.118.104|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.04s
2021-01-26 22:03:43 (1.82 MB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
**Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](https://www.ibm.com/us-en/cloud/object-storage?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) Understanding the Data `FuelConsumption.csv`:We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork-20718538&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ)- **MODELYEAR** e.g. 2014- **MAKE** e.g. Acura- **MODEL** e.g. ILX- **VEHICLE CLASS** e.g. SUV- **ENGINE SIZE** e.g. 4.7- **CYLINDERS** e.g 6- **TRANSMISSION** e.g. A6- **FUEL CONSUMPTION in CITY(L/100 km)** e.g. 9.9- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0 Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Lets select some features that we want to use for regression.
###Code
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Lets plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test datasetTrain/Test Split involves splitting the dataset into training and testing sets respectively, which are mutually exclusive. After which, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
###Markdown
Polynomial regression Sometimes, the trend of data is not really linear, and looks curvy. In this case we can use Polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, and it can go on and on to infinite degrees.In essence, we can call all of these, polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth degree polynomial in x. Lets say you want to have a polynomial regression (let's make 2 degree polynomial):$$y = b + \theta_1 x + \theta_2 x^2$$Now, the question is: how we can fit our data on this equation while we have only x values, such as **Engine Size**? Well, we can create a few additional features: 1, $x$, and $x^2$.**PolynomialFeatures()** function in Scikit-learn library, drives a new feature sets from the original feature set. That is, a matrix will be generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, lets say the original feature set has only one feature, _ENGINESIZE_. Now, if we select the degree of the polynomial to be 2, then it generates 3 features, degree=0, degree=1 and degree=2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values, and output a list of our data raised from power of 0 to power of 2 (since we set the degree of our polynomial to 2). The equation and the sample example is displayed below. $$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}\longrightarrow \begin{bmatrix} [ 1 & v_1 & v_1^2]\\ [ 1 & v_2 & v_2^2]\\ \vdots & \vdots & \vdots\\ [ 1 & v_n & v_n^2]\end{bmatrix}$$$$\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix} \longrightarrow \begin{bmatrix} [ 1 & 2. & 4.]\\ [ 1 & 2.4 & 5.76]\\ [ 1 & 1.5 & 2.25]\\ \vdots & \vdots & \vdots\\\end{bmatrix}$$ It looks like feature sets for multiple linear regression analysis, right? Yes. It Does. Indeed, Polynomial regression is a special case of linear regression, with the main idea of how do you select your features. Just consider replacing the $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree 2 equation would be turn into:$$y = b + \theta_1 x_1 + \theta_2 x_2$$Now, we can deal with it as 'linear regression' problem. Therefore, this polynomial regression is considered to be a special case of traditional multiple linear regression. So, you can use the same mechanism as linear regression to solve such a problems. so we can use **LinearRegression()** function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 47.714427 -1.1506322]]
Intercept: [111.31518537]
###Markdown
As mentioned before, **Coefficient** and **Intercept** , are the parameters of the fit curvy line. Given that it is a typical multiple linear regression, with 3 parameters, and knowing that the parameters are the intercept and coefficients of hyperplane, sklearn has estimated them from our new set of feature sets. Lets plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y_ ) )
###Output
Mean absolute error: 22.94
Residual sum of squares (MSE): 941.67
R2-score: 0.76
###Markdown
PracticeTry to use a polynomial regression with the dataset but this time with degree three (cubic). Does it result in better accuracy?
###Code
# write your code here
poly3 = PolynomialFeatures(degree=3)
train_x_poly3 = poly3.fit_transform(train_x)
clf3 = linear_model.LinearRegression()
train_y3_ = clf3.fit(train_x_poly3, train_y)
# The coefficients
print ('Coefficients: ', clf3.coef_)
print ('Intercept: ',clf3.intercept_)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1)
yy = clf3.intercept_[0]+ clf3.coef_[0][1]*XX + clf3.coef_[0][2]*np.power(XX, 2) + clf3.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
test_x_poly3 = poly3.fit_transform(test_x)
test_y3_ = clf3.predict(test_x_poly3)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y3_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y3_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y,test_y3_ ) )
###Output
Coefficients: [[ 0. 31.15465792 3.51291658 -0.39602495]]
Intercept: [128.56348815]
Mean absolute error: 22.76
Residual sum of squares (MSE): 930.74
R2-score: 0.76
|
FaceSwap.ipynb | ###Markdown

###Code
#@title **1. Setup** (takes a few minutes)
# Clone github
!git clone https://github.com/sugi-san/sber-swap.git
%cd sber-swap
# load arcface
!wget -P ./arcface_model https://github.com/sberbank-ai/sber-swap/releases/download/arcface/backbone.pth
!wget -P ./arcface_model https://github.com/sberbank-ai/sber-swap/releases/download/arcface/iresnet.py
# load landmarks detector
!wget -P ./insightface_func/models/antelope https://github.com/sberbank-ai/sber-swap/releases/download/antelope/glintr100.onnx
!wget -P ./insightface_func/models/antelope https://github.com/sberbank-ai/sber-swap/releases/download/antelope/scrfd_10g_bnkps.onnx
# load model itself
!wget -P ./weights https://github.com/sberbank-ai/sber-swap/releases/download/sber-swap-v2.0/G_unet_2blocks.pth
# load super res model
!wget -P ./weights https://github.com/sberbank-ai/sber-swap/releases/download/super-res/10_net_G.pth
# Install required libraries
!pip install mxnet-cu101mkl
!pip install onnxruntime-gpu==1.8
!pip install insightface==0.2.1
!pip install kornia==0.5.4
# library import
import cv2
import torch
import time
import os
from utils.inference.image_processing import crop_face, get_final_image, show_images
from utils.inference.video_processing import read_video, get_target, get_final_video, add_audio_from_another_video, face_enhancement
from utils.inference.core import model_inference
from network.AEI_Net import AEI_Net
from coordinate_reg.image_infer import Handler
from insightface_func.face_detect_crop_multi import Face_detect_crop
from arcface_model.iresnet import iresnet100
from models.pix2pix_model import Pix2PixModel
from models.config_sr import TestOptions
# --- Initialize models ---
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
app.prepare(ctx_id= 0, det_thresh=0.6, det_size=(640,640))
# main model for generation
G = AEI_Net(backbone='unet', num_blocks=2, c_id=512)
G.eval()
G.load_state_dict(torch.load('weights/G_unet_2blocks.pth', map_location=torch.device('cpu')))
G = G.cuda()
G = G.half()
# arcface model to get face embedding
netArc = iresnet100(fp16=False)
netArc.load_state_dict(torch.load('arcface_model/backbone.pth'))
netArc=netArc.cuda()
netArc.eval()
# model to get face landmarks
handler = Handler('./coordinate_reg/model/2d106det', 0, ctx_id=0, det_size=640)
# model to make superres of face, set use_sr=True if you want to use super resolution or use_sr=False if you don't
use_sr = True
if use_sr:
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
torch.backends.cudnn.benchmark = True
opt = TestOptions()
#opt.which_epoch ='10_7'
model = Pix2PixModel(opt)
model.netG.train()
# warning
import warnings
warnings.simplefilter('ignore')
# import function
from function import *
# make folder
import os
os.makedirs('download', exist_ok=True)
###Output
_____no_output_____
###Markdown

###Code
#@title **2. Display the sample photos**
display_pic('examples/images')
#@title **3. Face swap (photo)**
source_img = '02.jpg' #@param {type:"string"}
target_img = '01.jpg' #@param {type:"string"}
source_path = 'examples/images/'+source_img
target_path = 'examples/images/' + target_img
source_full = cv2.imread(source_path)
crop_size = 224 # don't change this
batch_size = 40
source = crop_face(source_full, app, crop_size)[0]
source = [source[:, :, ::-1]]
target_full = cv2.imread(target_path)
full_frames = [target_full]
target = get_target(full_frames, app, crop_size)
final_frames_list, crop_frames_list, full_frames, tfm_array_list = model_inference(full_frames,
source,
target,
netArc,
G,
app,
set_target = False,
crop_size=crop_size,
BS=batch_size)
result = get_final_image(final_frames_list, crop_frames_list, full_frames[0], tfm_array_list, handler)
cv2.imwrite('examples/results/result.png', result)
#@title **4. Display the result images**
import matplotlib.pyplot as plt
show_images([source[0][:, :, ::-1], target_full, result], ['Source Image', 'Target Image', 'Swapped Image'], figsize=(20, 15))
#@title **5. Download the result image**
import shutil
source_name = os.path.splitext(source_img)
target_name = os.path.splitext(target_img)
download_name = 'download/'+source_name[0]+'_'+target_name[0]+'.png'
shutil.copy('examples/results/result.png', download_name)
from google.colab import files
files.download(download_name)
###Output
_____no_output_____
###Markdown

###Code
#@title **6. Display the sample photos and videos**
# --- Display images ---
print('=== images ===')
display_pic('examples/images')
# --- Display videos ---
print('=== videos ===')
reset_folder('pic')
files = sorted(os.listdir('examples/videos'))
for file in files:
save_frame(file)
display_movie('pic', files)
#@title **7. Face swap (video)**
source_img = '05.jpg' #@param {type:"string"}
video = '01.mp4' #@param {type:"string"}
source_path = 'examples/images/'+source_img
path_to_video = 'examples/videos/'+video
source_full = cv2.imread(source_path)
OUT_VIDEO_NAME = "examples/results/result.mp4"
crop_size = 224 # don't change this
batch_size = 40
source = crop_face(source_full, app, crop_size)[0]
source = [source[:, :, ::-1]]
full_frames, fps = read_video(path_to_video)
target = get_target(full_frames, app, crop_size)
START_TIME = time.time()
final_frames_list, crop_frames_list, full_frames, tfm_array_list = model_inference(full_frames,
source,
target,
netArc,
G,
app,
set_target = False,
crop_size=crop_size,
BS=batch_size)
if use_sr:
final_frames_list = face_enhancement(final_frames_list, model)
get_final_video(final_frames_list,
crop_frames_list,
full_frames,
tfm_array_list,
OUT_VIDEO_NAME,
fps,
handler)
add_audio_from_another_video(path_to_video, OUT_VIDEO_NAME, "audio")
print(f'Full pipeline took {time.time() - START_TIME}')
print(f"Video saved with path {OUT_VIDEO_NAME}")
#@title **8. Display the result video**
display_mp4('examples/results/result.mp4')
#@title **9. Download the result video**
import shutil
source_name = os.path.splitext(source_img)
video_name = os.path.splitext(video)
download_name = 'download/'+source_name[0]+'_'+video_name[0]+'.mp4'
shutil.copy('examples/results/result.mp4', download_name)
from google.colab import files
files.download(download_name)
###Output
_____no_output_____
###Markdown

###Code
#@title **10. Upload your own data**
#@markdown - Use 'select' to choose whether you are uploading photos (images) or videos \
#@markdown - Videos should be HD resolution or lower and no longer than 20 seconds
import os
import shutil
from google.colab import files
import cv2
select = 'videos' #@param ["images", "videos"]
# Upload the files to the root directory
uploaded = files.upload()
uploaded = list(uploaded.keys())
# Move the uploads from the root directory to the selected folder
for file in uploaded:
shutil.move(file, 'examples/'+select+'/'+file)
###Output
_____no_output_____
###Markdown
**Face Swap:**> Credits: https://github.com/neuralchen/SimSwap **Installation**
###Code
# copy github repository into session storage
!git clone https://github.com/neuralchen/SimSwap
# install python packages
!pip install insightface==0.2.1 onnxruntime moviepy imageio==2.4.1
# download model checkpoints
!wget -P /content/SimSwap/arcface_model https://github.com/neuralchen/SimSwap/releases/download/1.0/arcface_checkpoint.tar
!wget https://github.com/neuralchen/SimSwap/releases/download/1.0/checkpoints.zip
!unzip ./checkpoints.zip -d /content/SimSwap/checkpoints
!wget -P /content/SimSwap/parsing_model/checkpoint https://github.com/neuralchen/SimSwap/releases/download/1.0/79999_iter.pth
!wget --no-check-certificate "https://sh23tw.dm.files.1drv.com/y4mmGiIkNVigkSwOKDcV3nwMJulRGhbtHdkheehR5TArc52UjudUYNXAEvKCii2O5LAmzGCGK6IfleocxuDeoKxDZkNzDRSt4ZUlEt8GlSOpCXAFEkBwaZimtWGDRbpIGpb_pz9Nq5jATBQpezBS6G_UtspWTkgrXHHxhviV2nWy8APPx134zOZrUIbkSF6xnsqzs3uZ_SEX_m9Rey0ykpx9w" -O antelope.zip
!unzip ./antelope.zip -d /content/SimSwap/insightface_func/models/
# clean content directory
! rm ./antelope.zip ./checkpoints.zip
# import packages
import os
import cv2
import torch
import fractions
import numpy as np
from PIL import Image
import torch.nn.functional as F
from torchvision import transforms
# move to the SimSwap directory
os.chdir("SimSwap")
# import project modules
from models.models import create_model
from options.test_options import TestOptions
from insightface_func.face_detect_crop_multi import Face_detect_crop
from util.videoswap import video_swap
from util.add_watermark import watermark_image
###Output
_____no_output_____
###Markdown
**Inference**
###Code
# convert image to tensor
transformer = transforms.Compose([
transforms.ToTensor(),
])
# Instead of softmax loss, we use arcface loss
transformer_Arcface = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
# denormalize image tensor
detransformer = transforms.Compose([
transforms.Normalize([0, 0, 0], [1/0.229, 1/0.224, 1/0.225]),
transforms.Normalize([-0.485, -0.456, -0.406], [1, 1, 1])
])
# Get test options as opt object
opt = TestOptions()
# Hardcode a few parameters on the opt object
opt.initialize()
opt.parser.add_argument('-f')
opt = opt.parse()
opt.pic_a_path = './demo_file/input_picture.png' # Place input picture here
opt.video_path = './demo_file/input_video.mp4' # Place input video here
opt.output_path = './output/demo.mp4' # Target destination folder for output
opt.temp_path = './tmp'
opt.Arc_path = './arcface_model/arcface_checkpoint.tar'
opt.isTrain = False # Puts in evaluation mode
opt.no_simswaplogo = True # Removes simswap logo
opt.use_mask = True # New feature up-to-date
crop_size = opt.crop_size
torch.nn.Module.dump_patches = True
model = create_model(opt)
model.eval()
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
# reduce det_threshold if face is not being recognized
app.prepare(ctx_id= 0, det_thresh=0.3, det_size=(640,640))
with torch.no_grad():
pic_a = opt.pic_a_path
img_a_whole = cv2.imread(pic_a)
print(img_a_whole.shape)
img_a_align_crop, _ = app.get(img_a_whole,crop_size)
img_a_align_crop_pil = Image.fromarray(cv2.cvtColor(img_a_align_crop[0],cv2.COLOR_BGR2RGB))
img_a = transformer_Arcface(img_a_align_crop_pil)
img_id = img_a.view(-1, img_a.shape[0], img_a.shape[1], img_a.shape[2])
# moves tensor to GPU
img_id = img_id.cuda()
# create latent id
img_id_downsample = F.interpolate(img_id, size=(112,112))
latend_id = model.netArc(img_id_downsample)
latend_id = latend_id.detach().to('cpu')
latend_id = latend_id/np.linalg.norm(latend_id,axis=1,keepdims=True)
latend_id = latend_id.to('cuda')
# swap faces of input video with input image
video_swap(opt.video_path,
latend_id,
model, app,
opt.output_path,
temp_results_dir=opt.temp_path,
no_simswaplogo = opt.no_simswaplogo,
use_mask=opt.use_mask
)
###Output
_____no_output_____
###Markdown
**Display Output Video**
###Code
from IPython.display import HTML
from base64 import b64encode
# path for input video
input_path = "/content/SimSwap/output/demo.mp4"
# path for the output compressed video
output_path = "/content/SimSwap/output/cmp_demo.mp4"
os.system(f"ffmpeg -i {input_path} -vcodec libx264 {output_path}")
# Show video
mp4 = open(output_path,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=1024 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
! rm /content/SimSwap/output/cmp_demo.mp4
! rm /content/SimSwap/output/demo.mp4
###Output
_____no_output_____ |
_notebooks/2021-01-19-First-Post.ipynb | ###Markdown
"First Post"> "Awesome summary" - toc: true- branch: master- badges: true- comments: true- author: Reut Farkash- categories: [testingthings, jupyter] My First Fastpages Notebook Blog Post> Trying to blog to better keep track of the best resources I come across. To create a similar blog:[fastpages github](https://github.com/fastai/fastpages)[1littlecoder tutorial vid](https://www.youtube.com/watch?v=L0boq3zqazI&ab_channel=1littlecoder) Top Deep learning playlists:[CS231n Winter 2016](https://www.youtube.com/watch?v=NfnWJUyUJYU&list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC) - Stanford - Deep learning for computer vision Git / GitHub Python Data Structures and Algorithms Ethics Biology Statistics Causal Inference
###Code
###Output
_____no_output_____ |
notebooks/Voronoi Reflection Trick.ipynb | ###Markdown
Voronoi TessellationTracking data gives an unparalleled level of detail about the positioning of players and their control of space on the pitch. However, this data can also be difficult to work with and hard to interpret. One solution to these issues is to transform the data in ways that make it easier to analyse further. One such transformation is the Voronoi tessellation, wherein the pitch is broken down into the regions closest to each player. This pitch breakdown gives a rough estimate of the space a player or team has, or how this available space changes over time. Here, we demonstrate how to build a Voronoi tessellation using the existing Scipy implementation combined with one small trick. 1. Data and setup Plotting a pitchTo help with visualisation we first define a basic pitch plotter using a slightly modified version of the code from [FCPython](https://fcpython.com/visualisation/drawing-pitchmap-adding-lines-circles-matplotlib).
###Code
#Dimensions of the plotted pitch
max_h, max_w = 90, 130
#Creates the pitch plot and returns the axes.
def createPitch():
#Create figure
fig=plt.figure(figsize=(13,9))
ax=plt.subplot(111)
#Pitch Outline & Centre Line
plt.plot([0,0],[0,90], color="black")
plt.plot([0,130],[90,90], color="black")
plt.plot([130,130],[90,0], color="black")
plt.plot([130,0],[0,0], color="black")
plt.plot([65,65],[0,90], color="black")
#Left Penalty Area
plt.plot([16.5,16.5],[65,25],color="black")
plt.plot([0,16.5],[65,65],color="black")
plt.plot([16.5,0],[25,25],color="black")
#Right Penalty Area
plt.plot([130,113.5],[65,65],color="black")
plt.plot([113.5,113.5],[65,25],color="black")
plt.plot([113.5,130],[25,25],color="black")
#Left 6-yard Box
plt.plot([0,5.5],[54,54],color="black")
plt.plot([5.5,5.5],[54,36],color="black")
    plt.plot([5.5,0],[36,36],color="black")
#Right 6-yard Box
plt.plot([130,124.5],[54,54],color="black")
plt.plot([124.5,124.5],[54,36],color="black")
plt.plot([124.5,130],[36,36],color="black")
#Prepare Circles
centreCircle = plt.Circle((65,45),9.15,color="black",fill=False)
centreSpot = plt.Circle((65,45),0.8,color="black")
leftPenSpot = plt.Circle((11,45),0.8,color="black")
rightPenSpot = plt.Circle((119,45),0.8,color="black")
#Draw Circles
ax.add_patch(centreCircle)
ax.add_patch(centreSpot)
ax.add_patch(leftPenSpot)
ax.add_patch(rightPenSpot)
#Prepare Arcs
leftArc = mpl.patches.Arc((11,45),height=18.3,width=18.3,angle=0,theta1=310,theta2=50,color="black")
rightArc = mpl.patches.Arc((119,45),height=18.3,width=18.3,angle=0,theta1=130,theta2=230,color="black")
#Draw Arcs
ax.add_patch(leftArc)
ax.add_patch(rightArc)
#Tidy Axes
plt.axis('off')
#Display Pitch
return ax
#An example pitch
ax = createPitch()
###Output
_____no_output_____
###Markdown
Data and TransformationsWe begin by grabbing a single frame of x,y positions for 22 players that is in Tracab format. We then transform these positions to dimensions of the FCPython pitch.
###Code
#Five frames of tracking data in Tracab format
df = pd.read_csv('../data/tracab-like-frames.csv')
#The dimensions of the tracab pitch
data_w, data_h = 10500, 6800
#Pull the x/y coordinates for the home/away team
h_xs = df[[c for c in df.columns if 'H' in c and '_x' in c]].iloc[0].values
h_ys = df[[c for c in df.columns if 'H' in c and '_y' in c]].iloc[0].values
a_xs = df[[c for c in df.columns if 'A' in c and '_x' in c]].iloc[0].values
a_ys = df[[c for c in df.columns if 'A' in c and '_y' in c]].iloc[0].values
#This transforms the data to the plotting coords we use.
def transform_data(xs, ys, data_w, data_h, max_w, max_h):
x_fix = lambda x : (x+data_w/2.)*(max_w / data_w)
y_fix = lambda y : (y+data_h/2.)*(max_h / data_h)
p_xs = list(map(x_fix, xs))
p_ys = list(map(y_fix, ys))
return p_xs, p_ys
#Home team xs and ys
h_xs, h_ys = transform_data(h_xs, h_ys, data_w, data_h, max_w, max_h)
#Away team xs and ys
a_xs, a_ys = transform_data(a_xs, a_ys, data_w, data_h, max_w, max_h)
###Output
_____no_output_____
###Markdown
Plotting the players
###Code
ax = createPitch()
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=90.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=90.)
###Output
_____no_output_____
###Markdown
2. Voronoi Tessellation - First attemptThe hard work of performing a Voronoi tessellation is fortunately already implemented as part of the scipy package, which means all we need to do is provide data in the correct form. There is also a plotting function to help visualise the Voronoi tessellation.
###Code
from scipy.spatial import Voronoi
#Combined all of the players into a length 22 list of points
xs = h_xs+a_xs
ys = h_ys+a_ys
ps = [(x,y) for x,y in zip(xs, ys)]
#Perform the Voronoi calculation; returns a scipy.spatial Voronoi object
vor = Voronoi(ps)
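# The returned Voronoi object exposes (attribute names from scipy.spatial.Voronoi):
#   vor.points       - the input points
#   vor.vertices     - coordinates of the Voronoi polygon vertices
#   vor.regions      - lists of vertex indices, one per region (-1 marks a vertex at infinity)
#   vor.point_region - for each input point, the index of its region in vor.regions
# These attributes are used again in the final plotting step below.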
###Output
_____no_output_____
###Markdown
Scipy.spatial provides a method that can plot a Voronoi tessellation onto provided axes. We can combine this with the plotting above to show the Voronoi tessellation of players
###Code
from scipy.spatial import voronoi_plot_2d
ax = createPitch()
voronoi_plot_2d(vor, ax, show_vertices=False, show_points=False)
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=90.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=90.)
plt.xlim(-15,145)
plt.ylim(-10,100)
###Output
_____no_output_____
###Markdown
3. Problem - Dealing with pitch boundariesThe Voronoi tessellation algorithm doesn't know that we're looking at a bounded box (the pitch) when building the tessellation. As a result, the algorithm identifies polygons for some players with a vertex outside of the pitch. This is not ideal if we want to look at pitch control etc. Note also the dotted lines. These indicate those points equidistant from two players and go to infinity - also not ideal for modelling football.Rather than go back and try to build a Voronoi algorithm for ourselves that accounts for the bounded pitch, we can use properties of the Voronoi algorithm to _trick_ it into putting the boundaries where we need them.**The Trick:** By adding the reflection of all players about each of the four touchlines, each touchline necessarily becomes the edge of a polygon found by the Voronoi algorithm.By running the Voronoi algorithm on this extended set of points, and then throwing away all information about points that aren't actually players on the pitch, we end up with a Voronoi tessellation with polygons truncated by the touchlines. This is exactly what we need!
###Code
#Step 1 - Create a bigger set of points by reflecting the player points about all of the axes.
extended_ps = (ps +
[(-p[0], p[1]) for p in ps] + #Reflection in left touchline
[(p[0], -p[1]) for p in ps] + #Reflection in bottom touchline
[(2*max_w-p[0], p[1]) for p in ps]+ #Reflection in right touchline
               [(p[0], 2*max_h-p[1]) for p in ps] #Reflection in top touchline
)
#Step 2 - Create a Voronoi tessellation for this extended point set
vor = Voronoi(extended_ps)
#Step 3 (Optional) - Check that the Voronoi tessellation works correctly and finds the pitch boundaries
# ax = createPitch()
fig=plt.figure(figsize=(13,9))
ax=plt.subplot(111)
e_xs, e_ys = zip(*extended_ps)
voronoi_plot_2d(vor, ax, show_vertices=False, show_points=False, line_colors='k', zorder=0)
ax.scatter(e_xs, e_ys, c='grey', s=20.)
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=20.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=20.)
plt.xlim(-0.5*max_w,1.5*max_w)
plt.ylim(-0.5*max_h,1.5*max_h);
#Step 4 - Throw away the reflected points and their Voronoi polygons, then plot
ax = createPitch()
#Plot the Voronoi regions that contain the player points
for pix, p in enumerate(vor.points): #Each polygon in the VT has a corresponding point
region = vor.regions[vor.point_region[pix]] #That point corresponds to a region
if not -1 in region: #-1 is a point at infinity, we don't need those polygons
polygon = [vor.vertices[i] for i in region] #The region polygon as a list of points
if p[0] in xs and p[1] in ys:
if p[0] in a_xs and p[1] in a_ys:
plt.fill(*zip(*polygon), alpha=0.2, c='xkcd:pale red')
else:
plt.fill(*zip(*polygon), alpha=0.2, c='xkcd:denim blue')
#Add in the player points
ax.scatter(h_xs, h_ys, c='xkcd:denim blue', s=90.)
ax.scatter(a_xs, a_ys, c='xkcd:pale red', s=90.)
plt.xlim(0,max_w)
plt.ylim(0,max_h);
###Output
_____no_output_____ |
7 QUORA INSINCERE QUESTIONN/introducing-bert-with-tensorflow.ipynb | ###Markdown
BERTBERT, or Bidirectional Encoder Representations from Transformers, is a new method of pre-training language representations which obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks.The academic paper which describes BERT in detail and provides full results on a number of tasks can be found here: https://arxiv.org/abs/1810.04805.The Github account for the paper can be found here: https://github.com/google-research/bertBERT is a method of pre-training language representations, meaning training of a general-purpose "language understanding" model on a large text corpus (like Wikipedia), and then using that model for downstream NLP tasks (like question answering). BERT outperforms previous methods because it is the first *unsupervised, deeply bidirectional* system for pre-training NLP. Downloading all necessary dependenciesYou will have to turn on the internet for that.This code is a slightly modified version of this colab notebook https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb
###Code
import pandas as pd
import os
import numpy as np
import pandas as pd
import zipfile
from matplotlib import pyplot as plt
%matplotlib inline
import sys
import datetime
#downloading weights and configuration file for the model
!wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
repo = 'model_repo'
with zipfile.ZipFile("uncased_L-12_H-768_A-12.zip","r") as zip_ref:
zip_ref.extractall(repo)
!ls 'model_repo/uncased_L-12_H-768_A-12'
!wget https://raw.githubusercontent.com/google-research/bert/master/modeling.py
!wget https://raw.githubusercontent.com/google-research/bert/master/optimization.py
!wget https://raw.githubusercontent.com/google-research/bert/master/run_classifier.py
!wget https://raw.githubusercontent.com/google-research/bert/master/tokenization.py
###Output
_____no_output_____
###Markdown
Example below is done on preprocessing code, similar to **CoLa**: The Corpus of Linguistic Acceptability is a binary single-sentence classification task, where the goal is to predict whether an English sentence is linguistically “acceptable” or not. You can use a pretrained BERT model for a wide variety of tasks, including classification.The task of CoLa is close to the task of the Quora competition, so I thought it would be interesting to use that example.Obviously, outside sources aren't allowed in the Quora competition, so you won't be able to use BERT to submit a prediction.
###Code
# Available pretrained model checkpoints:
# uncased_L-12_H-768_A-12: uncased BERT base model
# uncased_L-24_H-1024_A-16: uncased BERT large model
# cased_L-12_H-768_A-12: cased BERT large model
#We will use the most basic of all of them
BERT_MODEL = 'uncased_L-12_H-768_A-12'
BERT_PRETRAINED_DIR = f'{repo}/uncased_L-12_H-768_A-12'
OUTPUT_DIR = f'{repo}/outputs'
print(f'***** Model output directory: {OUTPUT_DIR} *****')
print(f'***** BERT pretrained directory: {BERT_PRETRAINED_DIR} *****')
from sklearn.model_selection import train_test_split
train_df = pd.read_csv('../input/train.csv')
train_df = train_df.sample(2000)
train, test = train_test_split(train_df, test_size = 0.1, random_state=42)
train_lines, train_labels = train.question_text.values, train.target.values
test_lines, test_labels = test.question_text.values, test.target.values
import modeling
import optimization
import run_classifier
import tokenization
import tensorflow as tf
def create_examples(lines, set_type, labels=None):
#Generate data for the BERT model
guid = f'{set_type}'
examples = []
if guid == 'train':
for line, label in zip(lines, labels):
text_a = line
label = str(label)
examples.append(
run_classifier.InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
else:
for line in lines:
text_a = line
label = '0'
examples.append(
run_classifier.InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
return examples
# Model Hyper Parameters
TRAIN_BATCH_SIZE = 32
EVAL_BATCH_SIZE = 8
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
WARMUP_PROPORTION = 0.1
MAX_SEQ_LENGTH = 128
# Model configs
SAVE_CHECKPOINTS_STEPS = 1000 #if you wish to finetune a model on a larger dataset, use larger interval
# each checkpoint weighs about 1.5 GB
ITERATIONS_PER_LOOP = 1000
NUM_TPU_CORES = 8
VOCAB_FILE = os.path.join(BERT_PRETRAINED_DIR, 'vocab.txt')
CONFIG_FILE = os.path.join(BERT_PRETRAINED_DIR, 'bert_config.json')
INIT_CHECKPOINT = os.path.join(BERT_PRETRAINED_DIR, 'bert_model.ckpt')
DO_LOWER_CASE = BERT_MODEL.startswith('uncased')
label_list = ['0', '1']
tokenizer = tokenization.FullTokenizer(vocab_file=VOCAB_FILE, do_lower_case=DO_LOWER_CASE)
train_examples = create_examples(train_lines, 'train', labels=train_labels)
tpu_cluster_resolver = None #Since training will happen on GPU, we won't need a cluster resolver
#TPUEstimator also supports training on CPU and GPU. You don't need to define a separate tf.estimator.Estimator.
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
model_dir=OUTPUT_DIR,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=ITERATIONS_PER_LOOP,
num_shards=NUM_TPU_CORES,
per_host_input_for_training=tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2))
num_train_steps = int(
len(train_examples) / TRAIN_BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
model_fn = run_classifier.model_fn_builder(
bert_config=modeling.BertConfig.from_json_file(CONFIG_FILE),
num_labels=len(label_list),
init_checkpoint=INIT_CHECKPOINT,
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=False, #If False training will fall on CPU or GPU, depending on what is available
use_one_hot_embeddings=True)
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=False, #If False training will fall on CPU or GPU, depending on what is available
model_fn=model_fn,
config=run_config,
train_batch_size=TRAIN_BATCH_SIZE,
eval_batch_size=EVAL_BATCH_SIZE)
"""
Note: You might see a message 'Running train on CPU'.
This really just means that it's running on something other than a Cloud TPU, which includes a GPU.
"""
# Train the model.
print('Please wait...')
train_features = run_classifier.convert_examples_to_features(
train_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
print('***** Started training at {} *****'.format(datetime.datetime.now()))
print(' Num examples = {}'.format(len(train_examples)))
print(' Batch size = {}'.format(TRAIN_BATCH_SIZE))
tf.logging.info(" Num steps = %d", num_train_steps)
train_input_fn = run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print('***** Finished training at {} *****'.format(datetime.datetime.now()))
"""
There is a weird bug in the original code.
When predicting, the estimator returns an empty dict {}, without batch_size.
I redefine input_fn_builder and hardcode batch_size, ignoring 'params' for now.
"""
def input_fn_builder(features, seq_length, is_training, drop_remainder):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
all_input_ids = []
all_input_mask = []
all_segment_ids = []
all_label_ids = []
for feature in features:
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_segment_ids.append(feature.segment_ids)
all_label_ids.append(feature.label_id)
def input_fn(params):
"""The actual input function."""
print(params)
batch_size = 32
num_examples = len(features)
d = tf.data.Dataset.from_tensor_slices({
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"segment_ids":
tf.constant(
all_segment_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
"label_ids":
tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),
})
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
return d
return input_fn
predict_examples = create_examples(test_lines, 'test')
predict_features = run_classifier.convert_examples_to_features(
predict_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = input_fn_builder(
features=predict_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
result = estimator.predict(input_fn=predict_input_fn)
from tqdm import tqdm
preds = []
for prediction in tqdm(result):
for class_probability in prediction:
preds.append(float(class_probability))
results = []
# preds holds two values per example: the predicted probabilities for label '0'
# (sincere) and label '1' (insincere), in label_list order. Take the label '0'
# probability for each example and flag the question as insincere (1) when that
# probability falls below 0.9.
for i in tqdm(range(0,len(preds),2)):
    if preds[i] < 0.9:
        results.append(1)
    else:
        results.append(0)
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
print(accuracy_score(np.array(results), test_labels))
print(f1_score(np.array(results), test_labels))
###Output
_____no_output_____ |
notebooks/plotter.ipynb | ###Markdown
This notebook plots the KN lightcurves
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Read the data, sort it by filter
###Code
data = pd.read_csv('input/gw170817.data', delim_whitespace=True)
for col in data.columns:
print(col)
#print(data['MJD'])
i_band_mag = data.loc[data['Band'] == 'i']['Mag']
i_band_time = data.loc[data['Band'] == 'i']['MJD']
z_band_mag = data.loc[data['Band'] == 'z']['Mag']
z_band_time = data.loc[data['Band'] == 'z']['MJD']
Y_band_mag = data.loc[data['Band'] == 'Y']['Mag']
Y_band_time = data.loc[data['Band'] == 'Y']['MJD']
r_band_mag = data.loc[data['Band'] == 'r']['Mag']
r_band_time = data.loc[data['Band'] == 'r']['MJD']
g_band_mag = data.loc[data['Band'] == 'g']['Mag']
g_band_time = data.loc[data['Band'] == 'g']['MJD']
u_band_mag = data.loc[data['Band'] == 'u']['Mag']
u_band_time = data.loc[data['Band'] == 'u']['MJD']
#print(i_band_mag)
###Output
MJD
Band
Mag
e_mag
###Markdown
Make the plots
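Each band is plotted with the same recipe below, so an equivalent loop-based version is sketched here first (the band list and the plain '.-' marker style are just one possible choice); the per-band cells that follow produce the same figures with their original colours.

```python
bands = [('u', u_band_time, u_band_mag), ('g', g_band_time, g_band_mag),
         ('r', r_band_time, r_band_mag), ('i', i_band_time, i_band_mag),
         ('z', z_band_time, z_band_mag), ('y', Y_band_time, Y_band_mag)]

for name, t, m in bands:
    plt.plot(t, m, '.-')
    plt.title(f'{name}-band')
    plt.xlabel('MJD')
    plt.ylabel(f'{name}-band magnitude')
    plt.xlim(57980, 57996)
    plt.gca().invert_yaxis()
    plt.show()
```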
###Code
scatter = plt.plot(u_band_time, u_band_mag, '.g-')
plt.title('u-band')
plt.xlabel('MJD')
plt.ylabel('u-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(g_band_time, g_band_mag, '.b-')
plt.title('g-band')
plt.xlabel('MJD')
plt.ylabel('g-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(r_band_time, r_band_mag, '.g-')
plt.title('r-band')
plt.xlabel('MJD')
plt.ylabel('r-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(i_band_time, i_band_mag, '.r-')
plt.title('i-band')
plt.xlabel('MJD')
plt.ylabel('i-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(z_band_time, z_band_mag, '.y-')
plt.title('z-band')
plt.xlabel('MJD')
plt.ylabel('z-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(Y_band_time, Y_band_mag, '.b-')
plt.title('y-band')
plt.xlabel('MJD')
plt.ylabel('y-band magnitude')
plt.xlim(57980,57996)
plt.gca().invert_yaxis()
plt.show()
scatter = plt.plot(u_band_time, u_band_mag, label = "u")
scatter = plt.plot(g_band_time, g_band_mag, label = "g")
scatter = plt.plot(r_band_time, r_band_mag, label = "r")
scatter = plt.plot(i_band_time, i_band_mag, label = "i")
scatter = plt.plot(z_band_time, z_band_mag, label = "z")
scatter = plt.plot(Y_band_time, Y_band_mag, label = "y")
plt.title('All Bands')
plt.xlabel('MJD')
plt.ylabel('Band Magnitude')
plt.xlim(57983,57996)
plt.gca().invert_yaxis()
plt.legend()
plt.show()
###Output
_____no_output_____ |
nbs/effect_prediction.ipynb | ###Markdown
Variant effect predictionThe variant effect prediction parts integrated in `concise` are designed to extract importance scores for a single nucleotide variant in a given sequence. Predictions are made for each output individually for a multi-task model. In this short tutorial we will be using a small model to explain the basic functionality and outputs.At the moment there are three different effect scores to be chosen from. All of them require as input:* The input sequence with the variant with its reference genotype* The input sequence with the variant with its alternative genotype* Both aforementioned sequences in reverse-complement* Information on where (which basepair, 0-based) the mutation is placed in the forward sequencesThe following variant scores are available:* In-silico mutagenesis (ISM): - Predict the outputs of the sequences containing the reference and alternative genotype of the variant and use the differential output as an effect score.* Gradient-based score* Dropout-based score Calculating effect scoresFirstly we will need to have a trained model and a set of input sequences containing the variants we want to look at. For this tutorial we will be using a small model:
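To make the ISM idea concrete before using the library functions, here is a minimal sketch of the score computation. It assumes a Keras-style `model.predict` on the one-hot sequence arrays; the helper name `ism_sketch` and the choice of keeping the larger-magnitude value across strands are purely illustrative (the `ism` function used later exposes these choices through its `diff_type` and `rc_handling` arguments).

```python
import numpy as np

def ism_sketch(model, ref, ref_rc, alt, alt_rc):
    # Differential prediction: alternative minus reference genotype,
    # computed separately for the forward and reverse-complement strands.
    diff_fwd = model.predict(alt) - model.predict(ref)
    diff_rc = model.predict(alt_rc) - model.predict(ref_rc)
    # Keep, per example and output, whichever strand gives the larger effect.
    return np.where(np.abs(diff_fwd) >= np.abs(diff_rc), diff_fwd, diff_rc)
```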
###Code
from effect_demo_setup import *
from concise.models import single_layer_pos_effect as concise_model
import numpy as np
# Generate training data for the model, use a 1000bp sequence
param, X_feat, X_seq, y, id_vec = load_example_data(trim_seq_len = 1000)
# Generate the model
dc = concise_model(pooling_layer="sum",
init_motifs=["TGCGAT", "TATTTAT"],
n_splines=10,
n_covariates=0,
seq_length=X_seq.shape[1],
**param)
# Train the model
dc.fit([X_seq], y, epochs=1,
validation_data=([X_seq], y))
# In order to select the right output of a potential multitask model we have to generate a list of output labels, which will be used alongside the model itself.
model_output_annotation = np.array(["output_1"])
###Output
Using TensorFlow backend.
###Markdown
As with any prediction that you want to make with a model, the input sequences have to fit the input dimensions of your model; in this case the reference and alternative sequences, in their forward and reverse-complement states, have to have the shape [?, 1000, 4].We will be storing the dataset in a dictionary for convenience:
###Code
import h5py
dataset_path = "%s/data/sample_hqtl_res.hdf5"%concise_demo_data_path
dataset = {}
with h5py.File(dataset_path, "r") as ifh:
ref = ifh["test_in_ref"].value
alt = ifh["test_in_alt"].value
dirs = ifh["test_out"]["seq_direction"].value
# This dataset is stored with forward and reverse-complement sequences in an interlaced manner
assert(dirs[0] == b"fwd")
dataset["ref"] = ref[::2,...]
dataset["alt"] = alt[::2,...]
dataset["ref_rc"] = ref[1::2,...]
dataset["alt_rc"] = alt[1::2,...]
dataset["y"] = ifh["test_out"]["type"].value[::2]
# The sequences are centered around the mutation, which occurs at position 500 when looking at the forward sequences
dataset["mutation_position"] = np.array([500]*dataset["ref"].shape[0])
###Output
_____no_output_____
###Markdown
All prediction functions have the same general set of required input values. Before going into more detail on the individual prediction functions, we will look into how to run them. The following input arguments are available for all functions: model: Keras model ref: Input sequence with the reference genotype in the mutation position ref_rc: Reverse complement of the 'ref' argument alt: Input sequence with the alternative genotype in the mutation position alt_rc: Reverse complement of the 'alt' argument mutation_positions: Position on which the mutation was placed in the forward sequences out_annotation_all_outputs: Output labels of the model. out_annotation: Select for which of the outputs (in case of a multi-task model) the predictions should be calculated.The `out_annotation` argument is not required. We will now run the available predictions individually.
###Code
from concise.effects.ism import ism
from concise.effects.gradient import gradient_pred
from concise.effects.dropout import dropout_pred
ism_result = ism(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation, diff_type = "diff")
gradient_result = gradient_pred(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"],
alt_rc = dataset["alt_rc"],
mutation_positions = dataset["mutation_position"],
out_annotation_all_outputs = model_output_annotation)
dropout_result = dropout_pred(model = dc,
ref = dataset["ref"],
ref_rc = dataset["ref_rc"],
alt = dataset["alt"], alt_rc = dataset["alt_rc"], mutation_positions = dataset["mutation_position"], out_annotation_all_outputs = model_output_annotation)
gradient_result
###Output
_____no_output_____
###Markdown
The output of all functions is a dictionary; please refer to the individual chapters further on for an explanation of the individual values. Every dictionary contains pandas dataframes as values. Every column of the dataframe is named according to the values given in the `out_annotation_all_outputs` labels and contains the respective predicted scores. Convenience functionFor convenience there is also a function available which enables the execution of all functions in one call.Additional arguments of the `effect_from_model` function are: methods: A list of prediction functions to be executed. Using the same function more often than once (even with different parameters) will overwrite the results of the previous calculation of that function. extra_args: None or a list of the same length as 'methods'. The elements of the list are dictionaries with additional arguments that should be passed on to the respective functions in 'methods'. Arguments defined here will overwrite arguments that are passed to all methods. **argv: Additional arguments to be passed on to all methods, e.g.: out_annotation.
###Code
from concise.effects.snp_effects import effect_from_model
# Define the parameters:
params = {"methods": [gradient_pred, dropout_pred, ism],
"model": dc,
"ref": dataset["ref"],
"ref_rc": dataset["ref_rc"],
"alt": dataset["alt"],
"alt_rc": dataset["alt_rc"],
"mutation_positions": dataset["mutation_position"],
"extra_args": [None, {"dropout_iterations": 60},
{"rc_handling" : "maximum", "diff_type":"diff"}],
"out_annotation_all_outputs": model_output_annotation,
}
results = effect_from_model(**params)
###Output
_____no_output_____
###Markdown
Again the returned value is a dictionary containing the results of the individual calculations, the keys are the names of the executed functions:
###Code
print(results.keys())
###Output
_____no_output_____ |
08_TorchText/pytorch-seq2seq-modern/2_Learning_Phrase_Representations_using_RNN_Encoder_Decoder_for_Statistical_Machine_Translation.ipynb | ###Markdown
2 - Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine TranslationIn this second notebook on sequence-to-sequence models using PyTorch and TorchText, we'll be implementing the model from [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](https://arxiv.org/abs/1406.1078). This model will achieve improved test perplexity whilst only using a single layer RNN in both the encoder and the decoder. IntroductionLet's remind ourselves of the general encoder-decoder model.We use our encoder (green) over the embedded source sequence (yellow) to create a context vector (red). We then use that context vector with the decoder (blue) and a linear layer (purple) to generate the target sentence.In the previous model, we used a multi-layered LSTM as the encoder and decoder.One downside of the previous model is that the decoder is trying to cram lots of information into the hidden states. Whilst decoding, the hidden state will need to contain information about the whole of the source sequence, as well as all of the tokens that have been decoded so far. By alleviating some of this information compression, we can create a better model!We'll also be using a GRU (Gated Recurrent Unit) instead of an LSTM (Long Short-Term Memory). Why? Mainly because that's what they did in the paper (this paper also introduced GRUs) and also because we used LSTMs last time. To understand how GRUs (and LSTMs) differ from standard RNNs, check out [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) link. Is a GRU better than an LSTM? [Research](https://arxiv.org/abs/1412.3555) has shown they're pretty much the same, and both are better than standard RNNs. Preparing DataAll of the data preparation will be (almost) the same as last time, so we'll very briefly detail what each code block does. See the previous notebook for a recap.We'll import PyTorch, TorchText, spaCy and a few standard modules.
###Code
! pip install spacy==3.0.6 --quiet
###Output
[K |████████████████████████████████| 12.8MB 226kB/s
[K |████████████████████████████████| 51kB 9.1MB/s
[K |████████████████████████████████| 9.1MB 22.1MB/s
[K |████████████████████████████████| 624kB 40.1MB/s
[K |████████████████████████████████| 460kB 51.8MB/s
[?25h
###Markdown
You might need to restart the Runtime after installing the spacy models
###Code
! python -m spacy download en_core_web_sm --quiet
! python -m spacy download de_core_news_sm --quiet
import torch
import torch.nn as nn
import torch.optim as optim
# shit from the past
# from torchtext.legacy.datasets import Multi30k
# from torchtext.legacy.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
from typing import *
###Output
_____no_output_____
###Markdown
Then set a random seed for deterministic results/reproducibility.
###Code
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Previously we reversed the source (German) sentence, however in the paper we are implementing they don't do this, so neither will we. Load our data.
###Code
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torchtext.datasets import Multi30k
SRC_LANGUAGE = 'de'
TGT_LANGUAGE = 'en'
# Place-holders
token_transform = {}
vocab_transform = {}
# Create source and target language tokenizer. Make sure to install the dependencies.
# the 'language' should be a full qualified name, since shortcuts like `de` and `en` are deprecated in spaCy 3.0+
token_transform[SRC_LANGUAGE] = get_tokenizer('spacy', language='de_core_news_sm')
token_transform[TGT_LANGUAGE] = get_tokenizer('spacy', language='en_core_web_sm')
# Training, Validation and Test data Iterator
train_iter, val_iter, test_iter = Multi30k(split=('train', 'valid', 'test'), language_pair=(SRC_LANGUAGE, TGT_LANGUAGE))
train_list, val_list, test_list = list(train_iter), list(val_iter), list(test_iter)
train_list[0]
print(f"Number of training examples: {len(train_iter)}")
print(f"Number of validation examples: {len(val_iter)}")
print(f"Number of testing examples: {len(test_iter)}")
###Output
Number of training examples: 29000
Number of validation examples: 1014
Number of testing examples: 1000
###Markdown
Then create our vocabulary. Here we keep every token that appears in the training data (`min_freq=1`); any token missing from the vocabulary is mapped to the `<unk>` index at lookup time.
###Code
# helper function to yield list of tokens
def yield_tokens(data_iter: Iterable, language: str) -> List[str]:
language_index = {SRC_LANGUAGE: 0, TGT_LANGUAGE: 1}
for data_sample in data_iter:
yield token_transform[language](data_sample[language_index[language]])
# Define special symbols and indices
UNK_IDX, PAD_IDX, BOS_IDX, EOS_IDX = 0, 1, 2, 3
# Make sure the tokens are in order of their indices to properly insert them in vocab
special_symbols = ['<unk>', '<pad>', '<bos>', '<eos>']
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
# Create torchtext's Vocab object
vocab_transform[ln] = build_vocab_from_iterator(yield_tokens(train_list, ln),
min_freq=1,
specials=special_symbols,
special_first=True)
# Set UNK_IDX as the default index. This index is returned when the token is not found.
# If not set, it throws RuntimeError when the queried token is not found in the Vocabulary.
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
vocab_transform[ln].set_default_index(UNK_IDX)
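# Quick illustration of the default index (token strings chosen arbitrarily):
# known tokens map to their vocabulary ids, while an unseen token such as
# 'xyzzy' falls back to UNK_IDX thanks to set_default_index above.
print(vocab_transform[TGT_LANGUAGE](['a', 'man', 'xyzzy']))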
###Output
_____no_output_____
###Markdown
Finally, define the `device` and create our iterators.
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from torch.nn.utils.rnn import pad_sequence
# helper function to club together sequential operations
def sequential_transforms(*transforms):
def func(txt_input):
for transform in transforms:
txt_input = transform(txt_input)
return txt_input
return func
# function to add BOS/EOS and create tensor for input sequence indices
def tensor_transform(token_ids: List[int]):
return torch.cat((torch.tensor([BOS_IDX]),
torch.tensor(token_ids),
torch.tensor([EOS_IDX])))
# src and tgt language text transforms to convert raw strings into tensors indices
text_transform = {}
for ln in [SRC_LANGUAGE, TGT_LANGUAGE]:
text_transform[ln] = sequential_transforms(token_transform[ln], #Tokenization
vocab_transform[ln], #Numericalization
tensor_transform) # Add BOS/EOS and create tensor
# function to collate data samples into batch tensors
def collate_fn(batch):
src_batch, tgt_batch = [], []
for src_sample, tgt_sample in batch:
src_batch.append(text_transform[SRC_LANGUAGE](src_sample.rstrip("\n")))
tgt_batch.append(text_transform[TGT_LANGUAGE](tgt_sample.rstrip("\n")))
src_batch = pad_sequence(src_batch, padding_value=PAD_IDX)
tgt_batch = pad_sequence(tgt_batch, padding_value=PAD_IDX)
return src_batch, tgt_batch
from torch.utils.data import DataLoader
BATCH_SIZE = 128
train_dataloader = DataLoader(train_list, batch_size=BATCH_SIZE, collate_fn=collate_fn)
val_dataloader = DataLoader(val_list, batch_size=BATCH_SIZE, collate_fn=collate_fn)
test_dataloader = DataLoader(test_list, batch_size=BATCH_SIZE, collate_fn=collate_fn)
###Output
_____no_output_____
###Markdown
Building the Seq2Seq Model EncoderThe encoder is similar to the previous one, with the multi-layer LSTM swapped for a single-layer GRU. We also don't pass the dropout as an argument to the GRU as that dropout is used between each layer of a multi-layered RNN. As we only have a single layer, PyTorch will display a warning if we try and pass a dropout value to it.Another thing to note about the GRU is that it only requires and returns a hidden state; there is no cell state like in the LSTM.$$\begin{align*}h_t &= \text{GRU}(e(x_t), h_{t-1})\\(h_t, c_t) &= \text{LSTM}(e(x_t), h_{t-1}, c_{t-1})\\h_t &= \text{RNN}(e(x_t), h_{t-1})\end{align*}$$From the equations above, it looks like the RNN and the GRU are identical. Inside the GRU, however, is a number of *gating mechanisms* that control the information flow in to and out of the hidden state (similar to an LSTM). Again, for more info, check out [this](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) excellent post. The rest of the encoder should be very familiar from the last tutorial: it takes in a sequence, $X = \{x_1, x_2, ... , x_T\}$, passes it through the embedding layer, recurrently calculates hidden states, $H = \{h_1, h_2, ..., h_T\}$, and returns a context vector (the final hidden state), $z=h_T$.$$h_t = \text{EncoderGRU}(e(x_t), h_{t-1})$$This is identical to the encoder of the general seq2seq model, with all the "magic" happening inside the GRU (green).
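As a quick illustration of that difference in the PyTorch API (the dimensions here are arbitrary, chosen only for the example):

```python
import torch
import torch.nn as nn

x = torch.randn(7, 2, 16)                 # [seq len, batch size, input dim]
gru, lstm = nn.GRU(16, 32), nn.LSTM(16, 32)

gru_outputs, gru_hidden = gru(x)          # a GRU returns outputs and a hidden state only
lstm_outputs, (lstm_hidden, lstm_cell) = lstm(x)  # an LSTM also returns a cell state

print(gru_hidden.shape, lstm_hidden.shape, lstm_cell.shape)  # each is [1, 2, 32]
```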
###Code
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, dropout):
super().__init__()
self.hid_dim = hid_dim
self.embedding = nn.Embedding(input_dim, emb_dim) #no dropout as only one layer!
self.rnn = nn.GRU(emb_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
#src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
#embedded = [src len, batch size, emb dim]
outputs, hidden = self.rnn(embedded) #no cell state!
#outputs = [src len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#outputs are always from the top hidden layer
return hidden
###Output
_____no_output_____
###Markdown
DecoderThe decoder is where the implementation differs significantly from the previous model and we alleviate some of the information compression.Instead of the GRU in the decoder taking just the embedded target token, $d(y_t)$ and the previous hidden state $s_{t-1}$ as inputs, it also takes the context vector $z$. $$s_t = \text{DecoderGRU}(d(y_t), s_{t-1}, z)$$Note how this context vector, $z$, does not have a $t$ subscript, meaning we re-use the same context vector returned by the encoder for every time-step in the decoder. Before, we predicted the next token, $\hat{y}_{t+1}$, with the linear layer, $f$, only using the top-layer decoder hidden state at that time-step, $s_t$, as $\hat{y}_{t+1}=f(s_t^L)$. Now, we also pass the embedding of current token, $d(y_t)$ and the context vector, $z$ to the linear layer.$$\hat{y}_{t+1} = f(d(y_t), s_t, z)$$Thus, our decoder now looks something like this:Note, the initial hidden state, $s_0$, is still the context vector, $z$, so when generating the first token we are actually inputting two identical context vectors into the GRU.How do these two changes reduce the information compression? Well, hypothetically the decoder hidden states, $s_t$, no longer need to contain information about the source sequence as it is always available as an input. Thus, it only needs to contain information about what tokens it has generated so far. The addition of $y_t$ to the linear layer also means this layer can directly see what the token is, without having to get this information from the hidden state. However, this hypothesis is just a hypothesis, it is impossible to determine how the model actually uses the information provided to it (don't listen to anyone that says differently). Nevertheless, it is a solid intuition and the results seem to indicate that this modifications are a good idea!Within the implementation, we will pass $d(y_t)$ and $z$ to the GRU by concatenating them together, so the input dimensions to the GRU are now `emb_dim + hid_dim` (as context vector will be of size `hid_dim`). The linear layer will take $d(y_t), s_t$ and $z$ also by concatenating them together, hence the input dimensions are now `emb_dim + hid_dim*2`. We also don't pass a value of dropout to the GRU as it only uses a single layer.`forward` now takes a `context` argument. Inside of `forward`, we concatenate $y_t$ and $z$ as `emb_con` before feeding to the GRU, and we concatenate $d(y_t)$, $s_t$ and $z$ together as `output` before feeding it through the linear layer to receive our predictions, $\hat{y}_{t+1}$.
###Code
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, dropout):
super().__init__()
self.hid_dim = hid_dim
self.output_dim = output_dim
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim)
self.fc_out = nn.Linear(emb_dim + hid_dim * 2, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, context):
#input = [batch size]
#hidden = [n layers * n directions, batch size, hid dim]
#context = [n layers * n directions, batch size, hid dim]
#n layers and n directions in the decoder will both always be 1, therefore:
#hidden = [1, batch size, hid dim]
#context = [1, batch size, hid dim]
input = input.unsqueeze(0)
#input = [1, batch size]
embedded = self.dropout(self.embedding(input))
#embedded = [1, batch size, emb dim]
emb_con = torch.cat((embedded, context), dim = 2)
#emb_con = [1, batch size, emb dim + hid dim]
output, hidden = self.rnn(emb_con, hidden)
#output = [seq len, batch size, hid dim * n directions]
#hidden = [n layers * n directions, batch size, hid dim]
#seq len, n layers and n directions will always be 1 in the decoder, therefore:
#output = [1, batch size, hid dim]
#hidden = [1, batch size, hid dim]
output = torch.cat((embedded.squeeze(0), hidden.squeeze(0), context.squeeze(0)),
dim = 1)
#output = [batch size, emb dim + hid dim * 2]
prediction = self.fc_out(output)
#prediction = [batch size, output dim]
return prediction, hidden
###Output
_____no_output_____
###Markdown
Seq2Seq ModelPutting the encoder and decoder together, we get:Again, in this implementation we need to ensure the hidden dimensions in both the encoder and the decoder are the same.Briefly going over all of the steps:- the `outputs` tensor is created to hold all predictions, $\hat{Y}$- the source sequence, $X$, is fed into the encoder to receive a `context` vector- the initial decoder hidden state is set to be the `context` vector, $s_0 = z = h_T$- we use a batch of `<sos>` tokens as the first `input`, $y_1$- we then decode within a loop: - inserting the input token $y_t$, previous hidden state, $s_{t-1}$, and the context vector, $z$, into the decoder - receiving a prediction, $\hat{y}_{t+1}$, and a new hidden state, $s_t$ - we then decide if we are going to teacher force or not, setting the next input as appropriate (either the ground truth next token in the target sequence or the highest predicted next token)
###Code
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
#tensor to store decoder outputs
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
#last hidden state of the encoder is the context
context = self.encoder(src)
#context also used as the initial hidden state of the decoder
hidden = context
#first input to the decoder is the <sos> tokens
input = trg[0,:]
for t in range(1, trg_len):
#insert input token embedding, previous hidden state and the context state
#receive output tensor (predictions) and new hidden state
output, hidden = self.decoder(input, hidden, context)
#place predictions in a tensor holding predictions for each token
outputs[t] = output
#decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
#get the highest predicted token from our predictions
top1 = output.argmax(1)
#if teacher forcing, use actual next token as next input
#if not, use predicted token
input = trg[t] if teacher_force else top1
return outputs
###Output
_____no_output_____
###Markdown
Training the Seq2Seq ModelThe rest of this tutorial is very similar to the previous one. We initialise our encoder, decoder and seq2seq model (placing it on the GPU if we have one). As before, the embedding dimensions and the amount of dropout used can be different between the encoder and the decoder, but the hidden dimensions must remain the same.
###Code
INPUT_DIM = len(vocab_transform[SRC_LANGUAGE])
OUTPUT_DIM = len(vocab_transform[TGT_LANGUAGE])
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, DEC_DROPOUT)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Seq2Seq(enc, dec, device).to(device)
###Output
_____no_output_____
###Markdown
Next, we initialize our parameters. The paper states the parameters are initialized from a normal distribution with a mean of 0 and a standard deviation of 0.01, i.e. $\mathcal{N}(0, 0.01)$. It also states we should initialize the recurrent parameters to a special initialization, however to keep things simple we'll also initialize them to $\mathcal{N}(0, 0.01)$.
###Code
def init_weights(m):
for name, param in m.named_parameters():
nn.init.normal_(param.data, mean=0, std=0.01)
model.apply(init_weights)
###Output
_____no_output_____
###Markdown
We print out the number of parameters.Even though we only have a single layer RNN for our encoder and decoder we actually have **more** parameters than the last model. This is due to the increased size of the inputs to the GRU and the linear layer. However, it is not a significant amount of parameters and causes a minimal amount of increase in training time (~3 seconds per epoch extra).
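A rough back-of-the-envelope check of where the extra parameters come from, using the hyperparameters defined above: the decoder GRU now receives `emb_dim + hid_dim` inputs and the decoder linear layer `emb_dim + hid_dim * 2` inputs.

```python
emb_dim, hid_dim = 256, 512
print(emb_dim + hid_dim)      # 768-dim input to the decoder GRU, instead of just the 256-dim embedding
print(emb_dim + 2 * hid_dim)  # 1280-dim input to the decoder linear layer, instead of just the 512-dim hidden state
```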
###Code
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 24,728,918 trainable parameters
###Markdown
We initialize our optimizer.
###Code
optimizer = optim.Adam(model.parameters())
###Output
_____no_output_____
###Markdown
We also initialize the loss function, making sure to ignore the loss on `<pad>` tokens.
###Code
criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX)
###Output
_____no_output_____
###Markdown
We then create the training loop...
###Code
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src, trg = batch
src, trg = src.to(device), trg.to(device)
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
###Output
_____no_output_____
###Markdown
...and the evaluation loop, remembering to set the model to `eval` mode and turn off teacher forcing.
###Code
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src, trg = batch
src, trg = src.to(device), trg.to(device)
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
###Output
_____no_output_____
###Markdown
We'll also define the function that calculates how long an epoch takes.
###Code
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Then, we train our model, saving the parameters that give us the best validation loss.
###Code
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_dataloader, optimizer, criterion, CLIP)
valid_loss = evaluate(model, val_dataloader, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
###Output
Epoch: 01 | Time: 1m 2s
Train Loss: 4.385 | Train PPL: 80.200
Val. Loss: 5.010 | Val. PPL: 149.942
Epoch: 02 | Time: 1m 2s
Train Loss: 4.083 | Train PPL: 59.338
Val. Loss: 4.814 | Val. PPL: 123.220
Epoch: 03 | Time: 1m 2s
Train Loss: 3.759 | Train PPL: 42.918
Val. Loss: 4.510 | Val. PPL: 90.930
Epoch: 04 | Time: 1m 2s
Train Loss: 3.412 | Train PPL: 30.329
Val. Loss: 4.313 | Val. PPL: 74.698
Epoch: 05 | Time: 1m 2s
Train Loss: 3.065 | Train PPL: 21.426
Val. Loss: 4.271 | Val. PPL: 71.570
Epoch: 06 | Time: 1m 2s
Train Loss: 2.789 | Train PPL: 16.265
Val. Loss: 4.204 | Val. PPL: 66.965
Epoch: 07 | Time: 1m 2s
Train Loss: 2.525 | Train PPL: 12.494
Val. Loss: 4.161 | Val. PPL: 64.145
Epoch: 08 | Time: 1m 2s
Train Loss: 2.309 | Train PPL: 10.064
Val. Loss: 4.163 | Val. PPL: 64.271
Epoch: 09 | Time: 1m 2s
Train Loss: 2.117 | Train PPL: 8.305
Val. Loss: 4.168 | Val. PPL: 64.570
Epoch: 10 | Time: 1m 2s
Train Loss: 1.988 | Train PPL: 7.299
Val. Loss: 4.139 | Val. PPL: 62.737
###Markdown
Finally, we test the model on the test set using these "best" parameters.
###Code
model.load_state_dict(torch.load('tut2-model.pt'))
test_loss = evaluate(model, test_dataloader, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
###Output
| Test Loss: 4.094 | Test PPL: 59.971 |
|
pymks/fmks/tests/non_periodic.ipynb | ###Markdown
Implement Masking and Test Issue 517Testing for weighted masks and fix [517](https://github.com/materialsinnovation/pymks/issues/517).
###Code
import dask.array as da
import numpy as np
from pymks.fmks import correlations
from pymks import plot_microstructures
A = da.from_array(np.array([
[
[1, 0, 0],
[0, 1, 1],
[1, 1, 0]
],
[
[0, 0, 1],
[1, 0, 0],
[0, 0, 1]
]
]))
mask = np.ones((2,3,3))
mask[:,2,1:] = 0
mask = da.from_array(mask)
plot_microstructures(A[0], A[1],
titles=['Structure[0]', 'Structure[1]'],
cmap='gray', figsize_weight=2.5)
plot_microstructures(mask[0], mask[1],
titles=['Mask[0]', 'Mask[1]'],
cmap='viridis', figsize_weight=2.5)
###Output
_____no_output_____
###Markdown
Check that periodic still worksThe normalization occurs in the two_point_stats function and the auto-correlation/cross-correlation occur in the cross_correlation function. Checking that the normalization is properly calculated.First is the auto-correlation. Second is the cross-correlation.
###Code
correct = (correlations.cross_correlation(A, A).compute() / 9).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A).compute().round(3).astype(np.float64)
assert (correct == tested).all()
correct = (correlations.cross_correlation(A, 1-A).compute() / 9).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output
_____no_output_____
###Markdown
Check that masked periodic worksTwo-point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In the masked periodic case, we assume that vectors going across the boundary of the structure come back on the other side. However, a vector landing in the masked area is discarded (i.e. not included in the correlation sum).Below are the hand-computed correlation and normalization. The correct 2-point stats are the correlation divided by the normalization. First is the auto-correlation and second is the cross-correlation.
###Code
correct_periodic_mask_auto = np.array([
[
[2,1,2],
[1,4,1],
[2,1,2]
],
[
[1,0,0],
[0,2,0],
[0,0,1]
]
])
correct_periodic_mask_cross = np.array([
[
[1,3,1],
[2,0,2],
[1,1,1]
],
[
[0,1,2],
[2,0,2],
[1,2,0]
]
])
norm_periodic_mask = np.array([
[5,5,5],
[6,7,6],
[5,5,5]
])
# Auto-Correlation
correct = (correct_periodic_mask_auto / norm_periodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64)
assert (correct == tested).all()
# Cross-Correlation
correct = (correct_periodic_mask_cross / norm_periodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=True).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output
_____no_output_____
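###Markdown
Optional extra cross-check (not part of the original test): the masked periodic normalization used above is just the circular autocorrelation of the mask, so it can also be computed with a small FFT sketch on a fresh copy of the mask; after an `fftshift` the centre entry equals the number of unmasked pixels.
###Code
# OPTIONAL CROSS-CHECK: masked periodic normalization as the circular autocorrelation of the mask
m = np.ones((3, 3))
m[2, 1:] = 0
F = np.fft.fft2(m)
circ_auto = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real).round().astype(int)
print(circ_auto)  # expected to match norm_periodic_mask
assert (circ_auto == norm_periodic_mask).all()
###Output
_____no_output_____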
###Markdown
Test that non-periodic worksTwo-point statistics are part correlation and part normalization. The correlation sums up the number of possible 2-point states. In the non-periodic case, we assume that a vector used to count up 2-point states can only connect two states inside the structure. A vector going outside of the bounds of the structure is not counted.Below are the hand-computed correlation and normalization. The correct 2-point stats are the correlation divided by the normalization. First is the auto-correlation and second is the cross-correlation.
###Code
correct_nonperiodic_auto = np.array([
[
[1,1,2],
[2,5,2],
[2,1,1]
],
[
[0,0,0],
[0,3,0],
[0,0,0]
]
])
correct_nonperiodic_cross = np.array([
[
[2,3,1],
[1,0,2],
[0,2,1]
],
[
[1,2,1],
[2,0,1],
[1,2,1]
]
])
norm_nonperiodic = np.array([
[4,6,4],
[6,9,6],
[4,6,4]
])
# Auto-Correlation
correct = (correct_nonperiodic_auto / norm_nonperiodic).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()
# Cross-Correlation
correct = (correct_nonperiodic_cross / norm_nonperiodic).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output
_____no_output_____
###Markdown
Check that non-periodic masking worksIn non-periodic masking, vectors that go across the boundary or land in a mask are not included in the sum.
###Code
correct_nonperiodic_mask_auto = np.array([
[
[1,0,1],
[1,4,1],
[1,0,1]
],
[
[0,0,0],
[0,2,0],
[0,0,0]
]
])
correct_nonperiodic_mask_cross = np.array([
[
[1,3,1],
[1,0,1],
[0,1,0]
],
[
[0,1,1],
[1,0,1],
[1,2,0]
]
])
norm_nonperiodic_mask = np.array([
[2,4,3],
[4,7,4],
[3,4,2]
])
# Auto-Correlation
correct = (correct_nonperiodic_mask_auto / norm_nonperiodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()
# Cross-Correlation
correct = (correct_nonperiodic_mask_cross / norm_nonperiodic_mask).round(3).astype(np.float64)
tested = correlations.two_point_stats(A, 1-A, mask=mask, periodic_boundary=False).compute().round(3).astype(np.float64)
assert (correct == tested).all()
###Output
_____no_output_____
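###Markdown
Optional extra cross-check (not part of the original test): the non-periodic normalizations used above are the plain (non-circular) autocorrelation of the mask, restricted to displacements in {-1, 0, 1}; with a mask of all ones this reproduces the unmasked `norm_nonperiodic` as well. A small sketch, assuming SciPy is available:
###Code
# OPTIONAL CROSS-CHECK: non-periodic normalizations as plain autocorrelations of the mask
from scipy.signal import correlate2d
m = np.ones((3, 3))
m[2, 1:] = 0
# the centre 3x3 of the 'full' correlation keeps only the displacements in {-1, 0, 1}
assert (correlate2d(np.ones((3, 3)), np.ones((3, 3)))[1:4, 1:4] == norm_nonperiodic).all()
assert (correlate2d(m, m)[1:4, 1:4] == norm_nonperiodic_mask).all()
###Output
_____no_output_____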
###Markdown
Check that different sized dask arrays are valid masks.We want to be able to specify the same mask for each sample. We also want to be able to specify a different mask for each sample. This validates that both are possible.
###Code
A = da.random.random([1000,3,3])
mask_same4all = da.random.randint(0,2,[3,3])
mask_same4some = da.random.randint(0,2,[100,3,3])
mask_diff4all = da.random.randint(0,2,[1000,3,3])
correlations.two_point_stats(A, A, mask=mask_same4all)
# The following check fails. Therefore, the current implementation
# only works for one mask for all or different mask for all, which
# is feature rich enough for me.
# correlations.two_point_stats(A, A, mask=mask_same4some)
correlations.two_point_stats(A, A, mask=mask_diff4all)
###Output
_____no_output_____
###Markdown
Sanity check that booleans and integers are valid masksA mask could be true/false values specifying where there is a microstructure. However, it could also be any value in the range $[0,1]$, which specifies the probability that a value is correctly assigned. The mask right now only implements confidence in a single phase, although ideally it should represent the confidence in all phases. However, for the use cases where there are 2 phases, a mask with a probability for one phase also completely describes the confidence in the other phase. Therefore, this implementation is complete for 2 phases.
###Code
mask_int = da.random.randint(0,2,[1000,3,3])
mask_bool = mask_int.copy().astype(bool)
print(mask_int.dtype, mask_bool.dtype)
correlations.two_point_stats(A, A, mask=mask_int)
correlations.two_point_stats(A, A, mask=mask_bool)
###Output
int64 bool
|
F20/deep_learning_cross_validation.ipynb | ###Markdown
Deep Learning Pipeline for Random Cross Validation Before you run the next block, please make sure you download the Waveform Data folder from Rice Box. Note that you only need to download the patient folders which have labelled events associated with them (to save space), which are currently patients: 1, 2, 3, 4, 7, 8, 13, 14, 15, 16, 17, 18, 19, 20, 22. However, make sure to keep each patient within their own folder, and keep all patients together in a Waveform Data folder. Also, make sure to download the Labelled_Events.xlsx file from the GitHub repo. Save both of these to a place where the local version of this notebook has access, and make sure you know the local paths. Please also download ECG_feature_extraction.py, ECG_preprocessing.py, PPG_preprocessing.py, data_generator.py and CNN_models.py. Detailed information about these .py files can be found in the readme file. Make sure you have installed all required packages and that they are up to date, using the requirements.txt file (pip install -r requirements.txt).
###Code
import h5py
import pywt
import numpy as np
import os
import random
from glob import glob
# from sklearn.model_selection import train_test_split
from ECG_feature_extraction import *
from ECG_preprocessing import *
from PPG_preprocessing import *
from os import listdir
import pandas as pd
###Output
_____no_output_____
###Markdown
loading cwt images for later training the deep learning modelThree parameters need to be provided for this section:patient_folder_path: the local path of the folder containing all Waveform Data (the folder containing one folder per patient)excel_file_path: the local path of the Labelled_Events.xlsx filesave_path: the path of the folder where you want to save the "cwt images" (these will be used for modelling).
###Code
def load_event_cwt_images(save_path,patient_folder_path,excel_file_path,excel_sheet_name='PJ',fs=240):
'''
load cwt features
input:
save_path: it is the folder path to save these np.array files
patient_folder_path: it is the folder containing different patients data
excel_file_path: the path for labelled event excel
excel_sheet_name: it is the labelled event that you plan to work with. Basically save the same events into a folder call the same name as the excel_sheet_name
fs: sampling frequncy
output:
no return value
but you can check the saved file based on your save_path
'''
labelevent = pd.read_excel(excel_file_path,sheet_name=excel_sheet_name)
count = 1
# save_path = save_path+excel_sheet_name+'/'
for _,record in labelevent.iterrows():
label_record = record.tolist()
patient_id,event_start_time,event_end_time = label_record
patient_file_path = patient_folder_path+'/'+str(int(patient_id))
for block_file in listdir(patient_file_path):
# trying to find the ecg signal and ppg signal during the label event time
block_path = patient_file_path+'/'+block_file
all_signals = h5py.File(block_path, 'r')
signals_keys = set(all_signals.keys())
block_start_time,block_end_time = all_signals['time'][0],all_signals['time'][-1]
if block_start_time <= event_start_time <= event_end_time <= block_end_time:
start_index = int((event_start_time-block_start_time)*fs)
end_index = int((event_end_time-block_start_time)*fs)
#event_time = all_signals['time'][start_index:end_index +1]
ecg, ppg = None, None
if 'GE_WAVE_ECG_2_ID' in signals_keys:
ecg = all_signals['GE_WAVE_ECG_2_ID'][start_index:end_index +1]
if 'GE_WAVE_SPO2_WAVE_ID' in signals_keys:
ppg = all_signals['GE_WAVE_SPO2_WAVE_ID'][start_index:end_index +1]
# print("loaded ppg: ", ppg)
if ppg is None or ecg is None: continue
# ECG signal preprocessing for denoising and R-peak detection
R_peak_index,ecg_denoise = ecg_preprocessing_final(ecg) # the location of R_peak during the label event
ppg_denoise = PPG_denoising(ppg)
## extract cwt features for ecg signal and ppg signal
ecg_cwt = compute_cwt_features(ecg_denoise,R_peak_index,scales = np.arange(1,129),windowL=-240,windowR=240,wavelet = 'morl')
ppg_cwt = compute_cwt_features(ppg_denoise,R_peak_index,scales = np.arange(1,129),windowL=-240,windowR=240,wavelet = 'coif')
if len(ecg_cwt)!=len(ppg_cwt):
raise Exception("The beat length is not correct!!! Please check!")
if not ecg_cwt or not ppg_cwt: continue
for i in range(len(ecg_cwt)):
combined = np.stack((ecg_cwt[i],ppg_cwt[i]),axis=-1)
np.save(save_path+str(count)+'_'+excel_sheet_name,combined)
# temp = ecg_cwt[i]
# temp = np.reshape(temp,(128,480,1))
# np.save(save_path+str(count)+'_'+excel_sheet_name,temp)
count+=1
return
def load_cwt_files(patient_folder_path,excel_file_path,save_path,label_type= ['PJ','PJRP','PO','PP','PS','PVC']):
'''
Implements function load_event_cwt_images to generate cwt features and then save into a specific folder
Arguments:
patient_folder_path: the path of the folder which stores the patients' waveforms
excel_file_path: the path of the Excel file which contains the labelled events
save_path: the folder path where the cwt features are saved
label_type: a default list containing the label types
Returns:
no return
'''
for label in label_type:
load_event_cwt_images(save_path,patient_folder_path,excel_file_path,excel_sheet_name=label)
############# you should modify this line to change these respective paths based on the instructions ##############################################
load_cwt_files(patient_folder_path='I:/COMP549/data',excel_file_path='I:/COMP549/events/Labelled_Events.xlsx',save_path='I:/COMP549/cwt_features_images_ecg/')
###Output
After detrend before wavelet: [146.67326 167.97446 173.65193 ... 172.30809 172.17563 170.23831]
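###Markdown
For reference only: each saved "cwt image" is essentially a wavelet scalogram of one beat-centred window. The cell below is a generic illustration on a synthetic signal with the same scales/wavelet as above; it is not the implementation inside ECG_feature_extraction.py (whose internals are not shown here), and the names demo_window/demo_coeffs are just placeholders.
###Code
# ILLUSTRATION ONLY: CWT scalogram of a synthetic 480-sample window
# (the real pipeline uses windows centred on detected R-peaks)
demo_window = np.random.randn(480)
demo_coeffs, _ = pywt.cwt(demo_window, np.arange(1, 129), 'morl')
print(demo_coeffs.shape)  # (128, 480): one "cwt image" channel per beat
###Output
_____no_output_____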
###Markdown
set up the libraries for deep learning If any error is generated at this step, please update the required libraries
###Code
import time
import os
#from data_generator import get_train_valid_generator
#from losses import make_loss, dice_coef_clipped, binary_crossentropy, dice_coef, ceneterline_loss
import tensorflow as tf
import time
#import matplotlib.pyplot as plt
# -------------------------- set gpu using tf ---------------------------
# import tensorflow as tf
# import time
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# session = tf.Session(config=config)
# ------------------- start importing keras module ---------------------
from keras.callbacks import (ModelCheckpoint, CSVLogger, TensorBoard, EarlyStopping)
# import tensorflow.keras.backend.tensorflow_backend as K
from keras.optimizers import Adam
from CNN_models import *
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Train the deep learning model Please provide the following information:EPOCHS: number of training epochsBATCH_SIZE: batch size for trainingDATA_DIR: the path of the folder containing the "cwt images"LOG_DIR: the path of the folder where you would like to save the training logVAL_SIZE: the fraction of the data held out as the validation/test set
###Code
##############################Please modify this part ###############################################
EPOCHS = 20
BATCH_SIZE = 16#8
DATA_DIR = 'I:/COMP549/cwt_features_images_ecg' #I:/COMP549/cwt_features_images'
LOG_DIR = "./log"
VAL_SIZE = 0.15
#########################################################################################################
def summarize_diagnostics(history):
# you could use this function to plot the result
fig, ax = plt.subplots(1,2, figsize=(20, 10))
# plot loss
ax[0].set_title('Loss Curves', fontsize=20)
ax[0].plot(history.history['loss'], label='train')
ax[0].plot(history.history['val_loss'], label='test')
ax[0].set_xlabel('Epochs', fontsize=15)
ax[0].set_ylabel('Loss', fontsize=15)
ax[0].legend(fontsize=15)
# plot accuracy
ax[1].set_title('Classification Accuracy', fontsize=20)
ax[1].plot(history.history['accuracy'], label='train')
ax[1].plot(history.history['val_accuracy'], label='test')
ax[1].set_xlabel('Epochs', fontsize=15)
ax[1].set_ylabel('Accuracy', fontsize=15)
ax[1].legend(fontsize=15)
def train():
model = twoLayerCNN(input_size=(32,120,2))
#model = VGG(input_shape=(128,480,2))
model.summary()
# model.load_weights(pre_model_path)
# model.compile(optimizer=Adam(lr=3e-4), loss=make_loss('bce_dice'),
# metrics=[dice_coef, binary_crossentropy, ceneterline_loss, dice_coef_clipped])
model.compile(loss=tf.keras.losses.categorical_crossentropy,
optimizer= Adam(lr=3e-5),
metrics=['accuracy'])
print("got twolayerCNN")
model_name = 'twolayerCNN_ecg-{}'.format(int(time.time()))
if not os.path.exists("./results/"):
os.mkdir('./results')
if not os.path.exists("./weights/"):
os.mkdir('./weights')
save_model_weights = "./weights/ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.hdf5"
print('Fitting model...')
start_time = time.time()
tensorboard = TensorBoard(log_dir = LOG_DIR, write_images=True)
earlystop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',patience=3, verbose=1, mode='min')
checkpoint = tf.keras.callbacks.ModelCheckpoint(save_model_weights,
monitor="val_loss",
mode = "min",
verbose=1,
save_best_only=True,
save_weights_only=True)
csv_logger = CSVLogger('./results/{}_train.log'.format(model_name))
train_gen, valid_gen, num_train, num_valid = get_train_valid_generator(data_dir=DATA_DIR,batch_size=BATCH_SIZE,val_size = VAL_SIZE)
history = model.fit(x = train_gen,
validation_data=valid_gen,
epochs=EPOCHS,
steps_per_epoch=(num_train+BATCH_SIZE-1)//BATCH_SIZE,
validation_steps=(num_valid+BATCH_SIZE-1)//BATCH_SIZE,
callbacks=[earlystop, checkpoint, tensorboard, csv_logger])
end_time = time.time()
print("Training time(h):", (end_time - start_time) / 3600)
summarize_diagnostics(history)
if __name__ == "__main__":
train()
###Output
conv1 shape : (None, 32, 120, 32)
conv2 shape: (None, 16, 60, 64)
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 32, 120, 2)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 32, 120, 32) 608
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 60, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 16, 60, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 16, 60, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 30, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 8, 30, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 15360) 0
_________________________________________________________________
dense (Dense) (None, 2) 30722
=================================================================
Total params: 49,826
Trainable params: 49,826
Non-trainable params: 0
_________________________________________________________________
got twolayerCNN
Fitting model...
Epoch 1/20
1/5947 [..............................] - ETA: 0s - loss: 0.6836 - accuracy: 0.5000WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
2/5947 [..............................] - ETA: 1:03:53 - loss: 0.6557 - accuracy: 0.5938WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.5071s vs `on_train_batch_end` time: 0.7835s). Check your callbacks.
15/5947 [..............................] - ETA: 52:08 - loss: 0.5923 - accuracy: 0.7542 |
notebooks/tpore_survival_analysis_same_membrane.ipynb | ###Markdown
Load data
###Code
df = pd.read_csv(f"{processed_data_dir}data.csv").drop("Unnamed: 0", axis=1)
df.Replica = df.membrane
df.Replica = df.Replica.astype("category")
df["Replica_enc"] = df.Replica.cat.codes
category_dic = {i: cat for i, cat in enumerate(np.unique(df["Replica"]))}
category_dic
n_categories = len(category_dic)
dummies = pd.get_dummies(df.Replica, prefix="Replica")
for col in dummies.columns:
df[col] = dummies[col]
df.tpore = df.tpore * 10
df.tpore = df.tpore.astype(int)
df.head()
###Output
_____no_output_____
###Markdown
Visualize Data
###Code
df["tpore"].groupby(df["Replica"]).describe()
_ = df["tpore"].hist(by=df["Replica"], sharex=True, density=True, bins=10)
_ = df["tpore"].hist(bins=50)
###Output
_____no_output_____
###Markdown
Visualize Priors These are the shapes of the priors used.
###Code
beta = 1
alpha = 5
d = st.gamma(scale=1 / beta, a=alpha)
x = np.linspace(0, 10, 100)
tau_0_pdf = d.pdf(x)
plt.plot(x, tau_0_pdf, "k-", lw=2)
plt.xlabel("lambda0(t)")
###Output
_____no_output_____
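###Markdown
For completeness, the `beta` coefficients in the model below use a wide Normal(0, 100) prior; its shape can be visualized in the same way. This cell is only an optional illustration.
###Code
# Optional: shape of the wide Normal(0, 100) prior used for the beta coefficients below
d_beta = st.norm(loc=0, scale=100)
x_beta = np.linspace(-300, 300, 200)
plt.plot(x_beta, d_beta.pdf(x_beta), "k-", lw=2)
plt.xlabel("beta")
###Output
_____no_output_____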
###Markdown
Prepare data
###Code
n_sims = df.shape[0]
sims = np.arange(n_sims)
interval_length = 15 # 1.5 ns
interval_bounds = np.arange(0, df.tpore.max() + interval_length + 1, interval_length)
n_intervals = interval_bounds.size - 1
intervals = np.arange(n_intervals)
last_period = np.floor((df.tpore - 0.01) / interval_length).astype(int)
pore = np.zeros((n_sims, n_intervals))
pore[sims, last_period] = np.ones(n_sims)
exposure = (
np.greater_equal.outer(df.tpore.values, interval_bounds[:-1]) * interval_length
)
exposure[sims, last_period] = df.tpore - interval_bounds[last_period]
###Output
_____no_output_____
###Markdown
Run Model
###Code
with pm.Model() as model:
lambda0 = pm.Gamma("lambda0", 5, 1, shape=n_intervals)
beta = pm.Normal("beta", 0, sigma=100, shape=(n_categories))
lambda_ = pm.Deterministic(
"lambda_", T.outer(T.exp(T.dot(beta, dummies.T)), lambda0)
)
mu = pm.Deterministic("mu", exposure * lambda_)
exp_beta = pm.Deterministic("exp_beta", np.exp(beta))
obs = pm.Poisson(
"obs",
mu,
observed=pore,
)
pm.model_to_graphviz(model)
%%time
if infer:
with model:
trace = pm.sample(1000, tune=1000, random_seed=RANDOM_SEED, return_inferencedata=True, cores=8)
else:
trace=load_trace(model_path, url_data)
if infer:
trace.posterior = trace.posterior.reset_index(
["beta_dim_0", "exp_beta_dim_0", "lambda0_dim_0"], drop=True
)
trace = trace.rename(
{
"lambda0_dim_0": "t",
"beta_dim_0": "Membrane",
"exp_beta_dim_0": "Membrane",
}
)
trace = trace.assign_coords(
t=interval_bounds[:-1] / 10,
Membrane=list(category_dic.values()),
)
trace
###Output
_____no_output_____
###Markdown
Convergences
###Code
with az.rc_context(rc={"plot.max_subplots": None}):
az.plot_trace(trace, var_names=["beta", "lambda0"])
with az.rc_context(rc={"plot.max_subplots": None}):
az.plot_autocorr(trace, combined=True, var_names=["lambda0", "beta"])
def get_survival_function(trace):
l = []
for interval in range(n_intervals - 1):
l.append(
np.trapz(
trace.values[:, :, :, 0 : interval + 1],
axis=3,
dx=interval_length,
)
)
l = np.exp(-np.array(l))
return l
def get_ecdf(data):
x = np.sort(data)
n = x.size
y = np.arange(1, n + 1) / n
return x, y
def get_hdi(x, axis, alpha=0.06):
x_mean = np.nanmedian(x, axis=axis)
percentiles = 100 * np.array([alpha / 2.0, 1.0 - alpha / 2.0])
hdi = np.nanpercentile(x, percentiles, axis=axis)
return x_mean, hdi
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
survival_function = get_survival_function(trace.posterior.lambda_.astype(np.float16))
# Empirical CDF of the data
ax.plot(*get_ecdf(df.tpore / 10), label="obs.")
# Empirical CDF of the binned data
binned_data = np.where(pore[:, :] == 1)[1] * interval_length / 10
ax.plot(*get_ecdf(binned_data), label="obs. binned")
# Plot Posterior Predictive
hdi = get_hdi(survival_function[:, :, :, :], axis=(1, 2, 3))
x = np.arange(n_intervals - 1) * interval_length / 10.0
ax.plot(x, 1 - hdi[0], label="Posterior Predictive Check")
ax.fill_between(x, 1 - hdi[1][0, :], 1 - hdi[1][1, :], alpha=0.1, color="g")
ax.set_xlabel("t-pore (ns)")
ax.set_ylabel("CDF(t-pore)")
ax.set_title("Posterior Predictive Check")
ax.legend()
n_categories = len(category_dic)
n_rows = ceil(n_categories / 4)
fig, ax = plt.subplots(n_rows, 4, figsize=(6 * 4, 4 * n_rows))
ax = ax.flatten()  # flatten the subplot grid so it can be indexed with a single counter
for i in range(n_categories):
# Mask by replica type
mask = df.Replica == category_dic[i]
survival_function = get_survival_function(trace.posterior.lambda_[:, :, mask, :].astype(np.float16))
# Empirical CDF of the data
ax[i].plot(*get_ecdf(df[mask].tpore / 10), label="obs.")
# Empirical CDF of the binned data
binned_data = np.where(pore[mask, :] == 1)[1] * interval_length / 10
ax[i].plot(*get_ecdf(binned_data), label="obs. binned")
# Plot Posterior Predictive
hdi = get_hdi(survival_function[:, :, :, :], axis=(1, 2, 3))
x = np.arange(n_intervals - 1) * interval_length / 10.0
ax[i].plot(x, 1 - hdi[0], label="Posterior Predictive Check")
ax[i].fill_between(x, 1 - hdi[1][0, :], 1 - hdi[1][1, :], alpha=0.1, color="g")
ax[i].set_xlabel("t-pore (ns)")
ax[i].set_ylabel("CDF(t-pore)")
ax[i].set_title(f"Posterior Predictive Check {category_dic[i]}")
ax[i].legend()
###Output
_____no_output_____
###Markdown
Analyze Plot posterior
###Code
variable = "lambda0"
ax = az.plot_forest(trace, var_names=variable, combined=True)
ax[0].set_xlabel("lambda0[t]")
variable = "beta"
ax = az.plot_forest(trace, var_names=variable, combined=True)
ax[0].set_xlabel("beta")
variable = "exp_beta"
ax = az.plot_forest(trace, var_names=variable, combined=True)
ax[0].set_xlabel("exp(beta)")
hdi = az.hdi(trace.posterior, var_names=["exp_beta"])
for i in range(n_categories):
print(f"{category_dic[i]} {hdi.exp_beta[i,:].values.mean()}")
fig, ax = plt.subplots(1, 2, figsize=(20, 7))
lambda0 = trace.posterior.lambda0.values
beta = trace.posterior.beta.values
y, hdi = get_hdi(lambda0, (0, 1))
x = interval_bounds[:-1] / 10
ax[0].fill_between(x, hdi[0], hdi[1], alpha=0.25, step="pre", color="grey")
ax[0].step(x, y, label="baseline", color="grey")
for i in range(n_categories):
lam = np.exp(beta[:, :, [i]]) * lambda0
y, hdi = get_hdi(lam, (0, 1))
ax[1].fill_between(x, hdi[0], hdi[1], alpha=0.25, step="pre")
ax[1].step(x, y, label=f"{category_dic[i]}")
ax[0].legend(loc="best")
ax[0].set_ylabel("lambda0")
ax[0].set_xlabel("t (ns)")
ax[1].legend(loc="best")
ax[1].set_ylabel("lambda_i")
ax[1].set_xlabel("t (ns)")
###Output
_____no_output_____
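###Markdown
Since the model is a proportional-hazards construction (lambda_i(t) = exp(beta_i) * lambda0(t)), exp(beta_i - beta_j) can be read as the hazard ratio between two membranes. The sketch below is optional; the indices 0 and 1 are only an example, and any pair from `category_dic` can be used.
###Code
# Optional sketch: posterior hazard ratio between two membranes, exp(beta_i - beta_j)
beta_samples = trace.posterior.beta.stack(sample=("chain", "draw")).values  # (n_categories, n_samples)
hr_01 = np.exp(beta_samples[0] - beta_samples[1])
print(
    f"{category_dic[0]} vs {category_dic[1]}: "
    f"median HR = {np.median(hr_01):.2f}, "
    f"94% interval = {np.percentile(hr_01, [3, 97]).round(2)}"
)
###Output
_____no_output_____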
###Markdown
Save Model?
###Code
print(model_path)
if save_data:
remove(model_path)
trace.to_netcdf(model_path)
###Output
Didn't remove anything
|
notebook/04_transfer_learning_CNN.ipynb | ###Markdown
6.0 Transfer Learning with Pre-trained Model (CNN)
###Code
import pretrainedmodels
# https://github.com/Cadene/pretrained-models.pytorch
import torch
import torch.nn as nn
from torch.utils.data import Dataset
# Start with a lightweight model for experimentation
# We choose Resnet34 for our initial transfer learning
model_name = 'resnet34'
backbone = pretrainedmodels.__dict__[model_name](pretrained='imagenet')
backbone
# With torch.nn, we can access each of the layers in the pre-trained model
backbone.layer4
###Output
_____no_output_____
###Markdown
1.0 We need to convert the first convolution to accept black/gray input (1 channel) instead of the original ImageNet color input (3 channels)
###Code
# original
backbone.conv1
# changed
backbone.conv1 = nn.Conv2d(1,64,7,2,3, bias = False)
###Output
_____no_output_____
###Markdown
2.0 Convert the last layer's out_features to cover the 3 different outputs of this competition (i.e. it has 186 classes in total)
###Code
in_features = backbone.last_linear.in_features
in_features
backbone.last_linear = nn.Linear(in_features, 186)
# check it has changed to 186 output features
backbone.last_linear
###Output
_____no_output_____
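###Markdown
The 186 outputs cover the 3 different targets of this competition, so the logits will eventually be split into three groups. The cell below is only an illustration: the [168, 11, 7] split is an assumed example (not a value taken from this notebook) and should be replaced with the real class counts of the three outputs.
###Code
# ILLUSTRATION ONLY: split the 186 logits into the 3 target groups
# NOTE: the sizes [168, 11, 7] are an assumed example, adjust to the real class counts
fake_logits = torch.rand(4, 186)  # fake batch of model outputs
group_1, group_2, group_3 = torch.split(fake_logits, [168, 11, 7], dim=1)
group_1.shape, group_2.shape, group_3.shape
###Output
_____no_output_____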
###Markdown
3.0 Test out the customized Pre-trained model
###Code
batches = torch.rand(6,1,137,236)
batches.shape
outputs = backbone(batches)
outputs.shape
# logits
outputs
outputs.max()
outputs.min()
###Output
_____no_output_____ |
003_synthetic_features_and_outliers.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/AmoDinho/Machine-Learning-Crash-with-TF/blob/master/synthetic_features_and_outliers.ipynb) Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Synthetic Features and Outliers **Learning Objectives:** * Create a synthetic feature that is the ratio of two other features * Use this new feature as an input to a linear regression model * Improve the effectiveness of the model by identifying and clipping (removing) outliers out of the input data Let's revisit our model from the previous First Steps with TensorFlow exercise. First, we'll import the California housing data into a *pandas* `DataFrame`: Setup
###Code
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.metrics as metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe
###Output
_____no_output_____
###Markdown
Next, we'll set up our input function, and define the function for model training:
###Code
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(learning_rate, steps, batch_size, input_feature):
"""Trains a linear regression model.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
Returns:
A Pandas `DataFrame` containing targets and the corresponding predictions done
after training the model.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = california_housing_dataframe[[my_feature]].astype('float32')
my_label = "median_house_value"
targets = california_housing_dataframe[my_label].astype('float32')
# Create input functions.
training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = california_housing_dataframe.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print "Training model..."
print "RMSE (on training data):"
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
print " period %02d : %0.2f" % (period, root_mean_squared_error)
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
print "Model training finished."
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Create a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
print "Final RMSE (on training data): %0.2f" % root_mean_squared_error
return calibration_data
###Output
_____no_output_____
###Markdown
Task 1: Try a Synthetic FeatureBoth the `total_rooms` and `population` features count totals for a given city block.But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of `total_rooms` and `population`.In the cell below, create a feature called `rooms_per_person`, and use that as the `input_feature` to `train_model()`.What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lowerthe final RMSE should be.) **NOTE**: You may find it helpful to add a few code cells below so you can try out several different learning rates and compare the results. To add a new code cell, hover your cursor directly below the center of this cell, and click **CODE**.
###Code
#
# YOUR CODE HERE
#
california_housing_dataframe["rooms_per_person"] = california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"]
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person"
)
###Output
Training model...
RMSE (on training data):
period 00 : 212.73
period 01 : 190.37
period 02 : 169.58
period 03 : 154.51
period 04 : 141.20
period 05 : 133.88
period 06 : 131.58
period 07 : 130.85
period 08 : 131.73
period 09 : 133.20
Model training finished.
###Markdown
SolutionClick below for a solution.
###Code
california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person")
###Output
_____no_output_____
###Markdown
Task 2: Identify OutliersWe can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line.Use Pyplot's [`scatter()`](https://matplotlib.org/gallery/shapes_and_collections/scatter.html) to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1.Do you see any oddities? Trace these back to the source data by looking at the distribution of values in `rooms_per_person`.
###Code
# YOUR CODE HERE
plt.figure(figsize=(15,6))
plt.subplot(1,2,1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
###Output
_____no_output_____
###Markdown
Task 3: Clip OutliersSee if you can further improve the model fit by setting the outlier values of `rooms_per_person` to some reasonable minimum or maximum.For reference, here's a quick example of how to apply a function to a Pandas `Series`: clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))The above `clipped_feature` will have no values less than `0`.
###Code
#First clip the feature
california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5))
_ = california_housing_dataframe["rooms_per_person"].hist()
##Verify Clip
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person"
)
#Plot the new model
_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
###Output
_____no_output_____ |
asi_challenge.ipynb | ###Markdown
CLAUDIO SCALZO USER asi17 ASI Challenge Exercise Naive Bayes Classification and Bayesian Linear Regression on the Fashion-MNIST and CIFAR-10 datasets DESCRIPTIONThis notebook presents the "from-scratch" implementations of the Naive Bayes Classification and the Bayesian Linear Regression, applied to the Fashion-MNIST and CIFAR-10 datasets.INSTRUCTIONS TO RUN THE NOTEBOOKTo be able to run the notebook the only thing to ensure is that the datasets are in the correct directories. The following structure is the correct one:- asi_challenge_claudio_scalzo.ipynb- datasets/ - Fashion-MNIST/ - fashion-mnist_train.csv - fashion-mnist_test.csv - CIFAR-10/ - data_batch_1 - data_batch_2 - data_batch_3 - data_batch_4 - data_batch_5 - test_batchCOLORSFor the sake of readability, the notebook will follow a color convention: All the cells related to the Fashion-MNIST dataset will be in green and labeled with: FASHION-MNIST All the cells related to the CIFAR-10 dataset will be in yellow and labeled with: CIFAR-10 All the blue cells are generic comments and the answers to the exercise questions are marked with: ANSWER or TASKSECTIONSThe sections numbering will follow exactly the one provided in the requirements PDF.
###Code
### LIBRARIES IMPORT
# Data structures
import numpy as np
import pandas as pd
from numpy.linalg import inv, solve
# Plot
import seaborn as sns
import matplotlib.pyplot as plt
# Utilities
from time import time
import pickle
# SciPy, scikit-learn
from sklearn.metrics import mean_squared_error, log_loss, confusion_matrix
from scipy.stats import t
# Warnings
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
1. Datasets loading TASK1. Download the Fashion-MNIST and CIFAR-10 datasets and import them.The first step consists in importing the datasets. This process will be split in two parts, one for the Fashion-MNIST dataset and another one for the CIFAR-10 dataset. While in the first case it will be very easy (the dataset being saved in csv files), in the second case the process will be longer, because the CIFAR datasets are saved in binary files. FASHION-MNIST Let's define the datasets' location and load them into two Pandas DataFrames: mnistTrain and mnistTest.
###Code
# DIRECTORY AND CONSTANTS DEFINITION
mnistPath = "./datasets/Fashion-MNIST/"
height = 28
width = 28
# FILEPATHS DEFINITION
mnistTrainFile = mnistPath + "fashion-mnist_train.csv"
mnistTestFile = mnistPath + "fashion-mnist_test.csv"
# LOAD THE MNIST AND CIFAR TRAINSET AND DATASET
mnistTrain = pd.read_csv(mnistTrainFile)
mnistTest = pd.read_csv(mnistTestFile)
###Output
_____no_output_____
###Markdown
Now we can show some examples of the loaded data:
###Code
# SHOW SOME SAMPLES
plt.figure(figsize=(15,10))
for i in range(6):
plt.subplot(1,6,i+1)
image = mnistTrain.drop(columns=["label"]).loc[i].values.reshape((height, width))
plt.imshow(image, cmap="gray")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
CIFAR-10 First of all, we have to declare the path of the CIFAR-10 datasets and some useful values:
###Code
# DIRECTORY AND CONSTANTS DEFINITION
cifarPath = "./datasets/CIFAR-10/"
trainfiles = 5
height = 32
width = 32
channels = 3
pixels = height * width * channels
chpix = height * width
###Output
_____no_output_____
###Markdown
Now, let's define a function to load a single binary file which contains a certain number of images:
###Code
# FUNCTION TO LOAD A SINGLE TRAINFILE
def loadImages(filename):
# Load binary file
file = open(filename, "rb")
# Unpickle
data = pickle.load(file, encoding="bytes")
# Get raw images and raw classes
rawImages = data[b'data']
rawClasses = data[b'labels']
return np.array(rawImages, dtype=int), np.array(rawClasses, dtype=int)
###Output
_____no_output_____
###Markdown
Now it's time to use the previous function to load all five trainfiles in our directory: they will be merged into a single Pandas DataFrame named cifarTrain.
###Code
# ALLOCATE AN EMPTY ARRAY (width of number of pixels + one for the class label)
images = np.empty(shape=(0, pixels + 1), dtype=int)
# LOAD ALL THE TRAINFILES
for i in range(trainfiles):
# Load the images and classes for the "i"th trainfile
newImages, newClasses = loadImages(filename = cifarPath + "data_batch_" + str(i + 1))
# Create the new batch (concatenating images and classes)
newBatch = np.concatenate((np.asmatrix(newClasses).T, newImages), axis=1)
# Concatenate the new batch with the previous ones
images = np.concatenate((images, newBatch), axis=0)
# CREATE THE TRAIN DATAFRAME
attributes = [("pixel" + str(i) + "_" + str(c)) for c in ["r", "g", "b"] for i in range(height * width)]
cifarTrain = pd.DataFrame(images, columns = ["label"] + attributes)
###Output
_____no_output_____
###Markdown
The cifarTrain DataFrame has been created; now let's do the same for the file containing the testset: also in this case, it will be saved in a DataFrame, cifarTest.
###Code
# LOAD THE IMAGES AND CLASSES
newImages, newClasses = loadImages(filename = cifarPath + "test_batch")
# CREATE THE IMAGES ARRAY (concatenating images and classes)
images = np.concatenate((np.asmatrix(newClasses).T, newImages), axis=1)
# CREATE THE TEST DATAFRAME
attributes = [("pixel" + str(i) + "_" + str(c)) for i in range(height * width) for c in ["r", "g", "b"]]
cifarTest = pd.DataFrame(images, columns = ["label"] + attributes)
###Output
_____no_output_____
###Markdown
Now we can show some example of the loaded data:
###Code
# SHOW SOME SAMPLES
plt.figure(figsize=(15,10))
for i in range(0,6):
plt.subplot(1,6,i+1)
imageR = cifarTrain.iloc[i, 1 : chpix+1].values.reshape((height,width))
imageG = cifarTrain.iloc[i, chpix+1 : 2*chpix+1].values.reshape((height,width))
imageB = cifarTrain.iloc[i, 2*chpix+1 : 3*chpix+1].values.reshape((height,width))
image = np.dstack((imageR, imageG, imageB))
plt.imshow(image)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Everything is loaded! We can start analyzing our data. 2. Descriptive statistics 2.1 Data description The first step is to investigate the data. Some very simple statistics are shown: they are useful to introduce and understand the data. FASHION-MNIST
###Code
# PRINT TO DESCRIBE THE TRAIN AND THE TEST
print("[TRAINSET]")
print("Number of rows:", mnistTrain.shape[0])
print("Attributes:", mnistTrain.drop(columns=['label']).shape[1], "(without considering the label)")
print("\n[TESTSET]")
print("Number of rows:", mnistTest.shape[0])
print("Attributes:", mnistTest.drop(columns=['label']).shape[1], "(without considering the label)")
print("\nExample:")
display(mnistTrain.head(5))
###Output
[TRAINSET]
Number of rows: 60000
Attributes: 784 (without considering the label)
[TESTSET]
Number of rows: 10000
Attributes: 784 (without considering the label)
Example:
###Markdown
The number of rows is 60000, while the number of columns is 785 (784 attributes + 1 label). But what do they mean? Each row represents a picture. Each column represents a pixel (784 = 28x28). So, the value of a row "r" in a given column "c" represents the brightness (from 0 to 255) of pixel "c" in picture "r".In the testset we find the same situation but with a smaller row dimension: 10000. The number of columns is, of course, the same: 785 (784 attributes + 1 label). CIFAR-10
###Code
# PRINT TO DESCRIBE THE TRAIN
print("[TRAINSET]")
print("Number of rows:", cifarTrain.shape[0])
print("Attributes:", cifarTrain.drop(columns=['label']).shape[1], "(without considering the label)")
print("\n[TESTSET]")
print("Number of rows:", cifarTest.shape[0])
print("Attributes:", cifarTest.drop(columns=['label']).shape[1], "(without considering the label)")
print("\nExample:")
display(cifarTrain.head(5))
###Output
[TRAINSET]
Number of rows: 50000
Attributes: 3072 (without considering the label)
[TESTSET]
Number of rows: 10000
Attributes: 3072 (without considering the label)
Example:
###Markdown
The number of rows is 50000, because we merged 5 files of 10000 rows (images) each. The number of columns is instead 3073 (3072 attributes + the label): why this number? Because each picture is 32x32 pixels with 3 channels (RGB), so each picture has 3072 pixel values.The number of rows in the testset is smaller: 10000. 2.2 Data distribution analysis Now it is time to analyze the distribution of our data. In this section I'm going to analyze the distribution in the trainset, which will be useful to train the model. FASHION-MNIST CIFAR-10
###Code
# TAKE DISTRIBUTION
mnistDistribution = mnistTrain["label"].value_counts()
cifarDistribution = cifarTrain["label"].value_counts()
# TAKE CLASSES AND FREQUENCIES
mnistClasses = np.array(mnistDistribution.index)
mnistFrequencies = np.array(mnistDistribution.values)
cifarClasses = np.array(cifarDistribution.index)
cifarFrequencies = np.array(cifarDistribution.values)
# PLOT THE DISTRIBUTION OF THE TARGET VARIABLE
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.bar(mnistClasses, mnistFrequencies, align="center", color="green")
plt.xticks(list(range(np.min(mnistClasses), np.max(mnistClasses)+1)))
plt.xlabel("Class")
plt.ylabel("Count")
plt.title("[Fashion-MNIST]", weight="semibold");
plt.subplot(1,2,2)
plt.bar(cifarClasses, cifarFrequencies, align="center", color="orange")
plt.xticks(list(range(np.min(mnistClasses), np.max(mnistClasses)+1)))
plt.xlabel("Class")
plt.ylabel("Count")
plt.title("[CIFAR-10]", weight="semibold");
plt.suptitle("Distribution of the label in the trainset", fontsize=16, weight="bold")
plt.show()
###Output
_____no_output_____
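###Markdown
As a quick numerical check of the balance shown above, the class counts can be turned into empirical prior probabilities (these are also the priors $P(t=k)$ that the Naive Bayes Classifier will rely on later):
###Code
# QUICK CHECK: EMPIRICAL CLASS PRIORS (CLASS FREQUENCY / DATASET SIZE)
mnistPriors = mnistFrequencies / mnistFrequencies.sum()
cifarPriors = cifarFrequencies / cifarFrequencies.sum()
print("Fashion-MNIST priors:", np.round(mnistPriors, 3))
print("CIFAR-10 priors:     ", np.round(cifarPriors, 3))
###Output
_____no_output_____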
###Markdown
QUESTIONComment on the distribution of class labels and the dimensionality of the input and how these may affect the analysis.ANSWER- The dimensionalityFirst of all, the dimensionality is very high. As previously said, each column represents a pixel of the image! So, even a very small picture has a lot of features. A dimensionality this big (784 attributes for Fashion-MNIST and 3072 attributes for CIFAR-10) can usually represent an issue, generally known as the "curse of dimensionality" (source).However, the Naive Bayes classifier is usually well suited to high-dimensional datasets: thanks to its simplicity and to its naive independence assumption, it can still perform well when the data dimensionality is very high.In our case, the high dimensionality is an issue especially for the regressor. The Bayesian Linear Regression algorithm has to find the weights (and hence the regression line) over a very large set of dimensions, which is harder and computationally heavier because of the large matrices involved in the products.- The distributionThe distribution is uniform: each class has the same number of images in the dataset. We'll use this fact to compute the prior probabilities in the Naive Bayes Classifier: since the prior is the same for each class, the model will not be biased towards some classes, because the posterior computation will be equally influenced by this factor for every class. Before starting the new section, let's define some functions to graphically plot the confusion matrix, the error plot and the scatter plot. These functions will be useful to show the classifier and the regressor performance on the two datasets.
###Code
# FUNCTION TO PLOT THE REQUIRED CONFUSION MATRICES
def plotConfusionMatrix(cm1, cm2, classes1, classes2):
def plotCM(cm, classes, cmap, title):
sns.heatmap(cm, cmap=cmap, annot=True, fmt="d", cbar=False)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.title(title)
plt.figure(figsize=(16,7))
plt.subplot(1,2,1)
plotCM(cm1, classes1, "Greens", "[Fashion-MNIST]")
plt.subplot(1,2,2)
plotCM(cm2, classes2, "Oranges", "[CIFAR-10]")
plt.subplots_adjust(wspace=0.4)
plt.show()
print()
# FUNCTION TO PLOT THE REQUIRED SCATTER PLOTS
def plotScatterPlot(raw1, raw2, corr1, corr2):
def plotSP(raw, corr, color, title):
plt.title(title)
plt.xticks(np.arange(-2,12))
plt.yticks(np.arange(0,10))
plt.ylabel('True label')
plt.xlabel('Predicted continuous label value')
plt.grid(linestyle=':')
plt.scatter(raw, corr, color=color)
plt.figure(figsize=(15,8))
plt.subplot(1,2,1)
plotSP(raw1, corr1, "green", "[Fashion-MNIST]")
plt.subplot(1,2,2)
plotSP(raw2, corr2, "orange", "[CIFAR-10]")
plt.suptitle("Scatter plot of true raw predictions versus predicted ones", weight="semibold", fontsize=14)
plt.show()
print()
# FUNCTION TO PLOT THE REQUIRED ERROR PLOTS
def plotErrorPlot(pred1, pred2, var1, var2):
def plotEP(pred, var, correct, color, title):
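        # error bars: Student-t 99.7th-percentile quantile times the predictive standard deviation (roughly a 3-sigma band)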
plt.errorbar(np.arange(0,30), pred[:30], yerr=t.ppf(0.997, len(pred)-1)*np.sqrt(var[:30]), ls="None",
color=color, marker=".", markerfacecolor="black")
# plt.scatter(np.arange(0,30), correct[:30], c="blue", alpha=0.2, linewidths=0.1)
# plt.legend(["True classes", "Predictions (with error)"], loc=2)
plt.yticks(np.arange(-1,13,1))
plt.ylabel('Predictive variance')
plt.xlabel('Sample of dataset')
plt.grid(linestyle=':')
plt.title(title)
plt.figure(figsize=(15,8))
plt.subplot(1,2,1)
plotEP(pred1, var1, mnistCorrect, "green", "[Fashion-MNIST]")
plt.subplot(1,2,2)
plotEP(pred2, var2, cifarCorrect, "orange", "[CIFAR-10]")
plt.suptitle("Predicted variances on a subset of the predicted data", weight="semibold", fontsize=14)
plt.show()
print()
###Output
_____no_output_____
###Markdown
Moreover, to facilitate each model's work, we can normalize the values of our datasets (except for the class label) by dividing each value by 255. Let's do it:
###Code
def normalize(dataset):
return dataset.apply(lambda col: col.divide(255) if(col.name != "label") else col)
# NORMALIZE MNIST
mnistTrainNorm = normalize(mnistTrain)
mnistTestNorm = normalize(mnistTest)
# NORMALIZE CIFAR
cifarTrainNorm = normalize(cifarTrain)
cifarTestNorm = normalize(cifarTest)
# PRINT AN EXAMPLE
print("Example of the normalized MNIST trainset:")
display(mnistTrainNorm.head(5))
# BACKUP NON-NORMALIZED
mnistTrainFull = mnistTrain
mnistTestFull = mnistTest
cifarTrainFull = cifarTrain
cifarTestFull = cifarTest
# SPLIT THE DATASETS IN 'X' AND 'y'
# Fashion-MNIST
mnistTrain = mnistTrainNorm.drop(columns=['label']).values
mnistTarget = mnistTrainNorm['label'].values
mnistTest = mnistTestNorm.drop(columns=['label']).values
mnistCorrect = mnistTestNorm['label'].values
# CIFAR-10
cifarTrain = cifarTrainNorm.drop(columns=['label']).values
cifarTarget = cifarTrainNorm['label'].values
cifarTest = cifarTestNorm.drop(columns=['label']).values
cifarCorrect = cifarTestNorm['label'].values
###Output
_____no_output_____
###Markdown
Now we're ready to start the classification.
3. Classification
TASK
a) Implement the Naive Bayes Classifier.
The Naive Bayes Classifier is probably the most basic and simple algorithm in the family of probabilistic classifiers. It is rooted in Bayes' theorem, in its "naive" version, which assumes that all the features are independent. This assumption has two main consequences: on the one hand it heavily simplifies the computation, on the other hand it is often too "naive", because most of the time the features are not really independent.
$$P(t_{new}=k \mid \mathbf{X}, \mathbf{t}, \mathbf{x_{new}}) = \dfrac{p(\mathbf{x_{new}} \mid t_{new}=k, \mathbf{X}, \mathbf{t}) \space P(t_{new}=k)} {\sum_{j=0}^{K-1} p(\mathbf{x_{new}} \mid t_{new}=j, \mathbf{X}, \mathbf{t}) \space P(t_{new}=j) }$$
The prior probability, $P(t_{new}=k)$, is computed as the relative frequency of each class (in this case, the same for every class, given the label distribution).
The likelihood is modelled feature by feature as a Gaussian:
$$p(x_d \mid t=k, \mathbf{X}, \mathbf{t}) = \mathcal{N}(\mu_{kd}, \sigma_{kd}^2)$$
where $\mu_{kd}$ and $\sigma_{kd}$ are, respectively, the mean and the standard deviation of feature $d$ within class $k$.
Since we are only interested in the class with the maximum posterior for each image, we can work with the log-likelihood, which avoids numerical underflow. Moreover, the denominator is just a normalization constant that does not affect the max-search, so we can drop it. The quantity that is actually computed is therefore:
$$\log P(t_{new}=k \mid \mathbf{X}, \mathbf{t}, \mathbf{x_{new}}) = \log p(\mathbf{x_{new}} \mid t_{new}=k, \mathbf{X}, \mathbf{t}) + \log P(t_{new}=k) + \text{const}$$
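For a single feature $d$, the Gaussian log-density expands to
$$\log \mathcal{N}(x_d \mid \mu_{kd}, \sigma_{kd}^2) = -\log\sigma_{kd} - \tfrac{1}{2}\log 2\pi - \frac{(x_d-\mu_{kd})^2}{2\sigma_{kd}^2},$$
and the naive independence assumption lets us simply sum this quantity over all the features: this is exactly what the `_logLikelihood` method in the implementation below computes.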
###Code
class NaiveBayesClassifier:
# ----- PRIVATE METHODS ------------------------------------------------- #
# MEANS AND VARIANCES FOR THE LIKELIHOOD: P(X|C)
def _computeMeansStds(self, train, target):
# Temp DataFrame
pdf = pd.DataFrame(train)
pdf['label']= target
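        # small constant added to the per-class standard deviations below, so that constant (always-background) pixels do not cause a division by zero in the log-likelihood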
smoothing = 1e-5
# Compute means and variances. For example:
# <means> | attr0 | attr1 | ... # <stds> | attr0 | attr1 | ...
# -------------------------- # --------------------------
# class0 | 12 | 3 | ... # class0 | 0.2 | 0.03 | ...
# class1 | 8 | 0 | ... # class1 | 0.07 | 0.1 | ...
# ... | ... | ... | ... # ... | ... | ... | ...
self.means = pdf.groupby("label").mean().values
self.stds = pdf.groupby("label").std().values + smoothing
# PRIORS: P(C)
def _computePriors(self, target):
# Compute the distribution of the label
self.priors = np.bincount(target) / len(target)
# LIKELIHOOD: P(X|C)
def _logLikelihood(self, data, c):
return np.sum(-np.log(self.stds[c, :]) - 0.5 * np.log(2 * np.pi)
-0.5 * np.divide((data - self.means[c, :])**2, self.stds[c, :]**2), axis=1)
# ----------------------------------------------------------------------- #
# ----- PUBLIC METHODS -------------------------------------------------- #
# TRAIN - LIKELIHOOD and PRIOR
def fit(self, train, target):
# Classes
self.classes = list(np.unique(target))
# Compute priors and likelihoods
self._computePriors(target)
self._computeMeansStds(train, target)
return self.classes
# TEST - POSTERIOR: P(C|X)
def predict(self, test):
# The posterior array will be like:
# <post> | sample0 | sample1 | ...
# -----------------------------
# class0 | 0.1 | 0.4 | ...
# class1 | 0.18 | 0.35 | ...
# ... | ... | ... | ...
self.posteriors = np.array([self._logLikelihood(test, c) + np.log(self.priors[c]) for c in self.classes])
# Select the class with max probability (and also its posteriors) for each sample
return np.argmax(self.posteriors, axis=0), self.posteriors.T
# VALIDATE PREDICTION
def validate(self, pred, correct, prob):
# Accuracy, error, confusion matrix
acc = np.mean(pred == correct)
ll = log_loss(correct, prob)
cm = confusion_matrix(correct, pred)
return acc, ll, cm
# ----------------------------------------------------------------------- #
###Output
_____no_output_____
###Markdown
QUESTION
b) Describe a positive and a negative feature of the classifier for these tasks.
ANSWER
Regarding positive features, as said before, the Naive Bayes Classifier is able to work with really high-dimensional datasets: thanks to its simplicity, it does not suffer from serious dimensionality issues. Moreover, there are no hyperparameters to set (and to search over): it works at its full capability as soon as it is implemented.
The negative feature is, of course, its naive assumption. It assumes that all the features are independent, which is not true for most real datasets. This model is too simple for good image classification, a field in which more complex models, like Convolutional Neural Networks, are leading (source).
QUESTION
c) Describe any data pre-processing that you suggest for this data and your classifier.
ANSWER
Classifiers (and models in general) can be greatly helped by good data pre-processing. In this case, one of the first things one can think of is dimensionality reduction. As said before, the Naive Bayes Classifier doesn't suffer much from high-dimensional datasets, but speaking in general terms, models have an easier job when they deal with a reduced set of features. For this reason one can think about PCA (Principal Component Analysis, source here) or LDA (Linear Discriminant Analysis, source here): in this case LDA is clearly more appropriate, because, like PCA, it looks for linear combinations of variables that express the original space well, but it also takes the labels into account, making a sharper distinction between the classes of the dataset (a short PCA sketch is shown just below the next task statement).
Another thing that could be tried is to transform each picture of the CIFAR-10 dataset into grayscale, deleting the colour information. This can be done with a simple weighted sum of the R, G and B components (0.21 R + 0.72 G + 0.07 B). This would also be a dimensionality reduction, but it doesn't make much sense here, because it increases the correlation between features (instead of keeping the colour channels separate), making the naive independence assumption even worse.
As for the two concrete pre-processing steps applied to these datasets: the images have been flattened (they were originally loaded in their "square" shape) and the pixel values have been normalized, bringing them into the range [0.0, 1.0] instead of [0, 255].
TASK
d) Apply your classifier to the two given datasets.
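Before doing that, here is a minimal, purely illustrative sketch of the PCA pre-processing suggested in the previous answer. It assumes scikit-learn is available in this environment, and the number of components (100) is an arbitrary, untuned choice; the reduced arrays could then be passed to the classifier in place of the raw pixels.
###Code
# ILLUSTRATIVE SKETCH: project the normalized Fashion-MNIST pixels onto their first principal components
from sklearn.decomposition import PCA

pca = PCA(n_components=100)                         # 100 components is an arbitrary choice, not a tuned value
mnistTrainReduced = pca.fit_transform(mnistTrain)   # fit the components on the training set only
mnistTestReduced = pca.transform(mnistTest)         # project the test set with the same components
print(mnistTrainReduced.shape, mnistTestReduced.shape)
###Output
_____no_output_____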
###Code
# CLASSIFY FUNCTION
def classify(train, target, test, correct):
# NAIVE BAYES CLASSIFIER
nbc = NaiveBayesClassifier()
# TRAIN
startTime = time()
classes = nbc.fit(train, target)
endTime = time()
print("Train time: %.3f seconds" % (endTime-startTime))
# TEST
startTime = time()
pred, prob = nbc.predict(test)
endTime = time()
print("Test time: %.3f seconds\n" % (endTime-startTime))
# VALIDATION
accuracy, ll, cm = nbc.validate(pred, correct, prob)
print("Accuracy: %.2f%%" % (accuracy * 100))
print("LogLikelihood Loss: %.2f" % (ll))
return cm
###Output
_____no_output_____
###Markdown
FASHION-MNIST Let's start the classification for the Fashion-MNIST dataset:
###Code
# CLASSIFY
mnistCM = classify(mnistTrain, mnistTarget, mnistTest, mnistCorrect)
###Output
Train time: 1.165 seconds
Test time: 1.847 seconds
Accuracy: 59.16%
LogLikelihood Loss: 2.14
###Markdown
CIFAR-10 Now it's time for the CIFAR-10 classification:
###Code
# CLASSIFY
cifarCM = classify(cifarTrain, cifarTarget, cifarTest, cifarCorrect)
###Output
Train time: 6.331 seconds
Test time: 7.290 seconds
Accuracy: 29.76%
LogLikelihood Loss: 5.85
###Markdown
TASK
e) Display the confusion matrix on the test data.
FASHION-MNIST CIFAR-10
###Code
# PLOT THE CONFUSION MATRICES
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output
_____no_output_____
###Markdown
QUESTION
f) Discuss the performance, compare it against a classifier that outputs random class labels, and suggest ways in which performance could be improved.
ANSWER
The performance is "good", considering that our models are very simple. What is clear is that the performance on Fashion-MNIST is much better than on the CIFAR-10 dataset. One of the things that makes the model work badly on CIFAR-10 is, of course, its higher dimensionality.
The accuracies are:
- [CLASSIFICATION] Fashion-MNIST accuracy: 59.16%
- [CLASSIFICATION] CIFAR-10 accuracy: 29.76%
Let's see what happens with a random classifier:
###Code
# RANDOM PREDICTIONS (randint's upper bound is exclusive, so 10 is needed to include class 9)
mnistRandPred = np.random.randint(0, 10, mnistTest.shape[0])
cifarRandPred = np.random.randint(0, 10, cifarTest.shape[0])
# ACCURACY
mnistRandAcc = np.mean(mnistRandPred == mnistCorrect)
cifarRandAcc = np.mean(cifarRandPred == cifarCorrect)
# SHOW
print("[RANDOM Classifier] Fashion-MNIST random accuracy: %.2f%% (expected around 10%%)" % (mnistRandAcc * 100))
print("[RANDOM Classifier] CIFAR-10 random accuracy: %.2f%% (expected around 10%%)" % (cifarRandAcc * 100))
###Output
[RANDOM Classifier] Fashion-MNIST random accuracy: 10.33% (expected around 10%)
[RANDOM Classifier] CIFAR-10 random accuracy: 10.40% (expected around 10%)
###Markdown
The random classifier, of course, has an accuracy of around 10%: the probability of getting the right class is $\frac{1}{\text{number of classes}}$, in this case $\frac{1}{10}$.
Trying a different approach: grayscale CIFAR-10
CIFAR-10 (grayscale)
###Code
# CREATE GRAYSCALE CIFAR-10
hl = 32 * 32
cifarGrayTrain = np.empty((cifarTrain.shape[0],hl))
cifarGrayTest = np.empty((cifarTest.shape[0],hl))
for i in range(hl):
cifarGrayTrain[:,i] = (0.21 * cifarTrainFull.iloc[:,i+1] +
0.72 * cifarTrainFull.iloc[:,hl+i+1] +
0.07 * cifarTrainFull.iloc[:,2*hl+i+1]) / 255
cifarGrayTest[:,i] = (0.21 * cifarTestFull.iloc[:,i+1]
+ 0.72 * cifarTestFull.iloc[:,hl+i+1]
+ 0.07 * cifarTestFull.iloc[:,2*hl+i+1]) / 255
# CLASSIFY
cifarCM = classify(cifarGrayTrain, cifarTarget, cifarGrayTest, cifarCorrect)
###Output
Train time: 1.007 seconds
Test time: 3.104 seconds
Accuracy: 26.84%
LogLikelihood Loss: 5.65
###Markdown
The grayscale approach, as expected, doesn't improve the predictions. Indeed, transforming the coloured pictures into grayscale just makes the correlations between features stronger (e.g. pixels that had dark but different colours can now share the same, or a similar, gray value), moving even farther from the naive independence assumption.
4. Bayesian Regression
TASK
a) Implement the Bayesian Linear Regression.
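Concretely, writing $\mathbf{X}$ for the design matrix built from the inputs (a column of ones followed by the feature powers up to order $k$), the class below computes the closed-form least-squares weights and noise variance
$$\hat{\mathbf{w}} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{t}, \qquad \hat{\sigma}^2 = \frac{1}{N}\,(\mathbf{t}-\mathbf{X}\hat{\mathbf{w}})^\top(\mathbf{t}-\mathbf{X}\hat{\mathbf{w}}),$$
and, for each new input row $\mathbf{x}_{new}$, the predictive variance
$$\sigma^2_{new} = \hat{\sigma}^2\,\mathbf{x}_{new}^\top(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{x}_{new}.$$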
###Code
class BayesianLinearRegression:
# ----- PRIVATE METHODS ------------------------------------------------- #
# CREATE THE MATRIX FOR THE MATRICIAL-FORM REGRESSION
def _matricize(self, x, k):
# ALLOCATE MATRIX
X = np.ones(shape=(x.shape[0], 1), dtype=int)
# STACK COLUMNS
for i in range(k):
X = np.hstack((X, np.power(x, i+1)))
return X
# COMPUTE THE WEIGHTS ARRAY
def _weights(self, X, t):
# np.linalg.solve, when feasible, is faster so:
# inv(X.T.dot(X)).dot(X.T).dot(t)
# becomes:
return solve(X.T.dot(X), X.T.dot(t))
# RETURN THE VARIANCE
def _variance(self, X, w, t):
return (t - X.dot(w.T)).T.dot(t - X.dot(w.T)) / X.shape[0]
# RETURN THE PREDICTED t
def _target(self, X_new, w):
return X_new.dot(w.T)
# RETURN THE PREDICTIVE VARIANCE
def _predictiveVar(self, X_new, X, var):
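        # per-row predictive variance: sigma^2 * x_new^T (X^T X)^{-1} x_new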
return var * np.diag(X_new.dot(inv(X.T.dot(X))).dot(X_new.T))
# ----------------------------------------------------------------------- #
# ----- PUBLIC METHODS -------------------------------------------------- #
# TRAIN
def fit(self, train, target, k):
# Compute X, w and t
self.X = self._matricize(train, k)
self.w = self._weights(self.X, target)
self.var = self._variance(self.X, self.w, target)
return np.unique(target)
# TEST
def predict(self, test, k):
# Compute the matrix for the test set
X_new = self._matricize(test, k)
# Predict the new target for the test set (as a continuous variable)
t_new_raw = self._target(X_new, self.w)
# Compute the predictive variance
var_new = self._predictiveVar(X_new, self.X, self.var)
return t_new_raw, var_new
# VALIDATION
def validate(self, correct, raw):
# Accuracy, error, confusion matrix, mse
mse = mean_squared_error(correct, raw)
return mse
# ----------------------------------------------------------------------- #
###Output
_____no_output_____
###Markdown
TASK
b) Treat class labels as continuous and apply regression to the training data.
###Code
def regress(train, target, test, correct, k):
# BAYESIAN LINEAR REGRESSION
blr = BayesianLinearRegression()
# TRAIN
startTime = time()
classes = blr.fit(train, target, k)
endTime = time()
print("Train time: %.3f seconds" % (endTime-startTime))
# TEST
startTime = time()
raw, var = blr.predict(test, k)
endTime = time()
print("Test time: %.3f seconds\n" % (endTime-startTime))
# VALIDATION
mse = blr.validate(correct, raw)
print("[RAW PREDICTIONS] Mean Squared Error (MSE): %.2f" % (mse))
return raw, var
def validatePictures(mnistRaw, mnistVar, cifarRaw, cifarVar):
# SCATTER PLOT
plotScatterPlot(mnistRaw, cifarRaw, mnistCorrect, cifarCorrect)
# ERRORPLOT
plotErrorPlot(mnistRaw, cifarRaw, mnistVar, cifarVar)
###Output
_____no_output_____
###Markdown
FASHION-MNIST
###Code
# REGRESS
mnistRaw, mnistVar = regress(mnistTrain, mnistTarget, mnistTest, mnistCorrect, k = 1)
###Output
Train time: 1.456 seconds
Test time: 3.905 seconds
[RAW PREDICTIONS] Mean Squared Error (MSE): 1.96
###Markdown
CIFAR-10
###Code
# REGRESS
cifarRaw, cifarVar = regress(cifarTrain, cifarTarget, cifarTest, cifarCorrect, k = 1)
###Output
Train time: 12.835 seconds
Test time: 22.006 seconds
[RAW PREDICTIONS] Mean Squared Error (MSE): 8.03
###Markdown
TASK
c) Produce a scatter plot showing the predictions versus the true targets for the test set and compute the mean squared error on the test set.
The mean squared error was shown above:
- [Fashion-MNIST] Mean Squared Error (MSE): 1.96
- [CIFAR-10] Mean Squared Error (MSE): 8.03
FASHION-MNIST CIFAR-10
###Code
# PLOT IMAGES
validatePictures(mnistRaw, mnistVar, cifarRaw, cifarVar)
###Output
_____no_output_____
###Markdown
As we can see from the previous plots, the regression predicts a set of continuous values, often outside the [0,9] range. In the second plot we can observe (on a small subset of the data) the error associated with each prediction: it has been computed from the predictive variance, taking a Student-t quantile (the 99.7th percentile, roughly a 3-sigma band) times the predictive standard deviation.
Looking at the error plot, it is immediately clear that the errors on the CIFAR-10 predictions are bigger than those on Fashion-MNIST: the model is less certain in its predictions on the second dataset.
QUESTION
d) Suggest a way to discretize the predictions, display the confusion matrix on the test data and report the accuracy.
###Code
# DISCRETIZER
discretizer = np.vectorize(lambda label: 9 if label > 9 else (0 if label < 0 else round(label)))
###Output
_____no_output_____
###Markdown
ANSWER
The predictions have been discretized in a very simple way: the continuous values have been rounded to the closest integer. Moreover, values smaller than 0 have been clamped to 0, and values bigger than 9 have been clamped to 9.
More advanced approaches could be taken, such as one-hot encoding the labels and regressing on each "column" of the one-hot encoded classes: this approach is tried later on.
FASHION-MNIST
###Code
# DISCRETIZE
mnistPred = np.array(discretizer(mnistRaw), dtype=int)
# VALIDATE
accuracy = np.mean(mnistPred == mnistCorrect)
mnistCM = confusion_matrix(mnistCorrect, mnistPred)
print("[DISCRETE PREDICTIONS] Accuracy: %.2f%%" % (accuracy * 100))
###Output
[DISCRETE PREDICTIONS] Accuracy: 39.19%
###Markdown
CIFAR-10
###Code
# DISCRETIZE
cifarPred = np.array(discretizer(cifarRaw), dtype=int)
# VALIDATE
accuracy = np.mean(cifarPred == cifarCorrect)
cifarCM = confusion_matrix(cifarCorrect, cifarPred)
print("[DISCRETE PREDICTIONS] Accuracy: %.2f%%" % (accuracy * 100))
###Output
[DISCRETE PREDICTIONS] Accuracy: 10.95%
###Markdown
The regressor's performance is:
- [REGRESSION] Fashion-MNIST accuracy: 39.19%
- [REGRESSION] CIFAR-10 accuracy: 10.95%
FASHION-MNIST CIFAR-10
###Code
# CONFUSION MATRIX
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output
_____no_output_____
###Markdown
QUESTION
e) Discuss regression performance with respect to classification performance.
ANSWER
The regression performance is, of course, very weak compared to the classification performance. Linear regression is the "wrong" tool for approaching image classification problems. Also from the point of view of computational time, on both datasets the Bayesian Regression is much slower than the Naive Bayes Classifier.
QUESTION
f) Describe one limitation of using regression for this particular task.
ANSWER
One big limitation of linear regression is that it tries to find a set of weights that models the relationship between the continuous data and the labels. In this case, even if it "works", it is out of context: we're trying to predict a set of discrete labels (from 0 to 9). We're using a "little drill against a huge building in reinforced concrete".
It would have been a little more meaningful if the [0,9] range had carried "ordinal" information (a gradual scale of values); here, instead, the labels are "nominal" values, where, for example, 2 only means different from 1, not greater than 1.
Trying a different approach: one-hot encoded labels
One approach to improving the Bayesian Linear Regression performance is to one-hot encode the targets and regress on them one by one. In this case, the target column becomes a 10-column matrix, and a loop can be run over its columns, using one column at a time as the target: the result is a (10000, 10) prediction matrix, and the best class for each sample is chosen with an argmax.
Let's try it:
###Code
from keras.utils import to_categorical
def regressOneHot(train, target, test, correct, k):
# BAYESIAN LINEAR REGRESSION
blr = BayesianLinearRegression()
# FIT & PREDICT
target_bin = to_categorical(target, len(mnistClasses))
pred = np.zeros((test.shape[0], len(mnistClasses)))
for i in range(10):
blr.fit(train, target_bin[:,i], k)
pred[:,i], _ = blr.predict(test, k)
pred = np.argmax(pred, axis=1)
# VALIDATION
accuracy = np.mean(pred == correct)
cm = confusion_matrix(correct, pred)
print("[DISCRETE PREDICTIONS] Accuracy: %.2f%%" % (accuracy * 100))
return cm
###Output
Using TensorFlow backend.
###Markdown
FASHION-MNIST
###Code
mnistCM = regressOneHot(mnistTrain, mnistTarget, mnistTest, mnistCorrect, 1)
###Output
[DISCRETE PREDICTIONS] Accuracy: 82.18%
###Markdown
CIFAR-10
###Code
cifarCM = regressOneHot(cifarTrain, cifarTarget, cifarTest, cifarCorrect, 1)
###Output
[DISCRETE PREDICTIONS] Accuracy: 36.37%
###Markdown
FASHION-MNIST CIFAR-10
###Code
# CONFUSION MATRIX
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output
_____no_output_____
###Markdown
The results are way better! The accuracies now are:
- [Fashion-MNIST] Accuracy: 82.18% (before it was about 39%)
- [CIFAR-10] Accuracy: 36.37% (before it was about 11%)
The one-hot encoding actually worked. Indeed, with this approach we regress on each of the one-hot-encoded labels, overcoming the issue, described before, of the nominal (versus ordinal) target label.
5. Bonus question
Integrating Convolutional Neural Networks (with the LeNet architecture) and the Naive Bayes Classifier
Convolutional Neural Networks currently represent one of the most powerful methods for tackling image classification problems (source). One of the simplest architectures is LeNet (source): two convolution layers, each followed by a max-pooling stage, then a flatten step and a set of fully connected layers.
Let's implement the model using Keras:
###Code
%%capture
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.utils import to_categorical
class LeNetCNN:
def reshape(self, train, target, test, correct, num_classes, input_shape):
# DESIRED INPUT SHAPE
h, w, c = self.input_shape = input_shape
self.num_classes = num_classes
# RESHAPE
# Train set
self.train = train.reshape((train.shape[0], h, w, c)).astype('float32')
self.target_bin = to_categorical(target, num_classes)
# Test set
self.test = test.reshape((test.shape[0], h, w, c)).astype('float32')
self.correct_bin = to_categorical(correct, num_classes)
return self.train, self.test
def buildAndRun(self, batch_size, epochs):
# MODEL CONSTRUCTION (LeNet architecture)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=self.input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
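        # the next dense layer is named "intermediate" on purpose: it is extracted later and reused as a feature extractor for the Naive Bayes Classifier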
model.add(Dense(128, activation='relu', name="intermediate"))
model.add(Dropout(0.5))
model.add(Dense(self.num_classes, activation='softmax'))
# MODEL COMPILING
model.compile(loss="categorical_crossentropy", optimizer="adadelta", metrics=['accuracy'])
# TRAIN
model.fit(self.train,
self.target_bin,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_split=0.1)
# PREDICT
score = model.evaluate(self.test, self.correct_bin, verbose=0)
print("\nConvolutional Neural Network:")
print(' - Loss: %.2f' % (score[0]))
print(' - Accuracy: %.2f%%' % (score[1]*100))
return model
###Output
_____no_output_____
###Markdown
Now let's run the model on our two datasets: FASHION-MNIST
###Code
# BUILD, RESHAPE THE DATASETS AND RUN THE CNN
cnn = LeNetCNN()
train, test = cnn.reshape(mnistTrain, mnistTarget, mnistTest, mnistCorrect,
num_classes = 10, input_shape = (28,28,1))
model = cnn.buildAndRun(batch_size = 128, epochs = 10)
###Output
Train on 54000 samples, validate on 6000 samples
Epoch 1/10
54000/54000 [==============================] - 71s 1ms/step - loss: 0.7031 - acc: 0.7400 - val_loss: 0.4586 - val_acc: 0.8315
Epoch 2/10
54000/54000 [==============================] - 72s 1ms/step - loss: 0.4707 - acc: 0.8298 - val_loss: 0.3906 - val_acc: 0.8635
Epoch 3/10
54000/54000 [==============================] - 77s 1ms/step - loss: 0.4097 - acc: 0.8517 - val_loss: 0.3472 - val_acc: 0.8797
Epoch 4/10
54000/54000 [==============================] - 68s 1ms/step - loss: 0.3736 - acc: 0.8653 - val_loss: 0.3234 - val_acc: 0.8838
Epoch 5/10
54000/54000 [==============================] - 67s 1ms/step - loss: 0.3511 - acc: 0.8734 - val_loss: 0.3051 - val_acc: 0.8877
Epoch 6/10
54000/54000 [==============================] - 67s 1ms/step - loss: 0.3306 - acc: 0.8814 - val_loss: 0.2944 - val_acc: 0.8953
Epoch 7/10
54000/54000 [==============================] - 67s 1ms/step - loss: 0.3173 - acc: 0.8856 - val_loss: 0.2823 - val_acc: 0.8978
Epoch 8/10
54000/54000 [==============================] - 67s 1ms/step - loss: 0.3038 - acc: 0.8892 - val_loss: 0.2759 - val_acc: 0.9040
Epoch 9/10
54000/54000 [==============================] - 67s 1ms/step - loss: 0.2957 - acc: 0.8935 - val_loss: 0.2699 - val_acc: 0.9042
Epoch 10/10
54000/54000 [==============================] - 67s 1ms/step - loss: 0.2849 - acc: 0.8966 - val_loss: 0.2800 - val_acc: 0.9037
Convolutional Neural Network:
- Loss: 0.26
- Accuracy: 90.57%
###Markdown
The accuracy of the neural network's output is not bad at all. However, I'm not interested in it as such, but in the intermediate model built after the two "Convolution -> ReLU activation -> Pooling" stages, right where the outputs are flattened and fed to the dense layer named "intermediate".
This intermediate model will now be used to generate the intermediate train set and test set, which will be given as input to the Naive Bayes Classifier.
###Code
# EXTRACT THE MODEL OF THE INTERMEDIATE LAYER
model_intermediate = Model(inputs=model.input, outputs=model.get_layer("intermediate").output)
# PREDICT TO GET THE INTERMEDIATE TRAINSET AND TESTSET
train_intermediate = model_intermediate.predict(train)
test_intermediate = model_intermediate.predict(test)
# CLASSIFY MNIST
mnistCM = classify(train = train_intermediate,
target = mnistTarget,
test = test_intermediate,
correct = mnistCorrect)
###Output
Train time: 0.255 seconds
Test time: 0.159 seconds
Accuracy: 89.81%
LogLikelihood Loss: 1.30
###Markdown
The accuracy of the Naive Bayes Classifier (using as inputs the outputs of the convolutional layers) is very high. Let's see what happens with the CIFAR-10 dataset: CIFAR-10
###Code
# BUILD, RESHAPE THE DATASETS AND RUN THE CNN
cnn = LeNetCNN()
train, test = cnn.reshape(cifarTrain, cifarTarget, cifarTest, cifarCorrect,
num_classes = 10, input_shape = (32,32,3))
model = cnn.buildAndRun(batch_size = 128, epochs = 10)
###Output
Train on 45000 samples, validate on 5000 samples
Epoch 1/10
45000/45000 [==============================] - 91s 2ms/step - loss: 1.9032 - acc: 0.3108 - val_loss: 1.6068 - val_acc: 0.4172
Epoch 2/10
45000/45000 [==============================] - 86s 2ms/step - loss: 1.5847 - acc: 0.4348 - val_loss: 1.4236 - val_acc: 0.4908
Epoch 3/10
45000/45000 [==============================] - 98s 2ms/step - loss: 1.4572 - acc: 0.4832 - val_loss: 1.3557 - val_acc: 0.5252
Epoch 4/10
45000/45000 [==============================] - 105s 2ms/step - loss: 1.3718 - acc: 0.5148 - val_loss: 1.2849 - val_acc: 0.5334
Epoch 5/10
45000/45000 [==============================] - 100s 2ms/step - loss: 1.3163 - acc: 0.5366 - val_loss: 1.2414 - val_acc: 0.5712
Epoch 6/10
45000/45000 [==============================] - 93s 2ms/step - loss: 1.2624 - acc: 0.5542 - val_loss: 1.1885 - val_acc: 0.5890
Epoch 7/10
45000/45000 [==============================] - 87s 2ms/step - loss: 1.2212 - acc: 0.5742 - val_loss: 1.1503 - val_acc: 0.5972
Epoch 8/10
45000/45000 [==============================] - 87s 2ms/step - loss: 1.1852 - acc: 0.5840 - val_loss: 1.1831 - val_acc: 0.5792
Epoch 9/10
45000/45000 [==============================] - 87s 2ms/step - loss: 1.1454 - acc: 0.5998 - val_loss: 1.0722 - val_acc: 0.6280
Epoch 10/10
45000/45000 [==============================] - 87s 2ms/step - loss: 1.1217 - acc: 0.6064 - val_loss: 1.0748 - val_acc: 0.6262
Convolutional Neural Network:
- Loss: 1.11
- Accuracy: 60.69%
###Markdown
Also in the CIFAR-10 case (as with Fashion-MNIST) the accuracy of the neural network's output is better than the one provided by the pure Naive Bayes Classifier. However, as said before, the interest is not in the network output but in the intermediate model built after the two "Convolution -> ReLU activation -> Pooling" stages, right where the outputs are flattened.
The intermediate model will now be used to generate the intermediate train set and test set, which will be given as input to the Naive Bayes Classifier.
###Code
# EXTRACT THE MODEL OF THE INTERMEDIATE LAYER
model_intermediate = Model(inputs=model.input, outputs=model.get_layer("intermediate").output)
# PREDICT TO GET THE INTERMEDIATE TRAINSET AND TESTSET
train_intermediate = model_intermediate.predict(train)
test_intermediate = model_intermediate.predict(test)
# CLASSIFY CIFAR
cifarCM = classify(train = train_intermediate,
target = cifarTarget,
test = test_intermediate,
correct = cifarCorrect)
###Output
Train time: 0.165 seconds
Test time: 0.152 seconds
Accuracy: 60.07%
LogLikelihood Loss: 2.69
###Markdown
FASHION-MNIST CIFAR-10
###Code
# PLOT THE CONFUSION MATRICES
plotConfusionMatrix(mnistCM, cifarCM, mnistClasses, cifarClasses)
###Output
_____no_output_____
###Markdown
The performance is way better! We obtain 89.81% accuracy on the Fashion-MNIST dataset and 60.07% accuracy on CIFAR-10. This means that even very simple models, like the Naive Bayes Classifier, can be hugely helped by putting a powerful feature extractor such as a CNN in front of them!
Trying the grayscale CIFAR-10
Let's try the same for the grayscale CIFAR-10. In the first part of the notebook, the Naive Bayes Classifier performed worse on the grayscale CIFAR-10. Let's see what happens with this new hybrid model:
CIFAR-10 (grayscale)
###Code
# BUILD, RESHAPE THE DATASETS AND RUN THE CNN
cnn = LeNetCNN()
train, test = cnn.reshape(cifarGrayTrain, cifarTarget, cifarGrayTest, cifarCorrect,
num_classes = 10, input_shape = (32,32,1))
model = cnn.buildAndRun(batch_size = 128, epochs = 10)
# EXTRACT THE MODEL OF THE INTERMEDIATE LAYER
model_intermediate = Model(inputs=model.input, outputs=model.get_layer("intermediate").output)
# PREDICT TO GET THE INTERMEDIATE TRAINSET AND TESTSET
train_intermediate = model_intermediate.predict(train)
test_intermediate = model_intermediate.predict(test)
# CLASSIFY CIFAR
cifarCM = classify(train = train_intermediate,
target = cifarTarget,
test = test_intermediate,
correct = cifarCorrect)
###Output
Train time: 0.201 seconds
Test time: 0.169 seconds
Accuracy: 63.93%
LogLikelihood Loss: 3.77
|
CNN-based-Handwritten-Hindi-Text-Recognition-main/CNN Model 2.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount("/content/drive/")
###Output
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
###Markdown
Importing Libraries and Importing Dataset
###Code
#importing necessary libraries
import os
import keras
import matplotlib
import cv2
import numpy as np
import skimage.io as io
import pandas as pd
import matplotlib.pyplot as plt
from scipy import interp
from itertools import cycle
from keras.layers import *
from keras.utils import *
from keras.optimizers import Adam
from keras.models import *
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from sklearn import model_selection
import sklearn.metrics as metrics
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import roc_auc_score
###Output
_____no_output_____
###Markdown
Reading the data from the disk
###Code
# reading data from the disk storage
data= pd.read_csv(r'/content/drive/My Drive/devanagari-character-set.csv')
data.shape
size=data.shape[0]
# shape of the data is 92000 images
# and each image is 32x32 with 28 pixels of the region representing the actual text
# and 4 pixels as padding
#creating a temp type array of our dataset
array=data.values
#X is for input values and Y is for output given on that input attributes
X=array[:,0:1024].astype(float)
Y=array[:,1024]
###Output
_____no_output_____
###Markdown
Pre-processing for Y values
###Code
#collecting the digit value from Y[i]
i=0
Y_changed=np.ndarray(Y.shape)
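# the label strings start with 'character_<n>' or 'digit_<n>': characters keep their index n (1-36), digits are shifted to 37-46 (see the reference dictionary defined below)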
for name in Y:
x = name.split('_')
if(x[0]=='character'):
Y_changed[i]=int(x[1])
elif x[0]=='digit':
Y_changed[i]=(37 + int(x[1]))
i=i+1
# copy the contents of the array to our original array
Y=Y_changed
#removing the extra elements after memory allocation for numpy array
Y=Y[0:size].copy()
print("The processed Y shape is "+str(Y.shape))
###Output
The processed Y shape is (92000,)
###Markdown
Train and Test Split
###Code
#size of the testing data
split_size=0.20
#seed value for keeping same randomness in training and testing dataset
seed=6
#splitting of the data
X_train,X_test,Y_train,Y_test=model_selection.train_test_split(X,Y,test_size=split_size,random_state=seed)
###Output
_____no_output_____
###Markdown
Reshaping the data
###Code
# reshaping the data in order to convert the given 1D array of an image to actual grid representaion
X_train = X_train.reshape((size*4)//5,32,32,1)
print(X_train.shape)
Y_train = Y_train.reshape((size*4)//5,1)
print(Y_train.shape)
X_test = X_test.reshape(size//5,32,32,1)
print(X_test.shape)
Y_test = Y_test.reshape(size//5,1)
print(Y_test.shape)
###Output
(73600, 32, 32, 1)
(73600, 1)
(18400, 32, 32, 1)
(18400, 1)
###Markdown
Creating a reference dictionary
###Code
# a reference array for final classification of data
# reference = {1: 'ka', 2: 'kha', 3: 'ga', 4: 'gha', 5: 'kna', 6: 'cha', 7: 'chha', 8: 'ja', 9: 'jha', 10: 'yna', 11: 'taamatar', 12: 'thaa', 13: 'daa', 14: 'dhaa', 15: 'adna', 16: 'tabala', 17: 'tha', 18: 'da', 19: 'dha', 20: 'na', 21: 'pa', 22: 'pha', 23: 'ba', 24: 'bha', 25: 'ma', 26: 'yaw', 27: 'ra', 28: 'la', 29: 'waw', 30: 'motosaw', 31: 'petchiryakha', 32: 'patalosaw', 33: 'ha', 34: 'chhya', 35: 'tra', 36: 'gya', 37: 0, 38: 1, 39: 2, 40: 3, 41: 4, 42: 5, 43: 6, 44: 7, 45: 8, 46: 9}
reference = {1: 'क', 2: 'ख', 3: 'ग', 4: 'घ', 5: 'ङ', 6: 'च', 7: 'छ', 8: 'ज', 9: 'झ', 10: 'ञ', 11: 'ट', 12: 'ठ', 13: 'ड', 14: 'ढ', 15: 'ण', 16: 'त', 17: 'थ', 18: 'द', 19: 'ध', 20: 'न', 21: 'प', 22: 'फ', 23: 'ब', 24: 'भ', 25: 'म', 26: 'य', 27: 'र', 28: 'ल', 29: 'व', 30: 'स', 31: 'ष', 32: 'श', 33: 'ह', 34: 'श्र', 35: 'त्र', 36: 'ज्ञ', 37: 0, 38: 1, 39: 2, 40: 3, 41: 4, 42: 5, 43: 6, 44: 7, 45: 8, 46: 9}
labels=['क', 'ख', 'ग', 'घ', 'ङ', 'च', 'छ', 'ज', 'झ', 'ञ', 'ट', 'ठ', 'ड', 'ढ', 'ण', 'त', 'थ', 'द', 'ध', 'न', 'प', 'फ', 'ब', 'भ', 'म', 'य', 'र', 'ल', 'व', 'स', 'ष', 'श', 'ह', 'श्र', 'त्र', 'ज्ञ', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
print(reference)
print(type(reference))
###Output
{1: 'क', 2: 'ख', 3: 'ग', 4: 'घ', 5: 'ङ', 6: 'च', 7: 'छ', 8: 'ज', 9: 'झ', 10: 'ञ', 11: 'ट', 12: 'ठ', 13: 'ड', 14: 'ढ', 15: 'ण', 16: 'त', 17: 'थ', 18: 'द', 19: 'ध', 20: 'न', 21: 'प', 22: 'फ', 23: 'ब', 24: 'भ', 25: 'म', 26: 'य', 27: 'र', 28: 'ल', 29: 'व', 30: 'स', 31: 'ष', 32: 'श', 33: 'ह', 34: 'श्र', 35: 'त्र', 36: 'ज्ञ', 37: 0, 38: 1, 39: 2, 40: 3, 41: 4, 42: 5, 43: 6, 44: 7, 45: 8, 46: 9}
<class 'dict'>
###Markdown
Normalization and shuffling of data
###Code
#normalization of data
X_train = X_train/255
X_test = X_test/255
X_train, Y_train = shuffle(X_train, Y_train, random_state = 2)
X_test, Y_test = shuffle(X_test, Y_test, random_state = 2)
###Output
_____no_output_____
###Markdown
Testing and Validation split
###Code
X_test, X_val, Y_test, Y_val = train_test_split(X_test, Y_test, test_size = 0.6, random_state = 1)
print(X_test.shape)
print(X_val.shape)
###Output
(7360, 32, 32, 1)
(11040, 32, 32, 1)
###Markdown
Splitting of Y values into 46 categories for training, testing and validation
###Code
Y_test = to_categorical(Y_test)
Y_val = to_categorical(Y_val)
Y_train = to_categorical(Y_train)
inputs = Input(shape = (32,32,1))
conv0 = Conv2D(64, 3, padding = 'same', activation = 'relu')(inputs)
conv1 = Conv2D(64, 3, padding='same', activation='relu')(conv0)
conv2 = Conv2D(128, 3, padding='same', activation='relu')(conv1)
pool2 = MaxPooling2D((2,2))(conv2)
conv3 = Conv2D(128, 3, padding='same', activation='relu')(pool2)
conv4 = Conv2D(256, 5, padding='same', activation='relu')(conv3)
pool4 = MaxPooling2D((2,2))(conv4)
conv5 = Conv2D(256, 5, padding='same', activation='relu')(pool4)
flat = Flatten()(conv5)
dense0 = Dense(512, activation='relu')(flat)
dense1 = Dense(128, activation='relu')(dense0)
dense2 = Dense(64, activation='relu')(dense1)
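# 47 output units because the class indices run from 1 to 46, so index 0 stays unused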
dense3 = Dense(47, activation='softmax')(dense2)
model = Model(inputs,dense3)
print(model.summary())
###Output
Model: "functional_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 32, 32, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 32, 32, 64) 640
_________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 64) 36928
_________________________________________________________________
conv2d_2 (Conv2D) (None, 32, 32, 128) 73856
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 16, 16, 128) 147584
_________________________________________________________________
conv2d_4 (Conv2D) (None, 16, 16, 256) 819456
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 256) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 8, 8, 256) 1638656
_________________________________________________________________
flatten (Flatten) (None, 16384) 0
_________________________________________________________________
dense (Dense) (None, 512) 8389120
_________________________________________________________________
dense_1 (Dense) (None, 128) 65664
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 47) 3055
=================================================================
Total params: 11,183,215
Trainable params: 11,183,215
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Data Augmentation: https://keras.io/api/preprocessing/image/
For reference, the full signature of `tf.keras.preprocessing.image.ImageDataGenerator` (with its default values) is:
tf.keras.preprocessing.image.ImageDataGenerator(featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False, samplewise_std_normalization=False, zca_whitening=False, zca_epsilon=1e-06, rotation_range=0, width_shift_range=0.0, height_shift_range=0.0, brightness_range=None, shear_range=0.0, zoom_range=0.0, channel_shift_range=0.0, fill_mode="nearest", cval=0.0, horizontal_flip=False, vertical_flip=False, rescale=None, preprocessing_function=None, data_format=None, validation_split=0.0, dtype=None)
###Code
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import *
datagen = ImageDataGenerator(
rotation_range = 20,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range=0.2,
zoom_range = 0.2,
brightness_range=[0.4,1.5]
)
datagen.fit(X_train)
model.compile(Adam(lr = 10e-4), loss = 'categorical_crossentropy', metrics = ['accuracy'])
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.8, patience=3)
history = model.fit_generator(datagen.flow(X_train, Y_train, batch_size = 200), epochs = 10, validation_data = (X_val, Y_val), callbacks = [reduce_lr])
# Accuracy
print(history)
fig1, ax_acc = plt.subplots()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.title('Model - Accuracy')
plt.legend(['Training', 'Validation'], loc='lower right')
plt.show()
# Loss
fig2, ax_loss = plt.subplots()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Model - Loss')
plt.legend(['Training', 'Validation'], loc='upper right')  # the legend must be created after the plot calls
plt.show()
###Output
_____no_output_____
###Markdown
Model Testing and Accuracy check
* model.evaluate()
* Precision, Recall, F1-score, Support
* Plot ROC and compare AUC
###Code
model.evaluate(X_test, Y_test, batch_size = 400, verbose =1)
Y_pred = model.predict(x = X_test, verbose = 1)
Y_score=model.predict(X_test)
print(Y_score)
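# the one-hot targets and predicted scores have 47 columns: class indices run from 1 to 46, so column 0 is unused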
n_classes=47
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(Y_test[:, i], Y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(Y_test.ravel(), Y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Compute macro-average ROC curve and ROC area
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(n_classes):
mean_tpr += interp(all_fpr, fpr[i], tpr[i])
# Finally average it and compute AUC
mean_tpr /= n_classes
fpr["macro"] = all_fpr
tpr["macro"] = mean_tpr
roc_auc["macro"] = auc(fpr["macro"], tpr["macro"])
# Plot all ROC curves
lw=2
plt.figure(1)
plt.plot(fpr["micro"], tpr["micro"],
label='micro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["micro"]),
color='deeppink', linestyle=':', linewidth=4)
plt.plot(fpr["macro"], tpr["macro"],
label='macro-average ROC curve (area = {0:0.2f})'
''.format(roc_auc["macro"]),
color='navy', linestyle=':', linewidth=4)
colors = cycle(['aqua', 'darkorange', 'cornflowerblue'])
for i, color in zip(range(n_classes), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=lw,
label='ROC of class {0} (area = {1:0.2f})'
''.format(i, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Some extension of Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
figure = plt.gcf() # get current figure
figure.set_size_inches(15,10)
plt.show()
Y_pred = np.argmax(Y_pred, axis = 1)
print(Y_pred.shape)
Y_test = np.argmax(Y_test, axis = 1)
print(Y_test.shape)
print("Classification report for the model %s:\n%s\n" % (model, metrics.classification_report(Y_test, Y_pred)))
###Output
Classification report for the model <tensorflow.python.keras.engine.functional.Functional object at 0x7f66059db6d8>:
precision recall f1-score support
1 0.98 0.26 0.41 161
2 0.75 0.53 0.62 167
3 0.87 0.77 0.82 146
4 0.74 0.10 0.17 172
5 0.80 0.03 0.05 151
6 1.00 0.40 0.57 162
7 0.72 0.16 0.26 162
8 0.82 0.55 0.66 184
9 1.00 0.03 0.07 173
10 0.73 0.45 0.55 168
11 0.86 0.04 0.07 159
12 0.00 0.00 0.00 151
13 0.50 0.03 0.06 146
14 1.00 0.01 0.01 154
15 0.64 0.37 0.47 150
16 0.66 0.98 0.79 167
17 0.58 0.22 0.32 169
18 0.37 0.20 0.26 159
19 0.50 0.01 0.01 153
20 0.56 0.87 0.68 180
21 0.83 0.50 0.62 156
22 0.96 0.17 0.29 143
23 0.24 0.68 0.35 166
24 0.38 0.34 0.36 165
25 0.48 0.82 0.61 158
26 0.85 0.31 0.45 149
27 0.62 0.86 0.72 169
28 0.80 0.41 0.54 153
29 0.18 0.75 0.30 151
30 0.94 0.42 0.58 151
31 0.10 0.99 0.18 165
32 0.29 0.70 0.41 155
33 0.80 0.31 0.45 141
34 0.32 0.12 0.17 168
35 0.60 0.85 0.71 150
36 0.60 0.77 0.67 183
37 1.00 0.15 0.26 171
38 0.63 0.99 0.77 179
39 0.73 0.87 0.79 160
40 0.63 0.65 0.64 156
41 0.51 0.25 0.33 162
42 0.99 0.84 0.91 169
43 1.00 0.42 0.59 162
44 1.00 0.49 0.66 137
45 0.76 0.92 0.83 156
46 0.93 0.09 0.16 151
accuracy 0.46 7360
macro avg 0.68 0.45 0.44 7360
weighted avg 0.68 0.46 0.44 7360
|
bin/jupyter/.ipynb_checkpoints/decision-tree-checkpoint.ipynb | ###Markdown
FA (weighted) Classification
###Code
df.wa = read_excel( "../../results/df-water-access.xlsx" ,sheet=1)
df.exp =read_excel("../../results/df-water-explore.xlsx" ,sheet=1)
df.cluster = read_excel("../../results/df-fa-seven-cluster-rank.xlsx" ,sheet=1)
df.wb = read_excel("../../results/df-wb.xlsx" ,sheet=1 )
df.exp$clusters <- as.factor(df.cluster$clusters)
df <- merge(x = df.exp,
y = df.wb,
by = c("Country"))
df <- df[, c(1:13, 17,21)]
#scaling the world bank data similar to DHS aggregation out of 100
df.wb <- df[,c(9:15)]
df.wb <- data.frame(lapply(df.wb, function(x) scale(x, center = FALSE, scale = max(x, na.rm = TRUE)/100)))
df.scale <- cbind(df, df.wb)
df.scale <- df.scale[,c(1:8,15:21)]
# explanation of histogram sample of cart
df.a <- df[, c(1:6,8)]
hist(df$cart)
# explanation of the explanatory variables.
explnatory <- df[,c(2:7, 9:15)]
chart.Correlation(explnatory, histogram=TRUE, pch=19 , tl.cex = .7 )
#Giving unique names for the typology
# "Decentralized" , "Hybrid", "Centralized"
df <- df%>%
mutate(clusters=case_when(
.$clusters=="1" ~ "Decentralized",
.$clusters=="2" ~ "Hybrid",
.$clusters=="3" ~ "Centralized",
))
df.scale <- df.scale%>%
mutate(clusters=case_when(
.$clusters=="1" ~ "Decentralized",
.$clusters=="2" ~ "Hybrid",
.$clusters=="3" ~ "Centralized",
))
df$clusters <- as.factor(df$clusters)
df.scale$clusters <- as.factor(df.scale$clusters)
write_xlsx(df , '../../results/class.xlsx')
write_xlsx(df.scale , '../../results/class-scale.xlsx')
head(df)
###Output
_____no_output_____
###Markdown
Tree
###Code
# Make big tree
form <- as.formula(clusters ~ . - Country)
tree.fwa <- rpart(form,data=df,control=rpart.control(minsplit=4,cp=0.01, xval = nrow(df), maxsurrogate = 0, minbucket = 4 )
)
par(mar=c(1,1,1,1))
pdf(file = "../../docs/manuscript/pdf-image/cp.pdf"
,
width = 5,
height = 5 )
plotcp(tree.fwa)
dev.off()
printcp(tree.fwa)
#size of the plot
options(repr.plot.width=10, repr.plot.height=10)
par(mar = c(1,1,1,1))
par(cex=1)
# Interactively prune the tree
tree.pru <- prune(tree.fwa, cp=0.017) # interactively trim the tree
# Development of fancy plots
pdf(file = "../../docs/manuscript/pdf-image/rpart.pdf"
,
width = 7,
height = 7 )
fancyRpartPlot(tree.pru, main ='', sub ='', caption='' ,palettes=c("Blues","Greens", "Reds" ))
dev.off()
summary(tree.pru)
# Development of fancy variable importance plot
tree.fwa$variable.importance
var.imp = read_excel( "../../results/variable-importance.xlsx" ,sheet=1)
s <- ggplot(var.imp , aes(x= reorder(Variable, + Importance), y= Importance)) +
geom_bar(stat="identity", fill="steelblue") +
theme_minimal() +
coord_flip() +
theme(text = element_text(size=17))+ #Font size
theme(axis.text.x = element_text(size=17),
axis.text.y = element_text(size=17)) + #Adjusting the tick sizes
xlab("")
pdf(file = "../../docs/manuscript/pdf-image/var-imp.pdf"
,
width = 12,
height = 7 )
par(mar=c(1,1,1,10))
s
dev.off()
###Output
_____no_output_____ |
DeepLearningNN.ipynb | ###Markdown
A simple Deep Learning Model for Classifying Sentiment
###Code
# Code : Import Libraries
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from keras.utils.np_utils import to_categorical
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_extraction.text import CountVectorizer
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
import re
import warnings; warnings.simplefilter('ignore')
# Code Load Data
tweetsInfo = pd.read_csv('AllTweetInfo.csv')
tweetsInfo.head(2)
###Output
_____no_output_____
###Markdown
Create Input Features and Train Validation Split
###Code
# Get Input Features
ftr_col = 'text_features'
tokenizer = Tokenizer(split=' ')
tokenizer.fit_on_texts(tweetsInfo[ftr_col].values)
max_features = len(tokenizer.word_index) + 1  # vocabulary size used by the Embedding layer in CreateModel below
X_t = tokenizer.texts_to_sequences(tweetsInfo[ftr_col].values)
X_padded = pad_sequences(X_t)
# Create Train and Validation Split
y = pd.get_dummies(tweetsInfo['sentiment']).values
X_train, X_test, Y_train, Y_test = train_test_split(X_padded,y, test_size = 0.3, random_state = 27)
#print(X_train.shape,Y_train.shape)
#print(X_test.shape,Y_test.shape)
val_size = 100
X_validate = X_test[-val_size:]
Y_validate = Y_test[-val_size:]
X_test = X_test[:-val_size]
Y_test = Y_test[:-val_size]
###Output
_____no_output_____
###Markdown
A simple LSTM Network
###Code
def CreateModel(X_shape):
    lstm_out1, lstm_out2, l1, l2, em = 196, 196, 2, 2, 56
    model = Sequential()
    model.add(Embedding(max_features, em, input_length = X_shape))  # max_features = vocabulary size, defined alongside the tokenizer above
    model.add(LSTM(lstm_out1, dropout=0.2))
    model.add(Dense(l1, activation='relu'))
    model.add(Dense(l2, activation='softmax'))  # softmax output so the class probabilities match categorical_crossentropy
    model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics = ['accuracy'])
    return model
# embed_dim = 56
# lstm_out1 = 196
# lstm_out2 = 196
# l1 = 2
# l2 = 2
# X_shape = X_padded.shape[1]
# Helper Function For Model Creation and Training
def CreatModel(batch_size, epochs, X_shape, X_train, Y_train):
#model = KerasClassifier(build_fn = CreateModel)
currmodel = CreateModel(X_shape)
print(currmodel.summary())
print()
print('Training Model')
currmodel.fit(X_train, Y_train, epochs = epochs, batch_size=batch_size, verbose = 2)
print()
return currmodel
X_shape = X_padded.shape[1]
m = CreatModel(32, 10, X_shape, X_train, Y_train)
print("Evalution Scores")
score,acc = m.evaluate(X_test, Y_test, verbose = 2, batch_size = 32)
print("score: %.2f" % (score))
print("acc: %.2f" % (acc))
# define the grid search parameters
# param_grid = dict(batch_size=batch_size, epochs=epochs)
# grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1)
# grid_result = grid.fit(X_train, Y_train)
# print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
# batch_size = [10, 20, 40, 60, 80, 100]
# epochs = [10, 50, 100]
#creat_train_eval(lstm_out1, lstm_out2, l1, l2, em, batchsize,X_shape)
# NN = KerasClassifier(build_fn=CreateModel, verbose=0)
# #Defualt Params
# batchsize = 32
# # Params to be Tuned
# epochs = [5, 10]
# batches = [5, 10, 100]
# optimizers = ['rmsprop', 'adam']
# # Tune Params
# hyperparameters = dict(optimizer=optimizers, epochs=epochs, batch_size=batches)
# grid = GridSearchCV(estimator=NN, param_grid=hyperparameters)
# grid_result = grid.fit(X_train, Y_train)
# grid_result.best_params
###Output
_____no_output_____ |
jupyter_book/book_template/content/03/4/Introduction_to_Tables.ipynb | ###Markdown
We can now apply Python to analyze data. We will work with data stored in Table structures.Tables are a fundamental way of representing data sets. A table can be viewed in two ways:* a sequence of named columns that each describe a single attribute of all entries in a data set, or* a sequence of rows that each contain all information about a single individual in a data set.We will study tables in great detail in the next several chapters. For now, we will just introduce a few methods without going into technical details. The table `cones` has been imported for us; later we will see how, but here we will just work with it. First, let's take a look at it.
###Code
cones
###Output
_____no_output_____
###Markdown
The table has six rows. Each row corresponds to one ice cream cone. The ice cream cones are the *individuals*.Each cone has three attributes: flavor, color, and price. Each column contains the data on one of these attributes, and so all the entries of any single column are of the same kind. Each column has a label. We will refer to columns by their labels.A table method is just like a function, but it must operate on a table. So the call looks like`name_of_table.method(arguments)`For example, if you want to see just the first two rows of a table, you can use the table method `show`.
###Code
cones.show(2)
###Output
_____no_output_____
###Markdown
You can replace 2 by any number of rows. If you ask for more than six, you will only get six, because `cones` only has six rows. Choosing Sets of Columns The method `select` creates a new table consisting of only the specified columns.
###Code
cones.select('Flavor')
###Output
_____no_output_____
###Markdown
This leaves the original table unchanged.
###Code
cones
###Output
_____no_output_____
###Markdown
You can select more than one column, by separating the column labels by commas.
###Code
cones.select('Flavor', 'Price')
###Output
_____no_output_____
###Markdown
You can also *drop* columns you don't want. The table above can be created by dropping the `Color` column.
###Code
cones.drop('Color')
###Output
_____no_output_____
###Markdown
You can name this new table and look at it again by just typing its name.
###Code
no_colors = cones.drop('Color')
no_colors
###Output
_____no_output_____
###Markdown
Like `select`, the `drop` method creates a smaller table and leaves the original table unchanged. In order to explore your data, you can create any number of smaller tables by using choosing or dropping columns. It will do no harm to your original data table. Sorting Rows The `sort` method creates a new table by arranging the rows of the original table in ascending order of the values in the specified column. Here the `cones` table has been sorted in ascending order of the price of the cones.
###Code
cones.sort('Price')
###Output
_____no_output_____
###Markdown
To sort in descending order, you can use an *optional* argument to `sort`. As the name implies, optional arguments don't have to be used, but they can be used if you want to change the default behavior of a method. By default, `sort` sorts in increasing order of the values in the specified column. To sort in decreasing order, use the optional argument `descending=True`.
###Code
cones.sort('Price', descending=True)
###Output
_____no_output_____
###Markdown
Like `select` and `drop`, the `sort` method leaves the original table unchanged. Selecting Rows that Satisfy a Condition The `where` method creates a new table consisting only of the rows that satisfy a given condition. In this section we will work with a very simple condition, which is that the value in a specified column must be equal to a value that we also specify. Thus the `where` method has two arguments.The code in the cell below creates a table consisting only of the rows corresponding to chocolate cones.
###Code
cones.where('Flavor', 'chocolate')
###Output
_____no_output_____
###Markdown
The arguments, separated by a comma, are the label of the column and the value we are looking for in that column. The `where` method can also be used when the condition that the rows must satisfy is more complicated. In those situations the call will be a little more complicated as well.It is important to provide the value exactly. For example, if we specify `Chocolate` instead of `chocolate`, then `where` correctly finds no rows where the flavor is `Chocolate`.
###Code
cones.where('Flavor', 'Chocolate')
###Output
_____no_output_____
###Markdown
Like all the other table methods in this section, `where` leaves the original table unchanged. Example: Salaries in the NBA "The NBA is the highest paying professional sports league in the world," [reported CNN](http://edition.cnn.com/2015/12/04/sport/gallery/highest-paid-nba-players/) in March 2016. The table `nba` contains the [salaries of all National Basketball Association players](https://www.statcrunch.com/app/index.php?dataid=1843341) in 2015-2016.Each row represents one player. The columns are:| **Column Label** | Description ||--------------------|-----------------------------------------------------|| `PLAYER` | Player's name || `POSITION` | Player's position on team || `TEAM` | Team name ||`SALARY` | Player's salary in 2015-2016, in millions of dollars| The code for the positions is PG (Point Guard), SG (Shooting Guard), PF (Power Forward), SF (Small Forward), and C (Center). But what follows doesn't involve details about how basketball is played.The first row shows that Paul Millsap, Power Forward for the Atlanta Hawks, had a salary of almost $\$18.7$ million in 2015-2016.
###Code
nba
###Output
_____no_output_____
###Markdown
Fans of Stephen Curry can find his row by using `where`.
###Code
nba.where('PLAYER', 'Stephen Curry')
###Output
_____no_output_____
###Markdown
We can also create a new table called `warriors` consisting of just the data for the Golden State Warriors.
###Code
warriors = nba.where('TEAM', 'Golden State Warriors')
warriors
###Output
_____no_output_____
###Markdown
By default, the first 10 lines of a table are displayed. You can use `show` to display more or fewer. To display the entire table, use `show` with no argument in the parentheses.
###Code
warriors.show()
###Output
_____no_output_____
###Markdown
The `nba` table is sorted in alphabetical order of the team names. To see how the players were paid in 2015-2016, it is useful to sort the data by salary. Remember that by default, the sorting is in increasing order.
###Code
nba.sort('SALARY')
###Output
_____no_output_____
###Markdown
These figures are somewhat difficult to compare as some of these players changed teams during the season and received salaries from more than one team; only the salary from the last team appears in the table. The CNN report is about the other end of the salary scale – the players who are among the highest paid in the world. To identify these players we can sort in descending order of salary and look at the top few rows.
###Code
nba.sort('SALARY', descending=True)
###Output
_____no_output_____
notebook/data-preprocessing.ipynb | ###Markdown
Data Importing and Preprocessing - Women's March 2017 dataset This notebook covers reading the tweet data from `.json` files and preprocessing the information contained in those files. When the Twitter API returns data for the tweets retrieved through it, they are delivered as a JSON file with many fields and metadata that make up a tweet. Much of the data in that JSON is unnecessary for the analysis we are aiming at, so the code snippets below keep the information needed for the analysis and discard the rest. For this work we use specific libraries beyond what the language itself provides:
- **`os`**: handles operating-system-specific details of the machine running the analysis, whatever that operating system is;
- **`re`**: handles regular expressions for recognizing patterns in text;
- **`glob`**: handles Unix-style file path patterns;
- **`pandas`**: a toolkit for data analysis;
- **`json_normalize`**: a pandas submodule for handling JSON files and data.
###Code
import os
import re
import glob
import pandas as pd
from pandas.io.json import json_normalize
###Output
_____no_output_____
###Markdown
With the help of the **`re`** library, helper functions were created to convert some of the information present in the tweet text into simple, concise values for later analysis.
###Code
# Helper functions
def get_hashtags(text):
s = re.findall('(?:^|\s)[##]{1}(\w+)', text)
return s if len(s) > 0 else ''
def get_mentions(text):
s = re.findall('(?:^|\s?|\.?)[@@]{1}([^\s#<>[\]|{}:;,.\(\)=+]+)', text)
return s if len(s) > 0 else ''
def get_source(text):
s = re.findall('<a\s+?href=\"[^\"]*\"\s+?rel=\"[^\"]*\">([^<>]+)<\/a>', text)
return s[0] if len(s) > 0 else ''
def get_urls(text):
s = re.findall('http[s]?://(?:[a-z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+', text)
return s[0] if len(s) > 0 else ''
path = '../data/'
filenames = glob.glob(os.path.join(path, '*.json'))
filenames.sort()
###Output
_____no_output_____
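###Markdown
As a quick sanity check, the helper functions defined above can be exercised on a made-up tweet string (the text below is purely illustrative):
###Code
# Hypothetical sample text, only to illustrate what each helper extracts.
sample_tweet = 'RT @someone: marching today! #WomensMarch https://example.com/photo'
print(get_hashtags(sample_tweet))  # e.g. ['WomensMarch']
print(get_mentions(sample_tweet))  # e.g. ['someone']
print(get_urls(sample_tweet))      # e.g. 'https://example.com/photo'
###Output
_____no_output_____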
###Markdown
Because of the huge amount of data retrieved from the API (11,249,944 tweets), preprocessing was done in blocks. When the data was imported from the API, the tweets were split across 16 files; during import those 16 files were read and preprocessed, and another 16 files were created with the data ready for analysis, yielding 9,170,486 instances. The data kept for analysis boils down to:
- tweet id
- tweet creation date and time
- source device
- tweet text
- hashtags present in the tweet
- user mentions
- urls present in the tweet
- number of times the tweet was favorited
- number of times the tweet was retweeted
- user location
- user name
- username
- how many followers the user has
- whether the user is verified or not

After preprocessing the tweets, where only these 14 features were kept for observation, we reduced our sample from an initial file of roughly 96 GB to a file of 4.8 GB, discarding only the unnecessary metadata and the tweets in languages other than English.
###Code
for file in filenames:
    json_reader = pd.read_json(file, lines=True, chunksize=2048)
    wm_data = pd.DataFrame()
    for chunk in json_reader:
        # Keep only non-truncated tweets written in English
        not_truncated = chunk[chunk['truncated'] == False]
        only_english = not_truncated[not_truncated['lang'] == 'en'].reset_index()
        # Extract hashtags, mentions, urls and the source device from the raw fields
        only_english['hashtags'] = only_english['text'].apply(get_hashtags)
        only_english['mentions'] = only_english['text'].apply(get_mentions)
        only_english['urls'] = only_english['text'].apply(get_urls)
        only_english['source'] = only_english['source'].apply(get_source)
        # Flatten the nested 'user' object into its own columns
        user_df = json_normalize(only_english['user'])
        # Selecting only the columns needed for the analysis
        tweet_df = only_english[['id_str', 'created_at', 'source', 'text', 'hashtags', 'mentions', \
                'urls', 'favorite_count', 'retweet_count']]
        user_df = user_df[['location', 'name', 'screen_name', 'followers_count', 'verified']]
        frames = [tweet_df, user_df]
        df = pd.concat(frames, axis=1)
        # DataFrame.append returns a new object, so the result must be reassigned
        wm_data = wm_data.append(df)
    # Write one cleaned file per input file: ../data/clean_<original filename>
    fp = os.path.basename(file)
    filepath = '../data/clean_{}'.format(fp)
    with open(filepath, 'w') as f:
        f.write(wm_data.to_json(orient='records', lines=True))
###Output
_____no_output_____ |
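###Markdown
With the cleaned files written, they can be loaded back for the analysis notebooks using the same line-delimited JSON format. A minimal sketch, assuming the `clean_*.json` files produced above are in `../data/`:
###Code
# Sketch: read every cleaned file back into one DataFrame (memory permitting).
clean_files = glob.glob(os.path.join(path, 'clean_*.json'))
wm_clean = pd.concat((pd.read_json(f, lines=True) for f in clean_files), ignore_index=True)
wm_clean.head()
###Output
_____no_output_____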
c4_wk1_Tensorflow_serving_in_Colab.ipynb | ###Markdown
**Train and deploy a model with TensorFlow Serving**
###Code
import sys
# Confirm that we're using Python 3
assert sys.version_info.major == 3, 'Not running Python 3. Use Runtime > Change runtime type'
# TensorFlow and tf.keras
print("Installing dependencies for Colab environment")
!pip install -Uq grpcio==1.26.0
import tensorflow as tf
from tensorflow import keras
# Helper libraries
import matplotlib.pyplot as plt
import os
import numpy as np
import subprocess
print('TensorFlow version: {}'.format(tf.__version__))
###Output
Installing dependencies for Colab environment
TensorFlow version: 2.6.0
###Markdown
Import the MNIST dataset
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Scale the values of the arrays below to be between 0.0 and 1.0.
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
###Code
# Reshape the arrays below.
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1)
print('\ntrain_images.shape: {}, of {}'.format(train_images.shape,
train_images.dtype))
print('test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
idx = 42
plt.imshow(test_images[idx].reshape(28,28), cmap=plt.cm.binary)
plt.title('True Label: {}'.format(test_labels[idx]), fontdict={'size': 16})
plt.show()
# Create a model.
model = keras.Sequential([
keras.layers.Conv2D(input_shape=(28,28,1), filters=8, kernel_size=3,
strides=2, activation='relu', name='Conv1'),
keras.layers.Flatten(),
keras.layers.Dense(10, activation=tf.nn.softmax, name='Softmax')
])
model.summary()
# Configure the model for training.
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
epochs = 5
# Train the model.
history = model.fit(train_images, train_labels, epochs=epochs)
# Evaluate the model on the test images.
results_eval = model.evaluate(test_images, test_labels, verbose=0)
for metric, value in zip(model.metrics_names, results_eval):
print(metric + ': {:.3}'.format(value))
import tempfile
MODEL_DIR = tempfile.gettempdir()
version = 1
export_path = os.path.join(MODEL_DIR, str(version))
print('export_path = {}\n'.format(export_path))
keras.models.save_model(
model,
export_path,
overwrite=True,
include_optimizer=True,
save_format=None,
signatures=None,
options=None
)
print('\nSaved model:')
!ls -l {export_path}
###Output
export_path = /tmp/1
###Markdown
Examine your saved model We'll use the command line utility `saved_model_cli` to look at the [MetaGraphDefs](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/MetaGraphDef) (the models) and [SignatureDefs](../signature_defs) (the methods you can call) in our SavedModel. See [this discussion of the SavedModel CLI](https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/saved_model.md#cli-to-inspect-and-execute-savedmodel) in the TensorFlow Guide.
###Code
!saved_model_cli show --dir {export_path} --all
###Output
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
The given SavedModel SignatureDef contains the following input(s):
The given SavedModel SignatureDef contains the following output(s):
outputs['__saved_model_init_op'] tensor_info:
dtype: DT_INVALID
shape: unknown_rank
name: NoOp
Method name is:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['Conv1_input'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 28, 28, 1)
name: serving_default_Conv1_input:0
The given SavedModel SignatureDef contains the following output(s):
outputs['Softmax'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 10)
name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
WARNING: Logging before flag parsing goes to stderr.
W1102 10:27:53.778752 139784460539776 deprecation.py:506] From /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling __init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
Defined Functions:
Function Name: '__call__'
Option #1
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #2
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs')
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
Option #3
Callable with:
Argument #1
inputs: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'inputs')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Option #4
Callable with:
Argument #1
Conv1_input: TensorSpec(shape=(None, 28, 28, 1), dtype=tf.float32, name=u'Conv1_input')
Argument #2
DType: bool
Value: True
Argument #3
DType: NoneType
Value: None
Function Name: '_default_save_signature'
Traceback (most recent call last):
File "/usr/local/bin/saved_model_cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 990, in main
args.func(args)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 691, in show
_show_all(args.dir)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 283, in _show_all
_show_defined_functions(saved_model_dir)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/tools/saved_model_cli.py", line 186, in _show_defined_functions
function._list_all_concrete_functions_for_serialization() # pylint: disable=protected-access
AttributeError: '_WrapperFunction' object has no attribute '_list_all_concrete_functions_for_serialization'
###Markdown
Add TensorFlow Serving distribution URI as a package source:
###Code
import sys
# We need sudo prefix if not on a Google Colab.
if 'google.colab' not in sys.modules:
SUDO_IF_NEEDED = 'sudo'
else:
SUDO_IF_NEEDED = ''
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | {SUDO_IF_NEEDED} tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | {SUDO_IF_NEEDED} apt-key add -
!{SUDO_IF_NEEDED} apt update
###Output
deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2943 100 2943 0 0 143k 0 --:--:-- --:--:-- --:--:-- 143k
OK
Get:1 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]
Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]
Hit:5 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Get:6 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Hit:7 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease
Get:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]
Get:9 http://storage.googleapis.com/tensorflow-serving-apt stable InRelease [3,012 B]
Get:10 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Ign:11 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease
Ign:12 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Get:13 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release [696 B]
Hit:14 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release
Get:15 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release.gpg [836 B]
Get:16 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,810 kB]
Get:17 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [927 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,213 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,837 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [667 kB]
Get:21 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [44.7 kB]
Get:23 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 Packages [341 B]
Get:24 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server-universal amd64 Packages [348 B]
Get:25 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Packages [786 kB]
Get:26 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [2,400 kB]
Get:27 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [633 kB]
Get:28 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,434 kB]
Fetched 14.1 MB in 3s (4,769 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
63 packages can be upgraded. Run 'apt list --upgradable' to see them.
###Markdown
Install TensorFlow Serving
###Code
#!apt autoremove
!{SUDO_IF_NEEDED} apt-get install tensorflow-model-server
###Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
libnvidia-common-460
Use 'apt autoremove' to remove it.
The following NEW packages will be installed:
tensorflow-model-server
0 upgraded, 1 newly installed, 0 to remove and 63 not upgraded.
Need to get 347 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://storage.googleapis.com/tensorflow-serving-apt stable/tensorflow-model-server amd64 tensorflow-model-server all 2.6.0 [347 MB]
Fetched 347 MB in 5s (66.8 MB/s)
Selecting previously unselected package tensorflow-model-server.
(Reading database ... 155062 files and directories currently installed.)
Preparing to unpack .../tensorflow-model-server_2.6.0_all.deb ...
Unpacking tensorflow-model-server (2.6.0) ...
Setting up tensorflow-model-server (2.6.0) ...
###Markdown
Start running TensorFlow Serving
###Code
os.environ["MODEL_DIR"] = MODEL_DIR
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=digits_model \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
###Output
2021-11-02 10:34:41.196601: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 46151 microseconds.
2021-11-02 10:34:41.197083: I tensorflow_serving/servables/tensorflow/saved_model_warmup_util.cc:59] No warmup data file found at /tmp/1/assets.extra/tf_serving_warmup_requests
2021-11-02 10:34:41.197218: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: digits_model version: 1}
2021-11-02 10:34:41.197750: I tensorflow_serving/model_servers/server_core.cc:486] Finished adding/updating models
2021-11-02 10:34:41.197794: I tensorflow_serving/model_servers/server.cc:133] Using InsecureServerCredentials
2021-11-02 10:34:41.197806: I tensorflow_serving/model_servers/server.cc:383] Profiler service is enabled
2021-11-02 10:34:41.198223: I tensorflow_serving/model_servers/server.cc:409] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2021-11-02 10:34:41.198611: I tensorflow_serving/model_servers/server.cc:430] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 245] NET_LOG: Entering the event loop ...
###Markdown
Make REST requests
###Code
import json
import random
import requests
# docs_infra: no_execute
!pip install -q requests
headers = {"content-type": "application/json"}
rando = random.randint(0,len(test_images)-5)
data = json.dumps({"signature_name": "serving_default", "instances":test_images[rando:rando+5].tolist()})
json_response = requests.post('http://localhost:8501/v1/models/digits_model:predict', data=data, headers=headers)
predictions = json.loads(json_response.text)['predictions']
plt.figure(figsize=(10,15))
for i in range(5):
plt.subplot(1,5,i+1)
plt.imshow(test_images[rando+i].reshape(28,28), cmap = plt.cm.binary)
plt.axis('off')
color = 'green' if np.argmax(predictions[i]) == test_labels[rando+i] else 'red'
plt.title('Prediction: {}\n True Label: {}'.format(np.argmax(predictions[i]),
test_labels[rando+i]), color=color)
plt.show()
###Output
_____no_output_____ |
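###Markdown
Besides `:predict`, TensorFlow Serving's REST API also exposes a model status endpoint, which is useful for confirming that the servable loaded correctly. A short sketch, assuming the server started above is still running on port 8501:
###Code
# Query the model status endpoint of the running TensorFlow Serving instance.
status_response = requests.get('http://localhost:8501/v1/models/digits_model')
print(status_response.json())
###Output
_____no_output_____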
a-proof-time-series/1-data_exploration.ipynb | ###Markdown
Data exploration from dataset without domains and labels Selecting IDs from non-annotated dataset
###Code
df = pd.read_table("./processed/covid_data_without_levels_anonimized.csv", sep=',' , index_col=0)
df.head()
#number of unique patients
unique_id = df['MDN'].unique()
df['MDN'].nunique() #1290 unique ids
#group by unique patients
df_grouped_ind = df.groupby(['MDN']).count()
df_grouped_ind
df_filtered100 =df.groupby(['MDN']).filter(lambda x: len(x) > 100)
df_filtered100.nunique()
df_filtered500 =df.groupby(['MDN']).filter(lambda x: len(x) > 500)
df_filtered1000 =df.groupby(['MDN']).filter(lambda x: len(x) > 1000)
pd.crosstab(df_filtered1000['Notitiedatum'], df_filtered1000['MDN']).plot(title= 'Annotations frequency over time - over 1000 notes')
pd.crosstab(df_filtered500['Notitiedatum'], df_filtered500['MDN']).plot(legend=False, title= 'Annotations frequency over time - over 500 notes')
# group by patient and date to see the spread of notes within each patient
df.groupby(['MDN', 'Notitiedatum']).count()
# how many annotated notes per group
print(df_filtered100.annotated.value_counts())
print(df_filtered500.annotated.value_counts())
print(df_filtered1000.annotated.value_counts())
data = [{'uniqueID':df_filtered100['MDN'].nunique(), 'annotated': df_filtered100.annotated.value_counts()[1], 'not annotated': df_filtered100.annotated.value_counts()[0], 'total notes':df_filtered100.shape[0]},
        {'uniqueID':df_filtered500['MDN'].nunique(), 'annotated': df_filtered500.annotated.value_counts()[1], 'not annotated': df_filtered500.annotated.value_counts()[0], 'total notes':df_filtered500.shape[0]},
        {'uniqueID':df_filtered1000['MDN'].nunique(), 'annotated': df_filtered1000.annotated.value_counts()[1], 'not annotated': df_filtered1000.annotated.value_counts()[0], 'total notes':df_filtered1000.shape[0]}]
# Creates DataFrame.
df_summary = pd.DataFrame(data, index =['>100 per ID', '>500 per ID', '>1000 per ID'])
df_summary
df_filtered500['MDN'].unique()
###Output
_____no_output_____ |
DSN_KOWOPE.ipynb | ###Markdown
###Code
!pip install catboost
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.base import BaseEstimator, TransformerMixin
import xgboost
from catboost import CatBoostClassifier
from lightgbm import LGBMModel,LGBMClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from mlxtend.classifier import StackingClassifier
from sklearn.linear_model import LinearRegression
from sklearn import model_selection
from sklearn.model_selection import train_test_split, RandomizedSearchCV, StratifiedKFold
from sklearn.metrics import roc_auc_score
import os, gc, warnings
warnings.filterwarnings('ignore')
from google.colab import drive
drive.mount('/content/drive')
Train = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Train.csv')
Test = pd.read_csv('/content/drive/My Drive/Colab Notebooks/Test.csv')
def fill_arbitary(col):
for i in col:
b = -999999
Train[i].fillna(b,inplace=True)
Test[i].fillna(b,inplace= True)
def model_predict(estimator,train,label,test, estimator_name):
mean_train = []
mean_test_val = []
test_pred = np.zeros(test.shape[0])
val_pred = np.zeros(train.shape[0])
for count, (train_index,test_index) in enumerate(skf.split(train,label)):
x_train,x_test = train.iloc[train_index],train.iloc[test_index]
y_train,y_test = label.iloc[train_index],label.iloc[test_index]
print(f'========================Fold{count +1}==========================')
estimator.fit(x_train, y_train)
train_predict = estimator.predict_proba(x_train)[:,1]
test_predict = estimator.predict_proba(x_test)[:,1]
val_pred[test_index] = test_predict
test_pred+= estimator.predict_proba(test)[:,1]
print('\nValidation scores', roc_auc_score(y_test,test_predict))
print('\nTraining scores', roc_auc_score(y_train,train_predict))
mean_train.append(roc_auc_score(y_train, train_predict))
mean_test_val.append(roc_auc_score(y_test,test_predict))
print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val))
print('Average Training ROC score for 10 folds split:',np.mean(mean_train))
print('standard Deviation for 10 folds split:',np.std(mean_test_val))
return val_pred, test_pred, estimator_name
def lgbm_predict(estimator,train,label,test,estimator_name):
mean_train = []
mean_test_val = []
test_pred = np.zeros(test.shape[0])
val_pred = np.zeros(train.shape[0])
for count, (train_index,test_index) in enumerate(skf.split(train,label)):
x_train,x_test = train.iloc[train_index],train.iloc[test_index]
y_train,y_test = label.iloc[train_index],label.iloc[test_index]
print(f'========================Fold{count +1}==========================')
estimator.fit(x_train,y_train,eval_set=[(x_test,y_test)],early_stopping_rounds=200,
verbose=250)
train_predict = estimator.predict_proba(x_train, num_iteration = estimator.best_iteration_)[:,1]
test_predict = estimator.predict_proba(x_test, num_iteration = estimator.best_iteration_)[:,1]
val_pred[test_index] = test_predict
test_pred+= estimator.predict_proba(test, num_iteration = estimator.best_iteration_)[:,1]
print('\nValidation scores', roc_auc_score(y_test,test_predict))
print('\nTraining scores', roc_auc_score(y_train,train_predict))
mean_train.append(roc_auc_score(y_train, train_predict))
mean_test_val.append(roc_auc_score(y_test,test_predict))
print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val))
print('Average Training ROC score for 10 folds split:',np.mean(mean_train))
print('standard Deviation for 10 folds split:',np.std(mean_test_val))
return val_pred, test_pred, estimator_name
def xgb_predict(estimator,train,label,test,estimator_name):
mean_train = []
mean_test_val = []
test_pred = np.zeros(test.shape[0])
val_pred = np.zeros(train.shape[0])
for count, (train_index,test_index) in enumerate(skf.split(train,label)):
x_train,x_test = train.iloc[train_index],train.iloc[test_index]
y_train,y_test = label.iloc[train_index],label.iloc[test_index]
print(f'========================Fold{count +1}==========================')
estimator.fit(x_train, y_train, early_stopping_rounds = 200, eval_metric="auc",
eval_set=[(x_test, y_test)],verbose=250)
train_predict = estimator.predict_proba(x_train, ntree_limit = estimator.get_booster().best_ntree_limit)[:,1]
test_predict = estimator.predict_proba(x_test, ntree_limit = estimator.get_booster().best_ntree_limit)[:,1]
val_pred[test_index] = test_predict
test_pred+= estimator.predict_proba(test, ntree_limit = estimator.get_booster().best_ntree_limit)[:,1]
print('\nTesting scores', roc_auc_score(y_test,test_predict))
print('\nTraining scores', roc_auc_score(y_train,train_predict))
mean_train.append(roc_auc_score(y_train, train_predict))
mean_test_val.append(roc_auc_score(y_test,test_predict))
print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val))
print('Average Training ROC score for 10 folds split:',np.mean(mean_train))
print('standard Deviation for 10 folds split:',np.std(mean_test_val))
return val_pred, test_pred, estimator_name
def cat_predict(estimator,train,label,test,estimator_name):
mean_train = []
mean_test_val = []
test_pred = np.zeros(test.shape[0])
val_pred = np.zeros(train.shape[0])
for count, (train_index,test_index) in enumerate(skf.split(train,label)):
x_train,x_test = train.iloc[train_index],train.iloc[test_index]
y_train,y_test = label.iloc[train_index],label.iloc[test_index]
print(f'========================Fold{count +1}==========================')
estimator.fit(x_train,y_train,eval_set=[(x_test,y_test)],early_stopping_rounds=200,
verbose=250,use_best_model=True)
train_predict = estimator.predict_proba(x_train)[:,1]
test_predict = estimator.predict_proba(x_test)[:,1]
val_pred[test_index] = test_predict
test_pred+= estimator.predict_proba(test)[:,1]
print('\nTesting scores', roc_auc_score(y_test,test_predict))
print('\nTraining scores', roc_auc_score(y_train,train_predict))
mean_train.append(roc_auc_score(y_train, train_predict))
mean_test_val.append(roc_auc_score(y_test,test_predict))
print('Average Testing ROC score for 10 folds split:',np.mean(mean_test_val))
print('Average Training ROC score for 10 folds split:',np.mean(mean_train))
print('standard Deviation for 10 folds split:',np.std(mean_test_val))
return val_pred, test_pred, estimator_name
class TargetEncoder(BaseEstimator, TransformerMixin):
"""Target encoder.
Replaces categorical column(s) with the mean target value for
each category.
"""
def __init__(self, cols=None):
"""Target encoder
Parameters
----------
cols : list of str
Columns to target encode. Default is to target
encode all categorical columns in the DataFrame.
"""
if isinstance(cols, str):
self.cols = [cols]
else:
self.cols = cols
def fit(self, X, y):
"""Fit target encoder to X and y
Parameters
----------
X : pandas DataFrame, shape [n_samples, n_columns]
DataFrame containing columns to encode
y : pandas Series, shape = [n_samples]
Target values.
Returns
-------
self : encoder
Returns self.
"""
# Encode all categorical cols by default
if self.cols is None:
self.cols = [col for col in X
if str(X[col].dtype)=='object']
# Check columns are in X
for col in self.cols:
if col not in X:
raise ValueError('Column \''+col+'\' not in X')
# Encode each element of each column
self.maps = dict() #dict to store map for each column
for col in self.cols:
tmap = dict()
uniques = X[col].unique()
for unique in uniques:
tmap[unique] = y[X[col]==unique].mean()
self.maps[col] = tmap
return self
def transform(self, X, y=None):
"""Perform the target encoding transformation.
Parameters
----------
X : pandas DataFrame, shape [n_samples, n_columns]
DataFrame containing columns to encode
Returns
-------
pandas DataFrame
Input DataFrame with transformed columns
"""
Xo = X.copy()
for col, tmap in self.maps.items():
vals = np.full(X.shape[0], np.nan)
for val, mean_target in tmap.items():
vals[X[col]==val] = mean_target
Xo[col] = vals
return Xo
def fit_transform(self, X, y=None):
"""Fit and transform the data via target encoding.
Parameters
----------
X : pandas DataFrame, shape [n_samples, n_columns]
DataFrame containing columns to encode
y : pandas Series, shape = [n_samples]
Target values (required!).
Returns
-------
pandas DataFrame
Input DataFrame with transformed columns
"""
return self.fit(X, y).transform(X, y)
fill_arbitary(Train.drop(["Applicant_ID","default_status"],axis=1))
Train.default_status.replace({"yes":1,"no":0},inplace=True)
encoder = TargetEncoder()
a = pd.DataFrame(Train.form_field47)
b = pd.DataFrame(Test.form_field47)
X_target_encoded = encoder.fit(a,Train["default_status"])
Train = X_target_encoded.transform(Train)
Test = X_target_encoded.transform(Test)
Train = pd.get_dummies(Train, columns=['form_field47'])
Test = pd.get_dummies(Test, columns=['form_field47'])
train = Train.drop(["Applicant_ID","default_status"],1)
target = Train["default_status"]
test = Test.drop(["Applicant_ID"],1)
skf = StratifiedKFold(n_splits = 10,shuffle=True,random_state=199)
###Output
_____no_output_____
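###Markdown
Before fitting the models, a tiny toy example makes it clear what the `TargetEncoder` defined above does: each category is replaced by the mean of the target over the rows in that category. The values below are made up purely for illustration.
###Code
# Toy illustration of the TargetEncoder defined above (hypothetical data).
toy_X = pd.DataFrame({'form_field47': ['A', 'A', 'B', 'B', 'B']})
toy_y = pd.Series([1, 0, 1, 1, 0])
# 'A' -> mean(1, 0) = 0.5 ; 'B' -> mean(1, 1, 0) = 0.667
TargetEncoder(cols=['form_field47']).fit_transform(toy_X, toy_y)
###Output
_____no_output_____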
###Markdown
XGBOOST
###Code
clf1=xgboost.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.3, gamma=0.0,
learning_rate=0.01, max_delta_step=0, max_depth=6,
min_child_weight=5, missing=None, n_estimators=700, n_jobs=1,
nthread=None, objective='binary:logistic', random_state=40,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=0.7, verbosity=1)
XGB_train, XGB_test, XGB_name = xgb_predict(clf1, train, target, test,'XGB')
###Output
========================Fold1==========================
[0] validation_0-auc:0.783569
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.831124
[500] validation_0-auc:0.833935
[699] validation_0-auc:0.834871
Testing scores 0.8348708571412803
Training scores 0.8726294761950564
========================Fold2==========================
[0] validation_0-auc:0.790504
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.833055
[500] validation_0-auc:0.835176
[699] validation_0-auc:0.835863
Testing scores 0.8358631028608515
Training scores 0.8723105346951949
========================Fold3==========================
[0] validation_0-auc:0.790541
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.835962
[500] validation_0-auc:0.839742
[699] validation_0-auc:0.841372
Testing scores 0.841372282901621
Training scores 0.8717170006294914
========================Fold4==========================
[0] validation_0-auc:0.802071
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.842515
[500] validation_0-auc:0.845995
[699] validation_0-auc:0.847387
Testing scores 0.8473930222686983
Training scores 0.8713779819956291
========================Fold5==========================
[0] validation_0-auc:0.791245
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.834076
[500] validation_0-auc:0.838032
[699] validation_0-auc:0.839907
Testing scores 0.8399071121406688
Training scores 0.8713854048834265
========================Fold6==========================
[0] validation_0-auc:0.792897
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.842445
[500] validation_0-auc:0.844653
[699] validation_0-auc:0.845389
Testing scores 0.845461397155159
Training scores 0.8704780831967167
========================Fold7==========================
[0] validation_0-auc:0.774624
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.82707
[500] validation_0-auc:0.832472
[699] validation_0-auc:0.834565
Testing scores 0.8345746529453788
Training scores 0.8721230336554464
========================Fold8==========================
[0] validation_0-auc:0.789074
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.829003
[500] validation_0-auc:0.832207
[699] validation_0-auc:0.833425
Testing scores 0.8334248147157227
Training scores 0.8723327496975007
========================Fold9==========================
[0] validation_0-auc:0.797298
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.836494
[500] validation_0-auc:0.839825
[699] validation_0-auc:0.841088
Testing scores 0.8410876159492046
Training scores 0.8713398505796698
========================Fold10==========================
[0] validation_0-auc:0.795566
Will train until validation_0-auc hasn't improved in 200 rounds.
[250] validation_0-auc:0.838427
[500] validation_0-auc:0.842228
[699] validation_0-auc:0.843474
Testing scores 0.8434780555011572
Training scores 0.8713463009901032
Average Testing ROC score for 10 folds split: 0.8397432913579742
Average Training ROC score for 10 folds split: 0.8717040416518236
standard Deviation for 10 folds split: 0.004637733147910091
###Markdown
CATBOOST
###Code
clf2=CatBoostClassifier(border_count=200, max_depth=7, n_estimators=5000, l2_leaf_reg=10, learning_rate=0.03,
bootstrap_type = 'Bernoulli', silent=False, use_best_model=False, eval_metric='AUC', random_seed=34)
CAT_train, CAT_test, CAT_name = cat_predict(clf2, train, target, test,'CAT')
###Output
========================Fold1==========================
0: test: 0.7900518 best: 0.7900518 (0) total: 56.8ms remaining: 4m 43s
250: test: 0.8335453 best: 0.8335507 (249) total: 11s remaining: 3m 27s
500: test: 0.8353724 best: 0.8353724 (500) total: 21.7s remaining: 3m 15s
750: test: 0.8353617 best: 0.8355416 (608) total: 32.5s remaining: 3m 3s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8355416104
bestIteration = 608
Shrink model to first 609 iterations.
Testing scores 0.8355416104184248
Training scores 0.8729225301874605
========================Fold2==========================
0: test: 0.7961931 best: 0.7961931 (0) total: 48.7ms remaining: 4m 3s
250: test: 0.8353298 best: 0.8353809 (244) total: 11s remaining: 3m 28s
500: test: 0.8370820 best: 0.8371787 (490) total: 21.9s remaining: 3m 16s
750: test: 0.8374007 best: 0.8374126 (676) total: 32.9s remaining: 3m 6s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8374512479
bestIteration = 754
Shrink model to first 755 iterations.
Testing scores 0.8374512479305218
Training scores 0.8806035948353764
========================Fold3==========================
0: test: 0.7976162 best: 0.7976162 (0) total: 51.3ms remaining: 4m 16s
250: test: 0.8387629 best: 0.8387629 (250) total: 11.5s remaining: 3m 37s
500: test: 0.8420867 best: 0.8420881 (499) total: 22.5s remaining: 3m 22s
750: test: 0.8430277 best: 0.8430527 (744) total: 33.8s remaining: 3m 11s
1000: test: 0.8432417 best: 0.8432502 (997) total: 45.2s remaining: 3m
1250: test: 0.8431715 best: 0.8433837 (1192) total: 56.7s remaining: 2m 49s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8433836804
bestIteration = 1192
Shrink model to first 1193 iterations.
Testing scores 0.8433836803606234
Training scores 0.8986255127859466
========================Fold4==========================
0: test: 0.8023044 best: 0.8023044 (0) total: 62.7ms remaining: 5m 13s
250: test: 0.8452849 best: 0.8452849 (250) total: 13.2s remaining: 4m 9s
500: test: 0.8474455 best: 0.8476055 (484) total: 24.6s remaining: 3m 40s
750: test: 0.8481216 best: 0.8481824 (739) total: 35.7s remaining: 3m 22s
1000: test: 0.8486150 best: 0.8486481 (966) total: 46.5s remaining: 3m 5s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8486481191
bestIteration = 966
Shrink model to first 967 iterations.
Testing scores 0.8486481191053611
Training scores 0.8890688236106942
========================Fold5==========================
0: test: 0.7976873 best: 0.7976873 (0) total: 45ms remaining: 3m 44s
250: test: 0.8382213 best: 0.8382213 (250) total: 11.1s remaining: 3m 30s
500: test: 0.8406750 best: 0.8407245 (496) total: 21.9s remaining: 3m 16s
750: test: 0.8410173 best: 0.8410660 (691) total: 32.7s remaining: 3m 5s
1000: test: 0.8410251 best: 0.8411415 (817) total: 43.7s remaining: 2m 54s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.841141512
bestIteration = 817
Shrink model to first 818 iterations.
Testing scores 0.8411415120389779
Training scores 0.8819964517276514
========================Fold6==========================
0: test: 0.8022439 best: 0.8022439 (0) total: 50.3ms remaining: 4m 11s
250: test: 0.8456641 best: 0.8456641 (250) total: 11.1s remaining: 3m 30s
500: test: 0.8473241 best: 0.8473389 (498) total: 22.5s remaining: 3m 22s
750: test: 0.8477716 best: 0.8478324 (746) total: 33.4s remaining: 3m 9s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8478324429
bestIteration = 746
Shrink model to first 747 iterations.
Testing scores 0.8478324428838977
Training scores 0.8783355665768353
========================Fold7==========================
0: test: 0.7872877 best: 0.7872877 (0) total: 49.3ms remaining: 4m 6s
250: test: 0.8321991 best: 0.8321991 (250) total: 11.1s remaining: 3m 30s
500: test: 0.8361658 best: 0.8361658 (500) total: 22s remaining: 3m 17s
750: test: 0.8377947 best: 0.8378594 (740) total: 32.9s remaining: 3m 6s
1000: test: 0.8386791 best: 0.8387142 (996) total: 43.8s remaining: 2m 55s
1250: test: 0.8390111 best: 0.8392390 (1167) total: 54.7s remaining: 2m 44s
1500: test: 0.8392478 best: 0.8393998 (1476) total: 1m 5s remaining: 2m 33s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8395668816
bestIteration = 1548
Shrink model to first 1549 iterations.
Testing scores 0.83956688162493
Training scores 0.9126801489200083
========================Fold8==========================
0: test: 0.7834297 best: 0.7834297 (0) total: 47.6ms remaining: 3m 57s
250: test: 0.8312266 best: 0.8312266 (250) total: 11.2s remaining: 3m 32s
500: test: 0.8339722 best: 0.8340069 (490) total: 22.2s remaining: 3m 19s
750: test: 0.8345778 best: 0.8346993 (727) total: 33.2s remaining: 3m 7s
1000: test: 0.8351939 best: 0.8353133 (930) total: 44.2s remaining: 2m 56s
1250: test: 0.8349874 best: 0.8353923 (1075) total: 55.2s remaining: 2m 45s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8353922965
bestIteration = 1075
Shrink model to first 1076 iterations.
Testing scores 0.8353922965320741
Training scores 0.8949412147380144
========================Fold9==========================
0: test: 0.8013960 best: 0.8013960 (0) total: 49.2ms remaining: 4m 5s
250: test: 0.8399025 best: 0.8399025 (250) total: 11.1s remaining: 3m 30s
500: test: 0.8414454 best: 0.8414518 (477) total: 22s remaining: 3m 17s
750: test: 0.8420560 best: 0.8420560 (750) total: 33.1s remaining: 3m 7s
1000: test: 0.8418872 best: 0.8422984 (946) total: 45.2s remaining: 3m
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8422983939
bestIteration = 946
Shrink model to first 947 iterations.
Testing scores 0.8422983938811368
Training scores 0.888883356485162
========================Fold10==========================
0: test: 0.7908759 best: 0.7908759 (0) total: 52.1ms remaining: 4m 20s
250: test: 0.8400487 best: 0.8400499 (248) total: 12.6s remaining: 3m 58s
500: test: 0.8424217 best: 0.8424827 (496) total: 23.5s remaining: 3m 31s
750: test: 0.8429035 best: 0.8429035 (750) total: 34.6s remaining: 3m 15s
1000: test: 0.8431591 best: 0.8432622 (954) total: 45.7s remaining: 3m 2s
1250: test: 0.8433648 best: 0.8435077 (1208) total: 56.8s remaining: 2m 50s
Stopped by overfitting detector (200 iterations wait)
bestTest = 0.8435077065
bestIteration = 1208
Shrink model to first 1209 iterations.
Testing scores 0.8435077065019819
Training scores 0.9005667464724361
Average Testing ROC score for 10 folds split: 0.841476389127793
Average Training ROC score for 10 folds split: 0.8898623946339586
standard Deviation for 10 folds split: 0.004387148691646989
###Markdown
LGBM
###Code
clf3=LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=0.7,
importance_type='split', learning_rate=0.05, max_depth=3,
metric='auc', min_child_samples=15, min_child_weight=0.001,
min_split_gain=0.0, n_estimators=5000, n_jobs=-1, num_leaves=300,
num_threads=15, num_trees=500, objective=None, random_state=29,
reg_alpha=4, reg_lambda=4, silent=True, subsample=0.7,
subsample_for_bin=200000, subsample_freq=3)
LGBM_train, LGBM_test, LGBM_name = lgbm_predict(clf3, train, target, test,'LGBM')
###Output
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.832994
[500] valid_0's auc: 0.834734
Did not meet early stopping. Best iteration is:
[477] valid_0's auc: 0.834758
Validation scores 0.8347582313017391
Training scores 0.8565784467131087
========================Fold2==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.83602
[500] valid_0's auc: 0.837135
Did not meet early stopping. Best iteration is:
[485] valid_0's auc: 0.837203
Validation scores 0.8372030571447642
Training scores 0.8563558260698734
========================Fold3==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.838294
[500] valid_0's auc: 0.841396
Did not meet early stopping. Best iteration is:
[500] valid_0's auc: 0.841396
Validation scores 0.8413960843807278
Training scores 0.8560760244973092
========================Fold4==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.844361
[500] valid_0's auc: 0.846562
Did not meet early stopping. Best iteration is:
[493] valid_0's auc: 0.846591
Validation scores 0.8465908434330081
Training scores 0.8553614805435277
========================Fold5==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.837805
[500] valid_0's auc: 0.840685
Did not meet early stopping. Best iteration is:
[500] valid_0's auc: 0.840685
Validation scores 0.8406849720737937
Training scores 0.8563553449764859
========================Fold6==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.843998
[500] valid_0's auc: 0.845937
Did not meet early stopping. Best iteration is:
[500] valid_0's auc: 0.845937
Validation scores 0.8459365027265129
Training scores 0.856165139754816
========================Fold7==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.831257
[500] valid_0's auc: 0.834918
Did not meet early stopping. Best iteration is:
[486] valid_0's auc: 0.835076
Validation scores 0.8350763065058433
Training scores 0.8560600853475305
========================Fold8==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.831024
[500] valid_0's auc: 0.834307
Did not meet early stopping. Best iteration is:
[498] valid_0's auc: 0.834325
Validation scores 0.8343252914762337
Training scores 0.8571583901326212
========================Fold9==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.839066
[500] valid_0's auc: 0.841259
Did not meet early stopping. Best iteration is:
[495] valid_0's auc: 0.841281
Validation scores 0.8412807784284143
Training scores 0.8560935529720934
========================Fold10==========================
Training until validation scores don't improve for 200 rounds.
[250] valid_0's auc: 0.840326
[500] valid_0's auc: 0.843027
Did not meet early stopping. Best iteration is:
[465] valid_0's auc: 0.843111
Validation scores 0.8431108657816418
Training scores 0.8549488062963058
Average Testing ROC score for 10 folds split: 0.8400362933252679
Average Training ROC score for 10 folds split: 0.8561153097303672
standard Deviation for 10 folds split: 0.004291250752815905
###Markdown
RANDOM FOREST
###Code
clf4=RandomForestClassifier(bootstrap=False, ccp_alpha=0.0, class_weight=None,
criterion='gini', max_depth=10, max_features='auto',
max_leaf_nodes=None, max_samples=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=20, min_samples_split=20,
min_weight_fraction_leaf=0.0, n_estimators=400,
n_jobs=-1, oob_score=False, random_state=6, verbose=0,
warm_start=False)
RF_train, RF_test, RF_name = model_predict(clf4, train, target, test,'RF')
###Output
========================Fold1==========================
Validation scores 0.8306279847787816
Training scores 0.8861992803677387
========================Fold2==========================
Validation scores 0.8313701424932463
Training scores 0.8861823952670765
========================Fold3==========================
Validation scores 0.8340843734838415
Training scores 0.8856023701333954
========================Fold4==========================
Validation scores 0.8418490886189433
Training scores 0.8852444792277329
========================Fold5==========================
Validation scores 0.832393951043807
Training scores 0.8858514945519453
========================Fold6==========================
Validation scores 0.8423337337367709
Training scores 0.8851825165239982
========================Fold7==========================
Validation scores 0.8265494027047229
Training scores 0.8858131740036257
========================Fold8==========================
Validation scores 0.8271567310530104
Training scores 0.885895204916245
========================Fold9==========================
Validation scores 0.835906189749856
Training scores 0.885141600059136
========================Fold10==========================
Validation scores 0.836353368215782
Training scores 0.8851824388210343
Average Testing ROC score for 10 folds split: 0.8338624965878761
Average Training ROC score for 10 folds split: 0.8856294953871927
standard Deviation for 10 folds split: 0.005130857811429926
###Markdown
GBM
###Code
clf5 = GradientBoostingClassifier(ccp_alpha=0.0, criterion='friedman_mse', init=None,
learning_rate=0.9, loss='deviance', max_depth=2,
max_features=1, max_leaf_nodes=2,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=20, min_samples_split=20,
min_weight_fraction_leaf=0.1, n_estimators=200,
n_iter_no_change=None, presort='deprecated',
random_state=67, subsample=0.7, tol=0.0001,
validation_fraction=0.1, verbose=0,
warm_start=False)
GBM_train, GBM_test, GBM_name = model_predict(clf5, train, target, test,'GBM')
Train_stack3 = pd.DataFrame(XGB_train)
Train_stack3 = pd.concat([Train_stack3,pd.DataFrame(CAT_train),pd.DataFrame(LGBM_train),
                       pd.DataFrame(RF_train),pd.DataFrame(GBM_train)], axis=1)
Test_stack3 = pd.DataFrame(XGB_test)
Test_stack3 = pd.concat([Test_stack3,pd.DataFrame(CAT_test),pd.DataFrame(LGBM_test),
                      pd.DataFrame(RF_test),pd.DataFrame(GBM_test)], axis=1)
Test_stack3.columns=[XGB_name, CAT_name, LGBM_name, RF_name, GBM_name]
Train_stack3.columns=[XGB_name, CAT_name, LGBM_name, RF_name, GBM_name]
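# Assumption: the *_predict helpers accumulate test-set predictions across the 10 folds,
# so dividing by 10 below averages them into one prediction per row.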
Test_stack3 = Test_stack3/10
Train_stack3
Test_stack3
meta_estimator = LinearRegression()
final_prediction = meta_estimator.fit(Train_stack3, target).predict(Test_stack3)
Train_stack3.corr()
final_prediction
# Create a data frame with two columns: Applicant_ID & default_status. default_status contains your predictions
Applicant_ID = np.array(Test['Applicant_ID'])
Solution = pd.DataFrame(final_prediction, Applicant_ID, columns = ["default_status"])
print(Solution)
# Write your solution to a csv file with the name Solution.csv
Solution.to_csv("Zindi Credit Project37.csv", index_label = ["Applicant_ID"])
###Output
_____no_output_____ |
c7_classification_performance_measures/03_implement_confusion_matrix_precision_and_recall.ipynb | ###Markdown
Implementing the confusion matrix, precision, and recall
###Code
import numpy as np
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target.copy()
# Turn the dataset into a heavily skewed (imbalanced) one:
# split the handwritten digits into "9" vs "not 9"; the class of interest is the digit 9
y[digits.target==9] = 1
y[digits.target!=9] = 0
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
log_reg.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Although 0.975555555551 looks very high, our data is heavily skewed: even if we predicted every sample as "not 9" we would still get an accuracy of roughly 0.9.
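As a quick sanity check, here is a minimal sketch of that all-"not 9" baseline, using the `y_test` split created above (an added cell, not part of the original notebook):
###Code
import numpy as np

# Baseline: predict the majority class (0, i.e. "not 9") for every test sample
baseline_predict = np.zeros_like(y_test)
print("All-'not 9' baseline accuracy:", np.mean(baseline_predict == y_test))
###Output
_____no_output_____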
###Code
y_predict = log_reg.predict(X_test)
###Output
_____no_output_____
###Markdown
Computing the TP, FP, FN, and TN values
###Code
def TN(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
return np.sum((y_true == 0) & (y_predict == 0))
TN(y_test, y_predict)
def FP(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
return np.sum((y_true == 0) & (y_predict == 1))
FP(y_test, y_predict)
def FN(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
return np.sum((y_true == 1) & (y_predict == 0))
FN(y_test, y_predict)
def TP(y_true, y_predict):
    assert len(y_true) == len(y_predict), 'y_true and y_predict must contain the same number of samples'
return np.sum((y_true == 1) & (y_predict == 1))
TP(y_test, y_predict)
def confusion_matrix(y_true, y_predict):
"""返回一个2✖️2的混淆矩阵"""
return np.array([
[TN(y_true, y_predict), FP(y_true, y_predict)],
[FN(y_true, y_predict), TP(y_true, y_predict)]
])
confusion_matrix(y_test, y_predict)
###Output
_____no_output_____
###Markdown
Computing precision and recall from the confusion matrix
###Code
def precision_score(y_true, y_predict):
"""求精准率"""
tp = TP(y_true, y_predict)
fp = FP(y_true, y_predict)
try:
return tp / (tp + fp)
    except:  # return 0.0 when the denominator is 0
return 0.0
# precision
precision_score(y_test, y_predict)
def recall_score(y_true, y_predict):
"""求召回率"""
tp = TP(y_true, y_predict)
fn = FN(y_true, y_predict)
try:
return tp / (tp + fn)
except:
return 0.0
# recall
recall_score(y_test, y_predict)
###Output
_____no_output_____
###Markdown
The confusion matrix, precision, and recall in scikit-learn Confusion matrix
###Code
from sklearn import metrics as classification  # sklearn.metrics.classification is a private module that was removed; use the public API
classification.confusion_matrix(y_test, y_predict)
###Output
_____no_output_____
###Markdown
Precision
###Code
classification.precision_score(y_test, y_predict)
###Output
_____no_output_____
###Markdown
Recall
###Code
classification.recall_score(y_test, y_predict)
###Output
_____no_output_____ |
galaxy_project/Ga) Two star test implementation.ipynb | ###Markdown
Two star test
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from ipywidgets import interact, interactive, fixed  # IPython.html.widgets was removed; ipywidgets is its replacement
from plotting_function import plotter
from initial_velocities import velocities_m, velocities_S
from DE_solver import derivs, equationsolver
###Output
_____no_output_____
###Markdown
Defining some test values for a simple two star system to check if everything was working correctly:
###Code
max_time_test = 1
time_step_test = 80
M_test = 1e11
S_test = 1e11
S_y_test = 70
S_x_test = -.01*S_y_test**2+25
m_x_test_1 = -3.53
m_y_test_1 = 3.53
m_x_test_2 = -3.53
m_y_test_2 = -3.53
vxS_test = velocities_S(M_test,S_test,S_x_test,S_y_test)[0]
vyS_test = velocities_S(M_test,S_test,S_x_test,S_y_test)[1]
vxm_test_1 = velocities_m(M_test,m_x_test_1,m_y_test_1)[0]
vym_test_1 = velocities_m(M_test,m_x_test_1,m_y_test_1)[1]
vxm_test_2 = velocities_m(M_test,m_x_test_2,m_y_test_2)[0]
vym_test_2 = velocities_m(M_test,m_x_test_2,m_y_test_2)[1]
ic_test = np.array([S_x_test,S_y_test,vxS_test,vyS_test,m_x_test_1,m_y_test_1,vxm_test_1,vym_test_1,
m_x_test_2,m_y_test_2,vxm_test_2,vym_test_2])
###Output
_____no_output_____
###Markdown
Using equationsolver to solve the DE's
###Code
sol_test = equationsolver(ic_test,max_time_test,time_step_test,M_test,S_test)
###Output
_____no_output_____
###Markdown
Saving results and initial conditions to disk
###Code
np.savez('two_star_test_sol+ic.npz',sol_test,ic_test)
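# Added sketch: reading the arrays back; np.savez stores unnamed arrays under 'arr_0', 'arr_1', ...
loaded = np.load('two_star_test_sol+ic.npz')
sol_loaded, ic_loaded = loaded['arr_0'], loaded['arr_1']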
###Output
_____no_output_____ |
experiments/java_parsing.ipynb | ###Markdown
Experiments in splitting Java code
###Code
import regex
def split_methods(code):
"""Parse Java files into separate methods
:param code: Java code to parse.
:rtype: map
"""
pattern = r'(?:(?:public|private|static|protected)\s+)*\s*[\w\<\>\[\]]+\s+\w+\s*\([^{]+({(?:[^{}]+\/\*.*?\*\/|[^{}]+\/\/.*?$|[^{}]+|(?1))*+})'
scanner = regex.finditer(pattern, code, regex.MULTILINE)
return map(lambda match: match.group(0), scanner)
file = open("experiments/fixtures/forest-fire.java", "r")
code = file.read()
file.close()
methods = split_methods(code)
for i, method in enumerate(methods):
print("\n\nFunction {}\n--".format(i))
print(method)
###Output
Function 0
--
private static List<String> process(List<String> land){
List<String> newLand = new LinkedList<String>();
for(int i = 0; i < land.size(); i++){
String rowAbove, thisRow = land.get(i), rowBelow;
if(i == 0){//first row
rowAbove = null;
rowBelow = land.get(i + 1);
}else if(i == land.size() - 1){//last row
rowBelow = null;
rowAbove = land.get(i - 1);
}else{//middle
rowBelow = land.get(i + 1);
rowAbove = land.get(i - 1);
}
newLand.add(processRows(rowAbove, thisRow, rowBelow));
}
return newLand;
}
Function 1
--
private static String processRows(String rowAbove, String thisRow,
String rowBelow){
String newRow = "";
for(int i = 0; i < thisRow.length();i++){
switch(thisRow.charAt(i)){
case BURNING:
newRow+= EMPTY;
break;
case EMPTY:
newRow+= Math.random() < P ? TREE : EMPTY;
break;
case TREE:
String neighbors = "";
if(i == 0){//first char
neighbors+= rowAbove == null ? "" : rowAbove.substring(i, i + 2);
neighbors+= thisRow.charAt(i + 1);
neighbors+= rowBelow == null ? "" : rowBelow.substring(i, i + 2);
if(neighbors.contains(Character.toString(BURNING))){
newRow+= BURNING;
break;
}
}else if(i == thisRow.length() - 1){//last char
neighbors+= rowAbove == null ? "" : rowAbove.substring(i - 1, i + 1);
neighbors+= thisRow.charAt(i - 1);
neighbors+= rowBelow == null ? "" : rowBelow.substring(i - 1, i + 1);
if(neighbors.contains(Character.toString(BURNING))){
newRow+= BURNING;
break;
}
}else{//middle
neighbors+= rowAbove == null ? "" : rowAbove.substring(i - 1, i + 2);
neighbors+= thisRow.charAt(i + 1);
neighbors+= thisRow.charAt(i - 1);
neighbors+= rowBelow == null ? "" : rowBelow.substring(i - 1, i + 2);
if(neighbors.contains(Character.toString(BURNING))){
newRow+= BURNING;
break;
}
}
newRow+= Math.random() < F ? BURNING : TREE;
}
}
return newRow;
}
Function 2
--
public static List<String> populate(int width, int height){
List<String> land = new LinkedList<String>();
for(;height > 0; height--){//height is just a copy anyway
StringBuilder line = new StringBuilder(width);
for(int i = width; i > 0; i--){
line.append((Math.random() < TREE_PROB) ? TREE : EMPTY);
}
land.add(line.toString());
}
return land;
}
Function 3
--
public static void processN(List<String> land, int n){
for(int i = 0;i < n; i++){
land = process(land);
}
}
Function 4
--
public static void processNPrint(List<String> land, int n){
for(int i = 0;i < n; i++){
land = process(land);
print(land);
}
}
Function 5
--
public static void print(List<String> land){
for(String row: land){
System.out.println(row);
}
System.out.println();
}
Function 6
--
public static void main(String[] args){
List<String> land = Arrays.asList(".TTT.T.T.TTTT.T",
"T.T.T.TT..T.T..",
"TT.TTTT...T.TT.",
"TTT..TTTTT.T..T",
".T.TTT....TT.TT",
"...T..TTT.TT.T.",
".TT.TT...TT..TT",
".TT.T.T..T.T.T.",
"..TTT.TT.T..T..",
".T....T.....TTT",
"T..TTT..T..T...",
"TTT....TTTTTT.T",
"......TwTTT...T",
"..T....TTTTTTTT",
".T.T.T....TT...");
print(land);
processNPrint(land, 10);
System.out.println("Random land test:");
land = populate(10, 10);
print(land);
processNPrint(land, 10);
}
|
8-Labs/Lab03/dev_src/Lab3.ipynb | ###Markdown
**Download** (right-click, save target as ...) this page as a jupyterlab notebook from:[Lab3](https://atomickitty.ddns.net:8000/user/sensei/files/engr-1330-webroot/engr-1330-webbook/ctds-psuedocourse/docs/8-Labs/Lab2/Lab3_Dev.ipynb?_xsrf=2%7C1b4d47c3%7C0c3aca0c53606a3f4b71c448b09296ae%7C1623531240)___ Laboratory 3: Structures and Conditions.
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output
DESKTOP-EH6HD63
desktop-eh6hd63\farha
C:\Users\Farha\Anaconda3\python.exe
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
###Markdown
Full name: R: Title of the notebook: Date: ___  Data Structures: List (Array) A list is a collection of data that are somehow related. It is a convenient way to refer to a collection of similar things by a single name, and to use an index (like a subscript in math) to identify a particular item. Consider the "math-like" variable $x$ below:\begin{gather}x_0= 7 \\x_1= 11 \\x_2= 5 \\x_3= 9 \\x_4= 13 \\ \ldots \\x_N= 223\end{gather} The variable name is $x$ and the subscripts correspond to different values. Thus the `value` of the variable named $x$ associated with subscript $3$ is the number $9$. The figure below is a visual representation of the concept that treats a variable as a collection of cells. In the figure, the variable name is `MyList`, the subscripts are replaced by an index which identifies which cell is being referenced. The value is the cell content at the particular index. So in the figure the value of `MyList` at Index = 3 is the number 9. In engineering and data science we use lists a lot - we often call them vectors, arrays, matrices and such, but they are ultimately just lists. To declare a list you can write the list name and assign it values. The square brackets are used to identify that the variable is a list. Like: MyList = [7,11,5,9,13,66,99,223] One can also declare a null list and use the `append()` method to fill it as needed. MyOtherList = [ ] Python indices start at ZERO. A lot of other languages start at ONE. It's just the convention. The first element in a list has an index of 0, the second an index of 1, and so on. We access the contents of a list by referring to its name and index. For example MyList[3] has a value of the number 9.
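A quick check of the indexing just described (an added sketch; `MyList` is declared here so the cell runs on its own):
###Code
MyList = [7, 11, 5, 9, 13, 66, 99, 223]
print(MyList[0])    # first element -> 7
print(MyList[3])    # index 3 -> 9
print(MyList[-1])   # negative indices count from the end -> 223
###Output
_____no_output_____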
###Code
MyOtherList = [] #Create an empty list
MyOtherList.append(765) #Add one item to the list
print(MyOtherList)
MyList = [7,11,5,9,13,66,99,223] #Define a list
print(MyList)
sublist = MyList[3:6] #slice a sublist
print("sublist is: ", sublist)
mysum = sum(sublist) #sum the numbers in the sublist
print("Sum: ", mysum)
mylength = len(sublist) #get the length of the sublist
print("Length: ", mylength)
###Output
[765]
[7, 11, 5, 9, 13, 66, 99, 223]
sublist is: [9, 13, 66]
Sum: 88
Length: 3
###Markdown
Data Structures: Special List | Tuple A tuple is a special kind of list where the values cannot be changed after the list is created. It is useful for list-like things that are static - like days in a week, or months of a year. You declare a tuple like a list, except use round brackets instead of square brackets. MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec") Data Structures: Special List | Dictionary A dictionary is a special kind of list where the items are related data PAIRS. It is a lot like a relational database (it probably is one in fact) where the first item in the pair is called the key, and must be unique in a dictionary, and the second item in the pair is the data. The second item could itself be a list, so a dictionary would be a meaningful way to build a database in Python. To declare a dictionary using `curly` brackets MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03} To declare a dictionary using the `dict()` method MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03) ___Some examples follow:
###Code
MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec")
MyTupleName
MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03}
print(MyPetsNamesAndMass)
MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03)
print(MyPetsNamesAndMassToo)
# Tuples
MyTupleName = ("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec")
# Access a Tuple
print ("5th element of the tuple:", MyTupleName[4])
# Dictionary
MyPetsNamesAndMass = { "Dusty":7.8 , "Aspen":6.3, "Merrimee":0.03}
# Access the Dictionary
print ("Aspen's mass = ", MyPetsNamesAndMass["Aspen"])
# Change a value in a dictionary
print ("Merrimee's mass" , MyPetsNamesAndMass["Merrimee"])
MyPetsNamesAndMass["Merrimee"] = 0.01
print ("Merrimee's mass" , MyPetsNamesAndMass["Merrimee"], "She lost weight !")
# Alternate dictionary
MyPetsNamesAndMassToo = dict(Dusty = 7.8 , Aspen = 6.3, Merrimee = 0.03)
print ("Merrimee's mass" , MyPetsNamesAndMassToo["Merrimee"])
# Attempt to change a Tuple
#MyTupleName[3]=("Fred") # Activate this line and see what happens!
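# Added sketch: the same assignment wrapped in try/except shows the error without stopping the cell
try:
    MyTupleName[3] = "Fred"
except TypeError as err:
    print("Tuples are immutable:", err)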
###Output
5th element of the tuple: May
Aspen's mass = 6.3
Merrimee's mass 0.03
Merrimee's mass 0.01 She lost weight !
Merrimee's mass 0.03
###Markdown
___ Example: Nested Dictionary From the dictionary below, print "Pandemic" and "Tokyo":
###Code
FD = {"Quentin":"Tarantino","2020":[2020,"COVID",19,"Pandemic"],"Bond":["James","Gun",("Paris","Tokyo","London")]} #A nested dictionary
print(FD)
FD['2020'][3]
FD['Bond'][2][1]
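# Added: in a notebook only the last bare expression is displayed, so print both requested values explicitly
print(FD['2020'][3], FD['Bond'][2][1])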
###Output
_____no_output_____
###Markdown
___ Conditional Execution Conditional statements are logical expressions that evaluate as TRUE or FALSE and use these results to perform further operations based on these conditions. All flow control in a program depends on evaluating conditions. The program will proceed differently based on the outcome of one or more conditions - really sophisticated AI programs are a collection of conditions and correlations. Amazon knowing what you kind of want is based on correlations of your past behavior compared to other people's similar, but more recent, behavior, and then it uses conditional statements to decide what item to offer you in your recommendation items. It's spooky, but ultimately just a program running in the background trying to make your money theirs. Conditional Execution: Comparison The most common conditional operation is comparison. If we wish to compare whether two variables are the same we use the == (double equal sign). For example x == y means the program will ask whether x and y have the same value. If they do, the result is TRUE; if not, then the result is FALSE. Other comparison signs are `!=` does NOT equal, `<` less than, `>` larger than, `<=` less than or equal, and `>=` greater than or equal. There are also three logical operators when we want to build multiple compares (multiple conditioning); these are `and`, `or`, and `not`. The `and` operator returns TRUE if (and only if) **all** conditions are TRUE. For instance `5 == 5 and 5 < 6` will return a TRUE because both conditions are true. The `or` operator returns `TRUE` if at least one condition is true. If **all** conditions are FALSE, then it will return a FALSE. For instance `4 > 3 or 17 > 20 or 3 == 2` will return `TRUE` because the first condition is true. The `not` operator returns `TRUE` if the condition after the `not` keyword is false. Think of it as a way to do a logic reversal.
###Code
# Compare
x = 7
y = 10
print("x =: ",x,"y =: ",y)
print("x is equal to y : ",x==y)
print("x is not equal to y : ",x!=y)
print("x is greater than y : ",x>y)
print("x is less than y : ",x<y)
# Logical operators
print("5 == 5 and 5 < 6 ? ",5 == 5 and 5 < 6)
print("4 > 3 or 17 > 20 ",4 > 3 or 17 > 20)
print("not 5 == 5",not 5 == 5)
###Output
5 == 5 and 5 < 6 ? True
4 > 3 or 17 > 20 True
not 5 == 5 False
###Markdown
Conditional Execution: Block `if` statement The `if` statement is a common flow control statement. It allows the program to evaluate whether a certain condition is satisfied and to perform a designed action based on the result of the evaluation. The structure of an `if` statement is if condition1 is met: do A elif condition 2 is met: do b elif condition 3 is met: do c else: do e The `elif` means "else if". The `:` colon is an important part of the structure; it tells where the action begins. Also there are no scope delimiters like (), or {}. Instead Python uses indentation to isolate blocks of code. This convention is hugely important - many other coding environments use delimiters (called scoping delimiters), but Python does not. The indentation itself is the scoping delimiter. The next code fragment illustrates how `if` statements work. The program asks the user for input. The use of `input()` will let the program read any input as a string, so non-numeric results will not throw an error. The input is stored in the variable named `userInput`. Next the statement `if userInput == '1':` compares the value of `userInput` with the string `'1'`. If the value in the variable is indeed "1", then the program will execute the block of code in the indentation after the colon. In this case it will execute print("Hello World") and print("How do you do? ") Alternatively, if the value of `userInput` is the string `'2'`, then the program will execute print("Snakes on a plane ") For all other values the program will execute print("You did not enter a valid number")
###Code
# Block if example
userInput = input('Enter the number 1 or 2')
# Use block if structure
if userInput == '1':
print("Hello World")
print("How do you do? ")
elif userInput == '2':
print("Snakes on a plane ")
else:
print("You did not enter a valid number")
###Output
Enter the number 1 or 21
Hello World
How do you do?
###Markdown
Conditional Execution: Inline `if` statement An inline `if` statement is a simpler form of an `if` statement and is more convenient if you only need to perform a simple conditional task. The syntax is: do TaskA `if` condition is true `else` do TaskB An example would be myInt = 3 num1 = 12 if myInt == 0 else 13 num1 An alternative way is to enclose the condition in brackets for some clarity like myInt = 3 num1 = 12 if (myInt == 0) else 13 num1 In either case the result is that `num1` will have the value `13` (unless you set myInt to 0). One can also use `if` to construct extremely inefficient loops.
###Code
myInt = 0
num1 = 12 if (myInt == 0) else 13
num1
###Output
_____no_output_____
###Markdown
___ Example: Pass or Fail? Take the following inputs from the user: 1. Grade for Lesson 1 (from 0 to 5) 2. Grade for Lesson 2 (from 0 to 5) 3. Grade for Lesson 3 (from 0 to 5) Compute the average of the three grades. Use the result to decide whether the student will pass or fail.
###Code
Lesson1 = int(input('Enter the grade for Lesson 1'))
Lesson2 = int(input('Enter the grade for Lesson 2'))
Lesson3 = int(input('Enter the grade for Lesson 3'))
Average = int(Lesson1+Lesson2+Lesson3)/3
print('Average Course Grade:',Average)
if Average >= 5:
print("Passed")
else:
print("Failed")
###Output
Enter the grade for Lesson 12
Enter the grade for Lesson 25
Enter the grade for Lesson 31
Average Course Grade: 2.6666666666666665
Failed
|
Modulo2/Ejercicios/Problemas Diversos.ipynb | ###Markdown
MISCELLANEOUS PROBLEMS
###Code
def cantidad():
n=int(input("Ingrese cantidad de alumnos:"))
print (n)
return n
def nota():
nota = float(input("Introduce la nota(0 - 10): "))
return nota
def validar_nota(nota):
try:
c=nota()
if c >=0 and c <= 10:
return c # Importante romper la iteración si todo ha salido bien
else:
print('nota fuera del rango')
print("Ingrese nota nuevamente:")
na = float(input("Introduce la nota(0 - 10): "))
return na
except:
print("Ha ocurrido un error, introduce bien la nota")
def ingresar_alumnos(n):
promedio=0
aprobados=0
desaprobados=0
total = 0
lista_alumnos = []
lista =[]
for i in range (n):
alumno ={}
nom = input("Ingrese el nombre del alumno {}:".format(i+1))
alumno['nombre']=nom
alumno['nota1']=validar_nota(nota)
alumno['nota2']=validar_nota(nota)
alumno['nota3']=validar_nota(nota)
alumno['prom']=round(((alumno['nota1']+alumno['nota2']+alumno['nota3'])/3),2)
promedio = alumno['prom']
dato = str(promedio) + ", corresponde al alumno: " + nom
if promedio>=4:
alumno['estado']="aprobado"
aprobados+=1
total+=promedio
else:
alumno['estado']="desaprobado"
desaprobados+=1
total+=promedio
# agregando alumno a lista alumnos
lista_alumnos.append(alumno)
lista.append(dato)
print(lista_alumnos)
return (aprobados,desaprobados,total,n,lista)
def imprimir(x,y,z,n,lista):
prom_cur=round(float(z/n),2)
print ("La cantidad de aprobados son: {} \nLa cantidad de desaprobados son: {} \nEl promedio total del curso es: {} ".format(x,y,prom_cur))
return lista
def promedios (lista):
lista.sort() #se ordena la lista
print('El Máximo Promedio es:',lista[-1])
print('El Mínimo Promedio es:',lista[0])
###Output
_____no_output_____
###Markdown
1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
n=cantidad()
aprobados,desaprobados,total,n,lista=ingresar_alumnos(n)
###Output
Ingrese cantidad de alumnos: 4
###Markdown
2 and 3. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades. Report the average grade for the whole course.
###Code
lista=imprimir(aprobados,desaprobados,total,n,lista)
###Output
La cantidad de aprobados son: 3
La cantidad de desaprobados son: 1
El promedio total del curso es: 4.66
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
promedios (lista)
###Output
El Máximo Promedio es: 6.33, corresponde al alumno: Andrea
El Mínimo Promedio es: 2.33, corresponde al alumno: Saul
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
cantidad
lista_alumnos = []
for i in range(cantidad):
alumno = {}
# ingreso nombre
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
#ingreso de notas
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
#agrupando datos en lista
lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
gonzalo 5.0
martina 6.0
Isabel 5.666666666666667
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades. 3. Report the average grade for the whole course. 4. Write a function that indicates who had the highest average and who had the lowest average grade. 5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
_____no_output_____
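###Markdown
Item 5 (searching by full or partial name) is not implemented above. A minimal sketch, assuming the `lista_alumnos` structure with `'nombre'` and `'notas'` keys built in the previous cells; `buscar_por_nombre` is an added helper, not part of the original solution:
###Code
def buscar_por_nombre(texto, lista_alumnos):
    """Return the students whose name contains the given text (full or partial, case-insensitive)."""
    encontrados = []
    for alumno in lista_alumnos:
        if texto.lower() in alumno['nombre'].lower():
            datos = dict(alumno)                          # copy so the original dict is untouched
            datos['promedio'] = sum(alumno['notas']) / 3  # include the grade average
            encontrados.append(datos)
    return encontrados

# Example: buscar_por_nombre("mar", lista_alumnos)
###Output
_____no_output_____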
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
lista_alumnos = []
for i in range(cantidad):
alumno = {}
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
lista_alumnos.append(alumno)
lista_alumnos
alumno
###Output
_____no_output_____
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
def apro_desap():
for j in lista_alumnos:
prom = sum(j['notas'])/3
if prom >= 4:
print(j['nombre'], ': Aprobado')
else:
print(j['nombre'], ': Desaprobado')
apro_desap()
###Output
Miguel : Aprobado
Ayelen : Desaprobado
Walter : Desaprobado
###Markdown
3. Report the average grade for the whole course.
###Code
for n in lista_alumnos:
print(n['nombre'], sum(n['notas'])/3)
###Output
Miguel 8.333333333333334
Ayelen 3.6666666666666665
Walter 3.0
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def prom_alto_bajo():
bajo = 11
alto = 0
for n in lista_alumnos:
if ((sum(n['notas'])/3) <= bajo):
bajo = sum(n['notas'])/3
print('El promedio mas bajo es {}'.format(bajo))
for n in lista_alumnos:
if ((sum(n['notas'])/3) >= alto):
alto = sum(n['notas'])/3
print('El promedio mas alto es {}'.format(alto))
prom_alto_bajo()
###Output
El promedio mas bajo es 3.0
El promedio mas alto es 8.333333333333334
###Markdown
5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
_____no_output_____
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
alumnos=[]
num=int(input("Ingrese el número de alumnos "))
listado_alumnos=[]
for i in range(num):
nomb=input("Ingrese el nombre completo del alumno: ")
while True:
try:
nota1=int(input("Ingrese la nota 1 del alumno "))
except ValueError:
print("Debes escribir un número.")
continue
if 0 < nota1 < 11:
break
while True:
try:
nota2=int(input("Ingrese la nota 2 del alumno "))
except ValueError:
print("Debes escribir un número.")
continue
if 0 < nota2 < 11:
break
while True:
try:
nota3=int(input("Ingrese la nota 3 del alumno "))
except ValueError:
print("Debes escribir un número.")
continue
if 0 < nota3 < 11:
break
alumnos={'nombre':nomb,'notas':[nota1,nota2,nota3]}
listado_alumnos.append(alumnos)
listado_alumnos
###Output
_____no_output_____
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
paso = 0
npaso = 0
for persona in listado_alumnos:
if sum(persona['notas'])/3 >= 4:
paso += 1
print(persona['nombre'],"APROBADO")
else:
npaso += 1
print(persona['nombre'],"DESAPROBADO")
print(F"Los alumnos aprobados son {paso} alumnos reprobados son {npaso}")
###Output
francis DESAPROBADO
marco APROBADO
atalaya APROBADO
Los alumnos aprobados son 2 alumnos reprobados son 1
###Markdown
3. Report the average grade for the whole course.
###Code
for persona in listado_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
francis 2.0
marco 5.0
atalaya 8.0
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade. 5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def cargaralumnos(self, listado_alumnos):
notas = []
for i in range(self.num):
nombre=input(f"Ingrese el nombre completo del alumno {i+1}: ")
for n in range(3):
while True:
try:
nota = float(input(f"Ingrese la nota {n+1} del alumno: "))
if nota >= 0 and nota <= 10:
notas.append(nota)
break
else:
print("La nota debe estar comprendida entre 0 y 10")
except:
print("Ingrese un número valido")
alumno = {'nombre' : nombre, 'notas' : [notas[0], notas[1], notas[2]]}
notas.clear()
listado_alumnos.append(alumno)
###Output
_____no_output_____
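###Markdown
Item 4 (highest and lowest average) is not covered by the fragment above. A minimal sketch, assuming a list of student dicts with `'nombre'` and `'notas'` keys like the one built by `cargaralumnos`; `promedio_extremos` is an added helper, not part of the original solution:
###Code
def promedio_extremos(listado_alumnos):
    """Print the students with the highest and the lowest grade average."""
    con_promedio = [(sum(a['notas']) / 3, a['nombre']) for a in listado_alumnos]
    promedio_max, nombre_max = max(con_promedio)
    promedio_min, nombre_min = min(con_promedio)
    print(f"Highest average: {nombre_max} ({promedio_max:.2f})")
    print(f"Lowest average:  {nombre_min} ({promedio_min:.2f})")
###Output
_____no_output_____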
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
#1. Declarando Lista vacia
lista_alumnos = []
#2. Definiendo la función para cargar n alumnos
def alumnos(lista_alumnos, cantidad):
for i in range(cantidad):
n = 0
alumno = {}
nombre = input(f"Ingrese el nombre completo del alumno {len(lista_alumnos) + 1}: ")
alumno['nombre'] = nombre
while n < 3:
try:
nota = float(input(f"Ingresa la nota {n + 1}: "))
if nota >= 0 and nota <= 10:
alumno[f'nota{n+1}'] = nota
n = n+1
else:
print("La nota debe estar comprendida entre 0 y 10")
except:
print("Ingrese una nota valida.")
lista_alumnos.append(alumno)
#3. Ingresando datos
while True:
try:
cantidad = int(input("Ingrese la cantidad de alumnos a insertar"))
if cantidad <= 0:
print("Se debe registrar una cantidad de alumnos mayor a 0")
else:
break
except:
print("Por favor ingrese un valor de cantidad válido: ")
alumnos(lista_alumnos, cantidad)
#4. Imprimiendo datos
lista_alumnos
#------------- SOLUCIÓN DEL PROFESOR -----------------
#cantidad = int(input('¿Cuántos alumnos desea ingresar?'))
#cantidad
#lista_alumnos = []
#for i in range(cantidad):
# alumno = {}
#ingreso nombre
# nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
# alumno['nombre'] = nombre
#ingreso de notas
# alumno['notas'] = []
# for n in range(3):
# nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
# alumno['notas'].append(nota)
#agrupando datos en lista
# lista_alumnos.append(alumno)
#lista_alumnos
#alumno
#-----------------------------------------------------
###Output
_____no_output_____
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
def promedio (lista_alumnos):
for alumno in lista_alumnos:
promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3
alumno['promedio'] = promedio
def evaluar(lista_alumnos):
aprobados = 0
desaprobados = 0
#Hallando promedio de cada alumno
promedio(lista_alumnos)
for alumno in lista_alumnos:
if alumno['promedio'] >= 4:
alumno['estado'] = 'Aprobado'
aprobados += 1
else:
alumno['estado'] = 'Desaprobado'
desaprobados += 1
print(f'La cantidad de alumnos aprobados es de: {aprobados}')
print(f'La cantidad de alumnos desaprobados es de: {desaprobados}')
evaluar(lista_alumnos)
###Output
La cantidad de alumnos aprobados es de: 2
La cantidad de alumnos desaprobados es de: 1
###Markdown
3. Report the average grade for the whole course.
###Code
def promedio_curso(lista_alumnos):
promedio = 0
for alumno in lista_alumnos:
promedio += alumno['promedio']
return promedio / len(lista_alumnos)
print(f"El promedio de nota del curso total es: {promedio_curso(lista_alumnos)}")
###Output
El promedio de nota del curso total es: 6.0
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def puesto_promedio(lista_alumnos):
palto = 0
pbajo = 10
for alumno in lista_alumnos:
nombre = alumno['nombre']
if alumno['promedio'] >= palto:
alumno_alto = alumno['nombre']
palto = alumno['promedio']
if alumno['promedio'] <= pbajo:
alumno_bajo = alumno['nombre']
pbajo = alumno['promedio']
print(f"El alumno con el promedio más alto es: {alumno_alto}")
print(f"El alumno con el promedio más bajo es: {alumno_bajo}")
puesto_promedio(lista_alumnos)
###Output
El alumno con el promedio más alto es: Eddie
El alumno con el promedio más bajo es: Raúl
###Markdown
5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def buscar_alumno(nombre, lista_alumnos):
for alumno in lista_alumnos:
if alumno['nombre'] == nombre:
print(alumno)
nombre = input("Ingrese el nombre del o los alumnos a buscar: ")
buscar_alumno(nombre, lista_alumnos)
###Output
Ingrese el nombre del o los alumnos a buscar: Eddie
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
cantidad
lista_alumnos = []
for i in range(cantidad):
alumno = {}
# ingreso nombre
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
#ingreso de notas
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
#agrupando datos en lista
lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
gonzalo 5.0
martina 6.0
Isabel 5.666666666666667
###Markdown
*************************************** ANOTHER APPROACH USING LISTS
###Code
alumnos=[]
num=int(input("Ingrese el número de alumnos "))
listado_alumnos=[]
for i in range(num):
nomb=input("Ingrese el nombre completo del alumno: ")
while True:
try:
nota1=int(input("Ingrese la nota 1 del alumno "))
except ValueError:
print("Debes escribir un número.")
continue
if 0 < nota1 < 11:
break
while True:
try:
nota2=int(input("Ingrese la nota 2 del alumno "))
except ValueError:
print("Debes escribir un número.")
continue
if 0 < nota2 < 11:
break
while True:
try:
nota3=int(input("Ingrese la nota 3 del alumno "))
except ValueError:
print("Debes escribir un número.")
continue
if 0 < nota3 < 11:
break
alumnos={'nombre':nomb,'notas':[nota1,nota2,nota3]}
listado_alumnos.append(alumnos)
###Output
Ingrese el nombre completo del alumno: diego
Ingrese la nota 1 del alumno 1
Ingrese la nota 2 del alumno 2
Ingrese la nota 3 del alumno 3
Ingrese el nombre completo del alumno: chamako
Ingrese la nota 1 del alumno 2
Ingrese la nota 2 del alumno 3
Ingrese la nota 3 del alumno 4
Ingrese el nombre completo del alumno: marco
Ingrese la nota 1 del alumno 1
Ingrese la nota 2 del alumno 2
Ingrese la nota 3 del alumno 3
Ingrese el nombre completo del alumno: luis
Ingrese la nota 1 del alumno 4
Ingrese la nota 2 del alumno 5
Ingrese la nota 3 del alumno 6
Ingrese el nombre completo del alumno: carlos
Ingrese la nota 1 del alumno 7
Ingrese la nota 2 del alumno 8
Ingrese la nota 3 del alumno 9
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
paso = 0
npaso = 0
for persona in listado_alumnos:
if sum(persona['notas'])/3 >= 4:
paso += 1
print(persona['nombre'],"APROBADO")
else:
npaso += 1
print(persona['nombre'],"DESAPROBADO")
print(F"Los alumnos aprobados son {paso} alumnos reprobados son {npaso}")
###Output
diego DESAPROBADO
chamako DESAPROBADO
marco DESAPROBADO
luis APROBADO
carlos APROBADO
Los alumnos aprobados son 2 alumnos reprobados son 3
###Markdown
3. Report the average grade for the whole course.
###Code
for persona in listado_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
diego 2.0
chamako 3.0
marco 2.0
luis 5.0
carlos 8.0
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade. 5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
_____no_output_____
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
cantidad
lista_alumnos = []
for i in range(cantidad):
alumno = {}
# ingreso nombre
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
#ingreso de notas
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
#agrupando datos en lista
lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
gonzalo 5.0
martina 6.0
Isabel 5.666666666666667
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades. 3. Report the average grade for the whole course. 4. Write a function that indicates who had the highest average and who had the lowest average grade. 5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
_____no_output_____
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
#1. Declarando Lista vacia
lista_alumnos = []
#2. Definiendo la función para cargar n alumnos
def alumnos(lista_alumnos, cantidad):
for n in range(cantidad):
n = 0
alumno = {}
nombre = input(f"Ingrese el nombre completo del alumno {len(lista_alumnos) + 1}: ")
alumno['nombre'] = nombre
while n < 3:
try:
nota = float(input(f"Ingresa la nota {n + 1}: "))
if nota >= 0 and nota <= 10:
alumno[f'nota{n+1}'] = nota
n = n+1
else:
print("La nota debe estar comprendida entre 0 y 10")
except:
print("Ingrese una nota valida.")
lista_alumnos.append(alumno)
#3. Ingresando datos
while True:
try:
cantidad = int(input("Ingrese la cantidad de alumnos a insertar"))
if cantidad <= 0:
print("Se debe registrar una cantidad de alumnos mayor a 0")
else:
break
except:
print("Por favor ingrese un valor de cantidad válido: ")
alumnos(lista_alumnos, cantidad)
#4. Imprimiendo datos
lista_alumnos
###Output
_____no_output_____
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
def promedio (lista_alumnos):
for alumno in lista_alumnos:
promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3
alumno['promedio'] = promedio
def evaluar(lista_alumnos):
aprobados = 0
desaprobados = 0
#Hallando promedio de cada alumno
promedio(lista_alumnos)
for alumno in lista_alumnos:
if alumno['promedio'] >= 4:
alumno['estado'] = 'Aprobado'
aprobados += 1
else:
alumno['estado'] = 'Desaprobado'
desaprobados += 1
print(f'La cantidad de alumnos aprobados es de: {aprobados}')
print(f'La cantidad de alumnos desaprobados es de: {desaprobados}')
evaluar(lista_alumnos)
###Output
_____no_output_____
###Markdown
3. Report the average grade for the whole course.
###Code
def promedio_curso(lista_alumnos):
promedio = 0
for alumno in lista_alumnos:
promedio += alumno['promedio']
return promedio / len(lista_alumnos)
print(f"El promedio de nota del curso total es: {promedio_curso(lista_alumnos)}")
###Output
_____no_output_____
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def puesto_promedio(lista_alumnos):
palto = 0
pbajo = 10
for alumno in lista_alumnos:
nombre = alumno['nombre']
if alumno['promedio'] >= palto:
alumno_alto = alumno['nombre']
palto = alumno['promedio']
if alumno['promedio'] <= pbajo:
alumno_bajo = alumno['nombre']
pbajo = alumno['promedio']
print(f"El alumno con el promedio más alto es: {alumno_alto}")
print(f"El alumno con el promedio más bajo es: {alumno_bajo}")
puesto_promedio(lista_alumnos)
###Output
_____no_output_____
###Markdown
5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def buscar_alumno(nombre, lista_alumnos):
for alumno in lista_alumnos:
if alumno['nombre'] == nombre:
print(alumno)
nombre = input("Ingrese el nombre del o los alumnos a buscar: ")
buscar_alumno(nombre, lista_alumnos)
###Output
_____no_output_____
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
alumnos = input("Ingrese el nombre del estudiante a añadir: ")
if alumnos=="":
print("el nombre no puede estar vacio")
### LAS NOTAS DEBEN ESTAR COMPRENDIDAS ENTRE O Y 10
print("INTRODUCE LA NOTA DE LA PRIMERA PC")
calif1 = input()
print("INTRODUCE LA NOTA DE LA SEGUNDA PC")
calif2 = input()
print("INTRODUCE LA NOTA DE LA SEGUNDA PC")
calif3 = input()
###Output
INTRODUCE LA NOTA DE LA PRIMERA PC
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
calificacion1 = int(calif1)
calificacion2 = int(calif2)
calificacion3 = int(calif3)
###promedio de las 3 notas
suma_de_notas = calificacion1+calificacion2+calificacion3
promed = suma_de_notas/3
print("el promedio de notas es: %d"%promed)
### tener en cuenta que se apueba con nota >=4
if promed>=4:
print("aprobado")
else:
print("desaprobado")
###Output
aprobado
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
lista = []
def alumnos(lista, cant):
for i in range(cant):
n = 0
alumno = {}
nombre = input(f"Ingrese el nombre completo del alumno {len(lista) + 1}: ")
alumno['nombre'] = nombre
while n < 3:
try:
nota = float(input(f"Ingresa la nota {n + 1}: "))
if nota >= 0 and nota <= 10:
alumno[f'nota{n+1}'] = nota
n = n+1
else:
print("La nota debe ser menor a 10 y mayor a 0")
except:
print("Ingrese una nota menor a 10 y mayor a 0")
lista.append(alumno)
while True:
try:
cant = int(input("Ingrese la cantidad de alumnos:"))
if cant <= 0:
print("La cantidad de alumnos debe ser mayor a 0")
else:
break
except:
print("Ingrese una nota mayor a 0")
alumnos(lista, cant)
lista
###Output
Ingrese la cantidad de alumnos: 3
Ingrese el nombre completo del alumno 1: Dany Joel Anaya Sánchez
Ingresa la nota 1: 4
Ingresa la nota 2: 5
Ingresa la nota 3: 6
Ingrese el nombre completo del alumno 2: José Alejandro Jara Piña
Ingresa la nota 1: 6
Ingresa la nota 2: 7
Ingresa la nota 3: 8
Ingrese el nombre completo del alumno 3: Erick Andres Melo Villar
Ingresa la nota 1: 1
Ingresa la nota 2: 2
Ingresa la nota 3: -1
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
def promedio (lista):
for alumno in lista:
promedio = (alumno['nota1'] + alumno['nota2'] + alumno['nota3']) / 3
alumno['promedio'] = promedio
def evaluar(lista):
aprobados = 0
desaprobados = 0
promedio(lista)
for alumno in lista:
if alumno['promedio'] >= 4:
alumno['estado'] = 'Aprobado'
aprobados += 1
else:
alumno['estado'] = 'Desaprobado'
desaprobados += 1
print(f'Cantidad de alumnos aprobados: {aprobados}')
print(f'Cantidad de alumnos desaprobados es de: {desaprobados}')
evaluar(lista)
###Output
Cantidad de alumnos aprobados: 3
Cantidad de alumnos desaprobados es de: 0
###Markdown
3. Report the average grade for the whole course.
###Code
def promedio_curso(lista):
promedio = 0
for alumno in lista:
promedio += alumno['promedio']
return promedio / len(lista)
print(f"Promedio de nota del curso total: {promedio_curso(lista)}")
###Output
Promedio de nota del curso total: 5.444444444444444
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def puesto_promedio(lista):
promedioalto = 0
promediobajo = 10
for alumno in lista:
nombre = alumno['nombre']
if alumno['promedio'] >= promedioalto:
alumno_alto = alumno['nombre']
promedioalto = alumno['promedio']
if alumno['promedio'] <= promediobajo:
alumno_bajo = alumno['nombre']
promediobajo = alumno['promedio']
print(f"Alumno con el promedio más alto: {alumno_alto}")
print(f"Alumno con el promedio más bajo: {alumno_bajo}")
puesto_promedio(lista)
###Output
Alumno con el promedio más alto: José Alejandro Jara Piña
Alumno con el promedio más bajo: Erick Andres Melo Villar
###Markdown
5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def buscar_alumno(nombre, lista):
for alumno in lista:
if alumno['nombre'] == nombre:
print(alumno)
nombre = input("Nombre de alumno(s) que desea buscar: ")
buscar_alumno(nombre, lista)
###Output
Nombre de alumno(s) que desea buscar: Erick Andres Melo Villar
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
lista_alumnos = []
for i in range(cantidad):
alumno = {}
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
lista_alumnos.append(alumno)
alumno
###Output
_____no_output_____
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades.
###Code
def apro_desap():
for j in lista_alumnos:
prom = sum(j['notas'])/3
if prom >= 4:
print(j['nombre'], ': Aprobado')
else:
print(j['nombre'], ': Desaprobado')
apro_desap()
###Output
Anggie : Aprobado
###Markdown
3. Report the average grade for the whole course.
###Code
for n in lista_alumnos:
print(n['nombre'], sum(n['notas'])/3)
###Output
Anggie 15.333333333333334
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def prom_alto_bajo():
bajo = 11
alto = 0
for n in lista_alumnos:
if ((sum(n['notas'])/3) <= bajo):
bajo = sum(n['notas'])/3
print('El promedio mas bajo es {}'.format(bajo))
for n in lista_alumnos:
if ((sum(n['notas'])/3) >= alto):
alto = sum(n['notas'])/3
print('El promedio mas alto es {}'.format(alto))
prom_alto_bajo()
###Output
El promedio mas bajo es 11
El promedio mas alto es 15.333333333333334
###Markdown
5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
Ingrese cantidad de alumnos: 1
Ingrese el nombre del alumno 1: 15
Ingrese Nota 1: 18
Ingrese Nota 2: 13
Ingrese Nota 3: 17
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow three grades to be entered. The grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
cantidad
lista_alumnos = []
for i in range(cantidad):
alumno = {}
# ingreso nombre
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
#ingreso de notas
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
#agrupando datos en lista
lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
gonzalo 5.0
martina 6.0
Isabel 5.666666666666667
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, taking into account that the passing grade is 4. Each student's grade is the average of their three grades. 3. Report the average grade for the whole course. 4. Write a function that indicates who had the highest average and who had the lowest average grade. 5. Write a function that searches for a student by name, either full or partial, and returns a list of the n students matching that name together with all their data, including their grade average.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
_____no_output_____
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow 3 grades to be entered. Grades must be between 0 and 10. Return the list of students.
###Code
cantidad = int(input('Cuantos alumnos desea ingresas? '))
cantidad
lista_alumnos = []
for i in range(cantidad):
alumno = {}
# enter the student's name
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
# enter the grades
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
# add the student's data to the list
lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
gonzalo 5.0
martina 6.0
Isabel 5.666666666666667
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, keeping in mind that passing requires a 4. Each student's grade is the average of their 3 grades.
###Code
cant=int(input("Ingrese la cantidad de alumnos: "))
lista_alum=[]
for i in range(cant):
alumno={}
nombre=input("Ingrese nombre del estudiante: ")
alumno['nombre']=nombre
alumno['notas']=[]
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
lista_alum.append(alumno)
print(lista_alum)
for estudiante in lista_alum:
if (sum(estudiante['notas'])/3) >= 4:
print(estudiante['nombre'], sum(estudiante['notas'])/3, "APROBADO")
else:
print(estudiante['nombre'], sum(estudiante['notas'])/3, "DESAPROBADO")
###Output
_____no_output_____
###Markdown
3. Report the average grade of the course as a whole.
###Code
cant=int(input("Ingrese la cantidad de alumnos: "))
lista_alum=[]
for i in range(cant):
alumno={}
nombre=input("Ingrese nombre del estudiante: ")
alumno['nombre']=nombre
alumno['notas']=[]
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
lista_alum.append(alumno)
for estudiante in lista_alum:
print(estudiante['nombre'], "Su promedio es: ",sum(estudiante['notas'])/3)
###Output
_____no_output_____
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def prom_alto():
cant=int(input("Ingrese la cantidad de alumnos: "))
lista_alum=[]
for i in range(cant):
alumno={}
nombre=input(f"Ingrese nombre del estudiante {i+1}: ")
alumno['nombre']=nombre
alumno['notas']=[]
alumno['promedio']=[]
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
alumno['promedio'] =sum(alumno['notas'])/3
lista_alum.append(alumno)
ordenados = sorted(lista_alum, key=lambda alumno : alumno['promedio'])
print(ordenados)
print("El estudiante con promedio BAJO es :", ordenados[0])
print("El estudiante con promedio ALTO es :", ordenados[-1])
prom_alto()
###Output
_____no_output_____
###Markdown
5. Write a function that searches for a student by name, whether full or partial, and returns a list of the n students matching that name together with all of their data, including the average of their grades.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
_____no_output_____
###Markdown
MISCELLANEOUS PROBLEMS 1. Write a function that loads n students. For each student, ask for their full name and allow 3 grades to be entered. Grades must be between 0 and 10. Return the list of students.
###Code
N=input("Ingresar su nombre compelto:")
NOTA1=float(input("Ingresar primera nota:"))
NOTA2=float(input("Ingresar segunda nota:"))
NOTA3=float(input("Ingresar tercera nota:"))
cantidad = int(input('Cuantos alumnos desea ingresas? '))
cantidad
lista_alumnos = []
for i in range(cantidad):
alumno = {}
# enter the student's name
nombre = input(f'Ingrese el nombre del alumno {i+1}: ')
alumno['nombre']= nombre
# enter the grades
alumno['notas'] = []
for n in range(3):
nota = float(input(f'Ingrese la nota {n+1} del alumno: '))
alumno['notas'].append(nota)
# add the student's data to the list
lista_alumnos.append(alumno)
lista_alumnos
alumno
for persona in lista_alumnos:
print(persona['nombre'], sum(persona['notas'])/3)
###Output
lisseth 8.0
camila 12.333333333333334
pedro 11.0
###Markdown
2. Define a function that, given a list of students, evaluates how many passed and how many failed, keeping in mind that passing requires a 4. Each student's grade is the average of their 3 grades.
###Code
numeroCalificaciones = 0
while True:
    try:
        # keep asking until a valid integer is entered
        numeroCalificaciones = int(input("Dame el numero de calificaciones: "))
        break
    except ValueError:
        print("Error")
suma = 0
Calificaciones = []
for i in range(0, numeroCalificaciones):
    while True:
        try:
            Calificacion = int(input("dame la calificacion" + str(i) + ":"))
            break
        except ValueError:
            print("Error:")
    Calificaciones.append(Calificacion)
    suma = suma + Calificacion
promedio = suma / numeroCalificaciones
for i in range(0, numeroCalificaciones):
    if Calificaciones[i] >= 15:
        print(str(Calificaciones[i]) + " Calificacion Aprobatoria")
    else:
        print(str(Calificaciones[i]) + " Calificacion NO Aprobatoria")
print(promedio)
# input() returns the value as a string
promedio_como_cadena = input("Dime tu promedio: ")
# convert it to float
promedio = float(promedio_como_cadena)
# make the comparison
if promedio >= 11:
print("Aprobado")
else:
print("No aprobado")
###Output
Dime tu promedio: 10
No aprobado
###Markdown
3. Report the average grade of the course as a whole.
###Code
###Output
_____no_output_____
###Markdown
4. Write a function that indicates who had the highest average and who had the lowest average grade.
###Code
def fun(nota):
if nota > 7:
return "Promociona"
else:
if nota < 4:
return "Aplazado"
else:
if 4 <= nota <= 7:
return "Aprobado"
aplazados = aprobados = notables = 0
while True:
nota = float(input('Ingrese nota (0 para terminar):'))
if nota == 0:
break
if nota > 10:
continue
else:
if nota < 4:
aplazados += 1
elif nota >= 4 and nota <=7:
aprobados += 1
elif nota > 7 and nota <= 10:
notables += 1
print ('\nNumero de aprobados %d' %aprobados)
print('Numero de aplazados %d' %aplazados)
print('Numero de notables %d' %notables)
###Output
Ingrese nota (0 para terminar):12
Ingrese nota (0 para terminar):12
Ingrese nota (0 para terminar):0
Numero de aprobados 0
Numero de aplazados 0
Numero de notables 0
###Markdown
5. Write a function that searches for a student by name, whether full or partial, and returns a list of the n students matching that name together with all of their data, including the average of their grades.
###Code
def alumno(n):
notas=[]
nombre=[]
for i in range(n):
name= input(f'Ingrese el nombre del alumno {i+1}: ')
nombre.append(name)
nota_1 = float(input('Ingrese Nota 1: '))
nota_2 = float(input('Ingrese Nota 2: '))
nota_3 = float(input('Ingrese Nota 3: '))
notas.append([nota_1,nota_2,nota_3])
print("Alumnos \t Notas")
for i in range(n):
print(nombre[i],"\t \t",notas[i][0],notas[i][1],notas[i][2])
n=int(input("Ingrese cantidad de alumnos: "))
alumno(n)
###Output
Ingrese cantidad de alumnos: 2
Ingrese el nombre del alumno 1: camila
Ingrese Nota 1: 12
Ingrese Nota 2: 13
Ingrese Nota 3: 14
Ingrese el nombre del alumno 2: pedro
Ingrese Nota 1: 12
Ingrese Nota 2: 13
Ingrese Nota 3: 14
Alumnos Notas
camila 12.0 13.0 14.0
pedro 12.0 13.0 14.0
|
notebooks/02_basic_numerical_operations.ipynb | ###Markdown
Numerical Operations in Python
###Code
from __future__ import print_function
# we will use the print function in this tutorial for python 2 - 3 compatibility
a = 4
b = 5
c = 6
# we'll declare three integers to assist us in our operations
###Output
_____no_output_____
###Markdown
If we want to add the first two together (and store the result in a variable we will call `S`):```pythonS = a + b ```The last part of the equation (i.e `a+b`) is the numerical operation. This sums the value stored in the variable `a` with the value stored in `b`.The plus sign (`+`) is called an arithmetic operator.The equal sign is a symbol used for assigning a value to a variable. In this case the result of the operation is assigned to a new variable called `S`. The basic numeric operators in python are:
###Code
# Sum:
S = a + b
print('a + b =', S)
# Difference:
D = c - a
print('c - a =', D)
# Product:
P = b * c
print('b * c =', P)
# Quotient:
Q = c / a
print('c / a =', Q)
# Remainder:
R = c % a
print('c % a =', R)
# Floored Quotient:
F = c // a
print('c // a =', F)
# Negative:
N = -a
print('-a =', N)
# Power:
Pow = b ** a
print('b ** a =', Pow)
###Output
a + b = 9
c - a = 2
b * c = 30
c / a = 1.5
c % a = 2
c // a = 1
-a = -4
b ** a = 625
###Markdown
What is the difference between `/` and `//` ?The first performs a regular division between two numbers, while the second performs a *euclidean division* **without the remainder**. Important note: In python 2 `/` would return an integer if the two numbers participating in the division were integers. In that sense:```pythonQ = 6 / 4 this would perform a euclidean division because both divisor and dividend are integers!Q = 6.0 / 4 this would perform a real division because the dividend is a floatQ = c / (a * 1.0) this would perform a real division because the divisor is a floatQ = c / float(a) this would perform a real division because the divisor is a float```One way to make python 2 compatible with python 3 division is to import `division` from the `__future__` package. We will do this for the remainder of this tutorial.
###Code
from __future__ import division
Q = c / a
print(Q)
###Output
1.5
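###Markdown
As a quick sanity check on how `/`, `//` and `%` relate, the identity `c == (c // a) * a + (c % a)` holds, and `divmod` returns both parts at once. A small sketch using the same `a = 4` and `c = 6` as above:
```python
print(divmod(c, a))                   # (1, 2): floored quotient and remainder together
print(c == (c // a) * a + (c % a))    # True
```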
###Markdown
We can combine more than one operation in a single line.
###Code
E = a + b - c
print(E)
###Output
3
###Markdown
Priorities are the same as in algebra: parentheses -> powers -> products -> sumsWe can also perform more complex assignment operations:
###Code
print('a =', a)
print('S =', S)
S += a # equivalent to S = S + a
print('+ a =', S)
S -= a # equivalent to S = S - a
print('- a =', S)
S *= a # equivalent to S = S * a
print('* a =', S)
S /= a # equivalent to S = S / a
print('/ a =', S)
S %= a # equivalent to S = S % a
print('% a =', S)
S **= a # equivalent to S = S ** a
print('** a =', S)
S //= a # equivalent to S = S // a
print('// a =', S)
###Output
a = 4
S = 9
+ a = 13
- a = 9
* a = 36
/ a = 9.0
% a = 1.0
** a = 1.0
// a = 0.0
###Markdown
Other operations:
###Code
n = -3
print('n =', n)
A = abs(n) # Absolute:
print('absolute(n) =', A)
C = complex(n, a) # Complex: -3+4j
print('complex(n,a) =', C)
c = C.conjugate() # Conjugate: -3-4j
print('conjugate(C) =', c)
###Output
n = -3
absolute(n) = 3
complex(n,a) = (-3+4j)
conjugate(C) = (-3-4j)
###Markdown
Bitwise operations:Operations that first convert a number to its binary equivalent and then operate bit by bit before converting the result back to its original form.
###Code
a = 3 # or 011 (in binary)
b = 5 # or 101 (in binary)
print(a | b) # bitwise OR: 111 (binary) --> 7 (decimal)
print(a ^ b) # exclusive OR: 110 (binary) --> 6 (decimal)
print(a & b) # bitwise AND: 001 (binary) --> 1 (decimal)
print(b << a) # b shifted left by a bits: 101000 (binary) --> 40 (decimal)
print(8 >> a) # 8 shifted right by a bits: 0001 (binary - was 1000 before shift) --> 1 (decimal)
print(~a) # NOT: 100 (binary) --> -4 (decimal)
###Output
7
6
1
40
1
-4
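###Markdown
To see why `~a` prints `-4`, it helps to look at the bit patterns directly with `bin()`; for negative results Python follows two's-complement semantics, where `~n` equals `-(n + 1)`. A small sketch with the same `a = 3` and `b = 5`:
```python
print(bin(a), bin(b))                       # 0b11 0b101
print(bin(a | b), bin(a ^ b), bin(a & b))   # 0b111 0b110 0b1
print(~a == -(a + 1))                       # True
```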
###Markdown
Built-in methodsSome data types have built in methods, for example we can check if a float variable stores an integer as follows:
###Code
a = 3.0
t = a.is_integer()
print(t)
a = 3.2
t = a.is_integer()
print(t)
###Output
True
False
###Markdown
Note that the casting operation from floats to integers just discards the decimal part (it doesn't attempt to round the number).
###Code
print(int(3.21))
print(int(3.99))
###Output
3
3
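###Markdown
The cast truncates toward zero, which differs from flooring for negative values. A small sketch making the distinction explicit (`math` is imported here just for the example):
```python
import math

print(int(-3.99), math.trunc(-3.99))   # -3 -3   (toward zero)
print(math.floor(-3.99))               # -4      (toward negative infinity)
```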
###Markdown
We can always `round` the number beforehand.
###Code
int(round(3.6))
###Output
_____no_output_____
###Markdown
ExerciseWhat do the following operations return?
###Code
E1 = ( 3.2 + 12 ) * 2 / ( 1 + 1 )
E2 = abs(-4 ** 3)
E3 = complex( 8 % 3, int(-2 * 1.0 / 4)-1 )
E4 = (6.0 / 4.0).is_integer()
E5 = (4 | 2) ^ (5 & 6)
###Output
_____no_output_____
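###Markdown
One way to check your answers is simply to print the expressions; the values worked out by hand are left in the comments, so running the cell lets you verify them:
```python
print(E1)   # 15.2    -> (15.2 * 2) / 2
print(E2)   # 64      -> ** binds tighter than unary minus, so this is abs(-(4 ** 3))
print(E3)   # (2-1j)  -> complex(8 % 3, int(-0.5) - 1)
print(E4)   # False   -> 6.0 / 4.0 is 1.5
print(E5)   # 2       -> (4 | 2) ^ (5 & 6) == 6 ^ 4
```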
###Markdown
Python's mathematical functionsMost math functions are included in a separate library called `math`.
###Code
import math
x = 4
print('exp = ', math.exp(x)) # exponent of x (e**x)
print('log = ',math.log(x)) # natural logarithm (base=e) of x
print('log2 = ',math.log(x,2)) # logarithm of x with base 2
print('log10 = ',math.log10(x)) # logarithm of x with base 10, equivalent to math.log(x,10)
print('sqrt = ',math.sqrt(x)) # square root
print('cos = ',math.cos(x)) # cosine of x (x is in radians)
print('sin = ',math.sin(x)) # sine
print('tan = ',math.tan(x)) # tangent
print('arccos = ',math.acos(.5)) # arc cosine (in radians)
print('arcsin = ',math.asin(.5)) # arc sine
print('arctan = ',math.atan(.5)) # arc tangent
# arc-trigonometric functions only accept values in [-1,1]
print('deg = ',math.degrees(x)) # converts x from radians to degrees
print('rad = ',math.radians(x)) # converts x from degrees to radians
print('e = ',math.e) # mathematical constant e = 2.718281...
print('pi = ',math.pi) # mathematical constant pi = 3.141592...
###Output
exp = 54.598150033144236
log = 1.3862943611198906
log2 = 2.0
log10 = 0.6020599913279624
sqrt = 2.0
cos = -0.6536436208636119
sin = -0.7568024953079282
tan = 1.1578212823495775
arccos = 1.0471975511965979
arcsin = 0.5235987755982989
arctan = 0.4636476090008061
deg = 229.1831180523293
rad = 0.06981317007977318
e = 2.718281828459045
pi = 3.141592653589793
###Markdown
The `math` package also provides other functions such as hyperbolic trigonometric functions, error functions, gamma functions etc. Generating a pseudo-random numberPython has a built-in package for generating pseudo-random sequences called `random`.
###Code
import random
print(random.randint(1,10))
# Generates a random integer in [1,10]
print(random.randrange(1,100,2))
# Generates a random integer from [1,100) with step 2, i.e from 1, 3, 5, ..., 97, 99.
print(random.uniform(0,1))
# Generates a random float in [0,1]
###Output
1
21
0.7912325286049906
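###Markdown
These values change on every run. If a repeatable sequence is needed (for tests, for example), the generator can be seeded first. A small sketch:
```python
import random

random.seed(42)                  # any fixed integer makes the sequence repeatable
print(random.randint(1, 10))
random.seed(42)
print(random.randint(1, 10))     # same value as the previous call
```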
###Markdown
ExampleConsider the complex number $3 + 4j$. Calculate its magnitude and its angle, then transform it into a tuple of its polar form.
###Code
z = 3 + 4j
###Output
_____no_output_____
###Markdown
Solution attempt 1 (analytical). We don't know any of the built-in complex methods and we try to figure out an analytical solution. We will first calculate the real and imaginary parts of the complex number and then we will try to apply the Pythagorean theorem to calculate the magnitude. Step 1: Find the real part of the complex number.We will make use of the mathematical formula: $$Re(z) = \frac{1}{2} \cdot ( z + \overline{z} )$$
###Code
rl = ( z + z.conjugate() ) / 2
print(rl)
###Output
(3+0j)
###Markdown
Note that *rl* is still in complex format, even though it represents a real number... Step 2: Find the imaginary part of the complex number.**1st way**, like before, we use the mathematical formula: $$Im(z) = \frac{z - \overline{z}}{2i}$$
###Code
im = ( z - z.conjugate() ) / 2j
print(im)
###Output
(4+0j)
###Markdown
Same as before `im` is in complex format, even though it represents a real number... Step 3: Find the sum of the squares of the real and the imaginary parts:$$ S = Re(z)^2 + Im(z)^2 $$
###Code
sq_sum = rl**2 + im**2
print(sq_sum)
###Output
(25+0j)
###Markdown
Still we are in complex format. Let's try to calculate its square root to find out the magnitude:
###Code
mag = math.sqrt(sq_sum)
###Output
_____no_output_____
###Markdown
Oh... so the `math.sqrt()` method doesn't support complex numbers, even though what we're trying to use actually represents a real number. Well, let's try to cast it as an integer and then pass it into *math.sqrt()*.
###Code
sq_sum = int(sq_sum)
###Output
_____no_output_____
###Markdown
We still get the same error. We're now stuck in a situation where we are trying to do something **mathematically sound** that the computer refuses to do. But what is causing this error? In math, $25$ and $25+0i$ are exactly the same number. Both represent a natural number. But the computer sees them as two entirely different entities. One is an object of the *integer* data type and the other is an object of the *complex* data type. The programmer who wrote the code for the `math.sqrt()` method of the math package created it so that it can be used on *integers* and *floats* (but not *complex* numbers), even though in our instance the two are semantically the same thing. Ok, so our first approach didn't work out. Let's try calculating this another way. We know from complex number theory that:$$ z \cdot \overline{z} = Re(z)^2 + Im(z)^2 $$
###Code
sq_sum = z * z.conjugate()
mag = math.sqrt(sq_sum)
###Output
_____no_output_____
###Markdown
This didn't work out either... Solution attempt 2. We know that a complex number represents a vector in the *Re*, *Im* axes. Mathematically speaking, the absolute value of a real number is defined differently than the absolute value of a complex one. Graphically though, they can both be defined as the distance of the number from (0,0). If we wanted to calculate the absolute value of a real number, we would just disregard its sign and treat it as positive. On the other hand, if we wanted to do the same thing to a complex number, we would need to calculate the Euclidean norm of its vector (or in other words measure the distance from the complex number to (0,0), using the Pythagorean theorem). So in essence what we are looking for is the absolute value of the complex number. Step 1: Calculate the magnitude.
###Code
mag = abs(z)
print(mag)
###Output
5.0
###Markdown
Ironically, this is the exact opposite of the situation we were in before. Two things that have totally **different mathematical definitions** and methods of calculation (the absolute value of a complex number and of an integer) can be calculated using the same function.**2nd way:** As a side note, we could have calculated the magnitude the previous way, if we knew some of the complex numbers' built-in attributes:
###Code
rl = z.real
print('real =', rl)
im = z.imag
print('imaginary =', im)
# (now that these numbers are floats we can continue and perform operations such as the square root)
mag = math.sqrt(rl**2 + im**2) # mag = 5.0
print('magnitude =', mag)
###Output
real = 3.0
imaginary = 4.0
magnitude = 5.0
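###Markdown
As an aside to the failed `math.sqrt()` attempts above, the `cmath` module (used a little later in this notebook for the phase) provides a square root that does accept complex arguments, so no cast would have been needed:
```python
import cmath

print(cmath.sqrt(z * z.conjugate()))        # (5+0j), still a complex object
print(abs(cmath.sqrt(z * z.conjugate())))   # 5.0 as a plain float
```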
###Markdown
Step 2: Calculate the angle.**1st way:** First we will calculate the cosine of the angle. The cosine is the real part divided by the magnitude.
###Code
cos_ang = rl / mag
print(cos_ang)
###Output
0.6
###Markdown
To find the angle we use the arc cosine function from the math package.
###Code
ang = math.acos(cos_ang)
print('phase in rad =', ang)
print('phase in deg =', math.degrees(ang))
###Output
phase in rad = 0.9272952180016123
phase in deg = 53.13010235415599
###Markdown
**2nd way:** Another way to find the angle (or, more correctly, the phase) of the complex number is to use a function from the `cmath` (complex math) package.
###Code
import cmath
ang = cmath.phase(z)
print('phase in rad =', ang)
###Output
phase in rad = 0.9272952180016122
###Markdown
Without needing to calculate anything beforehand (no *rl* and no *mag* needed). Step 3: Create a tuple of the complex number's polar form:
###Code
pol = (mag, ang)
print(pol)
###Output
(5.0, 0.9272952180016122)
###Markdown
Solution attempt 3 (using Python's built-in cmath package):
###Code
pol = cmath.polar(z)
print(pol)
###Output
(5.0, 0.9272952180016122)
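###Markdown
For completeness, `cmath.rect()` performs the inverse conversion, rebuilding the rectangular form from the polar tuple (up to floating-point rounding):
```python
print(cmath.rect(*pol))   # approximately (3+4j), within floating-point error
```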
|
Python_Misc/TMWP_PY36_OO_Towers_of_Hanoi.ipynb | ###Markdown
Python 3 [conda default] Towers of HanoiThe "Towers of Hanoi" problem is a popular choice in computer programming training classes. An excellent write-up of it can be found here: [Python Course.eu - Towers of Hanoi](http://www.python-course.eu/towers_of_hanoi.php)Wikipedia also has some great information about the problem, its history, and related programming concerns: [Hanoi on Wikipedia](http://en.wikipedia.org/wiki/Tower_of_Hanoi) (though some of it gets highly technical). In this Notebook- [The Solution](solution): Immediately below is a solution to the problem organized as OO Python code. This code leverages concepts and ideas from the best of what is found in the research section, but creates a unique implementation that could be used to achieve multiple objectives: output the answer, store the answer, tell us different things about the answer. This code also illustrates many concepts of the Python programming language that students and non-experts may find useful.- [OO Design Considerations For The Solution](ooDesign) - notes on the object design hierarchy (what was chosen over what was rejected).- [Putting a Tracer on The Solution To Watch Recursion in Action](trace) - This section is an experiment purely for its educational value. It makes it possible to watch recursive function calls traced through the Hanoi solution object.- [Related Research and Experiments](Research): This section contains code from multiple sources showing approaches to the Towers of Hanoi problem. It also contains edits to this code that help unmask things about the algorithms' inner workings, as well as enhancements and experiments that ultimately pave the way for the final solution given at the start of this notebook. Version NotesThis code was originally written in Python 2.7. It was later converted to Python 3.6. The two changes that were required in order to do this were: - import sys (it was not needed under Python 2.7)- .pop() worked on range objects in Python 2.7; they had to be wrapped in list() to work under Python 3.6 - Example: `(list(range(numDisks, 0, -1)), 1)`- all the rest of the code is unchanged from the original Python 2.7 experiment An Object Oriented Solution to The Towers of Hanoi
###Code
'''
This solution leverages the best of the code in this notebook to attempt to create something
extensible, self-contained, and capable of delivering different outputs to meet different needs.
It gets longer than the more elegant solutions in the "Research" section, but the design
wraps the basic functionality with different features that take into account different potential
future use cases.
'''
### Version Two: Object code
import pandas as pd
from warnings import warn
from warnings import filterwarnings
import numpy as np
import sys ## added during PY 2.7 to 3.6 upgrade test
class SimpleWarning(object):
'''class SimpleWarning() -->\n\n configures warn() for the most common "alert the user" use case. '''
def __init__(self, warnText, wStackLevel=1, wCategory=RuntimeWarning):
self._wrnTxt = "\n%s" %(warnText)
filterwarnings("once")
warn(self._wrnTxt, stacklevel=wStackLevel, category=wCategory)
sys.stderr.flush() # this provides warning ahead of the output instead of after it
# sys is imported by warnings so we don't have to import it here
# common categories to use: UserWarning, Warning, RuntimeWarning, ResourceWarning
def reInitialize(self, riWarnText, riWStackLevel=1, riWCategory=RuntimeWarning):
'''SimpleWarning.reInitialize(...)-->\n\nFor multiple warnings in one code procedure,
this function can reinitialize the same object to be reused.'''
self.__init__(self, riWarnText, riWStackLevel, riWCategory)
class HanoiSolution(object):
_mvListValues = ["Step", "Count", "None", "Visual"]
def __init__(self, numDisks, moveList="Visual", divider=30):
# conditions for warning and to help control all output that comes later:
if numDisks <= 0:
self._tmpTxt = "is not valid for the number of disks. Resetting number of disks to default."
self._tmpTxt = "%s %s" %(numDisks, self._tmpTxt)
# warn("\n%s %s" %(numDisks, self._tmpTxt))
self._hsWarn = SimpleWarning(self._tmpTxt)
numDisks = 3
if numDisks > 1:
self._chr1 = 's' # used to ensure printed output is plural if disks > 1
else: # set plurality conditions here and then just add self._chr1
self._chr1 = '' # instead of 's' on words throughout the code where it applies
if numDisks > 9: # for 10 disks or more (issue a warning)
self._tmpTxt = "disks selected. The number of steps in a solution grows at an accelerated rate"
self._tmpTxt += " as the number of disks increases. \nThis program may take a while to complete.\n"
self._tmpTxt += "Please be patient..."
self._tmpTxt = "%d %s" %(numDisks, self._tmpTxt)
# warn("\n%s %s" %(numDisks, self._tmpTxt))
self._hsWarn = SimpleWarning(self._tmpTxt)
# peg structure: ( [ disks ], Peg_ID_Number )
self._peg1 = (list(range(numDisks, 0, -1)), 1)
self._peg2 = ([], 2)
self._peg3 = ([], 3)
self.dsks = numDisks # number of disks for simulation
self.moveCount = 0 # move counter
self.divChars = divider # number of characters for divider used in output
self.moveListDefault = moveList # what type of output from the moveList do you want?
# store the answer as a default for the object to use
# invalid moveList argument is reset to default and a warning is output:
if moveList not in self._mvListValues:
self._tmpTxt = "is not a valid moveList arg for _output_diskProgress(...). " + \
"Default will be used."
self._tmpTxt = "'%s' %s" %(moveList, self._tmpTxt)
# warn("\n'%s' %s" %(moveList, self._tmpTxt))
self._hsWarn = SimpleWarning(self._tmpTxt)
self.moveListDefault = self._mvListValues[-1] # last value, by convention is default for obj class
# make it default for this instance of the class
else:
self.moveListDefault = moveList
self._moveDisks(nDisks=numDisks, source=self._peg1, target=self._peg3,
auxiliary=self._peg2, moveList=self.moveListDefault)
if moveList != "None": print(self.__str__()) # this outputs final answer with move count
# at end of all moveList args that include printed
# output
# meat and potatoes of the algorithm: simulation of moving the disks from one peg to another
def _moveDisks(self, nDisks, source, target, auxiliary, moveList):
if nDisks > 0:
# move n-1 disks from source to auxiliary
self._moveDisks(nDisks-1, source, auxiliary, target, moveList)
if self.moveCount == 0:
# output initial state if appropriate
self._diskMovementProgression(nDisks, source, target, moveList)
self.moveCount += 1 # increment counter of how many steps it takes
# move the nth disk from source to target
target[0].append(source[0].pop())
# in this object: outputs the moves in accordance with moveList argument
self._diskMovementProgression(nDisks, source, target, moveList)
# move the n-1 disks that were left on auxiliary to target
self._moveDisks(nDisks-1, auxiliary, target, source, moveList)
def _diskMovementProgression(self, nDisks, source, target, moveList):
# this function sets up ability to over-ride the function call in the middle
# of moveDisk by child objects
self._output_diskProgress(nDisks, source, target, moveList)
def _output_diskProgress(self, nDisks, source, target, moveList):
# Display our progress (create each step of the answer and output it)
if moveList == "Visual" or moveList == "Step":
if moveList == "Visual":
if self.moveCount > 0:
print("Step %d:" %self.moveCount)
else: # in this context, moveCount = 0
print("Initial State:")
# used by both "Visual" and "Step"
if self.moveCount == 0:
pass
else:
print("Move disk " + str(nDisks) + " from peg " + str(source[1]) +
" to peg " + str(target[1]))
if moveList == "Visual":
print("-"*self.divChars)
print(str(self._peg1[0]) + '\n' + str(self._peg2[0]) + '\n' + str(self._peg3[0]) +
'\n' + '#'*self.divChars)
elif moveList == "None" or moveList == "Count":
pass
else:
# this scenario should never occur the way this code is written.
# if it does, we want the code to throw an error so we know to look into it
raise ValueError("%s is not a valid arg for moveList in _output_diskProgress(...).")
def reInitialize(self, nDisks, moveList, divider=30):
# allows resetting the object for a new simulation without having to create a new instance
self.__init__(nDisks, moveList, divider)
def __str__(self):
# what we want to see for print(HanoiSolution)
return ("%d disk" + self._chr1 + " would take %d move" + self._chr1 +
" to solve.") %(self.dsks, self.moveCount)
class HanoiStoredSolution(HanoiSolution):
dfCellDataType = np.int64
def __init__(self, numDisks, moveList="Stored", divider=30):
self._solutionDF = pd.DataFrame({'disk':[],'fromPeg':[], 'toPeg':[]}, dtype=self.dfCellDataType)
self._mvListValues.append("Stored")
super(HanoiStoredSolution, self).__init__(numDisks, moveList, divider)
# alternatively, this should also work: HanoiSolution.__init__(self, numDisks, moveList, divider)
if moveList == "Stored":
# print("You selected to store the movelist with this agrument: %s" %moveList) # debug statement
print("The move list is stored in a dataframe accessible with `.get_solutionDF()`:")
print(self.get_solutionDF())
def _store_diskProgress(self, dsk, sourceID, targetID):
# Builds this: self._solutionPD = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[] })
if self.moveCount > 0:
self._solutionDF = self._solutionDF.append(pd.DataFrame({ 'disk':[dsk],
'fromPeg':[sourceID],'toPeg':[targetID] }),
ignore_index=True)
def _diskMovementProgression(self, nDisks, source, target, moveList="Visual"):
# tell us about it and store the results:
if moveList == "Stored":
output_moveList = "None"
else:
output_moveList = moveList
self._output_diskProgress(nDisks, source, target, output_moveList)
self._store_diskProgress(nDisks, source[1], target[1])
def get_solutionDF(self):
return self._solutionDF
print("Hanoi Solution Objects Loaded and ready to use.")
# doc strings for SimpleWarning
print(SimpleWarning.__doc__)
print("-"*72)
print(SimpleWarning.reInitialize.__doc__)
###Output
class SimpleWarning() -->
configures warn() for the most common "alert the user" use case.
------------------------------------------------------------------------
SimpleWarning.reInitialize(...)-->
For multiple warnings in one code procedure,
this function can reinitialize the same object to be reused.
###Markdown
Tests of Hanoi Solution ObjectsThese tests are designed to test and show all the functionality built into the Hanoi solution objects. Comments in each cell indicate what is being tested
###Code
# Method Resolution Order for the class objects
print(HanoiSolution.__mro__)
print(HanoiStoredSolution.__mro__)
# pass in an invalid moveList argument ... show the warning that is displayed but code continues to execute
import sys
myHanoiTower = HanoiSolution(1, "Something Stupid")
# pass in invalid numDisks argument ... show warning, code executes with object default
# note: warning comes at the end of the output
myHanoiTower = HanoiSolution(0) # default moveList = 'Visible'
# reinitialize existing object with new number of disks
# just output the final count
myHanoiTower.reInitialize(25, "Count") # this took maybe 3 minutes to run on my computer
anotherHanoiTower = HanoiSolution(13, "None") # warning kicks in if numDisks is >= 10
# this cell is part of testing the "None" option for moveList
print(anotherHanoiTower) # now we can ask it for the answer
anotherHanoiTower.moveCount # or obtain the final answer to send to other code
# reinitialize with "Step" output ... using the same object but reinitializing for new number of Disks and Output request
myHanoiTower.reInitialize(7, "Step")
# object stores these elements once created:
print(myHanoiTower.divChars)
print(myHanoiTower.dsks)
print(myHanoiTower.moveCount)
print(myHanoiTower.moveListDefault) # build a describe or summary function for this later?
myHanoiSSTower = HanoiStoredSolution(3)
# access this element and reset it if your computer is not 64bit:
myHanoiSSTower.dfCellDataType
myHanoiSSTower = HanoiStoredSolution(3, "Count") # output stops with the move count for the solution
myHanoiSSTower.get_solutionDF() # produces the df table if last line on Jupyter
myHanoiSSTower = HanoiStoredSolution(3, "None") # outputs Nothing (except initialization lines)
myHanoiSSTower.get_solutionDF() # produces the df table if last line on Jupyter
# in production code, might turn off class object intialization lines
# other stored elements in the object:
print(myHanoiSSTower.divChars)
print(myHanoiSSTower.dsks)
print(myHanoiSSTower.moveCount)
print(myHanoiSSTower.moveListDefault)
myHanoiSSTower.reInitialize(10, "Count") # just showing some more of the parent code working in the child object
# myHanoiSSTower.get_solutionDF() # solution DF is big, uncomment this line to view it
myHanoiSSTower.get_solutionDF().tail() # this validates the count is right by showing final records in the DF
# note: index runs 0 ... 1022, so 1023 is the correct count
myHanoiSSTower2 = HanoiStoredSolution(5, "Visual", 72) # args: numDisks, moveList (type), divider (num chars)
myHanoiSSTower2.get_solutionDF() # show stored DF in the object
myHanoiSSTower2.reInitialize(7, "Step", 35)
myHanoiSSTower2.get_solutionDF() # show stored solution when done
# show changes to stored values:
print(myHanoiSSTower2.divChars)
print(myHanoiSSTower2.dsks)
print(myHanoiSSTower2.moveCount)
print(myHanoiSSTower2.moveListDefault)
###Output
35
7
127
Step
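###Markdown
The move counts reported by these tests follow the closed-form minimum for the Towers of Hanoi: 2**n - 1 moves for n disks (127 for 7 disks, 1023 for 10). A tiny sketch of that check, assuming the classes defined earlier in this notebook have been run:
```python
tower = HanoiSolution(7, "None")                 # "None" suppresses the printed move list
print(tower.moveCount == 2 ** tower.dsks - 1)    # True
```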
###Markdown
OO Design ConsiderationsTheoretically, as simulations grow larger, it may be desirable to have versions of the code that store the results versus versions that do not (so as not to expend the memory storing the steps when the resulting DF is not needed). Python does support multi-inheritance, and so in theory, the objects could have followed an inheritance scheme like this:Base Object: output of moves in solution => Child that can output all disk moves (as a simple step list) => Child that can add more visual output to the move list => Child that can store all disk moves in a DF => multi-inheritance: child that can do all output + store results in a DFInstead, a simpler design that avoids multi-inheritance was selected. Multi-inheritance increases the complexity of maintenance and creates code that is harder to read and to instantly see what it does. There are many use cases for which this complexity is worth what it gains you, but not here; in this case such a design feels over-engineered. The final object model selected has just two "Hanoi solution" objects in it:Base Object: can print out whatever we wish to see of the solution => Child Object: inherits all print options and stores the moves in a DF Making The Solution Traceable (Watching The Recursion)This modification to the solution is designed for purely academic reasons. One of the reasons "The Towers of Hanoi" problem is so popular in programming education is that it is a problem best solved through recursion. In fact, it is said that the problem is difficult to solve without recursion. The purpose of the coding modifications in this section is to add tracer lines to the output that make visible the method calls and recursive method calls in action.Output gets messy, but is interesting from a purely academic and educational standpoint.
###Code
class HanoiStoredSolutionTron(HanoiStoredSolution):
''' HanoiStoredSolutionTron -->\n\nAdds TRON (tracer on) functionality to HanoiStoredSolution.
Created as an illustration of the flow of recursive function calls.'''
def __init__(self, numDisks, moveList="Stored", divider=30):
print(HanoiStoredSolutionTron.__mro__)
print("calling: __init__(self, " + str(numDisks) + ", " + str(moveList) + ", " + str(divider) + ")")
HanoiStoredSolution.__init__(self, numDisks, moveList, divider)
def _moveDisks(self, nDisks, source, target, auxiliary, moveList):
print("calling: _moveDisks(self, " + str(nDisks) + ", " + str(source) + ", " +
str(target) + ", "+ str(auxiliary) + ", " + str(moveList) + ")")
HanoiStoredSolution._moveDisks(self, nDisks, source, target, auxiliary, moveList)
def _diskMovementProgression(self, nDisks, source, target, moveList="Visual"):
print("calling: _diskMovementProgression(self, " + str(nDisks) + ", " + str(source) +
", " + str(target) + ", " + str(moveList) + ")")
HanoiStoredSolution._diskMovementProgression(self, nDisks, source, target, moveList)
def _store_diskProgress(self, dsk, sourceID, targetID):
print("calling: _store_diskProgress(self, " + str(dsk) + ", " + str(sourceID) +
", " + str(targetID) + ")")
HanoiStoredSolution._store_diskProgress(self, dsk, sourceID, targetID)
def _output_diskProgress(self, nDisks, source, target, moveList):
print("calling: _output_diskProgress(self, " + str(nDisks) + ", " + str(source) + ", " +
str(target) + ", " + str(moveList) + ")")
HanoiStoredSolution._output_diskProgress(self, nDisks, source, target, moveList)
def reInitialize(self, nDisks, moveList, divider=30):
print("calling: reInitialize(self, " + str(nDisks) + ", " + str(moveList) + ", " + str(divider) + ")")
HanoiStoredSolution.reInitialize(self, nDisks, moveList, divider)
def __str__(self):
print("calling: __str__(self)")
return HanoiStoredSolution.__str__(self)
def get_solutionDF(self):
print("calling: get_solutionDF(self)")
return HanoiStoredSolution.get_solutionDF(self)
print("HanoiStoredSolutionTron Object Loaded.")
print(HanoiStoredSolutionTron.__doc__)
hsst1 = HanoiStoredSolutionTron(3, "Visual", 72) # function calls w/ full output showing
hsst1.reInitialize(3, "Count") # Just function call trace and final move count
hsst1.get_solutionDF()
###Output
calling: get_solutionDF(self)
###Markdown
Hanoi Solutions Research and ExperimentationWhere the code presented here has a source, the source is cited. Edits and enhancements are then made to this code, experimenting with it in different ways as part of the research that ultimately led to the solution given at the start of this notebook.
###Code
# Example 1:
# source: http://www.python-course.eu/towers_of_hanoi.php
''' This code solves the puzzle, but shows us nothing in terms of how it does it.
What we really want is a program that solves the puzzle and provides a solution.
But this code is a good clean example of recursive programming.
'''
def hanoi(n, source, helper, target):
if n > 0:
# move tower of size n - 1 to helper:
hanoi(n - 1, source, target, helper)
# move disk from source peg to target peg
if source:
target.append(source.pop())
# move tower of size n-1 from helper to target
hanoi(n - 1, helper, source, target)
source = [4,3,2,1]
target = []
helper = []
hanoi(len(source),source,helper,target)
print(source, helper, target)
# modified from source for Python 2.7 as well as 3.x compatibility
# source: http://www.python-course.eu/towers_of_hanoi.php
''' This is better, but the solution provided is output in such a mess that it's hard to
see the solution from what is essentially a trace of the inner workings of the program.
This code makes a good demonstration of how the recursive algorithm does its work though.
'''
def hanoi(n, source, helper, target):
print("hanoi( " + str(n) + str(source) + str(helper) + str(target) + " called")
# modified from source for 2.7 and 3.x compatibility
if n > 0:
# move tower of size n - 1 to helper:
hanoi(n - 1, source, target, helper)
# move disk from source peg to target peg
if source[0]:
disk = source[0].pop()
print("moving " + str(disk) + " from " + source[1] + " to " + target[1])
# modified from source for Python 2.7 and 3.x compatibility
target[0].append(disk)
# move tower of size n-1 from helper to target
hanoi(n - 1, helper, source, target)
source = ([4,3,2,1], "source")
target = ([], "target")
helper = ([], "helper")
hanoi(len(source[0]),source,helper,target)
print(source, helper, target)
# modified from source for Python 2.7 as well as 3.x compatibility
# let's take the previous code and modify it so we can run w/ and w/o the trace for better analysis
# some other tweaks to language and output will also be made
def hanoi(n, source, helper, target, tron = False, diskTrace = False):
if tron == True: # tron = "Tracer On" and was the title of a popular movie set in a virtual world
print("hanoi( " + str(n) + str(source) + str(helper) + str(target) + " called")
# modified from source for compatibility with Python 2.7 or 3.x
if n > 0:
# move tower of size n - 1 to helper:
hanoi(n - 1, source, target, helper, tron, diskTrace)
# move disk from source peg to target peg
if source[0]:
disk = source[0].pop()
if diskTrace == True:
mv = "move disk " + str(disk)
else:
mv = "move"
print(mv + " from " + source[1] + " to " + target[1])
target[0].append(disk)
# move tower of size n-1 from helper to target
hanoi(n - 1, helper, source, target, tron, diskTrace)
# set up pegs
source = ([4,3,2,1], "source")
target = ([], "target")
helper = ([], "helper")
# run simulation and print results:
print(source + helper + target)
hanoi(len(source[0]),source,helper,target, diskTrace = True)
# add final argument of True to turn full program trace back on
# then output will look like previous cell
# it is disabled here to demonstrate the cleaner "solution" output
print(source + helper + target)
# these lines modified from source for 2.7 and 3.x compatibility
# source: http://www.python-course.eu/towers_of_hanoi.php
# this code used as starting point and then modified and enhanced considerably to create this version
# this alteration to the source will make the code a bit more self contained and will give options for which
# peg gets moved to which peg. For simplicity, the story it tells is we are moving from "peg 1" to
# "peg 3" (labeled simply 1, 2, 3) rather than "source", "target", etc.
# user can chose which of the 3 pegs is source, target, and what earlier code called "helper" or "auxilliary"
def hanoi(n, start=1, end=3, spare=2, tron=False, diskTrace=False):
# sets up data structure(s) to pass into our recursive child function
if sorted([start, end, spare]) != [1,2,3]:
raise ValueError("Arguments: start, end, spare - must be unique and can only contain the values 1, 2, or 3.\n" +
"This tells the program which peg (of the 3 pegs) is used for what role in the game.")
hanoi_towers = [(list(range(n, 0, -1)), start), ([], end), ([], spare)]
step_count = [0]
# embedded child function does all the actual work:
#start #spare #end
def hanoiRecurModule(n, source, helper, target, tron = False, diskTrace = False):
if tron == True: # tron = "Tracer On" and was the title of a popular movie set in a virtual world
print("hanoiRecurModule( " + str(n) + str(source) + str(helper) + str(target) + " called")
# modified from source for compatibility with Python 2.7 or 3.x
if n > 0:
# move tower of size n - 1 to helper:
hanoiRecurModule(n - 1, source, target, helper, tron, diskTrace)
# move disk from source peg to target peg
if source[0]:
disk = source[0].pop()
if diskTrace == True:
mv = "move disk " + str(disk)
else:
mv = "move"
print(mv + " from " + str(source[1]) + " to " + str(target[1]))
step_count[0] += 1
target[0].append(disk)
# move tower of size n-1 from helper to target
hanoiRecurModule(n - 1, helper, source, target, tron, diskTrace)
#start #spare #end
hanoiRecurModule(n, hanoi_towers[0], hanoi_towers[2], hanoi_towers[1], tron, diskTrace)
if step_count == [1]:
endSentence = " step."
else:
endSentence = " steps."
print("Task completed in " + str(step_count)[1:-1] + endSentence)
# run simulation and print results:
hanoi(4, start=1, end=3, spare=2, diskTrace = True)
hanoi(1, start=1, end=3, spare=2, diskTrace = True)
hanoi(1, start=1, end=3, spare=2, diskTrace = False)
# testing the ValueError
try:
hanoi(4, start=1, end=3, spare=1, diskTrace = True)
except Exception as ee:
print(str(type(ee))+": \n"+str(ee))
# with tracer on
hanoi(3, start=1, end=3, spare=2, tron=True, diskTrace = True)
# source: https://en.wikipedia.org/wiki/Tower_of_Hanoi
# recursive implementation section
# this solution requires slightly more steps than the above code, but is still quite elegant
# it also provides the best visual metaphor for the solution in its output of any of the
# code in this notebook yet
A = [5,4,3,2,1]
B = []
C = []
def move(n, source, target, auxiliary):
if n > 0:
# move n-1 disks from source to auxiliary, so they are out of the way
move(n-1, source, auxiliary, target)
# move the nth disk from source to target
target.append(source.pop())
# Display our progress
print(str(A) + '\n' + str(B) + '\n' + str(C) + '\n' + '##############')
# modified from source so it will work in Python 2.7 or Python 3.x
# move the n-1 disks that we left on auxiliary onto target
move(n-1, auxiliary, target, source)
# initiate call from source A to target C with auxiliary B
move(5, A, C, B)
# Solution Experiment One
# modified from code presented in previous cells ..
# why are we asking the user for things the code can do for us ...
# this version is more self-contained and requires less of the user to run it
def solveHanoi(numDisks):
peg1 = list(range(numDisks, 0, -1))
peg2 = []
peg3 = []
def moveDisks(numDisks, source, target, auxiliary):
# python allows nested functions but this may not be best practice
# completing the code this way just as an experiment
if numDisks > 0:
# move n-1 disks from source to auxiliary, so they are out of the way
moveDisks(numDisks-1, source, auxiliary, target)
# move the nth disk from source to target
target.append(source.pop())
# Display our progress
print(str(peg1) + '\n' + str(peg2) + '\n' + str(peg3) + '\n' + '##############')
# modified from source so it will work in Python 2.7 or Python 3.x
# move the n-1 disks that we left on auxiliary onto target
moveDisks(numDisks-1, auxiliary, target, source)
moveDisks(numDisks, source=peg1, target=peg3, auxiliary=peg2)
# initiate call from source A to target C with auxiliary B
solveHanoi(5)
### Version One: Object code
## Useful help topic: http://stackoverflow.com/questions/3277367/how-does-pythons-super-work-with-multiple-inheritance
import pandas as pd
import numpy as np
class HanoiSolution_v1(object):
def __init__(self, numDisks):
# peg structure: ( [ disks ], Peg_ID_Number )
self._peg1 = (list(range(numDisks, 0, -1)), 1)
self._peg2 = ([], 2)
self._peg3 = ([], 3)
self.dsks = numDisks # number of disks for simulation
self.moveCount = 0 # move counter
self.divChars = 25 # number of characters for divider used in output
self._solutionPD = pd.DataFrame({'disk':[],'fromPeg':[], 'toPeg':[]}, dtype=np.int64)
self._moveDisks(nDisks=numDisks, source=self._peg1, target=self._peg3, auxiliary=self._peg2)
def _moveDisks(self, nDisks, source, target, auxiliary):
if nDisks > 0:
# move n-1 disks from source to auxiliary, so they are out of the way
self._moveDisks(nDisks-1, source, auxiliary, target)
# move the nth disk from source to target
target[0].append(source[0].pop())
# Display our progress (create each step of the answer and output it)
self.moveCount += 1
print("Step %d:" %self.moveCount)
print("-"*self.divChars)
print("Move disk " + str(nDisks) + " from " + str(source[1]) + " to " + str(target[1]))
print(str(self._peg1[0]) + '\n' + str(self._peg2[0]) + '\n' + str(self._peg3[0]) +
'\n' + '#'*self.divChars)
self._store_diskProgress(nDisks, source[1], target[1])
# move the n-1 disks that were left on auxiliary to target
self._moveDisks(nDisks-1, auxiliary, target, source)
return self.moveCount
def _store_diskProgress(self, dsk, sourceID, targetID):
# Builds this: self._solutionPD = pd.DataFrame({ 'disk':[],'fromPeg':[], 'toPeg':[] })
self._solutionPD = self._solutionPD.append(pd.DataFrame({ 'disk':[dsk],'fromPeg':[sourceID],
'toPeg':[targetID] }), ignore_index=True)
def __str__(self):
return "%d disks would take %d moves to solve." %(self.dsks, self.moveCount)
myHanoiTower_v1 = HanoiSolution_v1(5)
print(myHanoiTower_v1)
# exploration of the object structure:
print(type(myHanoiTower_v1))
myHanoiTower_v1._solutionPD
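# (added sketch, not part of the original notebook) quick checks on the stored move log:
# each row of _solutionPD records one move, so we can count moves per disk and in total
print(myHanoiTower_v1._solutionPD.groupby('disk').size())
print("total moves recorded:", len(myHanoiTower_v1._solutionPD))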
###Output
_____no_output_____ |
tour_model_eval/Compare user mode mapping effect with outputs.ipynb | ###Markdown
This compares the effect of the `same_mode` mapping change on the staging database. TODO: Extend to the other databases as well. This assumes that the models have been built (using `build_save_model.py`) for the "before" values and `bin/build_label_model.py -a` for the "after" values. They have been renamed to `user_label_first_round_.before` and `user_label_first_round_.after`, and `locations_first_round_.before` and `locations_first_round_.after`. A sample script that could be used for this renaming is: `for f in user_labels_first_round_*; do mv $f $f.before; done`. This script reads those files and works with them.
###Code
import os
os.environ["EMISSION_SERVER_HOME"] = "/Users/kshankar/e-mission/e-mission-server"
MODEL_DIR = os.getenv("EMISSION_SERVER_HOME"); MODEL_DIR
import emission.analysis.modelling.tour_model_first_only.load_predict as lp
label_result_list = []
for l in os.listdir(MODEL_DIR):
if l.startswith("user_labels_first_round") and not l.endswith(".after"):
uuid = l.split("_")[4]
before_ui_map = lp.loadModel(MODEL_DIR+"/"+l)
after_ui_map = lp.loadModel(MODEL_DIR+"/"+l+".after")
for cluster_label in before_ui_map:
before_cluster_options = before_ui_map[cluster_label]
after_cluster_options = after_ui_map[cluster_label]
before_max_p = sorted(before_cluster_options, key=lambda lp: lp["p"])[-1]["p"]
after_max_p = sorted(after_cluster_options, key=lambda lp: lp["p"])[-1]["p"]
label_result_list.append({"user_id": uuid, "cluster_label": cluster_label,
"before_unique_combo_len": len(before_cluster_options),
"after_unique_combo_len": len(after_cluster_options),
"before_max_p": before_max_p, "after_max_p": after_max_p})
import pandas as pd
label_result_df = pd.DataFrame(label_result_list); label_result_df
mismatched_df = label_result_df.query("before_max_p != after_max_p"); mismatched_df
len(mismatched_df)
print(mismatched_df.drop("user_id", axis=1).head().to_markdown())
ax = mismatched_df.user_id.value_counts().plot(kind="bar")
ax.set_xticklabels(list(range(len(mismatched_df))))
label_result_df[["before_max_p", "after_max_p"]].plot.box(by="user_id")
label_result_df.query("before_max_p < 1")[["before_max_p", "after_max_p"]].plot.box(by="user_id")
label_result_df.query("before_max_p < 1").after_max_p.describe()
###Output
_____no_output_____ |
assignment2/160575/160575.ipynb | ###Markdown
EM Algorithm Batch EM **Import necessary libraries**
###Code
import numpy as np
import random
import matplotlib.pyplot as plt
import scipy.io
%matplotlib inline
###Output
_____no_output_____
###Markdown
**Load Data**
###Code
data = scipy.io.loadmat('mnist_small.mat')
X = data['X']
Y = data['Y']
###Output
_____no_output_____
###Markdown
**Print Data Shape**
###Code
print(X.shape, Y.shape)
###Output
(10000, 784) (10000, 1)
###Markdown
**GMM Algorithm**
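In symbols, the spherical GMM fitted below (one shared variance $\sigma^2$ for all clusters) alternates an E-step that computes the responsibilities
$$ z_{nk} = \frac{\pi_k \exp\left(-\lVert x_n-\mu_k\rVert^2/2\sigma^2\right)}{\sum_{j=1}^{K}\pi_j \exp\left(-\lVert x_n-\mu_j\rVert^2/2\sigma^2\right)} $$
with an M-step that, as implemented in the code, updates
$$ N_k=\sum_n z_{nk}, \qquad \pi_k=\frac{N_k}{N}, \qquad \mu_k=\frac{1}{N_k}\sum_n z_{nk}\,x_n, \qquad \sigma^2=\frac{1}{ND}\sum_n \Big\lVert x_n-\sum_k z_{nk}\mu_k\Big\rVert^2 $$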
###Code
def gmm(X, K):
[N, D] = X.shape
if K >= N:
print('you are trying to make too many clusters!')
return
numIter = 200 # maximum number of iterations to run
si2 = 1 # initialize si2 dumbly
pk = np.ones(K) / K # initialize pk uniformly
mu = np.random.rand(K, D) # initialize means randomly
z = np.zeros((N, K))
for iteration in range(numIter):
# in the first step, we do assignments:
# each point is probabilistically assigned to each center
for n in range(N):
for k in range(K):
# TBD: compute z(n,k) = log probability that
# the nth data point belongs to cluster k
z[n][k] = np.log(pk[k]) - np.linalg.norm(X[n] - mu[k])**2 / (2*si2)
# turn log probabilities into actual probabilities
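# (subtracting the row maximum below is the log-sum-exp trick, which keeps the exponentials numerically stable)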
maxZ = np.max(z[n])
z[n] = np.exp(z[n] - maxZ - np.log(np.sum(np.exp(z[n] - maxZ))))
nk = np.sum(z, axis=0)
# re-estimate pk
pk = nk/N
# re-estimate the means
mu = z.T@X
mu = np.array([mu[k]/nk[k] for k in range(K)])
# re-estimate the variance
si2 = np.sum(np.square(X - z@mu))/(N*D)
return mu, pk, z, si2
###Output
_____no_output_____
###Markdown
**Running GMM for k = 5, 10, 15, 20**
###Code
for k in [5, 10, 15, 20]:
mu, pk, z, si2 = gmm(X, k) # calling the function
# printing mean
for i in range(k):
plt.imshow(mu[i].reshape((28, 28)), cmap='gray')
plt.savefig('figure '+str(i+1)+' for k_'+str(k))
plt.show()
###Output
_____no_output_____
###Markdown
Online EM **Online GMM algorithm**
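The online variant below blends the previous parameter values with the estimates from the current batch using a decaying step size (here $\kappa=0.55$):
$$ \eta_t=(t+1)^{-\kappa}, \qquad \pi_k \leftarrow (1-\eta_t)\,\pi_k+\eta_t\,\frac{N_k}{N}, \qquad \mu_k \leftarrow (1-\eta_t)\,\mu_k+\eta_t\,\hat{\mu}_k, $$
where $\hat{\mu}_k$ is the mean re-estimated from the current batch.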
###Code
def online_gmm(X, K):
batch_size = 100 # the batch size for onlineEM
kappa = 0.55 # kappa for learning rate
numIter = 200 # total number of iterations
np.random.shuffle(X) # randomly shuffle X to include examples from all digits
X = X[:batch_size] # select the first batch_size examples after shuffling as the mini-batch
[N, D] = X.shape # N and D from X
if K >= N:
print('you are trying to make too many clusters!')
return
# initialize si2 dumbly
si2 = 1
# initialize pk uniformly
pk = np.ones(K) / K
# we initialize the means totally randomly
mu = np.random.rand(K, D)
z = np.zeros((N, K))
for iteration in range(numIter):
learning_rate = (iteration + 1)**(-kappa) # decaying learning rate for this iteration
for n in range(N):
for k in range(K):
# TBD: compute z(n,k) = log probability that
# the nth data point belongs to cluster k
z[n][k] = np.log(pk[k]) - np.linalg.norm(mu[k] - X[n])**2 / (2*si2)
maxZ = np.max(z[n])
# turn log probabilities into actual probabilities
z[n] = np.exp(z[n] - maxZ - np.log(np.sum(np.exp(z[n] - maxZ))))
nk = np.sum(z, axis=0)
# re-estimate pk
pk = (1-learning_rate)*pk + learning_rate*nk/N
mu_prev = mu
mu = z.T@X
mu = (1-learning_rate)*mu_prev + learning_rate*np.array([mu[k]/nk[k] if nk[k] != 0 else mu_prev[k] for k in range(K)])
si2 = np.sum(np.square(X - z@mu))/(N*D)
return mu, pk, si2
###Output
_____no_output_____
###Markdown
**Running Online GMM for k = 5, 10, 15, 20**
###Code
for k in [5, 10, 15, 20]:
mu, pk, si2 = online_gmm(X, k) # calling the function
# printing mean
for i in range(k):
plt.imshow(mu[i].reshape((28, 28)), cmap='gray')
# plt.savefig('onlineEM_figure '+str(i+1)+' for k_'+str(k))
plt.show()
###Output
_____no_output_____ |
plot_mlp_losses.ipynb | ###Markdown
Arguments
###Code
# imports used by the cells below (assumed here; they are not shown elsewhere in this notebook)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

subject = 'F'
voxel_num = 500
loss_type = 'Train'
# n_folds (the number of cross-validation folds) is assumed to be defined in an earlier cell
def collect_mlp_losses(n_folds, encoding_model, subject, voxel_num, loss_type):
fold_losses = []
for fold in range(n_folds):
curr_fold_losses = np.load("{}/mlp_fold_{}_losses/subject_{}/fold_{}.npy".format(encoding_model, loss_type, subject, fold))
curr_fold_losses = curr_fold_losses[voxel_num]
fold_losses.append(curr_fold_losses)
fold_losses = np.array(fold_losses)
return fold_losses
X = np.arange(1,11)
mlp_initial_losses = collect_mlp_losses(n_folds, 'mlp_initial', subject, voxel_num, loss_type)
mlp_smallerhiddensize_losses = collect_mlp_losses(n_folds, 'mlp_smallerhiddensize', subject, voxel_num, loss_type)
mlp_largerhiddensize_losses = collect_mlp_losses(n_folds, 'mlp_largerhiddensize', subject, voxel_num, loss_type)
mlp_additionalhiddenlayer_losses = collect_mlp_losses(n_folds, 'mlp_additionalhiddenlayer', subject, voxel_num, loss_type)
fig, axs = plt.subplots(2, 2, figsize=(14,8))
for fold in range(n_folds):
axs_x, axs_y = fold // 2, fold % 2
axs[axs_x, axs_y].plot(X, mlp_initial_losses[fold], color='green')
axs[axs_x, axs_y].plot(X, mlp_smallerhiddensize_losses[fold], color='blue')
axs[axs_x, axs_y].plot(X, mlp_largerhiddensize_losses[fold], color='red')
axs[axs_x, axs_y].plot(X, mlp_additionalhiddenlayer_losses[fold], color='black')
axs[axs_x, axs_y].set_title('{} Losses: Subject {} - Voxel {} - Fold {}'.format(loss_type, subject, voxel_num, fold+1))
for i, ax in enumerate(axs.flat):
if i // 2 == 0:
ax.set(ylabel='Loss')
else:
ax.set(xlabel='Epoch', ylabel='Loss')
green_patch = mpatches.Patch(color='green', label='mlp_initial')
blue_patch = mpatches.Patch(color='blue', label='mlp_smallerhiddensize')
red_patch = mpatches.Patch(color='red', label='mlp_largerhiddensize')
black_patch = mpatches.Patch(color='black', label='mlp_additionalhiddenlayer')
plt.legend(handles=[green_patch, blue_patch, red_patch, black_patch])
plt.show()
###Output
_____no_output_____ |
finding-relationships-data-python/02/demos/demo-06-HistogramsKDEPlotsRugPlots.ipynb | ###Markdown
Automobile Dataset Source: https://www.kaggle.com/toramky/automobile-dataset* symboling - Rating corresponds to the degree to which the auto is more risky than its price indicates. Cars are initially assigned a risk factor symbol associated with its price. Then, if it is more risky (or less), this symbol is adjusted by moving it up (or down) the scale. Actuarians call this process "symboling" * 3 -> Risky * -3 -> pretty safe* normalized-losses - The third factor is the relative average loss payment per insured vehicle year. This value is normalized for all autos within a particular size classification (two-door small, station wagons, sports/speciality, etc…), and represents the average loss per car per year.* make - making company* fuel-type - Type of fuels* aspiration - * num-of-doors* body-style* drive-wheels* engine-location* wheel-base* length* width* height* curb-weight* engine-type* num-of-cylinders* engine-size* fuel-system* bore* stroke* compression-ratio* horsepower* peak-rpm* city-mpg* highway-mpg* price Import the data
###Code
# imports assumed for this notebook: pandas for the data, seaborn/matplotlib for the plots
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

automobile_data = pd.read_csv('datasets/Automobile_data.csv',
na_values = '?')
automobile_data.head(5)
automobile_data.shape
automobile_data.isnull().sum()
###Output
_____no_output_____
###Markdown
Cleaning
###Code
automobile_data.dropna(inplace=True)
automobile_data.shape
###Output
_____no_output_____
###Markdown
Saving back to dataset folder for future use
###Code
automobile_data.to_csv('datasets/automobile_data_processed.csv', index = False)
automobile_data.dtypes
###Output
_____no_output_____
###Markdown
Describing the data
###Code
automobile_data.describe().transpose()
###Output
_____no_output_____
###Markdown
* From here we can see the distribution of price: most of the vehicles have prices in the range of 5000-10000
###Code
plt.figure(figsize=(12, 8))
sns.distplot(automobile_data['price'],
color='red')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* If we add more bins, we can see the price ranges in more detail
###Code
plt.figure(figsize=(12, 8))
sns.distplot(automobile_data['price'],
bins=20, color='red')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* This is the distplot with the histogram turned off, so we see just the distribution curve
###Code
plt.figure(figsize=(12, 8))
sns.distplot(automobile_data['price'],
hist=False, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* We can also add a rug plot to show where the individual observations fall in the distribution
###Code
plt.figure(figsize=(12,8))
sns.distplot(automobile_data['price'],
hist=False, rug=True, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
Rug plot
###Code
plt.figure(figsize=(12,8))
sns.rugplot(automobile_data['price'],
height=0.5, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* The KDE plot shows the estimated density for each price range
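For reference, a kernel density estimate with bandwidth $h$ is
$$ \hat{f}(x)=\frac{1}{nh}\sum_{i=1}^{n}K\!\left(\frac{x-x_i}{h}\right), $$
where the $x_i$ are the observed prices and $K$ is the kernel (seaborn uses a Gaussian kernel by default, with the bandwidth chosen automatically).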
###Code
plt.figure(figsize=(12,8))
sns.kdeplot(automobile_data['price'],
shade=True, color='blue')
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
Scatterplot * Now let's plot horsepower against price from the automobile data; broadly, price increases with horsepower
###Code
plt.figure(figsize=(12, 8))
sns.scatterplot(x='horsepower', y='price',
data=automobile_data, s=120)
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____
###Markdown
* If we colour the points by the number of cylinders against price and horsepower, we see that most of the cars have 4 cylinders.* Also, as the horsepower increases, the price increases
###Code
plt.figure(figsize=(12, 8))
sns.scatterplot(x='horsepower', y='price',
data=automobile_data,
hue='num-of-cylinders', s=120)
plt.title('Automobile Data')
plt.show()
sns.regplot(x='horsepower', y='price',
data=automobile_data)
plt.show()
sns.regplot(x='highway-mpg', y='price',
data=automobile_data)
plt.show()
###Output
_____no_output_____
###Markdown
* Now let's see the relationship between horsepower and price
###Code
sns.jointplot(x='horsepower', y='price',
data=automobile_data)
plt.show()
sns.jointplot(x='horsepower', y='price',
data=automobile_data, kind='reg')
plt.show()
###Output
_____no_output_____
###Markdown
* We can just see the density
###Code
sns.jointplot(x='horsepower', y='price',
data=automobile_data, kind='kde')
plt.show()
###Output
_____no_output_____
###Markdown
* This is a better representation of the density: now it is very clear which horsepower and price ranges have the highest density
###Code
sns.jointplot(x='horsepower', y='price',
data=automobile_data, kind='hex')
plt.show()
###Output
_____no_output_____
###Markdown
* We can also draw the rug plot and KDE plot together to see the distribution range* From here we can see that the horsepower range 50-60 has the highest density, and the corresponding high-density price range is 5000-10000* The rug plots along the axes help us confirm this
###Code
f, ax = plt.subplots(figsize=(6, 6))
sns.kdeplot(automobile_data['horsepower'], automobile_data['price'], ax=ax)
sns.rugplot(automobile_data['horsepower'], color="limegreen", ax=ax)
sns.rugplot(automobile_data['price'], color="red", vertical=True, ax=ax)
plt.title('Automobile Data')
plt.show()
###Output
_____no_output_____ |
Session 04 - Language Models.ipynb | ###Markdown
Language ModellingThe Natural Language Toolkit has data types and functions that make life easier for us when we want to count bigrams and compute their probabilities.
###Code
# Needed imports
import nltk
%matplotlib notebook
###Output
_____no_output_____
###Markdown
**Import the Brown corpus** The Brown University Standard Corpus of Present-Day American English, or just Brown Corpus (https://en.wikipedia.org/wiki/Brown_Corpus), is a general corpus containing 500 samples of English-language text, totaling roughly one million words, compiled from works published in the United States in 1961.
###Code
from nltk.corpus import brown
brown.categories()
###Output
_____no_output_____
###Markdown
We can access the words of the Brown corpus, either all of them or those belonging to any of its categories.
###Code
print(brown.words())
print(brown.words(categories='mystery'))
###Output
[u'The', u'Fulton', u'County', u'Grand', u'Jury', ...]
[u'There', u'were', u'thirty-eight', u'patients', ...]
###Markdown
We compute the word frequencies by using the `FreqDist` function of NLTK (an nltk.FreqDist() is like a dictionary, but it is ordered by frequency). The following uses this function to compute the frequencies and list the 20 most frequent words. 1. Frequency Distribution
###Code
freq_brown = nltk.FreqDist(brown.words())
list(freq_brown.keys())[:20]
freq_brown.most_common(20)
###Output
_____no_output_____
###Markdown
We can draw the frequency distribution by plotting it
###Code
freq_brown.plot(30)
###Output
_____no_output_____
###Markdown
We can see that they are mostly stopwords and punctuation signs. From NLTK we can access a list of stopwords for different languages. This is helpful if we want to remove them.
###Code
from nltk.corpus import stopwords
print(stopwords.words('english'))
###Output
[u'i', u'me', u'my', u'myself', u'we', u'our', u'ours', u'ourselves', u'you', u"you're", u"you've", u"you'll", u"you'd", u'your', u'yours', u'yourself', u'yourselves', u'he', u'him', u'his', u'himself', u'she', u"she's", u'her', u'hers', u'herself', u'it', u"it's", u'its', u'itself', u'they', u'them', u'their', u'theirs', u'themselves', u'what', u'which', u'who', u'whom', u'this', u'that', u"that'll", u'these', u'those', u'am', u'is', u'are', u'was', u'were', u'be', u'been', u'being', u'have', u'has', u'had', u'having', u'do', u'does', u'did', u'doing', u'a', u'an', u'the', u'and', u'but', u'if', u'or', u'because', u'as', u'until', u'while', u'of', u'at', u'by', u'for', u'with', u'about', u'against', u'between', u'into', u'through', u'during', u'before', u'after', u'above', u'below', u'to', u'from', u'up', u'down', u'in', u'out', u'on', u'off', u'over', u'under', u'again', u'further', u'then', u'once', u'here', u'there', u'when', u'where', u'why', u'how', u'all', u'any', u'both', u'each', u'few', u'more', u'most', u'other', u'some', u'such', u'no', u'nor', u'not', u'only', u'own', u'same', u'so', u'than', u'too', u'very', u's', u't', u'can', u'will', u'just', u'don', u"don't", u'should', u"should've", u'now', u'd', u'll', u'm', u'o', u're', u've', u'y', u'ain', u'aren', u"aren't", u'couldn', u"couldn't", u'didn', u"didn't", u'doesn', u"doesn't", u'hadn', u"hadn't", u'hasn', u"hasn't", u'haven', u"haven't", u'isn', u"isn't", u'ma', u'mightn', u"mightn't", u'mustn', u"mustn't", u'needn', u"needn't", u'shan', u"shan't", u'shouldn', u"shouldn't", u'wasn', u"wasn't", u'weren', u"weren't", u'won', u"won't", u'wouldn', u"wouldn't"]
###Markdown
**But should we remove them? Why?** No, just think about what we are trying to do here. We are trying to use the dataset to create a model of the language that, given a set of words, predicts the most probable next word. For this process, stopwords, as well as punctuation and other signs, are needed. For the same reason, we shall not stem/lemmatize, nor normalize the words. We need all these variations to learn a proper language model (i.e., `the` != `The`). As we will discuss in the coming lessons, both stemming and stopword removal can be useful in other tasks such as Text Classification. 2. Bigram Model We'll start small and create a language model based on bi-grams. To that end, we will use the `ConditionalFreqDist` function of NLTK. `nltk.ConditionalFreqDist()` counts frequencies of pairs. When given a list of bigrams, it maps each first word of a bigram to a FreqDist over the second words of the bigram. If you remember the theoretical session, we are applying the Markov assumption: the next element (a word in our case) of a sequence can be predicted by focusing only on the previous one. The following code creates these bi-gram counts. If we print the `conditions` we can see the antecedents of the bi-grams. (`conditions()` in a `ConditionalFreqDist` are like `keys()` in a dictionary).
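In equations, the chain rule together with the first-order Markov assumption gives
$$ P(w_1,\dots,w_m)=\prod_{i=1}^{m}P(w_i\mid w_1,\dots,w_{i-1})\approx\prod_{i=1}^{m}P(w_i\mid w_{i-1}), $$
and the maximum-likelihood estimate used below (`nltk.MLEProbDist`) is just a ratio of counts:
$$ P(w_i\mid w_{i-1})=\frac{\mathrm{count}(w_{i-1},w_i)}{\mathrm{count}(w_{i-1})}. $$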
###Code
cfreq_brown_2gram = nltk.ConditionalFreqDist(nltk.bigrams(brown.words()))
cfreq_brown_2gram.conditions()[:20]
###Output
_____no_output_____
###Markdown
Let's see the most frequent terms after the word `my`.
###Code
# the cfreq_brown_2gram entry for "my" is a FreqDist (i.e, a dictionary of word and freqCount).
my_terms = cfreq_brown_2gram["my"]
# Sort (desc) the terms by frequency and print the 25th most common
sorted(my_terms.items(), key=lambda x: -x[1])[:25]
###Output
_____no_output_____
###Markdown
We can do the same with the `most_common` function
###Code
cfreq_brown_2gram["my"].most_common(25)
###Output
_____no_output_____
###Markdown
With `nltk.ConditionalProbDist()`, pairs are mapped to probabilities instead of counts.
###Code
cprob_brown_2gram = nltk.ConditionalProbDist(cfreq_brown_2gram, nltk.MLEProbDist) # Uses a Maximum Likelihood Estimation (MLE) estimator
###Output
_____no_output_____
###Markdown
This again has `conditions()`, which are like dictionary keys
###Code
cprob_brown_2gram.conditions()
###Output
_____no_output_____
###Markdown
We can also find the words that can come after `my` by using the function `samples()`
###Code
cprob_brown_2gram["my"].samples()
###Output
_____no_output_____
###Markdown
In addition, you can see the prob of a particular pair
###Code
cprob_brown_2gram["my"].prob("own")
cprob_brown_2gram["my"].prob("leg")
###Output
_____no_output_____
###Markdown
3. Compute the probability of a sentence Create a function to compute the probability of a word from its frequency
###Code
def unigram_prob(word):
len_brown = len(brown.words())
return float(freq_brown[word]) / float(len_brown)
unigram_prob("night")
###Output
_____no_output_____
###Markdown
We now can ask for the probability of a word sequence.For instance: `P(how do you do) = P(how) * P(do|how) * P(you|do) * P(do | you)`
###Code
unigram_prob("how") * cprob_brown_2gram["how"].prob("do") * cprob_brown_2gram["do"].prob("you") * cprob_brown_2gram["you"].prob("do")
###Output
_____no_output_____
###Markdown
Compare it with the prob of another not so common sentence: `how do you dance`
###Code
unigram_prob("how") * cprob_brown_2gram["how"].prob("do") * cprob_brown_2gram["do"].prob("you") * cprob_brown_2gram["you"].prob("dance")
###Output
_____no_output_____
###Markdown
As expected, one order of magnitude less probable. 4. Generate Language With our bi-gram language model already generated, we can now use it to generate text and see what our model has learned.
###Code
cprob_brown_2gram["my"].generate()
###Output
_____no_output_____
###Markdown
Let's see if the model creates valid text or just gibberish
###Code
word = "my"
text = ""
for index in range(20):
text += word + " "
word = cprob_brown_2gram[ word].generate()
print(text)
###Output
my burning arcs which is bounded up in a line of him in her back of a Democratic duties normally
###Markdown
It is not a valid sentence, but it makes some kind of sense. Remember that we are just learning from bigrams! **We can also train language models on other datasets.** In particular we are going to import the book dataset of NLTK, which includes the text of different books. The following function takes a text (i.e., the text of a given book) to learn a language model, an initial word to start the generation, and the number of words that have to be generated.
###Code
# Here is how to do this with NLTK books:
from nltk.book import *
def generate_text(text, initialword, numwords):
bigrams = list(nltk.ngrams(text, 2))
cpd = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(bigrams), nltk.MLEProbDist)
word = initialword
text = ""
for i in range(numwords):
text += word + " "
word = cpd[ word].generate()
print(text)
###Output
*** Introductory Examples for the NLTK Book ***
Loading text1, ..., text9 and sent1, ..., sent9
Type the name of the text or sentence to view it.
Type: 'texts()' or 'sents()' to list the materials.
text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908
###Markdown
We use different books to generate text
###Code
# Holy Grail
generate_text(text6, "I", 25)
# sense and sensibility
generate_text(text2, "I", 25)
###Output
I can it had passed it all , with my exchange , on remaining half so important Tuesday came only be sure you know where
###Markdown
5. TriGrams Let's try a more advanced model using tri-grams to see if it is able to generate better language. We cannot use the `ConditionalFreqDist` as before: `nltk.ConditionalFreqDist` expects its data as a sequence of `(condition, item)` tuples, while `nltk.trigrams` returns tuples of length 3. Therefore, we have to adapt the trigrams output.
###Code
def generate_text(text, initialword, numwords):
trigrams = list(nltk.ngrams(text, 3, pad_right=True, pad_left=True))
trigram_pairs = (((w0, w1), w2) for w0, w1, w2 in trigrams)
cpd = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(trigram_pairs), nltk.MLEProbDist)
word = initialword
text = ""
for i in range(numwords):
w = cpd[(word[i],word[i+1])].generate()
word += [w]
print(" ".join(word))
generate_text(text2, ["I", "am"], 25)
###Output
I am afraid , Miss Dashwood was above with her increase of emotion , her eyes were red and swollen ; and without selfishness -- without encouraging
###Markdown
As expected, it creates a better LM. Can we go on with larger n-grams? Let's see. 6. N-grams
###Code
def generate_text(text, initialword, numwords):
ngrams = list(nltk.ngrams(text, 4, pad_right=True, pad_left=True))
ngram_pairs = (((w0, w1, w2), w3) for w0, w1, w2, w3 in ngrams)
cpd = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(ngram_pairs), nltk.MLEProbDist)
word = initialword
text = ""
for i in range(numwords):
w = cpd[(word[i],word[i+1], word[i+2])].generate()
word += [w]
print(" ".join(word))
generate_text(text2, ["I", "am", "very"], 25)
###Output
I am very sure that Colonel Brandon would give me a living ." " No ," answered Elinor , without knowing what she said . I have many
###Markdown
As we make the n-grams larger we get more accurate language models. However, if we create very large n-grams we are not going to have enough data to train our models: we will never see enough data (enough sequences of n-grams) to train the model. 7. Star Wars Let's try to generate some text based on the dialogues from the Star Wars scripts (episodes IV, V, and VI). All the information for this exercise was retrieved from the [Visualizing Star Wars Movie Scripts](https://github.com/gastonstat/StarWars) project. We start by reading all the dialogue lines from the scripts, which are labeled with the character speaking. We are only considering Luke, Leia, Han Solo and Vader. We left Chewbacca out of the example for obvious reasons... We read all the lines of each character and combine them into one single string. We tokenize this string using the `WordPunctTokenizer` and use these tokens to create an NLTK Text object. __NOTE__: some warnings may appear when executing this part (something like *Skipping line...*), due to some minor parsing errors when generating the dataframe. You can ignore them.
###Code
import nltk
from nltk import word_tokenize, WordPunctTokenizer
import pandas
wpt = WordPunctTokenizer()
c3po_string = ""
vader_string = ""
solo_string = ""
luke_string = ""
leia_string = ""
def read_lines(path):
lines = pandas.read_csv(path, delim_whitespace=True, error_bad_lines=False)
solo_lines = lines.loc[lines['Char'] == 'HAN']['Text']
vader_lines = lines.loc[lines['Char'] == 'VADER']['Text']
luke_lines = lines.loc[lines['Char'] == 'LUKE']['Text']
leia_lines = lines.loc[lines['Char'] == 'LEIA']['Text']
global vader_string, c3po_string, solo_string, luke_string, leia_string
solo_string = solo_string + " " + " ".join(solo_lines)
vader_string = vader_string + " " + " ".join(vader_lines)
luke_string = luke_string + " " + " ".join(luke_lines)
leia_string = leia_string + " " + " ".join(leia_lines)
read_lines('files/SW_EpisodeIV.txt')
read_lines('files/SW_EpisodeV.txt')
read_lines('files/SW_EpisodeVI.txt')
solo_text = nltk.Text(wpt.tokenize(solo_string))
vader_text = nltk.Text(wpt.tokenize(vader_string))
luke_text = nltk.Text(wpt.tokenize(luke_string))
leia_text = nltk.Text(wpt.tokenize(leia_string))
###Output
Skipping line 555: expected 3 fields, saw 9
Skipping line 54: expected 3 fields, saw 4
Skipping line 191: expected 3 fields, saw 4
Skipping line 285: expected 3 fields, saw 10
###Markdown
Using these Text objects, we can proceed in the same way as in the previous examples to generate texts. The following `generate_text_backoff` tries to generate a new word based on a 4-gram probability. If this fails, it tries the tri-gram one and then the bi-gram. If none of them are successful, it just stops. Recalling the POS tagging session, this is known as a backoff strategy. The function takes as parameters some training text, the initial words to start the sentence, and the length of the text to be generated.
###Code
def generate_text_backoff(text, initialwords, numwords):
#ngrams
ngrams = list(nltk.ngrams(text, 4, pad_right=True, pad_left=True))
ngram_pairs = (((w0, w1, w2), w3) for w0, w1, w2, w3 in ngrams)
cpdNgram = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(ngram_pairs), nltk.MLEProbDist)
#trigram
trigrams = list(nltk.ngrams(text, 3, pad_right=True, pad_left=True))
trigram_pairs = (((w0, w1), w2) for w0, w1, w2 in trigrams)
cpd3gram = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(trigram_pairs), nltk.MLEProbDist)
#bigram
bigrams = list(nltk.ngrams(text, 2))
cpd2gram = nltk.ConditionalProbDist(nltk.ConditionalFreqDist(bigrams), nltk.MLEProbDist)
word = initialwords
for i in range(numwords):
#try n-gram
if (word[i],word[i+1], word[i+2]) in cpdNgram:
w = cpdNgram[(word[i],word[i+1], word[i+2])].generate()#.max()
#try 3-gram
elif (word[i+1],word[i+2]) in cpd3gram:
w = cpd3gram[(word[i+1],word[i+2])].generate()#.max()
#try 2-gram
elif word[i+2] in cpd2gram:
w = cpd2gram[word[i+2]].generate()#.max()
#at least we tried...
else:
break
word += [w]
return " ".join(word)
###Output
_____no_output_____
###Markdown
Now that we have our function ready, let's try to generate some texts and check how they vary from one character to another, depending on the starting words.
###Code
print("Han Solo: " + generate_text_backoff(solo_text, ["Chewie", "come", "here"], 25) + "\n")
print("Leia: " + generate_text_backoff(leia_text, ["My", "name", "is"], 25) + "\n")
print("Luke: " + generate_text_backoff(luke_text, ["It", "sure", "is"], 25) + "\n")
print("Vader: " + generate_text_backoff(vader_text, ["It", "sure", "is"], 25) + "\n")
print("Vader: " + generate_text_backoff(vader_text, ["I", "am", "your"], 25) + "\n")
###Output
_____no_output_____ |
pi/device_info.ipynb | ###Markdown
Device Info & Maintenance Software Update```bashsudo apt updatesudo apt -y full-upgradesource /home/pi/.venv/jns/bin/activate pip3 list --outdatedcd ~/iot49git pull``` System
###Code
!uname -a
!cat /etc/os-release
###Output
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
###Markdown
Disk
###Code
!df -h
###Output
Filesystem Size Used Avail Use% Mounted on
/dev/root 29G 3.8G 24G 14% /
devtmpfs 430M 0 430M 0% /dev
tmpfs 463M 0 463M 0% /dev/shm
tmpfs 463M 12M 451M 3% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 463M 0 463M 0% /sys/fs/cgroup
/dev/mmcblk0p1 253M 49M 204M 20% /boot
###Markdown
apt Packages
###Code
# largest installed packages
!dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -nr | head -n 20
# remove package
# !sudo apt purge -y packagename
###Output
_____no_output_____
###Markdown
Pip
###Code
!pip list
###Output
Package Version
--------------------------------- ---------
anyio 2.2.0
argon2-cffi 20.1.0
astroid 2.5.1
async-generator 1.10
attrs 20.3.0
Automat 20.2.0
autopep8 1.5.5
Babel 2.9.0
backcall 0.2.0
bleach 3.3.0
bleak 0.10.0
certifi 2020.12.5
cffi 1.14.5
chardet 4.0.0
colorzero 1.1
constantly 15.1.0
cryptography 3.4.6
cycler 0.10.0
decorator 4.4.2
defusedxml 0.7.1
entrypoints 0.3
flake8 3.8.4
gpiozero 1.5.1
hyperlink 21.0.0
hypothesis 6.8.0
idna 2.10
ifaddr 0.1.7
importlib-metadata 3.7.2
incremental 21.3.0
iniconfig 1.1.1
iot-device 0.4.6
iot-kernel 0.4.6
ipykernel 5.5.0
ipython 7.21.0
ipython-genutils 0.2.0
isort 5.7.0
jedi 0.17.2
Jinja2 2.11.3
json5 0.9.5
jsonschema 3.2.0
jupyter-client 6.1.11
jupyter-contrib-core 0.3.3
jupyter-contrib-nbextensions 0.5.1
jupyter-core 4.7.1
jupyter-highlight-selected-word 0.2.0
jupyter-latex-envs 1.4.6
jupyter-lsp 1.1.4
jupyter-nbextensions-configurator 0.4.1
jupyter-packaging 0.7.12
jupyter-server 1.4.1
jupyterlab 3.0.10
jupyterlab-pygments 0.1.2
jupyterlab-server 2.3.0
kiwisolver 1.3.1
lazy-object-proxy 1.5.2
lxml 4.6.2
MarkupSafe 1.1.1
matplotlib 3.3.4
mccabe 0.6.1
mistune 0.8.4
mpmath 1.2.1
nbclassic 0.2.6
nbclient 0.5.3
nbconvert 6.0.7
nbformat 5.1.2
nest-asyncio 1.5.1
notebook 6.2.0
numpy 1.20.1
packaging 20.9
pandas 1.2.3
pandocfilters 1.4.3
parso 0.7.1
pexpect 4.8.0
picamera 1.13
pickleshare 0.7.5
Pillow 8.1.2
pip 21.0.1
pluggy 0.13.1
prometheus-client 0.9.0
prompt-toolkit 3.0.17
ptyprocess 0.7.0
py 1.10.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycodestyle 2.6.0
pycparser 2.20
pycurl 7.43.0.6
pydocstyle 5.1.1
pyflakes 2.2.0
Pygments 2.8.1
pylint 2.7.2
pyOpenSSL 20.0.1
pyparsing 2.4.7
pyrsistent 0.17.3
pyserial 3.5
pytest 6.2.2
python-dateutil 2.8.1
python-jsonrpc-server 0.4.0
python-language-server 0.36.2
pytz 2021.1
PyYAML 5.4.1
pyzmq 22.0.3
readline 6.2.4.1
requests 2.25.1
rope 0.18.0
scipy 1.6.1
Send2Trash 1.5.0
service-identity 18.1.0
setuptools 52.0.0
six 1.15.0
sniffio 1.2.0
snowballstemmer 2.1.0
sortedcontainers 2.3.0
sympy 1.7.1
termcolor 1.1.0
terminado 0.9.2
testpath 0.4.4
toml 0.10.2
tornado 6.1
traitlets 5.0.5
Twisted 21.2.0
txdbus 1.1.2
typed-ast 1.4.2
typing-extensions 3.7.4.3
ujson 4.0.2
urllib3 1.26.3
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 0.58.0
wheel 0.36.2
wrapt 1.12.1
yapf 0.30.0
zeroconf 0.28.8
zipp 3.4.1
zope.interface 5.2.0
|
docs/examples/idealized.ipynb | ###Markdown
Idealized Synthetic Data*Under development*
###Code
import sys; sys.path.append("../../")
import numpy as np
import pandas as pd
import xarray as xr
from melodies_monet import driver
an = driver.analysis()
an.control = "control_idealized.yaml"
an.read_control()
an
###Output
_____no_output_____
###Markdown
````{admonition} Note: This is the complete file that was loaded.
:class: dropdown
```{literalinclude} control_idealized.yaml
:caption:
:linenos:
```
````
Generate data Model
###Code
rs = np.random.RandomState(42)
control = an.control_dict
nlat = 100
nlon = 200
lon = np.linspace(-161, -60, nlon)
lat = np.linspace(18, 60, nlat)
Lon, Lat = np.meshgrid(lon, lat)
time = pd.date_range(control['analysis']['start_time'], control['analysis']['end_time'], freq="3H")
ntime = time.size
# Generate translating and expanding Gaussian
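# i.e. g_t(x, y) = exp(-((x - mu_t)^2 + y^2) / (2 * sigma_t^2)), with the centre mu_t sliding from -0.5 to 0.5
# and the width sigma_t growing from 0.3 to 1 across the time dimension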
x_ = np.linspace(-1, 1, lon.size)
y_ = np.linspace(-1, 1, lat.size)
x, y = np.meshgrid(x_, y_)
mu = np.linspace(-0.5, 0.5, ntime)
sigma = np.linspace(0.3, 1, ntime)
g = np.exp(
-(
(
(x[np.newaxis, ...] - mu[:, np.newaxis, np.newaxis])**2
+ y[np.newaxis, ...]**2
) / (
2 * sigma[:, np.newaxis, np.newaxis]**2
)
)
)
# Coordinates
lat_da = xr.DataArray(lat, dims="lat", attrs={'longname': 'latitude', 'units': 'degN'}, name="lat")
lon_da = xr.DataArray(lon, dims="lon", attrs={'longname': 'longitude', 'units': 'degE'}, name="lon")
time_da = xr.DataArray(time, dims="time", name="time")
# Generate dataset
field_names = control['model']['test_model']['variables'].keys()
ds_dict = dict()
for field_name in field_names:
units = control['model']['test_model']['variables'][field_name]['units']
# data = rs.rand(ntime, nlat, nlon)
data = g
da = xr.DataArray(
data,
# coords={"lat": lat_da, "lon": lon_da, "time": time_da},
coords=[time_da, lat_da, lon_da],
dims=['time', 'lat', 'lon'],
attrs={'units': units},
)
ds_dict[field_name] = da
ds = xr.Dataset(ds_dict).expand_dims("z", axis=1)
ds["z"] = [1]
ds_mod = ds
ds_mod
ds.squeeze("z").A.plot(col="time")
ds.to_netcdf(control['model']['test_model']['files'])
###Output
_____no_output_____
###Markdown
Obs
###Code
# Generate positions
# TODO: only within land boundaries
n = 500
lats = rs.uniform(lat[0], lat[-1], n)#[np.newaxis, :]
lons = rs.uniform(lon[0], lon[-1], n)#[np.newaxis, :]
siteid = np.arange(n)[np.newaxis, :].astype(str)
# Generate dataset
field_names = control['model']['test_model']['variables'].keys()
ds_dict = dict()
for field_name0 in field_names:
field_name = control['model']['test_model']['mapping']['test_obs'][field_name0]
units = control['model']['test_model']['variables'][field_name0]['units']
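# observations = the model's A field sampled at the random station locations plus Gaussian noise (scale 0.3)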
values = (
ds_mod.A.squeeze().interp(lat=xr.DataArray(lats), lon=xr.DataArray(lons)).values
+ rs.normal(scale=0.3, size=(ntime, n))
)[:, np.newaxis]
da = xr.DataArray(
values,
coords={
"x": ("x", np.arange(n)), # !!!
"time": ("time", time),
"latitude": (("y", "x"), lats[np.newaxis, :], lat_da.attrs),
"longitude": (("y", "x"), lons[np.newaxis, :], lon_da.attrs),
"siteid": (("y", "x"), siteid),
},
dims=("time", "y", "x"),
attrs={'units': units},
)
ds_dict[field_name] = da
ds = xr.Dataset(ds_dict)
ds
ds.to_netcdf(control['obs']['test_obs']['filename'])
###Output
_____no_output_____
###Markdown
Load data
###Code
an.open_models()
an.models['test_model'].obj
an.open_obs()
an.obs['test_obs'].obj
%%time
an.pair_data()
an.paired
an.paired['test_obs_test_model'].obj
an.paired['test_obs_test_model'].obj.dims
###Output
_____no_output_____
###Markdown
Plot
###Code
%%time
an.plotting()
###Output
Warning: variables dict for A_obs not provided, so defaults used
Warning: variables dict for B_obs not provided, so defaults used
Wall time: 4.19 s
|
datasets/switchboard-corpus/convert.ipynb | ###Markdown
Code to Convert the Switchboard dataset into Convokit format
###Code
import os
os.chdir("../../") # import convokit
from convokit import Corpus, Speaker, Utterance
os.chdir("datasets/switchboard-corpus") # then come back for swda
from swda import Transcript
import glob
###Output
_____no_output_____
###Markdown
Create UsersEach caller is considered a user, and there are a total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:- Gender (str): MALE or FEMALE- Education (int): 0, 1, 2, 3, 9- Birth Year (int): YYYY- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
###Code
files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files
user_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex,
"education": trans.from_caller_education,
"birth_year": trans.from_caller_birth_year,
"dialect_area": trans.from_caller_dialect_area}
user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex,
"education": trans.to_caller_education,
"birth_year": trans.to_caller_birth_year,
"dialect_area": trans.to_caller_dialect_area}
###Output
_____no_output_____
###Markdown
Create a Speaker object for each unique user in the dataset
###Code
corpus_users = {k: Speaker(name = k, meta = v) for k,v in user_meta.items()}
###Output
_____no_output_____
###Markdown
Check number of users in the dataset
###Code
print("Number of users in the data = {}".format(len(corpus_users)))
# Example metadata from user 1632
corpus_users['1632'].meta
###Output
_____no_output_____
###Markdown
Create UtterancesUtterances are found in the "text" field of each Transcript object. There are 221,616 utterances in total.Each Utterance object has the following fields:- id (str): the unique id of the utterance- user (Speaker): the Speaker giving the utterance- root (str): id of the root utterance of the conversation- reply_to (str): id of the utterance this replies to- timestamp: timestamp of the utterance (not applicable in Switchboard)- text (str): text of the utterance- metadata - tag (str): the DAMSL act-tag of the utterance - pos (str): the part-of-speech tagged portion of the utterance - trees (nltk Tree): parsed tree of the utterance
###Code
utterance_corpus = {}
# Iterate thru each transcript
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
utts = trans.utterances
root = str(trans.conversation_no) + "-0" # Get id of root utterance
recent_A = None
recent_B = None
# Iterate thru each utterance in transcript
last_speaker = ''
cur_speaker = ''
all_text = ''
text_pos = ''
text_tag_list = []
counter = 0
first_utt = True
for i, utt in enumerate(utts):
idx = str(utt.conversation_no) + "-" + str(counter)
text = utt.text
# Check which user is talking
if 'A' in utt.caller:
recent_A = idx;
user = str(trans.from_caller)
cur_speaker = user
else:
recent_B = idx;
user = str(trans.to_caller)
cur_speaker = user
# Only add as an utterance if the user has finished talking
if cur_speaker != last_speaker and i > 0:
# Put act-tag and POS information into metadata
meta = {'tag': text_tag_list,
}
# For reply_to, find the most recent utterance from the other caller
if first_utt:
reply_to = None
first_utt = False
elif 'A' in utt.caller:
reply_to = recent_B
else:
reply_to = recent_A
utterance_corpus[idx] = Utterance(idx, corpus_users[user], root,
reply_to, None, all_text, meta)
# Update with the current utterance information
# This is the first utterance of the next statement
all_text = utt.text
text_pos = utt.pos
text_tag_list = [(utt.text, utt.act_tag)]
counter += 1
else:
# Otherwise, combine all the text from the user
all_text += utt.text
text_pos += utt.pos
text_tag_list.append((utt.text, utt.act_tag))
last_speaker = cur_speaker
last_speaker_idx = idx
utterance_list = [utterance for k,utterance in utterance_corpus.items()]
###Output
_____no_output_____
###Markdown
Check number of utterances in the dataset
###Code
print("Number of utterances in the data = {}".format(len(utterance_corpus)))
# Example utterance object
utterance_corpus['4325-2']
###Output
_____no_output_____
###Markdown
Create corpus from list of utterances
###Code
switchboard_corpus = Corpus(utterances=utterance_list, version=1)
print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids())))
###Output
number of conversations in the dataset = 1155
###Markdown
Create Conversations
###Code
# Set conversation Metadata
for i, c in enumerate(switchboard_corpus.conversations):
trans = Transcript(files[i], './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
convo = switchboard_corpus.conversations[c]
convo.meta['filename'] = files[i]
date = trans.talk_day
convo_date = "%d-%d-%d" % (date.year, date.month, date.day)
convo.meta['talk_day'] = convo_date
convo.meta['topic_description'] = trans.topic_description
convo.meta['length'] = trans.length
convo.meta['prompt'] = trans.prompt
convo.meta['from_caller'] = str(trans.from_caller)
convo.meta['to_caller'] = str(trans.to_caller)
print(switchboard_corpus.conversations['4384-0'].meta)
###Output
{'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 'length': 5, 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'}
###Markdown
Update corpus level metadata
###Code
switchboard_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
switchboard_meta[idx] = {}
switchboard_corpus.meta['metadata'] = switchboard_meta
switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus"
switchboard_corpus.meta['metadata']['4325']
###Output
_____no_output_____
###Markdown
Save created corpus
###Code
switchboard_corpus.dump("corpus", base_path = "./")
###Output
_____no_output_____
###Markdown
Check if available info from dataset can be viewed directly
###Code
from convokit import meta_index
meta_index(filename = "./corpus")
switchboard_corpus = Corpus(filename = "./corpus")
switchboard_corpus.print_summary_stats()
###Output
Number of Users: 440
Number of Utterances: 122646
Number of Conversations: 1155
###Markdown
Code to Convert the Switchboard dataset into Convokit format
###Code
import os
os.chdir("../../") # import convokit
from convokit import Corpus, User, Utterance
os.chdir("datasets/switchboard-corpus") # then come back for swda
from swda import Transcript
import glob
###Output
_____no_output_____
###Markdown
Create UsersEach caller is considered a user, and there are total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:- Gender (str): MALE or FEMALE- Education (int): 0, 1, 2, 3, 9- Birth Year (int): YYYY- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
###Code
files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files
user_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex,
"education": trans.from_caller_education,
"birth_year": trans.from_caller_birth_year,
"dialect_area": trans.from_caller_dialect_area}
user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex,
"education": trans.to_caller_education,
"birth_year": trans.to_caller_birth_year,
"dialect_area": trans.to_caller_dialect_area}
###Output
_____no_output_____
###Markdown
Create a User object for each unique user in the dataset
###Code
corpus_users = {k: User(name = k, meta = v) for k,v in user_meta.items()}
###Output
_____no_output_____
###Markdown
Check number of users in the dataset
###Code
print("Number of users in the data = {}".format(len(corpus_users)))
# Example metadata from user 1632
corpus_users['1632'].meta
###Output
_____no_output_____
###Markdown
Create UtterancesUtterances are found in the "text" field of each Transcript object. There are 221,616 utterances in total.Each Utterance object has the following fields:- id (str): the unique id of the utterance- user (User): the User giving the utterance- root (str): id of the root utterance of the conversation- reply_to (str): id of the utterance this replies to- timestamp: timestamp of the utterance (not applicable in Switchboard)- text (str): text of the utterance- metadata - tag (str): the DAMSL act-tag of the utterance - pos (str): the part-of-speech tagged portion of the utterance - trees (nltk Tree): parsed tree of the utterance
###Code
utterance_corpus = {}
# Iterate thru each transcript
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
utts = trans.utterances
root = str(trans.conversation_no) + "-0" # Get id of root utterance
recent_A = None
recent_B = None
# Iterate thru each utterance in transcript
last_speaker = ''
cur_speaker = ''
all_text = ''
text_pos = ''
text_tag_list = []
counter = 0
first_utt = True
for i, utt in enumerate(utts):
idx = str(utt.conversation_no) + "-" + str(counter)
text = utt.text
# Check which user is talking
if 'A' in utt.caller:
recent_A = idx;
user = str(trans.from_caller)
cur_speaker = user
else:
recent_B = idx;
user = str(trans.to_caller)
cur_speaker = user
# Only add as an utterance if the user has finished talking
if cur_speaker != last_speaker and i > 0:
# Put act-tag and POS information into metadata
meta = {'tag': text_tag_list,
}
# For reply_to, find the most recent utterance from the other caller
if first_utt:
reply_to = None
first_utt = False
elif 'A' in utt.caller:
reply_to = recent_B
else:
reply_to = recent_A
utterance_corpus[idx] = Utterance(idx, corpus_users[user], root,
reply_to, None, all_text, meta)
# Update with the current utterance information
# This is the first utterance of the next statement
all_text = utt.text
text_pos = utt.pos
text_tag_list = [(utt.text, utt.act_tag)]
counter += 1
else:
# Otherwise, combine all the text from the user
all_text += utt.text
text_pos += utt.pos
text_tag_list.append((utt.text, utt.act_tag))
last_speaker = cur_speaker
last_speaker_idx = idx
utterance_list = [utterance for k,utterance in utterance_corpus.items()]
###Output
_____no_output_____
###Markdown
Check number of utterances in the dataset
###Code
print("Number of utterances in the data = {}".format(len(utterance_corpus)))
# Example utterance object
utterance_corpus['4325-2']
###Output
_____no_output_____
###Markdown
Create corpus from list of utterances
###Code
switchboard_corpus = Corpus(utterances=utterance_list, version=1)
print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids())))
###Output
number of conversations in the dataset = 1155
###Markdown
Create Conversations
###Code
# Set conversation Metadata
for i, c in enumerate(switchboard_corpus.conversations):
trans = Transcript(files[i], './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
convo = switchboard_corpus.conversations[c]
convo.meta['filename'] = files[i]
date = trans.talk_day
convo_date = "%d-%d-%d" % (date.year, date.month, date.day)
convo.meta['talk_day'] = convo_date
convo.meta['topic_description'] = trans.topic_description
convo.meta['length'] = trans.length
convo.meta['prompt'] = trans.prompt
convo.meta['from_caller'] = str(trans.from_caller)
convo.meta['to_caller'] = str(trans.to_caller)
print(switchboard_corpus.conversations['4384-0'].meta)
###Output
{'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 'length': 5, 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'}
###Markdown
Update corpus level metadata
###Code
switchboard_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
switchboard_meta[idx] = {}
switchboard_corpus.meta['metadata'] = switchboard_meta
switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus"
switchboard_corpus.meta['metadata']['4325']
###Output
_____no_output_____
###Markdown
Save created corpus
###Code
switchboard_corpus.dump("corpus", base_path = "./")
###Output
_____no_output_____
###Markdown
Check if available info from dataset can be viewed directly
###Code
from convokit import meta_index
meta_index(filename = "./corpus")
switchboard_corpus = Corpus(filename = "./corpus")
switchboard_corpus.print_summary_stats()
###Output
Number of Users: 440
Number of Utterances: 122646
Number of Conversations: 1155
###Markdown
Code to Convert the Switchboard dataset into Convokit format
###Code
import os
os.chdir("../../") # import convokit
from convokit import Corpus, Speaker, Utterance
os.chdir("datasets/switchboard-corpus") # then come back for swda
from swda import Transcript
import glob
###Output
_____no_output_____
###Markdown
Create SpeakersEach caller is considered a user, and there are total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:- Gender (str): MALE or FEMALE- Education (int): 0, 1, 2, 3, 9- Birth Year (int): YYYY- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
###Code
files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files
user_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex,
"education": trans.from_caller_education,
"birth_year": trans.from_caller_birth_year,
"dialect_area": trans.from_caller_dialect_area}
user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex,
"education": trans.to_caller_education,
"birth_year": trans.to_caller_birth_year,
"dialect_area": trans.to_caller_dialect_area}
###Output
_____no_output_____
###Markdown
Create a Speaker object for each unique user in the dataset
###Code
corpus_speakers = {k: Speaker(id = k, meta = v) for k,v in user_meta.items()}
###Output
_____no_output_____
###Markdown
Check number of users in the dataset
###Code
print("Number of users in the data = {}".format(len(corpus_speakers)))
# Example metadata from user 1632
corpus_speakers['1632'].meta
###Output
_____no_output_____
###Markdown
Create UtterancesUtterances are found in the "text" field of each Transcript object. There are 221,616 utterances in total.Each Utterance object has the following fields:- id (str): the unique id of the utterance- user (Speaker): the Speaker giving the utterance- root (str): id of the root utterance of the conversation- reply_to (str): id of the utterance this replies to- timestamp: timestamp of the utterance (not applicable in Switchboard)- text (str): text of the utterance- metadata - tag (str): the DAMSL act-tag of the utterance - pos (str): the part-of-speech tagged portion of the utterance - trees (nltk Tree): parsed tree of the utterance
###Code
utterance_corpus = {}
# Iterate thru each transcript
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
utts = trans.utterances
root = str(trans.conversation_no) + "-0" # Get id of root utterance
recent_A = None
recent_B = None
# Iterate thru each utterance in transcript
last_speaker = ''
cur_speaker = ''
all_text = ''
text_pos = ''
text_tag_list = []
counter = 0
first_utt = True
for i, utt in enumerate(utts):
idx = str(utt.conversation_no) + "-" + str(counter)
text = utt.text
# Check which user is talking
if 'A' in utt.caller:
recent_A = idx;
user = str(trans.from_caller)
cur_speaker = user
else:
recent_B = idx;
user = str(trans.to_caller)
cur_speaker = user
# Only add as an utterance if the user has finished talking
if cur_speaker != last_speaker and i > 0:
# Put act-tag and POS information into metadata
meta = {'tag': text_tag_list,
}
# For reply_to, find the most recent utterance from the other caller
if first_utt:
reply_to = None
first_utt = False
elif 'A' in utt.caller:
reply_to = recent_B
else:
reply_to = recent_A
utterance_corpus[idx] = Utterance(idx, corpus_speakers[user], root,
reply_to, None, all_text, meta)
# Update with the current utterance information
# This is the first utterance of the next statement
all_text = utt.text
text_pos = utt.pos
text_tag_list = [(utt.text, utt.act_tag)]
counter += 1
else:
# Otherwise, combine all the text from the user
all_text += utt.text
text_pos += utt.pos
text_tag_list.append((utt.text, utt.act_tag))
last_speaker = cur_speaker
last_speaker_idx = idx
utterance_list = [utterance for k,utterance in utterance_corpus.items()]
###Output
_____no_output_____
###Markdown
Check number of utterances in the dataset
###Code
print("Number of utterances in the data = {}".format(len(utterance_corpus)))
# Example utterance object
utterance_corpus['4325-2']
###Output
_____no_output_____
###Markdown
Create corpus from list of utterances
###Code
switchboard_corpus = Corpus(utterances=utterance_list, version=1)
print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids())))
###Output
number of conversations in the dataset = 1155
###Markdown
Create Conversations
###Code
# Set conversation Metadata
for i, c in enumerate(switchboard_corpus.conversations):
trans = Transcript(files[i], './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
convo = switchboard_corpus.conversations[c]
convo.meta['filename'] = files[i]
date = trans.talk_day
convo_date = "%d-%d-%d" % (date.year, date.month, date.day)
convo.meta['talk_day'] = convo_date
convo.meta['topic_description'] = trans.topic_description
convo.meta['length'] = trans.length
convo.meta['prompt'] = trans.prompt
convo.meta['from_caller'] = str(trans.from_caller)
convo.meta['to_caller'] = str(trans.to_caller)
print(switchboard_corpus.conversations['4384-0'].meta)
###Output
{'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 'length': 5, 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'}
###Markdown
Update corpus level metadata
###Code
switchboard_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
switchboard_meta[idx] = {}
switchboard_corpus.meta['metadata'] = switchboard_meta
switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus"
switchboard_corpus.meta['metadata']['4325']
###Output
_____no_output_____
###Markdown
Save created corpus
###Code
switchboard_corpus.dump("corpus", base_path = "./")
###Output
_____no_output_____
###Markdown
Check if available info from dataset can be viewed directly
###Code
from convokit import meta_index
meta_index(filename = "./corpus")
switchboard_corpus = Corpus(filename = "./corpus")
switchboard_corpus.print_summary_stats()
###Output
Number of Speakers: 440
Number of Utterances: 122646
Number of Conversations: 1155
###Markdown
Code to Convert the Switchboard dataset into Convokit format
###Code
import os
os.chdir("../../") # import convokit
from convokit import Corpus, User, Utterance
os.chdir("datasets/switchboard-corpus") # then come back for swda
from swda import Transcript
import glob
###Output
_____no_output_____
###Markdown
Create Users
Each caller is considered a user, and there are a total of 440 different callers in this dataset. Each user is marked with a numerical id, and the metadata for each user includes the following information:
- Gender (str): MALE or FEMALE
- Education (int): 0, 1, 2, 3, 9
- Birth Year (int): YYYY
- Dialect Area (str): MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN
###Code
files = glob.glob("./swda/*/sw_*.utt.csv") # Switchboard utterance files
user_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
user_meta[str(trans.from_caller)] = {"sex": trans.from_caller_sex,
"education": trans.from_caller_education,
"birth_year": trans.from_caller_birth_year,
"dialect_area": trans.from_caller_dialect_area}
user_meta[str(trans.to_caller)] = {"sex": trans.to_caller_sex,
"education": trans.to_caller_education,
"birth_year": trans.to_caller_birth_year,
"dialect_area": trans.to_caller_dialect_area}
###Output
_____no_output_____
###Markdown
Create a User object for each unique user in the dataset
###Code
corpus_users = {k: User(name = k, meta = v) for k,v in user_meta.items()}
###Output
_____no_output_____
###Markdown
Check number of users in the dataset
###Code
print("Number of users in the data = {}".format(len(corpus_users)))
# Example metadata from user 1632
corpus_users['1632'].meta
###Output
_____no_output_____
###Markdown
Create Utterances
Utterances are found in the "text" field of each Transcript object. There are 221,616 utterances in total.
Each Utterance object has the following fields:
- id (str): the unique id of the utterance
- user (User): the User giving the utterance
- root (str): id of the root utterance of the conversation
- reply_to (str): id of the utterance this replies to
- timestamp: timestamp of the utterance (not applicable in Switchboard)
- text (str): text of the utterance
- metadata
  - tag (str): the DAMSL act-tag of the utterance
  - pos (str): the part-of-speech tagged portion of the utterance
  - trees (nltk Tree): parsed tree of the utterance
###Code
utterance_corpus = {}
# Iterate thru each transcript
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
utts = trans.utterances
root = str(trans.conversation_no) + "-0" # Get id of root utterance
recent_A = None
recent_B = None
# Iterate thru each utterance in transcript
last_speaker = ''
cur_speaker = ''
all_text = ''
text_pos = ''
text_tag_list = []
counter = 0
first_utt = True
for i, utt in enumerate(utts):
idx = str(utt.conversation_no) + "-" + str(counter)
text = utt.text
# Check which user is talking
if 'A' in utt.caller:
recent_A = idx;
user = str(trans.from_caller)
cur_speaker = user
else:
recent_B = idx;
user = str(trans.to_caller)
cur_speaker = user
# Only add as an utterance if the user has finished talking
if cur_speaker != last_speaker and i > 0:
# Put act-tag and POS information into metadata
meta = {'tag': text_tag_list,
}
# For reply_to, find the most recent utterance from the other caller
if first_utt:
reply_to = None
first_utt = False
elif 'A' in utt.caller:
reply_to = recent_B
else:
reply_to = recent_A
utterance_corpus[idx] = Utterance(idx, corpus_users[user], root,
reply_to, None, all_text, meta)
# Update with the current utterance information
# This is the first utterance of the next statement
all_text = utt.text
text_pos = utt.pos
text_tag_list = [(utt.text, utt.act_tag)]
counter += 1
else:
# Otherwise, combine all the text from the user
all_text += utt.text
text_pos += utt.pos
text_tag_list.append((utt.text, utt.act_tag))
last_speaker = cur_speaker
last_speaker_idx = idx
utterance_list = [utterance for k,utterance in utterance_corpus.items()]
###Output
_____no_output_____
###Markdown
Check number of utterances in the dataset
###Code
print("Number of utterances in the data = {}".format(len(utterance_corpus)))
# Example utterance object
utterance_corpus['4325-2']
###Output
_____no_output_____
###Markdown
Create corpus from list of utterances
###Code
switchboard_corpus = Corpus(utterances=utterance_list, version=1)
print("number of conversations in the dataset = {}".format(len(switchboard_corpus.get_conversation_ids())))
###Output
number of conversations in the dataset = 1155
###Markdown
Create Conversations
###Code
# Set conversation Metadata
for i, c in enumerate(switchboard_corpus.conversations):
trans = Transcript(files[i], './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
convo = switchboard_corpus.conversations[c]
convo.meta['filename'] = files[i]
date = trans.talk_day
convo_date = "%d-%d-%d" % (date.year, date.month, date.day)
convo.meta['talk_day'] = convo_date
convo.meta['topic_description'] = trans.topic_description
convo.meta['length'] = trans.length
convo.meta['prompt'] = trans.prompt
convo.meta['from_caller'] = str(trans.from_caller)
convo.meta['to_caller'] = str(trans.to_caller)
print(switchboard_corpus.conversations['4384-0'].meta)
###Output
{'filename': './swda/sw13utt/sw_1325_4384.utt.csv', 'talk_day': '1992-3-25', 'topic_description': 'CHILD CARE', 'length': 5, 'prompt': 'FIND OUT WHAT CRITERIA THE OTHER CALLER WOULD USE IN SELECTING CHILD CARE SERVICES FOR A PRESCHOOLER. IS IT EASY OR DIFFICULT TO FIND SUCH CARE?', 'from_caller': '1653', 'to_caller': '1646'}
###Markdown
Update corpus level metadata
###Code
switchboard_meta = {}
for file in files:
trans = Transcript(file, './swda/swda-metadata.csv')
idx = str(trans.conversation_no)
switchboard_meta[idx] = {}
switchboard_corpus.meta['metadata'] = switchboard_meta
switchboard_corpus.meta['name'] = "The Switchboard Dialog Act Corpus"
switchboard_corpus.meta['metadata']['4325']
###Output
_____no_output_____
###Markdown
Save created corpus
###Code
switchboard_corpus.dump("corpus", base_path = "./")
###Output
_____no_output_____
###Markdown
Check if available info from dataset can be viewed directly
###Code
from convokit import meta_index
meta_index(filename = "./corpus")
switchboard_corpus = Corpus(filename = "./corpus")
switchboard_corpus.print_summary_stats()
###Output
Number of Users: 440
Number of Utterances: 122646
Number of Conversations: 1155
|
RECOMMENDER SYSTEM/USER-BASED COLLABORATIVE FILTERING.ipynb | ###Markdown
PROBLEM STATEMENT
- This notebook implements a movie recommender system.
- Recommender systems are used to suggest movies or songs to users based on their interest or usage history.
- For example, Netflix recommends movies to watch based on the previous movies you've watched.
- In this example, we will use an item-based collaborative filter (a toy sketch of the idea follows below).
- Dataset MovieLens: https://grouplens.org/datasets/movielens/100k/
- Photo Credit: https://pxhere.com/en/photo/1588369

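To make "item-based" concrete before touching the real data, here is a minimal, self-contained sketch on invented toy ratings (movie names and numbers are made up for illustration): item-item similarity is simply the correlation between the rating columns of two movies.
###Code
import pandas as pd

# Toy ratings: 4 users x 3 movies (NaN = not rated). All values are invented.
toy_ratings = pd.DataFrame({'Movie A': [5, 4, 1, None],
                            'Movie B': [4, 5, 2, 1],
                            'Movie C': [1, 2, 5, 4]},
                           index=['user1', 'user2', 'user3', 'user4'])
# Item-item similarity = pairwise correlation between the movie columns
print(toy_ratings.corr(method='pearson'))
###Output
_____no_output_____
###Markdown
Movies that the same users rate similarly (A and B above) come out highly correlated; that is exactly what the `corrwith`/`corr` calls on the real user-movie matrix compute later in this notebook.

STEP 0: LIBRARIES IMPORT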
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
STEP 1: IMPORT DATASET
###Code
# Two datasets are available, let's load the first one:
movie_titles_df = pd.read_csv("Movie_Id_Titles")
movie_titles_df.head(20)
# Let's load the second one!
movies_rating_df = pd.read_csv('u.data', sep='\t', names=['user_id', 'item_id', 'rating', 'timestamp'])
movies_rating_df.head(10)
movies_rating_df.tail()
# Let's drop the timestamp
movies_rating_df.drop(['timestamp'], axis = 1, inplace = True)
movies_rating_df
movies_rating_df.describe()
movies_rating_df.info()
# Let's merge both dataframes together so we can have ID with the movie name
movies_rating_df = pd.merge(movies_rating_df, movie_titles_df, on = 'item_id')
movies_rating_df
movies_rating_df.shape
###Output
_____no_output_____
###Markdown
STEP 2: VISUALIZE DATASET
###Code
movies_rating_df.groupby('title')['rating'].describe()
ratings_df_mean = movies_rating_df.groupby('title')['rating'].describe()['mean']
ratings_df_count = movies_rating_df.groupby('title')['rating'].describe()['count']
ratings_df_count
ratings_mean_count_df = pd.concat([ratings_df_count, ratings_df_mean], axis = 1)
ratings_mean_count_df.reset_index()
ratings_mean_count_df['mean'].plot(bins=100, kind='hist', color = 'r')
ratings_mean_count_df['count'].plot(bins=100, kind='hist', color = 'r')
# Let's see the highest rated movies!
# Apparently these movies do not have many reviews (i.e., a small number of ratings)
ratings_mean_count_df[ratings_mean_count_df['mean'] == 5]
# List all the movies that are most rated
# Please note that they do not necessarily have the highest rating (mean)
ratings_mean_count_df.sort_values('count', ascending = False).head(100)
###Output
_____no_output_____
###Markdown
STEP 3: PERFORM ITEM-BASED COLLABORATIVE FILTERING ON ONE MOVIE SAMPLE
###Code
userid_movietitle_matrix = movies_rating_df.pivot_table(index = 'user_id', columns = 'title', values = 'rating')
userid_movietitle_matrix
titanic = userid_movietitle_matrix['Titanic (1997)']
titanic
# Let's calculate the correlations
titanic_correlations = pd.DataFrame(userid_movietitle_matrix.corrwith(titanic), columns=['Correlation'])
titanic_correlations = titanic_correlations.join(ratings_mean_count_df['count'])
titanic_correlations
titanic_correlations.dropna(inplace=True)
titanic_correlations
# Let's sort the correlations vector
titanic_correlations.sort_values('Correlation', ascending=False)
titanic_correlations[titanic_correlations['count']>80].sort_values('Correlation',ascending=False).head()
# Pick the Star Wars movie and repeat the exercise
###Output
_____no_output_____
###Markdown
STEP 4: CREATE AN ITEM-BASED COLLABORATIVE FILTER ON THE ENTIRE DATASET
###Code
# Recall this matrix that we created earlier of all movies and their user ID/ratings
userid_movietitle_matrix
movie_correlations = userid_movietitle_matrix.corr(method = 'pearson', min_periods = 80)
# pearson : standard correlation coefficient
# Obtain the correlations between all movies in the dataframe
movie_correlations
# Let's create our own dataframe with our own ratings!
myRatings = pd.read_csv("My_Ratings.csv")
#myRatings.reset_index
myRatings
len(myRatings.index)
myRatings['Movie Name'][0]
similar_movies_list = pd.Series()
for i in range(0, 2):
similar_movie = movie_correlations[myRatings['Movie Name'][i]].dropna() # Get same movies with same ratings
similar_movie = similar_movie.map(lambda x: x * myRatings['Ratings'][i]) # Scale the similarity by your given ratings
similar_movies_list = similar_movies_list.append(similar_movie)
similar_movies_list.sort_values(inplace = True, ascending = False)
print (similar_movies_list.head(10))
###Output
Liar Liar (1997) 5.000000
Con Air (1997) 2.349141
Pretty Woman (1990) 2.348951
Michael (1996) 2.210110
Indiana Jones and the Last Crusade (1989) 2.072136
Top Gun (1986) 2.028602
G.I. Jane (1997) 1.989656
Multiplicity (1996) 1.984302
Grumpier Old Men (1995) 1.953494
Ghost and the Darkness, The (1996) 1.895376
dtype: float64
|
08-north-korean-news-odonnchadha.ipynb | ###Markdown
North Korean News
Scrape the North Korean news agency http://kcna.kp
Save a CSV called `nk-news.csv`. This file should include:
* The **article headline**
* The value of **`onclick`** (they don't have normal links)
* The **article ID** (for example, the article ID for `fn_showArticle("AR0125885", "", "NT00", "L")` is `AR0125885`)

The last part is easiest using pandas. Be sure you don't save the index!
* _**Tip:** If you're using requests+BeautifulSoup, you can always look at response.text to see if the page looks like what you think it looks like_
* _**Tip:** Check your URL to make sure it is what you think it should be!_
* _**Tip:** Does it look different if you scrape with BeautifulSoup compared to if you scrape it with Selenium?_
* _**Tip:** For the last part, how do you pull out part of a string from a longer string?_
* _**Tip:** `expand=False` is helpful if you want to assign a single new column when extracting_
* _**Tip:** `(` and `)` mean something special in regular expressions, so you have to say "no really seriously I mean `(`" by using `\(` instead_
* _**Tip:** if your `.*` is taking up too much stuff, you can try `.*?` instead, which instead of "take as much as possible" it means "take only as much as needed"_
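As a hedged illustration of those last few tips (the pandas regex extraction of the article ID), here is a minimal sketch on a hypothetical one-row DataFrame — the column name `onclick` and the sample value are stand-ins for whatever the scrape actually returns:
###Code
import pandas as pd

# Hypothetical scraped data: one onclick attribute per headline
df = pd.DataFrame({'onclick': ['fn_showArticle("AR0125885", "", "NT00", "L")']})
# \( escapes the literal parenthesis; (.*?) captures as little as possible;
# expand=False returns a Series so it can be assigned to a single new column
df['article_id'] = df['onclick'].str.extract(r'fn_showArticle\("(.*?)"', expand=False)
print(df)
###Output
_____no_output_____
###Markdown
The scraping attempt itself starts below.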
###Code
import requests
import re
from bs4 import BeautifulSoup
url = "http://kcna.kp/kcna.user.home.retrieveHomeInfoList.kcmsf"
raw_html = requests.get(url).content
soup_doc = BeautifulSoup(raw_html, "html.parser")
print(type(soup_doc))
### TEST VIEWS OF DATA
# raw_html
# print(soup_doc)
print(soup_doc.prettify())
soup_doc.find_all('h3')
# soup_doc.select('div#events-horizontal')
# upcoming_events_div = soup.select_one('div#events-horizontal')
# article_area > div.harticle15 > ul:nth-child(9) > li:nth-child(2) > h3 > strong > font > a.titlebet
# //*[@id="article_area"]/div[1]/ul[2]
###Output
_____no_output_____ |
Image_Captioning.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
Download the required data: Annotations, Captions, Images
###Code
import os
import sys
from pycocotools.coco import COCO
import urllib
import zipfile
os.makedirs('opt' , exist_ok=True)
os.chdir( '/content/opt' )
!git clone 'https://github.com/cocodataset/cocoapi.git'
###Output
Cloning into 'cocoapi'...
remote: Enumerating objects: 975, done.[K
remote: Total 975 (delta 0), reused 0 (delta 0), pack-reused 975[K
Receiving objects: 100% (975/975), 11.72 MiB | 29.57 MiB/s, done.
Resolving deltas: 100% (575/575), done.
###Markdown
Download the Annotations and Captions :
###Code
os.chdir('/content/opt/cocoapi')
# Download the annotation :
annotations_trainval2014 = 'http://images.cocodataset.org/annotations/annotations_trainval2014.zip'
image_info_test2014 = 'http://images.cocodataset.org/annotations/image_info_test2014.zip'
urllib.request.urlretrieve(annotations_trainval2014 , filename = 'annotations_trainval2014.zip' )
urllib.request.urlretrieve(image_info_test2014 , filename= 'image_info_test2014.zip' )
###Output
_____no_output_____
###Markdown
Extract Annotations from ZIP file
###Code
with zipfile.ZipFile('annotations_trainval2014.zip' , 'r') as zip_ref:
zip_ref.extractall( '/content/opt/cocoapi' )
try:
os.remove( 'annotations_trainval2014.zip' )
print('zip removed')
except:
None
with zipfile.ZipFile('image_info_test2014.zip' , 'r') as zip_ref:
zip_ref.extractall( '/content/opt/cocoapi' )
try:
os.remove( 'image_info_test2014.zip' )
print('zip removed')
except:
None
###Output
zip removed
zip removed
###Markdown
Initialize and verify the loaded data
###Code
os.chdir('/content/opt/cocoapi/annotations')
# initialize COCO API for instance annotations
dataType = 'val2014'
instances_annFile = 'instances_{}.json'.format(dataType)
print(instances_annFile)
coco = COCO(instances_annFile)
# initialize COCO API for caption annotations
captions_annFile = 'captions_{}.json'.format(dataType)
coco_caps = COCO(captions_annFile)
# get image ids
ids = list(coco.anns.keys())
###Output
instances_val2014.json
loading annotations into memory...
Done (t=4.81s)
creating index...
index created!
loading annotations into memory...
Done (t=0.33s)
creating index...
index created!
###Markdown
plot a sample Image
###Code
import matplotlib.pyplot as plt
import skimage.io as io
import numpy as np
%matplotlib inline
#Pick a random annotation id and display img of that annotation :
ann_id = np.random.choice( ids )
img_id = coco.anns[ann_id]['image_id']
img = coco.loadImgs( img_id )[0]
url = img['coco_url']
print(url)
I = io.imread(url)
plt.imshow(I)
# Display captions for that annotation id :
ann_ids = coco_caps.getAnnIds( img_id )
print('Number of annotations i.e captions for the image: ' , ann_ids)
print()
anns = coco_caps.loadAnns( ann_ids )
coco_caps.showAnns(anns)
###Output
http://images.cocodataset.org/val2014/COCO_val2014_000000454382.jpg
Number of annotations i.e captions for the image: [168868, 216949, 219721, 231967, 238819]
The blue dump truck rides down the street next to the houses.
A blue dump truck traveling down a street past tall houses.
A blue truck parked on the road near houses.
Dump truck alone on road with buildings and bare trees and shrubs behind it.
A blue dump truck sits parked on a residential street.
###Markdown
Download Train, Test, Val Images:
###Code
os.chdir('/content/opt/cocoapi')
train2014 = 'http://images.cocodataset.org/zips/train2014.zip'
test2014 = 'http://images.cocodataset.org/zips/test2014.zip'
val2014 = 'http://images.cocodataset.org/zips/val2014.zip'
urllib.request.urlretrieve( train2014 , 'train2014' )
urllib.request.urlretrieve( test2014 , 'test2014' )
#urllib.request.urlretrieve( val2014 , 'val2014' )
###Output
_____no_output_____
###Markdown
Unzip the downloaded image zip files
###Code
os.chdir('/content/opt/cocoapi')
with zipfile.ZipFile( 'train2014' , 'r' ) as zip_ref:
zip_ref.extractall( 'images' )
try:
os.remove( 'train2014' )
print('zip removed')
except:
None
os.chdir('/content/opt/cocoapi')
with zipfile.ZipFile( 'test2014' , 'r' ) as zip_ref:
zip_ref.extractall( 'images' )
try:
os.remove( 'test2014' )
print('zip removed')
except:
None
###Output
zip removed
zip removed
###Markdown
Step 1: Explore the DataLoader
Vocabulary.py
###Code
# vocabulary.py -------------------------------------------------------------
import nltk
import pickle
import os.path
from pycocotools.coco import COCO
from collections import Counter
class Vocabulary(object):
def __init__(self,
vocab_threshold,
vocab_file='./vocab.pkl',
start_word="<start>",
end_word="<end>",
unk_word="<unk>",
annotations_file='../cocoapi/annotations/captions_train2014.json',
vocab_from_file=False):
"""Initialize the vocabulary.
Args:
vocab_threshold: Minimum word count threshold.
vocab_file: File containing the vocabulary.
start_word: Special word denoting sentence start.
end_word: Special word denoting sentence end.
unk_word: Special word denoting unknown words.
annotations_file: Path for train annotation file.
vocab_from_file: If False, create vocab from scratch & override any existing vocab_file
If True, load vocab from from existing vocab_file, if it exists
"""
self.vocab_threshold = vocab_threshold
self.vocab_file = vocab_file
self.start_word = start_word
self.end_word = end_word
self.unk_word = unk_word
self.annotations_file = annotations_file
self.vocab_from_file = vocab_from_file
self.get_vocab()
def get_vocab(self):
"""Load the vocabulary from file OR build the vocabulary from scratch."""
if os.path.exists(self.vocab_file) & self.vocab_from_file:
with open(self.vocab_file, 'rb') as f:
vocab = pickle.load(f)
self.word2idx = vocab.word2idx
self.idx2word = vocab.idx2word
print('Vocabulary successfully loaded from vocab.pkl file!')
else:
self.build_vocab()
with open(self.vocab_file, 'wb') as f:
pickle.dump(self, f)
def build_vocab(self):
"""Populate the dictionaries for converting tokens to integers (and vice-versa)."""
self.init_vocab()
self.add_word(self.start_word)
self.add_word(self.end_word)
self.add_word(self.unk_word)
self.add_captions()
def init_vocab(self):
"""Initialize the dictionaries for converting tokens to integers (and vice-versa)."""
self.word2idx = {}
self.idx2word = {}
self.idx = 0
def add_word(self, word):
"""Add a token to the vocabulary."""
if not word in self.word2idx:
self.word2idx[word] = self.idx
self.idx2word[self.idx] = word
self.idx += 1
def add_captions(self):
"""Loop over training captions and add all tokens to the vocabulary that meet or exceed the threshold."""
coco = COCO(self.annotations_file)
counter = Counter()
ids = coco.anns.keys()
for i, id in enumerate(ids):
caption = str(coco.anns[id]['caption'])
tokens = nltk.tokenize.word_tokenize(caption.lower())
counter.update(tokens)
if i % 100000 == 0:
print("[%d/%d] Tokenizing captions..." % (i, len(ids)))
words = [word for word, cnt in counter.items() if cnt >= self.vocab_threshold]
for i, word in enumerate(words):
self.add_word(word)
def __call__(self, word):
if not word in self.word2idx:
return self.word2idx[self.unk_word]
return self.word2idx[word]
def __len__(self):
return len(self.word2idx)
###Output
_____no_output_____
###Markdown
data_loader.py
###Code
# Data Loader ---------------------------------------------------------------------------------------------
import nltk
import os
import torch
import torch.utils.data as data
from PIL import Image
from pycocotools.coco import COCO
import numpy as np
from tqdm import tqdm
import random
import json
def get_loader(transform,
mode='train',
batch_size=1,
vocab_threshold=None,
vocab_file='./vocab.pkl',
start_word="<start>",
end_word="<end>",
unk_word="<unk>",
vocab_from_file=True,
num_workers=0,
cocoapi_loc='/opt'):
"""Returns the data loader.
Args:
transform: Image transform.
mode: One of 'train' or 'test'.
batch_size: Batch size (if in testing mode, must have batch_size=1).
vocab_threshold: Minimum word count threshold.
vocab_file: File containing the vocabulary.
start_word: Special word denoting sentence start.
end_word: Special word denoting sentence end.
unk_word: Special word denoting unknown words.
vocab_from_file: If False, create vocab from scratch & override any existing vocab_file.
If True, load vocab from from existing vocab_file, if it exists.
num_workers: Number of subprocesses to use for data loading
cocoapi_loc: The location of the folder containing the COCO API: https://github.com/cocodataset/cocoapi
"""
assert mode in ['train', 'test'], "mode must be one of 'train' or 'test'."
if vocab_from_file==False: assert mode=='train', "To generate vocab from captions file, must be in training mode (mode='train')."
# Based on mode (train, val, test), obtain img_folder and annotations_file.
if mode == 'train':
if vocab_from_file==True: assert os.path.exists(vocab_file), "vocab_file does not exist. Change vocab_from_file to False to create vocab_file."
img_folder = os.path.join(cocoapi_loc, 'cocoapi/images/train2014/')
annotations_file = os.path.join(cocoapi_loc, 'cocoapi/annotations/captions_train2014.json')
if mode == 'test':
assert batch_size==1, "Please change batch_size to 1 if testing your model."
assert os.path.exists(vocab_file), "Must first generate vocab.pkl from training data."
assert vocab_from_file==True, "Change vocab_from_file to True."
img_folder = os.path.join(cocoapi_loc, 'cocoapi/images/test2014/')
annotations_file = os.path.join(cocoapi_loc, 'cocoapi/annotations/image_info_test2014.json')
# COCO caption dataset.
dataset = CoCoDataset(transform=transform,
mode=mode,
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_file=vocab_file,
start_word=start_word,
end_word=end_word,
unk_word=unk_word,
annotations_file=annotations_file,
vocab_from_file=vocab_from_file,
img_folder=img_folder)
if mode == 'train':
# Randomly sample a caption length, and sample indices with that length.
indices = dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
initial_sampler = data.sampler.SubsetRandomSampler(indices=indices)
# data loader for COCO dataset.
data_loader = data.DataLoader(dataset=dataset,
num_workers=num_workers,
batch_sampler=data.sampler.BatchSampler(sampler=initial_sampler,
batch_size=dataset.batch_size,
drop_last=False))
else:
data_loader = data.DataLoader(dataset=dataset,
batch_size=dataset.batch_size,
shuffle=True,
num_workers=num_workers)
return data_loader
class CoCoDataset(data.Dataset):
def __init__(self, transform, mode, batch_size, vocab_threshold, vocab_file, start_word,
end_word, unk_word, annotations_file, vocab_from_file, img_folder):
self.transform = transform
self.mode = mode
self.batch_size = batch_size
self.vocab = Vocabulary(vocab_threshold, vocab_file, start_word,
end_word, unk_word, annotations_file, vocab_from_file)
self.img_folder = img_folder
if self.mode == 'train':
self.coco = COCO(annotations_file)
self.ids = list(self.coco.anns.keys())
print('Obtaining caption lengths...')
all_tokens = [nltk.tokenize.word_tokenize(str(self.coco.anns[self.ids[index]]['caption']).lower()) for index in tqdm(np.arange(len(self.ids)))]
self.caption_lengths = [len(token) for token in all_tokens]
else:
test_info = json.loads(open(annotations_file).read())
self.paths = [item['file_name'] for item in test_info['images']]
def __getitem__(self, index):
# obtain image and caption if in training mode
if self.mode == 'train':
ann_id = self.ids[index]
caption = self.coco.anns[ann_id]['caption']
img_id = self.coco.anns[ann_id]['image_id']
path = self.coco.loadImgs(img_id)[0]['file_name']
# Convert image to tensor and pre-process using transform
image = Image.open(os.path.join(self.img_folder, path)).convert('RGB')
image = self.transform(image)
# Convert caption to tensor of word ids.
tokens = nltk.tokenize.word_tokenize(str(caption).lower())
caption = []
caption.append(self.vocab(self.vocab.start_word))
caption.extend([self.vocab(token) for token in tokens])
caption.append(self.vocab(self.vocab.end_word))
caption = torch.Tensor(caption).long()
# return pre-processed image and caption tensors
return image, caption
# obtain image if in test mode
else:
path = self.paths[index]
# Convert image to tensor and pre-process using transform
PIL_image = Image.open(os.path.join(self.img_folder, path)).convert('RGB')
orig_image = np.array(PIL_image)
image = self.transform(PIL_image)
# return original image and pre-processed image tensor
return orig_image, image
def get_train_indices(self):
sel_length = np.random.choice(self.caption_lengths)
all_indices = np.where([self.caption_lengths[i] == sel_length for i in np.arange(len(self.caption_lengths))])[0]
indices = list(np.random.choice(all_indices, size=self.batch_size))
return indices
def __len__(self):
if self.mode == 'train':
return len(self.ids)
else:
return len(self.paths)
###Output
_____no_output_____
###Markdown
Dataloader creation
###Code
import sys
from pycocotools.coco import COCO
!pip install nltk
import nltk
nltk.download('punkt')
from torchvision import transforms
# Define a transform to pre-process the training images.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Set the minimum word count threshold.
vocab_threshold = 8
# Specify the batch size.
batch_size = 200
# Obtain the data loader.
data_loader_train = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=False,
cocoapi_loc = '/content/opt')
import torch
import numpy as np
import torch.utils.data as data
# Exploring the dataloader now :
sample_caption = 'A person doing a trick xxxx on a rail while riding a skateboard.'
sample_tokens = nltk.tokenize.word_tokenize( sample_caption.lower() )
sample_caption = []
start_word = data_loader_train.dataset.vocab.start_word
end_word = data_loader_train.dataset.vocab.end_word
sample_tokens.insert(0 , start_word)
sample_tokens.append(end_word)
sample_caption.extend( [ data_loader_train.dataset.vocab(token) for token in sample_tokens ] )
sample_caption = torch.Tensor( sample_caption ).long()
print('Find Below the Sample tokens and the idx values of those tokens in word2idx' , '\n')
print(sample_tokens)
print(sample_caption )
print('Find index values for words below \n')
print('Start idx {} , End idx {} , unknown idx {}'.format( 0,1,2 ))
# Lets check word2idx in vocb
print('First few vocab' , dict(list(data_loader_train.dataset.vocab.word2idx.items())[:10]))
# Print the total number of keys in the word2idx dictionary.
print('Total number of tokens in vocabulary:', len(data_loader_train.dataset.vocab))
###Output
First few vocab {'<start>': 0, '<end>': 1, '<unk>': 2, 'a': 3, 'very': 4, 'clean': 5, 'and': 6, 'well': 7, 'decorated': 8, 'empty': 9}
Total number of tokens in vocabulary: 7073
###Markdown
Step 2: Use the Data Loader to Obtain Batches
The captions in the dataset vary greatly in length. You can see this by examining `data_loader.dataset.caption_lengths`, a Python list with one entry for each training caption (where the value stores the length of the corresponding caption). In the code cell below, we use this list to print the total number of captions in the training data with each length. As you will see below, the majority of captions have length 10. Likewise, very short and very long captions are quite rare.
###Code
from collections import Counter
counter = Counter(data_loader_train.dataset.caption_lengths)
lengths = sorted( counter.items() , key = lambda pair : pair[1] , reverse=True )
for val,count in lengths:
print( 'value %2d count %5d' %(val,count) )
if count < 10000:
break
###Output
value 10 count 86334
value 11 count 79948
value 9 count 71934
value 12 count 57637
value 13 count 37645
value 14 count 22335
value 8 count 20771
value 15 count 12841
value 16 count 7729
###Markdown
To generate batches of training data, we begin by first sampling a caption length (where the probability that any length is drawn is proportional to the number of captions with that length in the dataset). Then, we retrieve a batch of size `batch_size` of image-caption pairs, where all captions have the sampled length. This approach for assembling batches matches the procedure in [this paper](https://arxiv.org/pdf/1502.03044.pdf) and has been shown to be computationally efficient without degrading performance.Run the code cell below to generate a batch. The `get_train_indices` method in the `CoCoDataset` class first samples a caption length, and then samples `batch_size` indices corresponding to training data points with captions of that length. These indices are stored below in `indices`.These indices are supplied to the data loader, which then is used to retrieve the corresponding data points. The pre-processed images and captions in the batch are stored in `images` and `captions`.
###Code
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader_train.dataset.get_train_indices()
print('Sample Indices:' , indices )
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
sampler = data.sampler.SubsetRandomSampler( indices )
data_loader_train.batch_sampler.sampler = sampler
# obtain images, caption :
images , captions = next(iter(data_loader_train))
print(images.shape , captions.shape)
###Output
Sample Indices: [364220, 241543, 319815, 116354, 114582, 307649, 115217, 16948, 51787, 226827, 73848, 126963, 250676, 9538, 61102, 127666, 185651, 59314, 133641, 261485, 264340, 289678, 149341, 402152, 335108, 407115, 157272, 6151, 6600, 372761, 311533, 28604, 192585, 289947, 354326, 165509, 51134, 60859, 165878, 61715, 91975, 311726, 243462, 156881, 380643, 398269, 123678, 47498, 338653, 147094, 162088, 413379, 216311, 198913, 376596, 358961, 122811, 26997, 376488, 14894, 202376, 58856, 308987, 271161, 68161, 19618, 396538, 156274, 309753, 45759, 211793, 305514, 337269, 292970, 331635, 311510, 208640, 105570, 293107, 108782, 191947, 132584, 367952, 208657, 220552, 84165, 267140, 355447, 210245, 255111, 119437, 173160, 60367, 241446, 4949, 52803, 405757, 310024, 90704, 411894, 408404, 290443, 298771, 242154, 140971, 199808, 236390, 253064, 9524, 21141, 8932, 307443, 28445, 371693, 202967, 176705, 75601, 323405, 97186, 381356, 362725, 166656, 118944, 115961, 388047, 239326, 378820, 162684, 217240, 222029, 120129, 269512, 110314, 186867, 299294, 37371, 52729, 351248, 136968, 35254, 396989, 172400, 239099, 241661, 36358, 413430, 400403, 101006, 212381, 397283, 342339, 316051, 397098, 401370, 279713, 74279, 18483, 332961, 238322, 299761, 407369, 108212, 44403, 331635, 72893, 98197, 307528, 308098, 348520, 117081, 96016, 138362, 225536, 393645, 282158, 298562, 50680, 156576, 311336, 148936, 308964, 394994, 53333, 179381, 84165, 158379, 31342, 92272, 130109, 81364, 180549, 322327, 413562, 347379, 286183, 292989, 89276, 208379, 280488, 294320]
torch.Size([200, 3, 224, 224]) torch.Size([200, 13])
###Markdown
Step 3: Experiment with the CNN Encoder
The encoder uses the pre-trained ResNet-50 architecture (with the final fully-connected layer removed) to extract features from a batch of pre-processed images. The output is then flattened to a vector, before being passed through a `Linear` layer to transform the feature vector to have the same size as the word embedding.
###Code
import torch
import torch.nn as nn
import torchvision.models as models
class EncoderCNN(nn.Module):
def __init__(self, embed_size):
super(EncoderCNN, self).__init__()
resnet = models.resnet50(pretrained=True)
for param in resnet.parameters():
param.requires_grad_(False)
modules = list(resnet.children())[:-1]
self.resnet = nn.Sequential(*modules)
self.embed = nn.Linear(resnet.fc.in_features, embed_size)
def forward(self, images):
features = self.resnet(images)
features = features.view(features.size(0), -1)
features = self.embed(features)
return features
# specify dim of image embedding
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
embed_size = 256
encoder = EncoderCNN( embed_size )
encoder.to(device)
images= images.to(device) # images from step2
features = encoder(images)
print(type(features) , features.shape , images.shape)
assert( type(features) == torch.Tensor ) , 'Encoder output should be pytorch tensor'
assert (features.shape[0] == batch_size) & (features.shape[1] == embed_size) , "The shape of the encoder output is incorrect."
###Output
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
###Markdown
Step 4: Implement the RNN Decoder
In the code cell below, `outputs` should be a PyTorch tensor with size `[batch_size, captions.shape[1], vocab_size]`. Your output should be designed such that `outputs[i,j,k]` contains the model's predicted score, indicating how likely the `j`-th token in the `i`-th caption in the batch is the `k`-th token in the vocabulary. In the next notebook of the sequence (**2_Training.ipynb**), we provide code to supply these scores to the [`torch.nn.CrossEntropyLoss`](http://pytorch.org/docs/master/nn.html#torch.nn.CrossEntropyLoss) criterion in PyTorch.
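A quick way to check that contract is a shape assertion along the following lines — a sketch only, meant to be run once the next cell has defined and instantiated `decoder` (it reuses the `features`, `captions`, `batch_size`, and `vocab_size` variables from the earlier steps):
###Code
# Sanity-check sketch: outputs[i, j, k] = score that token j of caption i is vocabulary word k.
# Assumes `decoder`, `features`, `captions`, `batch_size`, `vocab_size` already exist (see the next cell).
decoder = decoder.to(device)
outputs = decoder(features.to(device), captions.to(device))
print(type(outputs), outputs.shape)
assert type(outputs) == torch.Tensor, 'Decoder output should be a PyTorch tensor'
assert outputs.shape[0] == batch_size
assert outputs.shape[1] == captions.shape[1]
assert outputs.shape[2] == vocab_size
###Output
_____no_output_____
###Markdown
The decoder definition and the training loop follow.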
###Code
import os
import torch.utils.data as data
import torch
import math
import pickle
import matplotlib.pyplot as plt
% matplotlib inline
class DecoderRNN(nn.Module):
def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
super( DecoderRNN , self).__init__()
self.embed_size = embed_size
self.hidden_size = hidden_size
self.vocab_size = vocab_size
self.num_layers = num_layers
self.word_embedding = nn.Embedding( self.vocab_size , self.embed_size )
self.lstm = nn.LSTM( input_size = self.embed_size ,
hidden_size = self.hidden_size,
num_layers = self.num_layers ,
batch_first = True
)
self.fc = nn.Linear( self.hidden_size , self.vocab_size )
def init_hidden( self, batch_size ):
return ( torch.zeros( self.num_layers , batch_size , self.hidden_size ).to(device),
torch.zeros( self.num_layers , batch_size , self.hidden_size ).to(device) )
def forward(self, features, captions):
captions = captions[:,:-1]
self.batch_size = features.shape[0]
self.hidden = self.init_hidden( self.batch_size )
embeds = self.word_embedding( captions )
inputs = torch.cat( ( features.unsqueeze(dim=1) , embeds ) , dim =1 )
lstm_out , self.hidden = self.lstm(inputs , self.hidden)
outputs = self.fc( lstm_out )
return outputs
    def predict(self, inputs, max_len=20):
final_output = []
batch_size = inputs.shape[0]
hidden = self.init_hidden(batch_size)
while True:
lstm_out, hidden = self.lstm(inputs, hidden)
outputs = self.fc(lstm_out)
outputs = outputs.squeeze(1)
_, max_idx = torch.max(outputs, dim=1)
final_output.append(max_idx.cpu().numpy()[0].item())
            if (max_idx == 1 or len(final_output) >= max_len):
break
inputs = self.word_embedding(max_idx)
inputs = inputs.unsqueeze(1)
return final_output
embed_size = 256
hidden_size = 100
num_layers =1
num_epochs = 4
print_every = 150
save_every = 1
vocab_size = len(data_loader_train.dataset.vocab)
total_step = math.ceil( len(data_loader_train.dataset.caption_lengths) / data_loader_train.batch_sampler.batch_size )
decoder = DecoderRNN( embed_size , hidden_size, vocab_size ,num_layers)
criterion = nn.CrossEntropyLoss()
lr = 0.001
all_params = list(decoder.parameters()) + list( encoder.embed.parameters() )
optimizer = torch.optim.Adam( params = all_params , lr = lr )
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_save_path = '/content/drive/My Drive/Colab Notebooks/ComputerVision/RNN_LSTM/image_caption/CVND---Image-Captioning-Project/checkpoint'
os.makedirs( model_save_path , exist_ok=True)
# Save the params needed to created the model :
decoder_input_params = {'embed_size' : embed_size ,
'hidden_size' : hidden_size ,
'num_layers' : num_layers,
'lr' : lr ,
'vocab_size' : vocab_size
}
with open( os.path.join(model_save_path , 'decoder_input_params_12_20_2019.pickle'), 'wb') as handle:
pickle.dump(decoder_input_params, handle, protocol=pickle.HIGHEST_PROTOCOL)
import sys
for e in range(num_epochs):
for step in range(total_step):
indices = data_loader_train.dataset.get_train_indices()
new_sampler = data.sampler.SubsetRandomSampler( indices )
data_loader_train.batch_sampler.sampler = new_sampler
images,captions = next(iter(data_loader_train))
images , captions = images.to(device) , captions.to(device)
encoder , decoder = encoder.to(device) , decoder.to(device)
encoder.zero_grad()
decoder.zero_grad()
features = encoder(images)
output = decoder( features , captions )
loss = criterion( output.view(-1, vocab_size) , captions.view(-1) )
loss.backward()
optimizer.step()
stat_vals = 'Epochs [%d/%d] Step [%d/%d] Loss [%.4f] ' %( e+1,num_epochs,step,total_step,loss.item() )
if step % print_every == 0 :
print(stat_vals)
sys.stdout.flush()
if e % save_every == 0:
torch.save( encoder.state_dict() , os.path.join( model_save_path , 'encoderdata_{}.pkl'.format(e+1) ) )
torch.save( decoder.state_dict() , os.path.join( model_save_path , 'decoderdata_{}.pkl'.format(e+1) ) )
###Output
Epochs [1/4] Step [0/2071] Loss [8.8806]
Epochs [1/4] Step [150/2071] Loss [4.0232]
Epochs [1/4] Step [300/2071] Loss [3.5489]
###Markdown
Load the saved checkpoint
###Code
model_save_path = '/content/drive/My Drive/Colab Notebooks/ComputerVision/RNN_LSTM/image_caption/CVND---Image-Captioning-Project/checkpoint'
os.makedirs( model_save_path , exist_ok=True)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
with open( os.path.join(model_save_path , 'decoder_input_params_12_20_2019.pickle'), 'rb') as handle:
decoder_input_params = pickle.load(handle)
embed_size = decoder_input_params['embed_size']
hidden_size= decoder_input_params['hidden_size']
vocab_size = decoder_input_params['vocab_size']
num_layers = decoder_input_params['num_layers']
encoder = EncoderCNN( embed_size )
encoder.load_state_dict( torch.load( os.path.join( model_save_path , 'encoderdata_{}.pkl'.format(1) ) ) )
decoder = DecoderRNN( embed_size , hidden_size , vocab_size , num_layers )
decoder.load_state_dict( torch.load( os.path.join( model_save_path , 'decoderdata_{}.pkl'.format(1) ) ) )
###Output
_____no_output_____
###Markdown
Create Dataloader for test data :
###Code
from torchvision import transforms
# Define a transform to pre-process the training images.
transform_test = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Obtain the data loader.
data_loader_test = get_loader(transform=transform_test,
mode='test',
cocoapi_loc = '/content/opt')
data_iter = iter(data_loader_test)
def get_sentences( original_img, all_predictions ):
sentence = ' '
plt.imshow(original_img.squeeze())
return sentence.join([data_loader_test.dataset.vocab.idx2word[idx] for idx in all_predictions[1:-1] ] )
encoder.to(device)
decoder.to(device)
encoder.eval()
decoder.eval()
original_img , processed_img = next( data_iter )
features = encoder(processed_img.to(device) ).unsqueeze(1)
final_output = decoder.predict( features , max_len=20)
get_sentences(original_img, final_output)
###Output
_____no_output_____
###Markdown
Features/weights of all images
- transfer learning: strip off last layer of CNN - probably a fully connected layer with softmax activation, for classification
- take the weights (4096 x 1) and feed into an RNN (specifically LSTM)
- greedy search vs beam search for image caption (see the sketch after this list)
- think of a tree structure
- greedy search: given a word, choose the most likely next word; then, given the first two words, choose the most likely third word, etc.
- greedy search may not result in globally optimal outcome
- beam search: given a word, limit to top N most likely next words....
- other extreme: form every possible caption and choose the best
- model architecture of CNN: VGG (Visual Geometry Group) model, which is pretrained on the ImageNet dataset, has 16 layers
- reshape each of 8,000 color images

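To make the greedy-vs-beam contrast concrete, here is a small sketch. It assumes a hypothetical `next_word_probs(sequence)` function returning a `{word: probability}` dict for the next word — a stand-in for whatever the trained captioning model predicts — so it illustrates the search strategies only, not the model itself.
###Code
import math

def greedy_decode(next_word_probs, max_len=20):
    """Always extend the caption with the single most probable next word."""
    seq = ['startseq']
    for _ in range(max_len):
        probs = next_word_probs(seq)
        word = max(probs, key=probs.get)
        seq.append(word)
        if word == 'endseq':
            break
    return seq

def beam_decode(next_word_probs, beam_width=3, max_len=20):
    """Keep the beam_width most probable partial captions, scored by summed log-probability."""
    beams = [(['startseq'], 0.0)]
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == 'endseq':
                candidates.append((seq, score))   # finished caption: carry it forward unchanged
                continue
            for word, p in next_word_probs(seq).items():
                candidates.append((seq + [word], score + math.log(p)))
        beams = sorted(candidates, key=lambda pair: pair[1], reverse=True)[:beam_width]
    return beams[0][0]
###Output
_____no_output_____
###Markdown
The caption generator implemented later in this notebook uses the greedy strategy (an `argmax` at each step). Feature extraction with the modified VGG16 comes first: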
###Code
def extract_features(directory):
"""Modify VGG and pass all images through modified VGG; collect results in a dictionary"""
# load the CNN model; need to import VGG
model = VGG16()
# pop off the last layer of this model
model.layers.pop()
print(model.summary())
# output is the new last layer of the model; is this step necessary?
# need to import Model
model = Model(inputs = model.inputs, outputs = model.layers[-1].output)
# view architecture / parameters
print(model.summary())
# pass all 8K images through the model and collect weights in a dictionary
features = {}
# need to import listdir
for name in listdir(directory):
filename = directory + '/' + name
# load and reshape image
# shouldn't target_size = (3,224,224)?
image = load_img(filename, target_size = (224,224))
# convert the image pixels to a (3 dimensional?) numpy array, then to a 4 dimensional array
image = img_to_array(image)
image = image.reshape((1,image.shape[0],image.shape[1],image.shape[2]))
# preprocess image in a black box before passing into model
image = preprocess_input(image)
feature = model.predict(image, verbose = 0)
# image_id - all but .jpg - will be a key in features dictionary
image_id = name.split('.')[0]
features[image_id] = feature
print('>%s' % name)
return features
# imports
from os import listdir
# will dump the features dictionary into a .pkl file
from pickle import dump, load
from keras.applications.vgg16 import VGG16, preprocess_input
# from keras.applications.vgg19 import VGG19
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Model, load_model # used after copying model from EC2 instance
# checking the functionality of listdir
listdir('../Flicker8k_Dataset/')[:5]
directory = 'Flicker8k_Dataset/'
features = extract_features(directory)
print('Extracted features for %d images' % len(features))
dump(features, open('features.pkl','wb')) # why 'wb' and not just 'w'?
###Output
_____no_output_____
###Markdown
Images with multiple descriptions (human captions)
###Code
def load_doc(filename):
"""Open and read text file containing human captions - load into memory"""
# open the file in read mode
file = open(filename, 'r')
# read all the human captions
doc = file.read()
# close the context manager
file.close()
return doc
filename_captions = '../Flickr8k_text/Flickr8k.token.txt'
doc = load_doc(filename_captions)
def load_descriptions(doc):
"""Dictionary of photo identifier (aka image_id) to list of 5 textual descriptions"""
descriptions = {}
# iterate through lines of doc
for line in doc.split('\n'):
tokens = line.split() # tokens is a list, split by whitespace
if len(tokens) < 2:
continue # move on to next line; continue vs pass?
image_id, image_desc = tokens[0], tokens[1:]
image_id = image_id.split('.')[0] # again, drop the .jpg
# re-join the description after previously splitting
image_desc = ' '.join(image_desc)
if image_id not in descriptions.keys():
descriptions[image_id] = []
descriptions[image_id].append(image_desc) # .append for lists, .update for sets
return descriptions
descriptions = load_descriptions(doc)
print(len(descriptions))
# this means there are 92 images not included in any of train, dev, and test sets
###Output
8092
###Markdown
Clean the descriptions and reduce the size of the vocab
- convert all words to lowercase
- remove all punctuation; what's the easiest way to do this? (see the sketch after this list)
- remove words with fewer than 2 characters, e.g. "a"
- remove words containing at least one number
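On the punctuation question: the standard-library route is a translation table built from `string.punctuation`, which is what the next cell uses. A minimal demo on a made-up sentence (the sentence itself is invented for illustration):
###Code
import string

table = str.maketrans('', '', string.punctuation)   # map every punctuation character to None
sample = 'A dog runs, jumps -- and then naps.'
tokens = [word.lower().translate(table) for word in sample.split()]
tokens = [word for word in tokens if len(word) > 1 and word.isalpha()]
print(tokens)   # expected: ['dog', 'runs', 'jumps', 'and', 'then', 'naps']
###Output
_____no_output_____
###Markdown
The same idea, applied to every description: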
###Code
def clean_descriptions(descriptions):
"""Clean textual descriptions through a series of list comprehensions"""
# make a translation table to filter out punctuation
table = str.maketrans('', '', string.punctuation) # why can't it be ", " ??!!
for key, desc_list in descriptions.items():
# for desc in desc_list:
for i in range(len(desc_list)):
desc = desc_list[i]
# tokenize the description
desc = desc.split()
# convert to lowercase via list comprehension
desc = [word.lower() for word in desc]
# probably can remove punctuation before converting to lowercase
desc = [word.translate(table) for word in desc]
desc = [word for word in desc if len(word) > 1]
desc = [word for word in desc if word.isalpha()]
# overwrite desc_list[i]
desc_list[i] = ' '.join(desc)
import string
clean_descriptions(descriptions)
string.punctuation
def to_vocabulary(descriptions):
"""Determine the size of the vocabulary: the number of unique words"""
vocab = set()
for key, desc_list in descriptions.items():
for desc in desc_list:
vocab.update(desc.split())
return vocab
# vocab = []
# for key, desc_list in descriptions.items():
# for desc in desc_list:
# vocab.append(word for word in desc.split())
# return set(vocab)
vocabulary = to_vocabulary(descriptions)
print('Size of vocabulary: %d' % len(vocabulary))
def save_descriptions(descriptions, filename):
"""One line per description, not one line per image!"""
lines = []
for key, desc_list in descriptions.items():
for desc in desc_list:
lines.append(key + ' ' + desc)
print(len(lines))
data = '\n'.join(lines)
file = open(filename, 'w') # why not "wb"? "wb" only for .pkl
file.write(data)
file.close()
save_descriptions(descriptions, 'descriptions.txt')
###Output
40460
###Markdown
Note that $40460 = 8092\times 5$.
Just the training images and descriptions
###Code
def load_set(filename):
"""Obtain list of image_id's for training images for filtering purposes"""
doc = load_doc(filename)
dataset = []
for line in doc.split('\n'):
if len(line) < 1:
continue # will there be any line with zero characters ?!
identifier = line.split('.')[0]
dataset.append(identifier)
return set(dataset) # why are we allowed to de-duplicate only at the very end?
def load_clean_descriptions(filename, dataset):
"""Load RELEVANT clean descriptions into memory, wrapped in startseq, endseq"""
descriptions = {}
doc = load_doc(filename)
for line in doc.split('\n'):
tokens = line.split()
image_id, image_desc = tokens[0], tokens[1:] # done this before
if image_id in dataset:
if image_id not in descriptions.keys():
descriptions[image_id] = []
# wrap description in startseq, endseq
image_desc = 'startseq ' + ' '.join(image_desc) + ' endseq'
descriptions[image_id].append(image_desc)
return descriptions
def load_photo_features(filename, dataset):
"""Load FEATURES of relevant photos, as a dictionary"""
all_features = load(open(filename, 'rb'))
# filter based on image_id's with a dictionary comprehension
features = {image_id: all_features[image_id] for image_id in dataset}
return features
filename_training = '../Flickr8k_text/Flickr_8k.trainImages.txt'
train = load_set(filename_training)
print('Number of training images: %d' % len(train))
train_descriptions = load_clean_descriptions('descriptions.txt', train)
print(len(train_descriptions))
train_features = load_photo_features('features.pkl', train)
print(len(train_features))
def to_lines(descriptions):
"""All descriptions, of training images, in a list - prior to encoding"""
all_desc = []
for key, desc_list in descriptions.items():
for desc in desc_list:
all_desc.append(desc) # keys not included in all_desc
return all_desc
def create_tokenizer(descriptions):
"""Fit Keras tokenizer on training descriptions"""
all_desc = to_lines(descriptions)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(all_desc)
return tokenizer # return fitted tokenizer
tokenizer = create_tokenizer(train_descriptions)
training_vocab_size = len(tokenizer.word_index) + 1
# add 1 due to zero indexing
# tokenizer.word_index is a dictionary with keys being the (unique) words in the training vocabulary
# training vocab contains the words "startseq", "endseq"
print('Size of vocabulary - training images: %d' % training_vocab_size)
import numpy as np
def max_length(descriptions):
"""Return maximum length across all training descriptions"""
all_desc = to_lines(descriptions)
return max(len(desc.split()) for desc in all_desc)
max_length = max_length(train_descriptions)
print('Length of longest caption among training images: %d' % max_length)
def create_sequences(tokenizer,max_length,descriptions,photos): # more like create_arrays
"""Input - output pairs for each image"""
X1, X2, y = [], [], []
for key, desc_list in descriptions.items():
for desc in desc_list:
# encode each description; recall: each description begins with "startseq" and ends with "endseq"
seq = tokenizer.texts_to_sequences([desc])[0] # already fitted tokenizer on training descriptions
# convert seq into several X2, y pairs
for i in range(1,len(seq)):
in_seq, out_seq = seq[:i], seq[i]
# add zeros to the front of in_seq so that len(in_seq) = max_length
in_seq = pad_sequences([in_seq], maxlen = max_length)[0]
# encode (one-hot-encode) out_seq
out_seq = to_categorical([out_seq], num_classes = training_vocab_size)[0]
X1.append(photos[key][0]) # why not just photos[key] ???
X2.append(in_seq)
y.append(out_seq)
return np.array(X1), np.array(X2), np.array(y) # return numpy arrays for model training
X1train, X2train, ytrain = create_sequences(tokenizer, max_length, train_descriptions, train_features)
print(X1train.shape)
print(X2train.shape)
print(ytrain.shape)
###Output
_____no_output_____
###Markdown
Model structure and training
###Code
def define_model(max_length, training_vocab_size):
"""Model which feeds photo features into an LSTM layer/cell and generates captions one word at a time"""
input_1 = Input(shape = (4096,))
f1 = Dropout(0.5)(input_1) # for regularization
# fully connected layer with 256 nodes, 256 = 2 ** 8, 4096 = 2 ** 12
f2 = Dense(256, activation = 'relu')(f1) # input_shape = , "leaky relu"
input_2 = Input(shape = (max_length,))
# recall that after padding, len(in_seq) = max_length
# 5 human captions per image
s1 = Embedding(input_dim = training_vocab_size, output_dim = 256, mask_zero = True)(input_2)
# embed each word as a vector with 256 components
s2 = Dropout(0.5)(s1)
s3 = LSTM(256)(s2)
decoder1 = add([f2,s3]) # f2 + s3
decoder2 = Dense(256, activation = 'relu')(decoder1)
outputs = Dense(training_vocab_size, activation = 'softmax')(decoder2)
model = Model(inputs = [input_1, input_2], outputs = outputs)
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam') # model.fit, model.predict
# categorical_crossentropy vs BLEU score
# can't directly optimize for BLEU score
print(model.summary())
# plot_model(model, to_file = 'model.png', show_shapes = True)
return model
###Output
_____no_output_____
###Markdown
- 6,000 training images
- 30,000 training captions
- ~7,500 unique words in training captions - this is training_vocab_size
- after tokenizing, think of tokenizer.word_index dictionary
- values in this dictionary range from 1 to training_vocab_size - 1 (index 0 is reserved for padding)
- from the documentation: If mask_zero is set to True (ignore zeros added during padding), input_dim should equal size of vocabulary + 1.
###Code
# imports
from keras.utils.vis_utils import plot_model
from keras.layers import Dense, Embedding, Input, LSTM, Dropout
from keras.layers.merge import add
from keras.callbacks import ModelCheckpoint
model = define_model(max_length, training_vocab_size)
###Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 34) 0
__________________________________________________________________________________________________
input_1 (InputLayer) (None, 4096) 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 34, 256) 1940224 input_2[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 4096) 0 input_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 34, 256) 0 embedding_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 256) 1048832 dropout_1[0][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 256) 525312 dropout_2[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 256) 0 dense_1[0][0]
lstm_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 256) 65792 add_1[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 7579) 1947803 dense_2[0][0]
==================================================================================================
Total params: 5,527,963
Trainable params: 5,527,963
Non-trainable params: 0
__________________________________________________________________________________________________
None
###Markdown
- for embedding layer, $1940224 = 256\times 7579$
- $1048832 = (256\times 4096) + 256$
- for LSTM layer/cell, $525312 = 4(256^2 + (256\times 256) + 256)$
- $65792 = (256\times 256) + 256$
- $1947803 = (256\times 7579) + 7579$

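As a quick arithmetic check (a throwaway sketch, not part of the model code), the counts above can be reproduced directly from the layer sizes:
###Code
embed_dim, hidden_dim, vocab = 256, 256, 7579

assert embed_dim * vocab == 1940224                                             # Embedding
assert (256 * 4096) + 256 == 1048832                                            # Dense on image features
assert 4 * ((embed_dim * hidden_dim) + hidden_dim ** 2 + hidden_dim) == 525312  # LSTM (4 gates)
assert (256 * 256) + 256 == 65792                                               # Dense after add
assert (256 * vocab) + vocab == 1947803                                         # Output softmax layer
print('parameter counts match model.summary()')
###Output
_____no_output_____
###Markdown
Back to training, with checkpointing on validation loss: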
###Code
# check validation loss after each epoch and save models which improve val_loss
filepath = 'model-ep{epoch:02d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5' # .hdf5
checkpoint = ModelCheckpoint(filepath, monitor = 'val_loss', verbose = 1, save_best_only = True, mode = 'min')
# dev images, i.e. validation images
filename_dev = '../Flickr8k_text/Flickr_8k.devImages.txt'
dev = load_set(filename_dev)
print('Number of images in dev dataset: %d' % len(dev))
# include only descriptions pertaining to dev images
dev_descriptions = load_clean_descriptions('descriptions.txt', dev)
print(len(dev_descriptions))
# include only features pertaining to dev images
dev_features = load_photo_features('features.pkl', dev)
print(len(dev_features))
# same max_length = 34, same tokenizer trained on training captions
X1dev, X2dev, ydev = create_sequences(tokenizer, max_length, dev_descriptions, dev_features)
print(X1dev.shape)
print(X2dev.shape)
print(ydev.shape)
# finally, let's fit the captioning model which was defined by define_model
# verbose=2 prints one line per epoch (less output than verbose=1's per-batch progress bar); 20 epochs is a starting point, with ModelCheckpoint keeping the best model by val_loss
model.fit([X1train,X2train], ytrain, epochs=20, verbose=2, callbacks=[checkpoint], validation_data=([X1dev,X2dev], ydev))
###Output
_____no_output_____
###Markdown
Model evaluation by BLEU scores So far, we have used the training images to fit the captioning model, and the development images to determine val_loss. Now we will use the *test* images for the first time, to evaluate the trained model.
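corpus_bleu expects, per image, a list of tokenised reference captions and a single tokenised hypothesis. A minimal sketch with made-up captions (dog_refs and dog_hyp are illustrative names; it assumes nltk is installed, which happens a couple of cells below):
```python
from nltk.translate.bleu_score import corpus_bleu

# two illustrative reference captions and one generated caption for a single image
dog_refs = [['a', 'dog', 'runs', 'through', 'the', 'water'],
            ['the', 'dog', 'is', 'running', 'in', 'water']]
dog_hyp = ['a', 'dog', 'runs', 'in', 'the', 'water']

# corpus_bleu takes a list of reference-lists and a list of hypotheses
print(corpus_bleu([dog_refs], [dog_hyp], weights=(1.0, 0, 0, 0)))  # unigram overlap (BLEU-1)
```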
###Code
def word_from_id(integer, tokenizer):
"""Convert integer (value) to corresponding vocabulary word (key) using tokenizer.word_index dictionary"""
for word, index in tokenizer.word_index.items():
if index == integer:
return word
return None
def generate_caption(model, photo, tokenizer, max_length):
"""Given a photo feature vector, generate a caption, word by word, using the model just trained"""
# caption begins with "startseq"
in_text = 'startseq'
# iterate over maximum potential length of caption
for i in range(max_length):
# encode in_text using tokenizer.word_index
sequence = tokenizer.texts_to_sequences([in_text])[0]
# pad this sequence so that its length is max_length = 34
sequence = pad_sequences([sequence], maxlen = max_length)
# predict next word in the sequence; y_vec is vector of probabilities with 7579 components
y_vec = model.predict([photo,sequence], verbose = 0)
# pick out the position of the word with greatest probability
y_int = np.argmax(y_vec)
# convert this position into English word by means of the function we just wrote
word = word_from_id(y_int, tokenizer)
if word is None:
break
# feed the generated word back into in_text so it becomes input for predicting the next word
in_text += ' ' + word
if word == 'endseq':
break
return in_text
def evaluate_model(model, photos, descriptions, tokenizer, max_length):
"""Compare the generated caption with the 5 human descriptions across the whole test set"""
actual, generated = [], []
for key, desc_list in descriptions.items():
yhat = generate_caption(model, photos[key], tokenizer, max_length)
# each desc begins with "startseq" and ends with "endseq"
# split_desc is a list of 5 sublists
split_desc = [desc.split() for desc in desc_list]
# actual is a list of lists of lists
actual.append(split_desc)
# generated is a list of lists
generated.append(yhat.split())
print(len(actual))
print(len(generated))
# compute BLEU scores
print('BLEU-1: %f' % corpus_bleu(actual, generated, weights = (1.0,0,0,0)))
print('BLEU-2: %f' % corpus_bleu(actual, generated, weights = (0.5,0.5,0,0)))
print('BLEU-3: %f' % corpus_bleu(actual, generated, weights = (0.33,0.33,0.33,0)))
print('BLEU-4: %f' % corpus_bleu(actual, generated, weights = (0.25,0.25,0.25,0.25)))
# install nltk for BLEU scoring
!pip install nltk
from nltk.translate.bleu_score import corpus_bleu
# test images, previously unused
# the standard Flickr_8k.testImages.txt split contains 1,000 images
filename_test = '../Flickr8k_text/Flickr_8k.testImages.txt'
test = load_set(filename_test)
print('Number of images in test dataset: %d' % len(test))
# include only descriptions pertaining to test images
test_descriptions = load_clean_descriptions('descriptions.txt', test)
print(len(test_descriptions))
# include only features pertaining to test images
test_features = load_photo_features('features.pkl', test)
print(len(test_features))
# load the model which was trained on an AWS EC2 instance
filename_model = '../model-ep3-loss3.664-val_loss3.839.h5'
model = load_model(filename_model)
evaluate_model(model, test_features, test_descriptions, tokenizer, max_length)
###Output
1000
1000
BLEU-1: 0.340888
BLEU-2: 0.175420
BLEU-3: 0.099687
BLEU-4: 0.051633
###Markdown
BLEU scores range from 0 (worst) to 1 (best). **SHOULD GO BACK AND RETRAIN THE MODEL FOR MORE THAN 3 EPOCHS!!**
Generate captions for entirely new images
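The next cell pickles the fitted tokenizer so captions can be generated later without refitting it on the training captions; reloading it would look like this (a sketch, assuming the tokenizer.pkl file written below; reloaded_tokenizer is an illustrative name):
```python
from pickle import load

# reload the tokenizer that the next cell writes out as tokenizer.pkl
reloaded_tokenizer = load(open('tokenizer.pkl', 'rb'))
```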
###Code
dump(tokenizer, open('tokenizer.pkl', 'wb'), protocol=3)
type(tokenizer)
def extract_features_2(filename):
"""Extract features for just one photo, unlike extract_features"""
# instantiate Visual Geometry Group's CNN model
model = VGG16()
# drop the final softmax layer and re-wrap so the output is the 4096-dim fc2 features
model.layers.pop()
model = Model(inputs = model.inputs, outputs = model.layers[-1].output)
# reshape image before passing through pretrained VGG model
image = load_img(filename, target_size=(224,224))
image = img_to_array(image)
print(image.shape)
image = image.reshape((1,image.shape[0],image.shape[1],image.shape[2]))
print(image.shape)
image = preprocess_input(image)
features_2 = model.predict(image, verbose = 0) # the prediction is a vector with 4096 components
return features_2
photo = extract_features_2('example.jpg')
caption = generate_caption(model, photo, tokenizer, max_length)
caption = caption.split()
caption = ' '.join(caption[1:-1])
print(caption)
###Output
black dog is running through the water
###Markdown

###Code
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
dog = plt.imread('example.jpg')
plt.imshow(dog);
###Output
_____no_output_____
###Markdown
Coding practice - data structures
*Cracking the Coding Interview*
Class of nodes for binary trees, and functions for traversal
###Code
class Node:
def __init__(self, value):
self.val = value
self.left = None
self.right = None
def trav(self): # in-order traversal: left subtree, node, right subtree
if self.left:
self.left.trav()
print(self.val)
if self.right:
self.right.trav()
def preorder(self):
print(self.val)
if self.left:
self.left.preorder()
if self.right:
self.right.preorder()
def postorder(self):
if self.left:
self.left.postorder()
if self.right:
self.right.postorder()
print(self.val)
node_8 = Node(8)
node_3 = Node(3)
node_10 = Node(10)
node_8.left = node_3
node_8.right = node_10
node_1 = Node(1)
node_6 = Node(6)
node_3.left = node_1
node_3.right = node_6
node_4 = Node(4)
node_7 = Node(7)
node_6.left = node_4
node_6.right = node_7
node_14 = Node(14)
node_13 = Node(13)
node_10.right = node_14
node_14.left = node_13
###Output
_____no_output_____
###Markdown
Function to create minimal / balanced BST from sorted array
###Code
def min_bst_helper(start,end,arr):
if start > end:
return
mid = (start + end) // 2
n = Node(arr[mid])
# print(n.val)
n.left = min_bst_helper(start,mid - 1,arr)
n.right = min_bst_helper(mid + 1,end,arr)
return n
def min_bst(sort_arr):
return min_bst_helper(0,len(sort_arr) - 1,sort_arr)
sort_arr = [1,3,4,6,7,8,10,13,14]
min_bst(sort_arr).val
min_bst(sort_arr).left.val
min_bst(sort_arr).right.val
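# quick illustrative check: an in-order traversal of the tree returned by min_bst
# should reproduce the original sorted array, since the BST ordering is preserved
min_bst(sort_arr).trav()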
test_list = []
test_set = set()
test_list.append(4)
test_list.append(3)
# test_list.insert(0,3)
test_list
test_list.append(3)
test_set.update([3])
test_list
test_set
test_list.append(3)
test_set.update([3])
test_list
test_set
test_list.append(4)
test_list
test_list.pop() # the last thing that was appended gets popped off, like a stack
test_list
###Output
_____no_output_____
###Markdown
Heaps - specifically, min heaps
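heapq keeps the smallest element at index 0, so repeatedly popping returns the values in ascending order. A minimal sketch (demo_heap is just an illustrative name, separate from the test_heap experiment below):
```python
from heapq import heappush, heappop

demo_heap = []
for value in [3, 4, 2, 5, 1]:
    heappush(demo_heap, value)

# heappop always removes the current minimum, so this prints 1 2 3 4 5
while demo_heap:
    print(heappop(demo_heap))
```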
###Code
from heapq import heappush, heappop
test_heap = []
heappush(test_heap, 3)
heappush(test_heap, 4)
heappush(test_heap, 2)
heappush(test_heap, 5)
heappush(test_heap, 1)
heappush(test_heap, 7)
heappush(test_heap, 8)
heappush(test_heap, 6)
test_heap
print(test_heap[0])
print(min(test_heap))
###Output
1
1
###Markdown
Class of stacks, which are basically just Python lists - LIFO!
###Code
class Stack:
def __init__(self):
self.stack = []
def stackpop(self):
if len(self.stack) == 0:
return "Can't pop since it's empty!"
else:
return self.stack.pop()
def stackpush(self,val):
return self.stack.append(val)
def stackpeak(self):
if len(self.stack) == 0:
return "Can't peek since it's empty"
else:
return self.stack[-1]
test_stack = Stack()
test_stack.stack
test_stack.stackpop()
test_stack.stackpush(3)
test_stack.stack
###Output
_____no_output_____
###Markdown
Towers of Hanoi, a meta-class problem (OOP)
###Code
class Tower:
def __init__(self, i):
self.disks = Stack()
self.index = i
# def index(self):
# return self.index
def add(self, d): # d is the value of the disk we are trying to place
if len(self.disks.stack) != 0 and self.disks.stackpeak() <= d:
print("Error placing disk " + str(d))
else:
self.disks.stackpush(d)
def move_top_to(self, t): # t is another Tower object
top = self.disks.stackpop()
t.add(top)
def move_disks(self, n, destination, buffer): # destination, buffer are the other two Tower objects
if n > 0:
# move the top n-1 disks out of the way, onto the buffer tower
self.move_disks(n-1, buffer, destination)
# move the nth (largest remaining) disk onto the destination tower
self.move_top_to(destination)
# move the n-1 parked disks from the buffer onto the destination
buffer.move_disks(n-1, destination, self)
def hanoi(n): # n is the number of disks
towers = []
for i in range(3):
# towers[i] = Tower(i)
towers.append(Tower(i))
for j in range(n, 0, -1):
towers[0].add(j) # populating Tower(0) with the n disks
towers[0].move_disks(n, towers[2], towers[1])
return towers
towers = hanoi(5)
towers[0].disks.stack
towers[2].disks.stack
towers[1].disks.stack
###Output
_____no_output_____
###Markdown
Making change - RECURSION
###Code
def count_ways(amount):
denoms = [100,50,25,10,5,1]
return count_ways_helper(amount, denoms, 0)
def count_ways_helper(amount, denoms, index):
if index >= len(denoms) - 1 or amount == 0:
# base case: only 1-cent coins remain (or nothing is left to make), so there is exactly one way
return 1
denom_amount = denoms[index]
ways = 0 # each call counts its own combinations; the totals are summed up the call stack
for i in range(amount):
if i * denom_amount > amount:
break
amount_remaining = amount - (i * denom_amount)
ways += count_ways_helper(amount_remaining, denoms, index + 1)
return ways
count_ways(100)
def num_ways(amount):
# brute-force cross-check using only 1, 5, 10 and 25 cent coins (unlike count_ways, no 50 or 100)
ways = 0
for i in range(amount + 1):
for j in range((amount // 5) + 1):
for k in range((amount // 10) + 1):
for l in range((amount // 25) + 1):
if i + 5*j + 10*k + 25*l == amount:
ways += 1
return ways
num_ways(500)
###Output
_____no_output_____
###Markdown
Class of queues - FIFO!
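The class below enqueues with list.insert(0, ...), which is O(n) per push; collections.deque from the standard library gives the same FIFO behaviour with O(1) operations at both ends. A minimal sketch (demo_queue is an illustrative name):
```python
from collections import deque

demo_queue = deque()
demo_queue.append(2)         # enqueue at the right end
demo_queue.append(3)
print(demo_queue.popleft())  # dequeue from the left end -> 2 (FIFO)
```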
###Code
class Queue:
def __init__(self):
self.queue = []
def queuepop(self): # dequeue
if len(self.queue) == 0:
return "Can't pop since it's empty"
else:
return self.queue.pop() # items are enqueued at index 0, so popping from the end is FIFO
def queuepush(self,val): # enqueue
# return self.queue.append(val)
return self.queue.insert(0,val)
test_queue = Queue()
test_queue.queue
test_queue.queuepop()
test_queue.queuepush(2)
test_queue.queuepush(3)
test_queue.queue
test_queue.queuepop()
test_queue.queue
###Output
_____no_output_____
###Markdown
Class of nodes for singly linked lists
###Code
class Node_LL:
def __init__(self,value):
self.val = value
self.next = None
def traverse(self):
node = self
while node != None:
print(node.val)
node = node.next
def trav_recursive(self):
print(self.val)
if self.next:
self.next.trav_recursive()
node1 = Node_LL(12) # the head node
node2 = Node_LL(99)
node3 = Node_LL(37)
node1.next = node2
node2.next = node3
node1.traverse()
node1.trav_recursive()
###Output
12
99
37
###Markdown
Class of nodes for doubly linked lists
###Code
class Node_DLL:
def __init__(self,value):
self.val = value
self.next = None
self.prev = None
def traverse_forward(self):
node = self
while node != None:
print(node.val)
node = node.next
def traverse_backward(self):
node = self
while node != None:
print(node.val)
node = node.prev
def delete(self): # assumes the node has both neighbours; deleting the head or tail would need None checks
self.prev.next = self.next
self.next.prev = self.prev
node1 = Node_DLL(12)
node2 = Node_DLL(99)
node3 = Node_DLL(37)
node1.next = node2
node2.next = node3
node3.prev = node2
node2.prev = node1
node1.traverse_forward()
node3.traverse_backward()
node2.delete()
node1.next.val
node3.prev.val
###Output
_____no_output_____
###Markdown
Breadth first search / traversal for binary trees (w/o queues)
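The function below collects nodes level by level using plain lists; the textbook formulation uses a FIFO queue instead. A sketch with collections.deque, reusing the Node class and the node_8 tree built above (bfs_queue is an illustrative name):
```python
from collections import deque

def bfs_queue(root):
    """Breadth-first traversal using an explicit FIFO queue."""
    result = []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        result.append(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return result

# bfs_queue(node_8) should match bfs(node_8): [8, 3, 10, 1, 6, 14, 4, 7, 13]
```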
###Code
def bfs(node):
result = []
current_level = [node]
while current_level != []:
next_level = []
for node in current_level:
result.append(node.val)
if node.left:
next_level.append(node.left)
if node.right:
next_level.append(node.right)
current_level = next_level
return result
bfs(node_8)
node_8.trav()
node_8.preorder()
node_8.postorder()
###Output
1
4
7
6
3
13
14
10
8
###Markdown
Miscellaneous functions
###Code
def word_count_helper(string):
output_dict = {}
for word in string.split(' '):
if word in output_dict.keys():
output_dict[word] += 1 # value is count, not a list
else:
output_dict[word] = 1
return output_dict
word_count_helper('hello hello world')
def max_profit(prices):
if not prices:
print('There are no prices!')
else:
max_profit = 0
max_price = prices[-1]
# min_price = prices[0]
for price in prices[::-1]:
if max_price - price > max_profit:
max_profit = max_price - price
if price > max_price:
max_price = price
# if max_profit < price - min_price:
# max_profit = price - min_price
# if price < min_price:
# min_price = price
return max_profit
prices = [3,-1,4,9.5,0]
max_profit(prices)
max_profit([])
def magic_slow(sort_arr):
magic_indices = []
for i in range(len(sort_arr)):
if sort_arr[i] == i:
magic_indices.append(i)
if len(magic_indices) == 0:
return "There are no magic indices"
else:
return magic_indices
magic_slow([0,1,2,3])
magic_slow([1,2,3,4])
magic_slow([-40,-20,-1,1,2,3,5,7,9,12,13])
def magic_fast_helper(arr, start, end):
if start > end:
return
mid = (start + end) // 2
if arr[mid] == mid:
magic_indices.append(mid) # relies on the global magic_indices list being initialised before calling
elif arr[mid] > mid:
return magic_fast_helper(arr, start, mid - 1)
else:
return magic_fast_helper(arr, mid + 1, end)
def magic_fast(sort_arr):
return magic_fast_helper(sort_arr, 0, len(sort_arr) - 1)
magic_indices = []
magic_fast([-40,-20,-1,1,2,3,5,7,9,12,13])
print(magic_indices)
def power(n):
if n == 0:
return [[]]
if n == 1:
return [[], [1]]
temp_list = []
for subset in power(n-1):
temp_list.append(subset + [n])
if n > 1:
return power(n-1) + temp_list
power(5)
def kaprekar(number):
if len(str(number)) != 4 or len(set(str(number))) == 1:
return "Invalid input"
else:
ascending = int(''.join(sorted(str(number))))
descending = int(''.join(sorted(str(number), reverse = True)))
output = descending - ascending
count = 1
while output != 6174:
# zero-pad to 4 digits so e.g. 999 is treated as 0999 (otherwise the routine can stall at 0)
padded = str(output).zfill(4)
ascending = int(''.join(sorted(padded)))
descending = int(''.join(sorted(padded, reverse = True)))
output = descending - ascending
count += 1
return count
kaprekar(5790)
###Output
_____no_output_____
###Markdown
Fibonacci: memoization & recursion
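The two versions below are iterative (bottom-up) and naively recursive; true memoization caches the recursive results, e.g. with functools.lru_cache. A minimal sketch (fib_memo is an illustrative name):
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Recursive Fibonacci with cached results, so each value is computed only once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# fib_memo(35) returns the same value as fib(35), but in linear rather than exponential time
```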
###Code
def fibonacci(n):
"""Return nth number in Fibonacci sequence using memoization"""
if n < 0:
print('Value Error: input must be nonnegative integer!')
else:
if n == 0:
return 0
if n == 1:
return 1
a = 0
b = 1
for i in range(2,n):
c = a + b
a = b
b = c
return a + b
fibonacci(-1)
fibonacci(35)
def fib(n):
"""Return the nth Fibonacci number using recursion"""
if n < 0:
print('Value Error: input must be nonnegative integer!')
else:
if n == 0:
return 0
elif n == 1:
return 1
else:
return fib(n-1) + fib(n-2)
fib(35)
fib(-1)
###Output
Value Error: input must be nonnegative integer!
###Markdown

###Code
from google.colab import drive
drive.mount('/content/drive')
import os
import string
import glob
from tensorflow.keras.applications import MobileNet
import tensorflow.keras.applications.mobilenet
from tensorflow.keras.applications.inception_v3 import InceptionV3
import tensorflow.keras.applications.inception_v3
from tqdm import tqdm
import tensorflow.keras.preprocessing.image
import pickle
from time import time
import numpy as np
from PIL import Image
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (LSTM, Embedding,
TimeDistributed, Dense, RepeatVector,
Activation, Flatten, Reshape, concatenate,
Dropout, BatchNormalization)
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras import Input, layers
from tensorflow.keras import optimizers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import add
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
START = "startseq"
STOP = "endseq"
EPOCHS = 10
USE_INCEPTION = True
###Output
_____no_output_____
###Markdown
Data building and cleaning
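The cleanup loop below lowercases each caption, strips punctuation with str.maketrans/translate, and drops one-character and non-alphabetic tokens. A small illustration on a made-up caption:
```python
import string

null_punct = str.maketrans('', '', string.punctuation)

raw = "A dog's owner, smiling, waves ."
tokens = [w.lower().translate(null_punct) for w in raw.split()]
tokens = [w for w in tokens if len(w) > 1 and w.isalpha()]
print(tokens)  # ['dogs', 'owner', 'smiling', 'waves']
```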
###Code
root_captioning= '/content/drive/MyDrive/Image captioning data'
null_punct = str.maketrans('', '', string.punctuation)
lookup = dict()
with open( os.path.join(root_captioning,'Flickr8k_text',\
'Flickr8k.token.txt'), 'r') as fp:
max_length = 0
for line in fp.read().split('\n'):
tok = line.split()
if len(line) >= 2:
id = tok[0].split('.')[0]
desc = tok[1:]
# Cleanup description
desc = [word.lower() for word in desc]
desc = [w.translate(null_punct) for w in desc]
desc = [word for word in desc if len(word)>1]
desc = [word for word in desc if word.isalpha()]
max_length = max(max_length,len(desc))
if id not in lookup:
lookup[id] = list()
lookup[id].append(' '.join(desc))
lex = set()
for key in lookup:
[lex.update(d.split()) for d in lookup[key]]
print(len(lookup)) # number of unique image ids with captions
print(len(lex)) # vocabulary size (number of unique words)
print(max_length) # Maximum length of a caption (in words)
# Warning, running this too soon on GDrive can sometimes not work.
# Just re-run if len(img) = 0
img = glob.glob(os.path.join(root_captioning,'flicker8k_dataset', '*.jpg'))
len(img)
train_images_path = os.path.join(root_captioning,\
'Flickr8k_text','Flickr_8k.trainImages.txt')
train_images = set(open(train_images_path, 'r').read().strip().split('\n'))
test_images_path = os.path.join(root_captioning,
'Flickr8k_text','Flickr_8k.testImages.txt')
test_images = set(open(test_images_path, 'r').read().strip().split('\n'))
train_img = []
test_img = []
for i in img:
f = os.path.split(i)[-1]
if f in train_images:
train_img.append(f)
elif f in test_images:
test_img.append(f)
print(len(train_images))
print(len(test_images))
train_descriptions = {k:v for k,v in lookup.items() if f'{k}.jpg' \
in train_images}
for n,v in train_descriptions.items():
for d in range(len(v)):
v[d] = f'{START} {v[d]} {STOP}'
len(train_descriptions)
###Output
_____no_output_____
###Markdown
Choosing a computer vision neural network to transfer
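The next cell chops the classification head off InceptionV3 so each photo can be encoded as a 2,048-dimensional feature vector. Encoding a single image would look roughly like the sketch below, which assumes the names defined in that cell (encode_model, preprocess_input, WIDTH, HEIGHT, OUTPUT_DIM); some_image.jpg is a placeholder path:
```python
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def encode_image(path):
    # resize to InceptionV3's expected 299x299 input and scale pixels with preprocess_input
    img = load_img(path, target_size=(HEIGHT, WIDTH))
    x = img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return encode_model.predict(x).reshape(OUTPUT_DIM)  # 2048-dim feature vector

# features = encode_image('some_image.jpg')
```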
###Code
encode_model = InceptionV3(weights='imagenet')
encode_model = Model(encode_model.input, encode_model.layers[-2].output)
WIDTH = 299
HEIGHT = 299
OUTPUT_DIM = 2048
preprocess_input = \
tensorflow.keras.applications.inception_v3.preprocess_input
encode_model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 299, 299, 3) 0
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 149, 149, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 149, 149, 32) 96 conv2d[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 149, 149, 32) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 147, 147, 32) 9216 activation[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 147, 147, 32) 96 conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 147, 147, 32) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 147, 147, 64) 18432 activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 147, 147, 64) 192 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 147, 147, 64) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 73, 73, 64) 0 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 73, 73, 80) 5120 max_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 73, 73, 80) 240 conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 73, 73, 80) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 71, 71, 192) 138240 activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 71, 71, 192) 576 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 71, 71, 192) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 35, 35, 192) 0 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 35, 35, 64) 12288 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 35, 35, 64) 192 conv2d_8[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 35, 35, 64) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 35, 35, 48) 9216 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 35, 35, 96) 55296 activation_8[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 35, 35, 48) 144 conv2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 35, 35, 96) 288 conv2d_9[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 35, 35, 48) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 35, 35, 96) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
average_pooling2d (AveragePooli (None, 35, 35, 192) 0 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 35, 35, 64) 12288 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 35, 35, 64) 76800 activation_6[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 35, 35, 96) 82944 activation_9[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 35, 35, 32) 6144 average_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 35, 35, 64) 192 conv2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 35, 35, 64) 192 conv2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 35, 35, 96) 288 conv2d_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 35, 35, 32) 96 conv2d_11[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 35, 35, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 35, 35, 64) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 35, 35, 96) 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 35, 35, 32) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
mixed0 (Concatenate) (None, 35, 35, 256) 0 activation_5[0][0]
activation_7[0][0]
activation_10[0][0]
activation_11[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 35, 35, 64) 16384 mixed0[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 35, 35, 64) 192 conv2d_15[0][0]
__________________________________________________________________________________________________
activation_15 (Activation) (None, 35, 35, 64) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 35, 35, 48) 12288 mixed0[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 35, 35, 96) 55296 activation_15[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 35, 35, 48) 144 conv2d_13[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 35, 35, 96) 288 conv2d_16[0][0]
__________________________________________________________________________________________________
activation_13 (Activation) (None, 35, 35, 48) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
activation_16 (Activation) (None, 35, 35, 96) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, 35, 35, 256) 0 mixed0[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 35, 35, 64) 16384 mixed0[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 35, 35, 64) 76800 activation_13[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 35, 35, 96) 82944 activation_16[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 35, 35, 64) 16384 average_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 35, 35, 64) 192 conv2d_12[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 35, 35, 64) 192 conv2d_14[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 35, 35, 96) 288 conv2d_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 35, 35, 64) 192 conv2d_18[0][0]
__________________________________________________________________________________________________
activation_12 (Activation) (None, 35, 35, 64) 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
activation_14 (Activation) (None, 35, 35, 64) 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
activation_17 (Activation) (None, 35, 35, 96) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
activation_18 (Activation) (None, 35, 35, 64) 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
mixed1 (Concatenate) (None, 35, 35, 288) 0 activation_12[0][0]
activation_14[0][0]
activation_17[0][0]
activation_18[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 35, 35, 64) 18432 mixed1[0][0]
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 35, 35, 64) 192 conv2d_22[0][0]
__________________________________________________________________________________________________
activation_22 (Activation) (None, 35, 35, 64) 0 batch_normalization_22[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 35, 35, 48) 13824 mixed1[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 35, 35, 96) 55296 activation_22[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 35, 35, 48) 144 conv2d_20[0][0]
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 35, 35, 96) 288 conv2d_23[0][0]
__________________________________________________________________________________________________
activation_20 (Activation) (None, 35, 35, 48) 0 batch_normalization_20[0][0]
__________________________________________________________________________________________________
activation_23 (Activation) (None, 35, 35, 96) 0 batch_normalization_23[0][0]
__________________________________________________________________________________________________
average_pooling2d_2 (AveragePoo (None, 35, 35, 288) 0 mixed1[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 35, 35, 64) 18432 mixed1[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 35, 35, 64) 76800 activation_20[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 35, 35, 96) 82944 activation_23[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, 35, 35, 64) 18432 average_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 35, 35, 64) 192 conv2d_19[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 35, 35, 64) 192 conv2d_21[0][0]
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 35, 35, 96) 288 conv2d_24[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 35, 35, 64) 192 conv2d_25[0][0]
__________________________________________________________________________________________________
activation_19 (Activation) (None, 35, 35, 64) 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
activation_21 (Activation) (None, 35, 35, 64) 0 batch_normalization_21[0][0]
__________________________________________________________________________________________________
activation_24 (Activation) (None, 35, 35, 96) 0 batch_normalization_24[0][0]
__________________________________________________________________________________________________
activation_25 (Activation) (None, 35, 35, 64) 0 batch_normalization_25[0][0]
__________________________________________________________________________________________________
mixed2 (Concatenate) (None, 35, 35, 288) 0 activation_19[0][0]
activation_21[0][0]
activation_24[0][0]
activation_25[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, 35, 35, 64) 18432 mixed2[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 35, 35, 64) 192 conv2d_27[0][0]
__________________________________________________________________________________________________
activation_27 (Activation) (None, 35, 35, 64) 0 batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D) (None, 35, 35, 96) 55296 activation_27[0][0]
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, 35, 35, 96) 288 conv2d_28[0][0]
__________________________________________________________________________________________________
activation_28 (Activation) (None, 35, 35, 96) 0 batch_normalization_28[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, 17, 17, 384) 995328 mixed2[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D) (None, 17, 17, 96) 82944 activation_28[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 17, 17, 384) 1152 conv2d_26[0][0]
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, 17, 17, 96) 288 conv2d_29[0][0]
__________________________________________________________________________________________________
activation_26 (Activation) (None, 17, 17, 384) 0 batch_normalization_26[0][0]
__________________________________________________________________________________________________
activation_29 (Activation) (None, 17, 17, 96) 0 batch_normalization_29[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 17, 17, 288) 0 mixed2[0][0]
__________________________________________________________________________________________________
mixed3 (Concatenate) (None, 17, 17, 768) 0 activation_26[0][0]
activation_29[0][0]
max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D) (None, 17, 17, 128) 98304 mixed3[0][0]
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, 17, 17, 128) 384 conv2d_34[0][0]
__________________________________________________________________________________________________
activation_34 (Activation) (None, 17, 17, 128) 0 batch_normalization_34[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D) (None, 17, 17, 128) 114688 activation_34[0][0]
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, 17, 17, 128) 384 conv2d_35[0][0]
__________________________________________________________________________________________________
activation_35 (Activation) (None, 17, 17, 128) 0 batch_normalization_35[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D) (None, 17, 17, 128) 98304 mixed3[0][0]
__________________________________________________________________________________________________
conv2d_36 (Conv2D) (None, 17, 17, 128) 114688 activation_35[0][0]
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, 17, 17, 128) 384 conv2d_31[0][0]
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, 17, 17, 128) 384 conv2d_36[0][0]
__________________________________________________________________________________________________
activation_31 (Activation) (None, 17, 17, 128) 0 batch_normalization_31[0][0]
__________________________________________________________________________________________________
activation_36 (Activation) (None, 17, 17, 128) 0 batch_normalization_36[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D) (None, 17, 17, 128) 114688 activation_31[0][0]
__________________________________________________________________________________________________
conv2d_37 (Conv2D) (None, 17, 17, 128) 114688 activation_36[0][0]
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, 17, 17, 128) 384 conv2d_32[0][0]
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, 17, 17, 128) 384 conv2d_37[0][0]
__________________________________________________________________________________________________
activation_32 (Activation) (None, 17, 17, 128) 0 batch_normalization_32[0][0]
__________________________________________________________________________________________________
activation_37 (Activation) (None, 17, 17, 128) 0 batch_normalization_37[0][0]
__________________________________________________________________________________________________
average_pooling2d_3 (AveragePoo (None, 17, 17, 768) 0 mixed3[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D) (None, 17, 17, 192) 147456 mixed3[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D) (None, 17, 17, 192) 172032 activation_32[0][0]
__________________________________________________________________________________________________
conv2d_38 (Conv2D) (None, 17, 17, 192) 172032 activation_37[0][0]
__________________________________________________________________________________________________
conv2d_39 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, 17, 17, 192) 576 conv2d_30[0][0]
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, 17, 17, 192) 576 conv2d_33[0][0]
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, 17, 17, 192) 576 conv2d_38[0][0]
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, 17, 17, 192) 576 conv2d_39[0][0]
__________________________________________________________________________________________________
activation_30 (Activation) (None, 17, 17, 192) 0 batch_normalization_30[0][0]
__________________________________________________________________________________________________
activation_33 (Activation) (None, 17, 17, 192) 0 batch_normalization_33[0][0]
__________________________________________________________________________________________________
activation_38 (Activation) (None, 17, 17, 192) 0 batch_normalization_38[0][0]
__________________________________________________________________________________________________
activation_39 (Activation) (None, 17, 17, 192) 0 batch_normalization_39[0][0]
__________________________________________________________________________________________________
mixed4 (Concatenate) (None, 17, 17, 768) 0 activation_30[0][0]
activation_33[0][0]
activation_38[0][0]
activation_39[0][0]
__________________________________________________________________________________________________
conv2d_44 (Conv2D) (None, 17, 17, 160) 122880 mixed4[0][0]
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, 17, 17, 160) 480 conv2d_44[0][0]
__________________________________________________________________________________________________
activation_44 (Activation) (None, 17, 17, 160) 0 batch_normalization_44[0][0]
__________________________________________________________________________________________________
conv2d_45 (Conv2D) (None, 17, 17, 160) 179200 activation_44[0][0]
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, 17, 17, 160) 480 conv2d_45[0][0]
__________________________________________________________________________________________________
activation_45 (Activation) (None, 17, 17, 160) 0 batch_normalization_45[0][0]
__________________________________________________________________________________________________
conv2d_41 (Conv2D) (None, 17, 17, 160) 122880 mixed4[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D) (None, 17, 17, 160) 179200 activation_45[0][0]
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, 17, 17, 160) 480 conv2d_41[0][0]
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, 17, 17, 160) 480 conv2d_46[0][0]
__________________________________________________________________________________________________
activation_41 (Activation) (None, 17, 17, 160) 0 batch_normalization_41[0][0]
__________________________________________________________________________________________________
activation_46 (Activation) (None, 17, 17, 160) 0 batch_normalization_46[0][0]
__________________________________________________________________________________________________
conv2d_42 (Conv2D) (None, 17, 17, 160) 179200 activation_41[0][0]
__________________________________________________________________________________________________
conv2d_47 (Conv2D) (None, 17, 17, 160) 179200 activation_46[0][0]
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, 17, 17, 160) 480 conv2d_42[0][0]
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, 17, 17, 160) 480 conv2d_47[0][0]
__________________________________________________________________________________________________
activation_42 (Activation) (None, 17, 17, 160) 0 batch_normalization_42[0][0]
__________________________________________________________________________________________________
activation_47 (Activation) (None, 17, 17, 160) 0 batch_normalization_47[0][0]
__________________________________________________________________________________________________
average_pooling2d_4 (AveragePoo (None, 17, 17, 768) 0 mixed4[0][0]
__________________________________________________________________________________________________
conv2d_40 (Conv2D) (None, 17, 17, 192) 147456 mixed4[0][0]
__________________________________________________________________________________________________
conv2d_43 (Conv2D) (None, 17, 17, 192) 215040 activation_42[0][0]
__________________________________________________________________________________________________
conv2d_48 (Conv2D) (None, 17, 17, 192) 215040 activation_47[0][0]
__________________________________________________________________________________________________
conv2d_49 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, 17, 17, 192) 576 conv2d_40[0][0]
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, 17, 17, 192) 576 conv2d_43[0][0]
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, 17, 17, 192) 576 conv2d_48[0][0]
__________________________________________________________________________________________________
batch_normalization_49 (BatchNo (None, 17, 17, 192) 576 conv2d_49[0][0]
__________________________________________________________________________________________________
activation_40 (Activation) (None, 17, 17, 192) 0 batch_normalization_40[0][0]
__________________________________________________________________________________________________
activation_43 (Activation) (None, 17, 17, 192) 0 batch_normalization_43[0][0]
__________________________________________________________________________________________________
activation_48 (Activation) (None, 17, 17, 192) 0 batch_normalization_48[0][0]
__________________________________________________________________________________________________
activation_49 (Activation) (None, 17, 17, 192) 0 batch_normalization_49[0][0]
__________________________________________________________________________________________________
mixed5 (Concatenate) (None, 17, 17, 768) 0 activation_40[0][0]
activation_43[0][0]
activation_48[0][0]
activation_49[0][0]
__________________________________________________________________________________________________
conv2d_54 (Conv2D) (None, 17, 17, 160) 122880 mixed5[0][0]
__________________________________________________________________________________________________
batch_normalization_54 (BatchNo (None, 17, 17, 160) 480 conv2d_54[0][0]
__________________________________________________________________________________________________
activation_54 (Activation) (None, 17, 17, 160) 0 batch_normalization_54[0][0]
__________________________________________________________________________________________________
conv2d_55 (Conv2D) (None, 17, 17, 160) 179200 activation_54[0][0]
__________________________________________________________________________________________________
batch_normalization_55 (BatchNo (None, 17, 17, 160) 480 conv2d_55[0][0]
__________________________________________________________________________________________________
activation_55 (Activation) (None, 17, 17, 160) 0 batch_normalization_55[0][0]
__________________________________________________________________________________________________
conv2d_51 (Conv2D) (None, 17, 17, 160) 122880 mixed5[0][0]
__________________________________________________________________________________________________
conv2d_56 (Conv2D) (None, 17, 17, 160) 179200 activation_55[0][0]
__________________________________________________________________________________________________
batch_normalization_51 (BatchNo (None, 17, 17, 160) 480 conv2d_51[0][0]
__________________________________________________________________________________________________
batch_normalization_56 (BatchNo (None, 17, 17, 160) 480 conv2d_56[0][0]
__________________________________________________________________________________________________
activation_51 (Activation) (None, 17, 17, 160) 0 batch_normalization_51[0][0]
__________________________________________________________________________________________________
activation_56 (Activation) (None, 17, 17, 160) 0 batch_normalization_56[0][0]
__________________________________________________________________________________________________
conv2d_52 (Conv2D) (None, 17, 17, 160) 179200 activation_51[0][0]
__________________________________________________________________________________________________
conv2d_57 (Conv2D) (None, 17, 17, 160) 179200 activation_56[0][0]
__________________________________________________________________________________________________
batch_normalization_52 (BatchNo (None, 17, 17, 160) 480 conv2d_52[0][0]
__________________________________________________________________________________________________
batch_normalization_57 (BatchNo (None, 17, 17, 160) 480 conv2d_57[0][0]
__________________________________________________________________________________________________
activation_52 (Activation) (None, 17, 17, 160) 0 batch_normalization_52[0][0]
__________________________________________________________________________________________________
activation_57 (Activation) (None, 17, 17, 160) 0 batch_normalization_57[0][0]
__________________________________________________________________________________________________
average_pooling2d_5 (AveragePoo (None, 17, 17, 768) 0 mixed5[0][0]
__________________________________________________________________________________________________
conv2d_50 (Conv2D) (None, 17, 17, 192) 147456 mixed5[0][0]
__________________________________________________________________________________________________
conv2d_53 (Conv2D) (None, 17, 17, 192) 215040 activation_52[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, 17, 17, 192) 215040 activation_57[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_50 (BatchNo (None, 17, 17, 192) 576 conv2d_50[0][0]
__________________________________________________________________________________________________
batch_normalization_53 (BatchNo (None, 17, 17, 192) 576 conv2d_53[0][0]
__________________________________________________________________________________________________
batch_normalization_58 (BatchNo (None, 17, 17, 192) 576 conv2d_58[0][0]
__________________________________________________________________________________________________
batch_normalization_59 (BatchNo (None, 17, 17, 192) 576 conv2d_59[0][0]
__________________________________________________________________________________________________
activation_50 (Activation) (None, 17, 17, 192) 0 batch_normalization_50[0][0]
__________________________________________________________________________________________________
activation_53 (Activation) (None, 17, 17, 192) 0 batch_normalization_53[0][0]
__________________________________________________________________________________________________
activation_58 (Activation) (None, 17, 17, 192) 0 batch_normalization_58[0][0]
__________________________________________________________________________________________________
activation_59 (Activation) (None, 17, 17, 192) 0 batch_normalization_59[0][0]
__________________________________________________________________________________________________
mixed6 (Concatenate) (None, 17, 17, 768) 0 activation_50[0][0]
activation_53[0][0]
activation_58[0][0]
activation_59[0][0]
__________________________________________________________________________________________________
conv2d_64 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0]
__________________________________________________________________________________________________
batch_normalization_64 (BatchNo (None, 17, 17, 192) 576 conv2d_64[0][0]
__________________________________________________________________________________________________
activation_64 (Activation) (None, 17, 17, 192) 0 batch_normalization_64[0][0]
__________________________________________________________________________________________________
conv2d_65 (Conv2D) (None, 17, 17, 192) 258048 activation_64[0][0]
__________________________________________________________________________________________________
batch_normalization_65 (BatchNo (None, 17, 17, 192) 576 conv2d_65[0][0]
__________________________________________________________________________________________________
activation_65 (Activation) (None, 17, 17, 192) 0 batch_normalization_65[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0]
__________________________________________________________________________________________________
conv2d_66 (Conv2D) (None, 17, 17, 192) 258048 activation_65[0][0]
__________________________________________________________________________________________________
batch_normalization_61 (BatchNo (None, 17, 17, 192) 576 conv2d_61[0][0]
__________________________________________________________________________________________________
batch_normalization_66 (BatchNo (None, 17, 17, 192) 576 conv2d_66[0][0]
__________________________________________________________________________________________________
activation_61 (Activation) (None, 17, 17, 192) 0 batch_normalization_61[0][0]
__________________________________________________________________________________________________
activation_66 (Activation) (None, 17, 17, 192) 0 batch_normalization_66[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D) (None, 17, 17, 192) 258048 activation_61[0][0]
__________________________________________________________________________________________________
conv2d_67 (Conv2D) (None, 17, 17, 192) 258048 activation_66[0][0]
__________________________________________________________________________________________________
batch_normalization_62 (BatchNo (None, 17, 17, 192) 576 conv2d_62[0][0]
__________________________________________________________________________________________________
batch_normalization_67 (BatchNo (None, 17, 17, 192) 576 conv2d_67[0][0]
__________________________________________________________________________________________________
activation_62 (Activation) (None, 17, 17, 192) 0 batch_normalization_62[0][0]
__________________________________________________________________________________________________
activation_67 (Activation) (None, 17, 17, 192) 0 batch_normalization_67[0][0]
__________________________________________________________________________________________________
average_pooling2d_6 (AveragePoo (None, 17, 17, 768) 0 mixed6[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, 17, 17, 192) 147456 mixed6[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D) (None, 17, 17, 192) 258048 activation_62[0][0]
__________________________________________________________________________________________________
conv2d_68 (Conv2D) (None, 17, 17, 192) 258048 activation_67[0][0]
__________________________________________________________________________________________________
conv2d_69 (Conv2D) (None, 17, 17, 192) 147456 average_pooling2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_60 (BatchNo (None, 17, 17, 192) 576 conv2d_60[0][0]
__________________________________________________________________________________________________
batch_normalization_63 (BatchNo (None, 17, 17, 192) 576 conv2d_63[0][0]
__________________________________________________________________________________________________
batch_normalization_68 (BatchNo (None, 17, 17, 192) 576 conv2d_68[0][0]
__________________________________________________________________________________________________
batch_normalization_69 (BatchNo (None, 17, 17, 192) 576 conv2d_69[0][0]
__________________________________________________________________________________________________
activation_60 (Activation) (None, 17, 17, 192) 0 batch_normalization_60[0][0]
__________________________________________________________________________________________________
activation_63 (Activation) (None, 17, 17, 192) 0 batch_normalization_63[0][0]
__________________________________________________________________________________________________
activation_68 (Activation) (None, 17, 17, 192) 0 batch_normalization_68[0][0]
__________________________________________________________________________________________________
activation_69 (Activation) (None, 17, 17, 192) 0 batch_normalization_69[0][0]
__________________________________________________________________________________________________
mixed7 (Concatenate) (None, 17, 17, 768) 0 activation_60[0][0]
activation_63[0][0]
activation_68[0][0]
activation_69[0][0]
__________________________________________________________________________________________________
conv2d_72 (Conv2D) (None, 17, 17, 192) 147456 mixed7[0][0]
__________________________________________________________________________________________________
batch_normalization_72 (BatchNo (None, 17, 17, 192) 576 conv2d_72[0][0]
__________________________________________________________________________________________________
activation_72 (Activation) (None, 17, 17, 192) 0 batch_normalization_72[0][0]
__________________________________________________________________________________________________
conv2d_73 (Conv2D) (None, 17, 17, 192) 258048 activation_72[0][0]
__________________________________________________________________________________________________
batch_normalization_73 (BatchNo (None, 17, 17, 192) 576 conv2d_73[0][0]
__________________________________________________________________________________________________
activation_73 (Activation) (None, 17, 17, 192) 0 batch_normalization_73[0][0]
__________________________________________________________________________________________________
conv2d_70 (Conv2D) (None, 17, 17, 192) 147456 mixed7[0][0]
__________________________________________________________________________________________________
conv2d_74 (Conv2D) (None, 17, 17, 192) 258048 activation_73[0][0]
__________________________________________________________________________________________________
batch_normalization_70 (BatchNo (None, 17, 17, 192) 576 conv2d_70[0][0]
__________________________________________________________________________________________________
batch_normalization_74 (BatchNo (None, 17, 17, 192) 576 conv2d_74[0][0]
__________________________________________________________________________________________________
activation_70 (Activation) (None, 17, 17, 192) 0 batch_normalization_70[0][0]
__________________________________________________________________________________________________
activation_74 (Activation) (None, 17, 17, 192) 0 batch_normalization_74[0][0]
__________________________________________________________________________________________________
conv2d_71 (Conv2D) (None, 8, 8, 320) 552960 activation_70[0][0]
__________________________________________________________________________________________________
conv2d_75 (Conv2D) (None, 8, 8, 192) 331776 activation_74[0][0]
__________________________________________________________________________________________________
batch_normalization_71 (BatchNo (None, 8, 8, 320) 960 conv2d_71[0][0]
__________________________________________________________________________________________________
batch_normalization_75 (BatchNo (None, 8, 8, 192) 576 conv2d_75[0][0]
__________________________________________________________________________________________________
activation_71 (Activation) (None, 8, 8, 320) 0 batch_normalization_71[0][0]
__________________________________________________________________________________________________
activation_75 (Activation) (None, 8, 8, 192) 0 batch_normalization_75[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 768) 0 mixed7[0][0]
__________________________________________________________________________________________________
mixed8 (Concatenate) (None, 8, 8, 1280) 0 activation_71[0][0]
activation_75[0][0]
max_pooling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_80 (Conv2D) (None, 8, 8, 448) 573440 mixed8[0][0]
__________________________________________________________________________________________________
batch_normalization_80 (BatchNo (None, 8, 8, 448) 1344 conv2d_80[0][0]
__________________________________________________________________________________________________
activation_80 (Activation) (None, 8, 8, 448) 0 batch_normalization_80[0][0]
__________________________________________________________________________________________________
conv2d_77 (Conv2D) (None, 8, 8, 384) 491520 mixed8[0][0]
__________________________________________________________________________________________________
conv2d_81 (Conv2D) (None, 8, 8, 384) 1548288 activation_80[0][0]
__________________________________________________________________________________________________
batch_normalization_77 (BatchNo (None, 8, 8, 384) 1152 conv2d_77[0][0]
__________________________________________________________________________________________________
batch_normalization_81 (BatchNo (None, 8, 8, 384) 1152 conv2d_81[0][0]
__________________________________________________________________________________________________
activation_77 (Activation) (None, 8, 8, 384) 0 batch_normalization_77[0][0]
__________________________________________________________________________________________________
activation_81 (Activation) (None, 8, 8, 384) 0 batch_normalization_81[0][0]
__________________________________________________________________________________________________
conv2d_78 (Conv2D) (None, 8, 8, 384) 442368 activation_77[0][0]
__________________________________________________________________________________________________
conv2d_79 (Conv2D) (None, 8, 8, 384) 442368 activation_77[0][0]
__________________________________________________________________________________________________
conv2d_82 (Conv2D) (None, 8, 8, 384) 442368 activation_81[0][0]
__________________________________________________________________________________________________
conv2d_83 (Conv2D) (None, 8, 8, 384) 442368 activation_81[0][0]
__________________________________________________________________________________________________
average_pooling2d_7 (AveragePoo (None, 8, 8, 1280) 0 mixed8[0][0]
__________________________________________________________________________________________________
conv2d_76 (Conv2D) (None, 8, 8, 320) 409600 mixed8[0][0]
__________________________________________________________________________________________________
batch_normalization_78 (BatchNo (None, 8, 8, 384) 1152 conv2d_78[0][0]
__________________________________________________________________________________________________
batch_normalization_79 (BatchNo (None, 8, 8, 384) 1152 conv2d_79[0][0]
__________________________________________________________________________________________________
batch_normalization_82 (BatchNo (None, 8, 8, 384) 1152 conv2d_82[0][0]
__________________________________________________________________________________________________
batch_normalization_83 (BatchNo (None, 8, 8, 384) 1152 conv2d_83[0][0]
__________________________________________________________________________________________________
conv2d_84 (Conv2D) (None, 8, 8, 192) 245760 average_pooling2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_76 (BatchNo (None, 8, 8, 320) 960 conv2d_76[0][0]
__________________________________________________________________________________________________
activation_78 (Activation) (None, 8, 8, 384) 0 batch_normalization_78[0][0]
__________________________________________________________________________________________________
activation_79 (Activation) (None, 8, 8, 384) 0 batch_normalization_79[0][0]
__________________________________________________________________________________________________
activation_82 (Activation) (None, 8, 8, 384) 0 batch_normalization_82[0][0]
__________________________________________________________________________________________________
activation_83 (Activation) (None, 8, 8, 384) 0 batch_normalization_83[0][0]
__________________________________________________________________________________________________
batch_normalization_84 (BatchNo (None, 8, 8, 192) 576 conv2d_84[0][0]
__________________________________________________________________________________________________
activation_76 (Activation) (None, 8, 8, 320) 0 batch_normalization_76[0][0]
__________________________________________________________________________________________________
mixed9_0 (Concatenate) (None, 8, 8, 768) 0 activation_78[0][0]
activation_79[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 8, 8, 768) 0 activation_82[0][0]
activation_83[0][0]
__________________________________________________________________________________________________
activation_84 (Activation) (None, 8, 8, 192) 0 batch_normalization_84[0][0]
__________________________________________________________________________________________________
mixed9 (Concatenate) (None, 8, 8, 2048) 0 activation_76[0][0]
mixed9_0[0][0]
concatenate[0][0]
activation_84[0][0]
__________________________________________________________________________________________________
conv2d_89 (Conv2D) (None, 8, 8, 448) 917504 mixed9[0][0]
__________________________________________________________________________________________________
batch_normalization_89 (BatchNo (None, 8, 8, 448) 1344 conv2d_89[0][0]
__________________________________________________________________________________________________
activation_89 (Activation) (None, 8, 8, 448) 0 batch_normalization_89[0][0]
__________________________________________________________________________________________________
conv2d_86 (Conv2D) (None, 8, 8, 384) 786432 mixed9[0][0]
__________________________________________________________________________________________________
conv2d_90 (Conv2D) (None, 8, 8, 384) 1548288 activation_89[0][0]
__________________________________________________________________________________________________
batch_normalization_86 (BatchNo (None, 8, 8, 384) 1152 conv2d_86[0][0]
__________________________________________________________________________________________________
batch_normalization_90 (BatchNo (None, 8, 8, 384) 1152 conv2d_90[0][0]
__________________________________________________________________________________________________
activation_86 (Activation) (None, 8, 8, 384) 0 batch_normalization_86[0][0]
__________________________________________________________________________________________________
activation_90 (Activation) (None, 8, 8, 384) 0 batch_normalization_90[0][0]
__________________________________________________________________________________________________
conv2d_87 (Conv2D) (None, 8, 8, 384) 442368 activation_86[0][0]
__________________________________________________________________________________________________
conv2d_88 (Conv2D) (None, 8, 8, 384) 442368 activation_86[0][0]
__________________________________________________________________________________________________
conv2d_91 (Conv2D) (None, 8, 8, 384) 442368 activation_90[0][0]
__________________________________________________________________________________________________
conv2d_92 (Conv2D) (None, 8, 8, 384) 442368 activation_90[0][0]
__________________________________________________________________________________________________
average_pooling2d_8 (AveragePoo (None, 8, 8, 2048) 0 mixed9[0][0]
__________________________________________________________________________________________________
conv2d_85 (Conv2D) (None, 8, 8, 320) 655360 mixed9[0][0]
__________________________________________________________________________________________________
batch_normalization_87 (BatchNo (None, 8, 8, 384) 1152 conv2d_87[0][0]
__________________________________________________________________________________________________
batch_normalization_88 (BatchNo (None, 8, 8, 384) 1152 conv2d_88[0][0]
__________________________________________________________________________________________________
batch_normalization_91 (BatchNo (None, 8, 8, 384) 1152 conv2d_91[0][0]
__________________________________________________________________________________________________
batch_normalization_92 (BatchNo (None, 8, 8, 384) 1152 conv2d_92[0][0]
__________________________________________________________________________________________________
conv2d_93 (Conv2D) (None, 8, 8, 192) 393216 average_pooling2d_8[0][0]
__________________________________________________________________________________________________
batch_normalization_85 (BatchNo (None, 8, 8, 320) 960 conv2d_85[0][0]
__________________________________________________________________________________________________
activation_87 (Activation) (None, 8, 8, 384) 0 batch_normalization_87[0][0]
__________________________________________________________________________________________________
activation_88 (Activation) (None, 8, 8, 384) 0 batch_normalization_88[0][0]
__________________________________________________________________________________________________
activation_91 (Activation) (None, 8, 8, 384) 0 batch_normalization_91[0][0]
__________________________________________________________________________________________________
activation_92 (Activation) (None, 8, 8, 384) 0 batch_normalization_92[0][0]
__________________________________________________________________________________________________
batch_normalization_93 (BatchNo (None, 8, 8, 192) 576 conv2d_93[0][0]
__________________________________________________________________________________________________
activation_85 (Activation) (None, 8, 8, 320) 0 batch_normalization_85[0][0]
__________________________________________________________________________________________________
mixed9_1 (Concatenate) (None, 8, 8, 768) 0 activation_87[0][0]
activation_88[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 8, 8, 768) 0 activation_91[0][0]
activation_92[0][0]
__________________________________________________________________________________________________
activation_93 (Activation) (None, 8, 8, 192) 0 batch_normalization_93[0][0]
__________________________________________________________________________________________________
mixed10 (Concatenate) (None, 8, 8, 2048) 0 activation_85[0][0]
mixed9_1[0][0]
concatenate_1[0][0]
activation_93[0][0]
__________________________________________________________________________________________________
avg_pool (GlobalAveragePooling2 (None, 2048) 0 mixed10[0][0]
==================================================================================================
Total params: 21,802,784
Trainable params: 21,768,352
Non-trainable params: 34,432
__________________________________________________________________________________________________
###Markdown
Creating the training set
###Code
def encodeImage(img):
# Resize all images to a standard size (specified by the image
# encoding network)
img = img.resize((WIDTH, HEIGHT), Image.ANTIALIAS)
# Convert a PIL image to a numpy array
x = tensorflow.keras.preprocessing.image.img_to_array(img)
# Add a batch dimension (1, height, width, channels) expected by the network
x = np.expand_dims(x, axis=0)
# Perform any preprocessing needed by InceptionV3 or others
x = preprocess_input(x)
# Call InceptionV3 (or other) to extract the smaller feature set for
# the image.
x = encode_model.predict(x) # Get the encoding vector for the image
# Shape to correct form to be accepted by LSTM captioning network.
x = np.reshape(x, OUTPUT_DIM )
return x
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return f"{h}:{m:>02}:{s:>05.2f}"
train_path = os.path.join(root_captioning,"flicker8k_dataset",f'train{OUTPUT_DIM}.pkl')
start = time()
encoding_train = {}
for id in tqdm(train_img):
image_path = os.path.join(root_captioning,'flicker8k_dataset', id)
img = tensorflow.keras.preprocessing.image.load_img(image_path, \
target_size=(HEIGHT, WIDTH))
encoding_train[id] = encodeImage(img)
with open(train_path, "wb") as fp:
pickle.dump(encoding_train, fp)
print(f"\nGenerating training set took: {hms_string(time()-start)}")
encoding_train
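# Illustrative sketch (not in the original): a lighter-weight check than displaying
# the whole dict -- every entry should be a feature vector of length OUTPUT_DIM.
sample_id = next(iter(encoding_train))
print(len(encoding_train), encoding_train[sample_id].shape)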
test_path = os.path.join(root_captioning,"flicker8k_dataset",f'test{OUTPUT_DIM}.pkl')
start = time()
encoding_test = {}
for id in tqdm(test_img):
image_path = os.path.join(root_captioning,'flicker8k_dataset', id)
img = tensorflow.keras.preprocessing.image.load_img(image_path, \
target_size=(HEIGHT, WIDTH))
encoding_test[id] = encodeImage(img)
with open(test_path, "wb") as fp:
pickle.dump(encoding_test, fp)
print(f"\nGenerating testing set took: {hms_string(time()-start)}")
all_train_captions = []
for key, val in train_descriptions.items():
for cap in val:
all_train_captions.append(cap)
len(all_train_captions)
word_count_threshold = 10
word_counts = {}
nsents = 0
for sent in all_train_captions:
nsents += 1
for w in sent.split(' '):
word_counts[w] = word_counts.get(w, 0) + 1
vocab = [w for w in word_counts if word_counts[w] >= word_count_threshold]
print('preprocessed words %d ==> %d' % (len(word_counts), len(vocab)))
idxtoword = {}
wordtoidx = {}
ix = 1
for w in vocab:
wordtoidx[w] = ix
idxtoword[ix] = w
ix += 1
vocab_size = len(idxtoword) + 1
vocab_size
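# Illustrative round-trip through the two lookup tables (sketch, not in the original):
# any word that survived the frequency threshold maps to an index and back unchanged.
example_word = vocab[0]
print(example_word, wordtoidx[example_word], idxtoword[wordtoidx[example_word]])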
max_length +=2
print(max_length)
###Output
34
###Markdown
Using a Data Generator
###Code
def data_generator(descriptions, photos, wordtoidx, \
max_length, num_photos_per_batch):
# x1 - Training data for photos
# x2 - The caption that goes with each photo
# y - The predicted rest of the caption
x1, x2, y = [], [], []
n=0
while True:
for key, desc_list in descriptions.items():
n+=1
photo = photos[key+'.jpg']
# Each photo has 5 descriptions
for desc in desc_list:
# Convert each word into a list of sequences.
seq = [wordtoidx[word] for word in desc.split(' ') \
if word in wordtoidx]
# Generate a training case for every possible sequence and outcome
for i in range(1, len(seq)):
in_seq, out_seq = seq[:i], seq[i]
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
x1.append(photo)
x2.append(in_seq)
y.append(out_seq)
if n==num_photos_per_batch:
yield ([np.array(x1), np.array(x2)], np.array(y))
x1, x2, y = [], [], []
n=0
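# Illustrative sketch (not in the original notebook): pull a single batch from the
# generator to confirm the shapes the captioning model expects. Assumes
# train_descriptions, encoding_train, wordtoidx and max_length are already defined.
_gen = data_generator(train_descriptions, encoding_train, wordtoidx, max_length, 2)
(_x1, _x2), _y = next(_gen)
# _x1: (n, OUTPUT_DIM) image features, _x2: (n, max_length) padded word indices,
# _y: (n, vocab_size) one-hot next-word targets.
print(_x1.shape, _x2.shape, _y.shape)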
###Output
_____no_output_____
###Markdown
Loading GloVe Embeddings
###Code
embeddings_index = {}
f = open(os.path.join(root_captioning, 'glove.6B.200d.txt'), encoding="utf-8")
for line in tqdm(f):
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print(f'Found {len(embeddings_index)} word vectors.')
###Output
400000it [00:34, 11520.90it/s]
###Markdown
Building the Neural Network
###Code
embedding_dim = 200
# Get a 200-dim dense vector for each of the ~10000 words in our vocabulary
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in wordtoidx.items():
#if i < max_words:
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# Words not found in the embedding index will be all zeros
embedding_matrix[i] = embedding_vector
embedding_matrix.shape
inputs1 = Input(shape=(OUTPUT_DIM,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation='relu')(fe1)
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
caption_model = Model(inputs=[inputs1, inputs2], outputs=outputs)
embedding_dim
caption_model.summary()
caption_model.layers[2].set_weights([embedding_matrix])
caption_model.layers[2].trainable = False
caption_model.compile(loss='categorical_crossentropy', optimizer='adam')
from keras.models import model_from_json
from keras.models import load_model
caption_model.save("/content/drive/MyDrive/Image captioning data/caption-model.hdf5")
###Output
_____no_output_____
###Markdown
Train the Neural Network
###Code
number_pics_per_bath = 3
steps = len(train_descriptions)//number_pics_per_bath
model_path = os.path.join(root_captioning,f'caption-model.hdf5')
if not os.path.exists(model_path):
for i in tqdm(range(EPOCHS*2)):
generator = data_generator(train_descriptions, encoding_train,
wordtoidx, max_length, number_pics_per_bath)
caption_model.fit_generator(generator, epochs=1,
steps_per_epoch=steps, verbose=1)
caption_model.optimizer.lr = 1e-4
number_pics_per_bath = 6
steps = len(train_descriptions)//number_pics_per_bath
for i in range(EPOCHS):
generator = data_generator(train_descriptions, encoding_train,
wordtoidx, max_length, number_pics_per_bath)
caption_model.fit_generator(generator, epochs=1,
steps_per_epoch=steps, verbose=1)
caption_model.save_weights(model_path)
print(f"\Training took: {hms_string(time()-start)}")
else:
caption_model.load_weights(model_path)
###Output
_____no_output_____
###Markdown
Generating Captions
###Code
def generateCaption(photo):
in_text = START
for i in range(max_length):
sequence = [wordtoidx[w] for w in in_text.split() if w in wordtoidx]
sequence = pad_sequences([sequence], maxlen=max_length)
yhat = caption_model.predict([photo,sequence], verbose=0)
yhat = np.argmax(yhat)
word = idxtoword[yhat]
in_text += ' ' + word
if word == STOP:
break
final = in_text.split()
final = final[1:-1]
final = ' '.join(final)
return final
###Output
_____no_output_____
###Markdown
Evaluate Performance on test data from Flickr8k
###Code
for z in range(2): # set higher to see more examples
pic = list(encoding_test.keys())[z]
image = encoding_test[pic].reshape((1,OUTPUT_DIM))
print(os.path.join(root_captioning,'flicker8k_dataset', pic))
x=plt.imread(os.path.join(root_captioning,'flicker8k_dataset', pic))
plt.imshow(x)
plt.show()
print("Caption:",generateCaption(image))
print("_____________________________________")
encoding_test[pic].shape
###Output
_____no_output_____
###Markdown
Image Captioning (Soumitra Dnyaneshwar Edake)

Auto Image Caption Generator

Steps:
- Feature Extraction
- Descriptions Generation
- Model Training
- Model Evaluation
- Caption Generator

Initial Step
###Code
#imports
import os
import numpy as np
from numpy import array
from time import time
from pickle import dump
from pickle import load
import string
from keras import Input, Model
from keras.backend import set_value
from keras.layers import Dropout, Embedding, Dense, LSTM, add
from keras.utils import to_categorical
from keras.applications.inception_v3 import preprocess_input
from keras_preprocessing.image import load_img, img_to_array
from keras_preprocessing.sequence import pad_sequences
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
import matplotlib.pyplot as plt
from PIL import Image
import scipy
import scipy.misc
import scipy.cluster
###Output
Bad key "text.kerning_factor" on line 4 in
D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\matplotlib\mpl-data\stylelib\_classic_test_patch.mplstyle.
You probably need to get an updated matplotlibrc file from
https://github.com/matplotlib/matplotlib/blob/v3.1.3/matplotlibrc.template
or from the matplotlib source distribution
###Markdown
Define Paths to appropriate directories and files
###Code
# input paths
path_dataset = "dataset\\flicker8k-dataset\\Flickr8k_Dataset\\Flicker8k_Dataset\\"
path_tokens = "dataset\\flicker8k-dataset\\Flickr8k_text\\Flickr8k.token.txt"
path_train_set = "dataset\\flicker8k-dataset\\Flickr8k_text\\Flickr_8k.trainImages.txt"
path_test_set = "dataset\\flicker8k-dataset\\Flickr8k_text\\Flickr_8k.testImages.txt"
path_glove_txt = "dataset\\pre-trained-glove\\glove.6B.200d.txt"
# output paths
path_desc = "descriptions.txt"
path_extracted_train_features = "extracted_train_features.enc"
path_extracted_test_features = "extracted_test_features.enc"
###Output
_____no_output_____
###Markdown
The lines below help us overcome the ***keras scratch graph*** error
###Code
import tensorflow as tf
# setting GPU memory growth for no memory glitches
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
###Output
_____no_output_____
###Markdown
1. Feature Extraction

All the features will be extracted from the images and saved collectively (train and test) in pickle-dumped files.

Define the modified InceptionV3 model
###Code
# We need InceptionV3 only to extract features, so we remove its final classification layer
model = InceptionV3(weights='imagenet')
model_popped = Model(inputs=model.input, outputs=model.layers[-2].output)
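# Quick illustrative check (not in the original): after dropping the classification
# layer, the truncated network should emit a 2048-dimensional feature vector per image.
print(model_popped.output_shape)  # expected: (None, 2048)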
# To open sets
def set_opener(path):
load_set = open(path, 'r')
data = load_set.readlines()
load_set.close()
return data
# pre processing and feature extraction
def feature_extractor(image, in_model):
img = load_img(image, target_size=(299, 299))
x = img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
ext_ft = in_model.predict(x)
ext_ft = np.reshape(ext_ft, ext_ft.shape[1])
return ext_ft
# get a set of all train images
train_images = set_opener(path_train_set)
# get a set of all test images
test_images = set_opener(path_test_set)
print("Length: Train:", len(train_images))
print("Length: Test: ", len(test_images))
all_sets = [train_images, test_images]
outputs = [path_extracted_train_features, path_extracted_test_features]
total_count = 0
# set initial time
start_time = time()
for i, dataset in enumerate(all_sets):
count = 0
features_encoded = dict()
for name in dataset:
count += 1
name = name.strip()
image_path = path_dataset + name
feature_vector = feature_extractor(image_path, model_popped)
image_name = name.split('.')[0]
features_encoded[image_name] = feature_vector
print('> Processing {}/{}'.format(count, len(dataset)) + ' : %s' % name)
total_count += count
# store to file
dump(features_encoded, open(outputs[i], 'wb'))
print("\nFeatures extracted :", len(features_encoded))
print('Features saved to :', outputs[i], end='\n\n')
print("Total Features Extracted :", total_count)
print("Processing Time :", time() - start_time, "sec")
###Output
> Processing 1/6000 : 2513260012_03d33305cf.jpg
> Processing 2/6000 : 2903617548_d3e38d7f88.jpg
> Processing 3/6000 : 3338291921_fe7ae0c8f8.jpg
> Processing 4/6000 : 488416045_1c6d903fe0.jpg
> Processing 5/6000 : 2644326817_8f45080b87.jpg
> Processing 6/6000 : 218342358_1755a9cce1.jpg
> Processing 7/6000 : 2501968935_02f2cd8079.jpg
> Processing 8/6000 : 2699342860_5288e203ea.jpg
> Processing 9/6000 : 2638369467_8fc251595b.jpg
> Processing 10/6000 : 2926786902_815a99a154.jpg
> Processing 11/6000 : 2851304910_b5721199bc.jpg
> Processing 12/6000 : 3423802527_94bd2b23b0.jpg
> Processing 13/6000 : 3356369156_074750c6cc.jpg
> Processing 14/6000 : 2294598473_40637b5c04.jpg
> Processing 15/6000 : 1191338263_a4fa073154.jpg
> Processing 16/6000 : 2380765956_6313d8cae3.jpg
> Processing 17/6000 : 3197891333_b1b0fd1702.jpg
> Processing 18/6000 : 3119887967_271a097464.jpg
> Processing 19/6000 : 2276499757_b44dc6f8ce.jpg
> Processing 20/6000 : 2506892928_7e79bec613.jpg
> Processing 21/6000 : 2187222896_c206d63396.jpg
> Processing 22/6000 : 2826769554_85c90864c9.jpg
> Processing 23/6000 : 3097196395_ec06075389.jpg
> Processing 24/6000 : 3603116579_4a28a932e2.jpg
> Processing 25/6000 : 3339263085_6db9fd0981.jpg
> Processing 26/6000 : 2532262109_87429a2cae.jpg
> Processing 27/6000 : 2076906555_c20dc082db.jpg
> Processing 28/6000 : 2502007071_82a8c639cf.jpg
> Processing 29/6000 : 3113769557_9edbb8275c.jpg
> Processing 30/6000 : 3325974730_3ee192e4ff.jpg
> Processing 31/6000 : 1655781989_b15ab4cbff.jpg
> Processing 32/6000 : 1662261486_db967930de.jpg
> Processing 33/6000 : 2410562803_56ec09f41c.jpg
> Processing 34/6000 : 2469498117_b4543e1460.jpg
> Processing 35/6000 : 69710415_5c2bfb1058.jpg
> Processing 36/6000 : 3414734842_beb543f400.jpg
> Processing 37/6000 : 3006217970_90b42e6b27.jpg
> Processing 38/6000 : 2192411521_9c7e488c5e.jpg
> Processing 39/6000 : 3535879138_9281dc83d5.jpg
> Processing 40/6000 : 2685788323_ceab14534a.jpg
> Processing 41/6000 : 3465606652_f380a38050.jpg
> Processing 42/6000 : 2599131872_65789d86d5.jpg
> Processing 43/6000 : 2244613488_4d1f9edb33.jpg
> Processing 44/6000 : 2738077433_10e6264b6f.jpg
> Processing 45/6000 : 3537201804_ce07aff237.jpg
> Processing 46/6000 : 1597557856_30640e0b43.jpg
> Processing 47/6000 : 3357194782_c261bb6cbf.jpg
> Processing 48/6000 : 3682038869_585075b5ff.jpg
> Processing 49/6000 : 236474697_0c73dd5d8b.jpg
> Processing 50/6000 : 2641288004_30ce961211.jpg
> Processing 51/6000 : 267164457_2e8b4d30aa.jpg
> Processing 52/6000 : 2453891449_fedb277908.jpg
> Processing 53/6000 : 281419391_522557ce27.jpg
> Processing 54/6000 : 354999632_915ea81e53.jpg
> Processing 55/6000 : 3109136206_f7d201b368.jpg
> Processing 56/6000 : 2281054343_95d6d3b882.jpg
> Processing 57/6000 : 3296584432_bef3c965a3.jpg
> Processing 58/6000 : 3526431764_056d2c61dc.jpg
> Processing 59/6000 : 3549997413_01388dece0.jpg
> Processing 60/6000 : 143688895_e837c3bc76.jpg
> Processing 61/6000 : 2495394666_2ef6c37519.jpg
> Processing 62/6000 : 3384742888_85230c34d5.jpg
> Processing 63/6000 : 1160034462_16b38174fe.jpg
> Processing 64/6000 : 334768700_51c439b9ee.jpg
> Processing 65/6000 : 412101267_7257e6d8c0.jpg
> Processing 66/6000 : 2623939135_0cd02ffa5d.jpg
> Processing 67/6000 : 3043266735_904dda6ded.jpg
> Processing 68/6000 : 3034585889_388d6ffcc0.jpg
> Processing 69/6000 : 2069279767_fb32bfb2de.jpg
> Processing 70/6000 : 2593406865_ab98490c1f.jpg
> Processing 71/6000 : 432167214_c17fcc1a2d.jpg
> Processing 72/6000 : 305749904_54a612fd1a.jpg
> Processing 73/6000 : 2780087302_6a77658cbf.jpg
> Processing 74/6000 : 3051998298_38da5746fa.jpg
> Processing 75/6000 : 1574401950_6bedc0d29b.jpg
> Processing 76/6000 : 539493431_744eb1abaa.jpg
> Processing 77/6000 : 3524436870_7670df68e8.jpg
> Processing 78/6000 : 2081446176_f97dc76951.jpg
> Processing 79/6000 : 2265367960_7928c5642f.jpg
> Processing 80/6000 : 460350019_af60511a3b.jpg
> Processing 81/6000 : 2976946039_fb9147908d.jpg
> Processing 82/6000 : 2308108566_2cba6bca53.jpg
> Processing 83/6000 : 3367758711_a8c09607ac.jpg
> Processing 84/6000 : 3666056567_661e25f54c.jpg
> Processing 85/6000 : 3099264059_21653e2536.jpg
> Processing 86/6000 : 2988439935_7cea05bc48.jpg
> Processing 87/6000 : 241345864_138471c9ea.jpg
> Processing 88/6000 : 3019199755_a984bc21b1.jpg
> Processing 89/6000 : 3201594926_cd2009eb13.jpg
> Processing 90/6000 : 2540751930_d71c7f5622.jpg
> Processing 91/6000 : 1475046848_831245fc64.jpg
> Processing 92/6000 : 2877637572_641cd29901.jpg
> Processing 93/6000 : 1308472581_9961782889.jpg
> Processing 94/6000 : 2282260240_55387258de.jpg
> Processing 95/6000 : 2363419943_717e6b119d.jpg
> Processing 96/6000 : 392976422_c8d0514bc3.jpg
> Processing 97/6000 : 103205630_682ca7285b.jpg
> Processing 98/6000 : 1347519824_e402241e4f.jpg
> Processing 99/6000 : 584484388_0eeb36d03d.jpg
> Processing 100/6000 : 2460823604_7f6f786b1c.jpg
> Processing 101/6000 : 121800200_bef08fae5f.jpg
> Processing 102/6000 : 2422302286_385725e3cf.jpg
> Processing 103/6000 : 3183883750_b6acc40397.jpg
> Processing 104/6000 : 3091912922_0d6ebc8f6a.jpg
> Processing 105/6000 : 2787868417_810985234d.jpg
> Processing 106/6000 : 3670075789_92ea9a183a.jpg
> Processing 107/6000 : 3329169877_175cb16845.jpg
> Processing 108/6000 : 751074141_feafc7b16c.jpg
> Processing 109/6000 : 3445428367_25bafffe75.jpg
> Processing 110/6000 : 3542418447_7c337360d6.jpg
> Processing 111/6000 : 2730819220_b58af1119a.jpg
> Processing 112/6000 : 3543378438_47e2712486.jpg
> Processing 113/6000 : 2335619125_2e2034f2c3.jpg
> Processing 114/6000 : 3520199925_ca18d0f41e.jpg
> Processing 115/6000 : 3374722123_6fe6fef449.jpg
> Processing 116/6000 : 3280672302_2967177653.jpg
> Processing 117/6000 : 3073579130_7c95d16a7f.jpg
> Processing 118/6000 : 99679241_adc853a5c0.jpg
> Processing 119/6000 : 3759492488_592cd78ed1.jpg
> Processing 120/6000 : 2875528143_94d9480fdd.jpg
> Processing 121/6000 : 1052358063_eae6744153.jpg
> Processing 122/6000 : 111766423_4522d36e56.jpg
> Processing 123/6000 : 2474918824_88660c7757.jpg
> Processing 124/6000 : 3697675767_97796334e4.jpg
> Processing 125/6000 : 241346317_be3f07bd2e.jpg
> Processing 126/6000 : 2694178830_116be6a6a9.jpg
> Processing 127/6000 : 513116697_ad0f4dc800.jpg
> Processing 128/6000 : 371364900_5167d4dd7f.jpg
> Processing 129/6000 : 2860041212_797afd6ccf.jpg
> Processing 130/6000 : 1481062342_d9e34366c4.jpg
> Processing 131/6000 : 3556792157_d09d42bef7.jpg
> Processing 132/6000 : 3226254560_2f8ac147ea.jpg
> Processing 133/6000 : 2252123185_487f21e336.jpg
> Processing 134/6000 : 2353088412_5e5804c6f5.jpg
> Processing 135/6000 : 3359587274_4a2b140b84.jpg
> Processing 136/6000 : 3588417747_b152a51c52.jpg
> Processing 137/6000 : 1055623002_8195a43714.jpg
> Processing 138/6000 : 3454315016_f1e30d4676.jpg
> Processing 139/6000 : 2837808847_5407af1986.jpg
> Processing 140/6000 : 3544803461_a418ca611e.jpg
> Processing 141/6000 : 3046916429_8e2570b613.jpg
> Processing 142/6000 : 2570559405_dc93007f76.jpg
> Processing 143/6000 : 2518219912_f47214aa16.jpg
> Processing 144/6000 : 2951092164_4940b9a517.jpg
> Processing 145/6000 : 2273038287_3004a72a34.jpg
> Processing 146/6000 : 3710971182_cb01c97d15.jpg
> Processing 147/6000 : 3544483327_830349e7bc.jpg
> Processing 148/6000 : 3055716848_b253324afc.jpg
> Processing 149/6000 : 3287236038_8998e6b82f.jpg
> Processing 150/6000 : 3597210806_95b07bb968.jpg
> Processing 151/6000 : 3453284877_8866189055.jpg
> Processing 152/6000 : 2640000969_b5404a5143.jpg
> Processing 153/6000 : 2451988767_244bff98d1.jpg
> Processing 154/6000 : 3682428916_69ce66d375.jpg
> Processing 155/6000 : 276356412_dfa01c3c9e.jpg
> Processing 156/6000 : 3616846215_d61881b60f.jpg
> Processing 157/6000 : 2360194369_d2fd03b337.jpg
> Processing 158/6000 : 576093768_e78f91c176.jpg
> Processing 159/6000 : 2934837034_a8ca5b1f50.jpg
> Processing 160/6000 : 241345639_1556a883b1.jpg
> Processing 161/6000 : 2876994989_a4ebbd8491.jpg
> Processing 162/6000 : 2339516180_12493e8ecf.jpg
> Processing 163/6000 : 3301438465_10121a2412.jpg
> Processing 164/6000 : 101669240_b2d3e7f17b.jpg
> Processing 165/6000 : 300500054_56653bf217.jpg
> Processing 166/6000 : 1956678973_223cb1b847.jpg
> Processing 167/6000 : 1213336750_2269b51397.jpg
> Processing 168/6000 : 478750151_e0adb5030a.jpg
> Processing 169/6000 : 2755952680_68a0a1fa42.jpg
> Processing 170/6000 : 47870024_73a4481f7d.jpg
> Processing 171/6000 : 3165826902_6bf9c4bdb2.jpg
###Markdown
Two files, ***extracted_train_features.enc*** and ***extracted_test_features.enc***, are created. These files store the features extracted from each set respectively.

2. Descriptions Generation
###Code
# load descriptions
descriptions_tokens = open(path_tokens, 'r')
raw_descriptions = descriptions_tokens.read()
def load_descriptions(file_name):
desc_mappings = dict()
for line in file_name.split('\n'):
tokens = line.split()
if len(line) < 2:
continue
image_name, image_desc = tokens[0], tokens[1:]
image_name = image_name.split('.')[0]
image_desc = ' '.join(image_desc)
if image_name not in desc_mappings:
desc_mappings[image_name] = list()
desc_mappings[image_name].append(image_desc)
return desc_mappings
def clean_descriptions(descriptions):
table = str.maketrans('', '', string.punctuation)
for key, desc_list in descriptions.items():
for i in range(len(desc_list)):
desc = desc_list[i]
desc = desc.split()
desc = [word.lower() for word in desc]
desc = [w.translate(table) for w in desc]
desc = [word for word in desc if len(word) > 1]
desc = [word for word in desc if word.isalpha()]
desc_list[i] = ' '.join(desc)
return descriptions
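# Illustrative sketch (not in the original notebook) of what the cleaning does:
# lower-case, strip punctuation, drop one-character and non-alphabetic tokens.
_demo = {'img1': ['A dog runs, jumping over 2 logs!']}
print(clean_descriptions(_demo))  # -> {'img1': ['dog runs jumping over logs']}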
def save_descriptions(descriptions, file_name):
count = 0
lines = list()
for key, desc_list in descriptions.items():
for desc in desc_list:
lines.append(key + ' ' + desc)
count += 1
data = '\n'.join(lines)
file = open(file_name, 'w')
file.write(data)
file.close()
return count
# parse descriptions
all_descriptions = load_descriptions(raw_descriptions)
print('Images: %d ' % len(all_descriptions))
# clean descriptions
all_descriptions = clean_descriptions(all_descriptions)
# save to file
count = save_descriptions(all_descriptions, path_desc)
print('Descriptions :', count)
print('File saved to :', path_desc)
###Output
Descriptions : 40460
File saved to : descriptions.txt
###Markdown
3. Model Training

3.1 Define functions and initiate the pre-training stage
###Code
def pick_load(path):
file = open(path, "rb")
data = load(file)
file.close()
return data
def desc_loader(filename):
load_desc = open(filename, 'r')
data = load_desc.read()
load_desc.close()
return data
def load_set(filename):
doc = desc_loader(filename)
dataset = list()
for line in doc.split('\n'):
if len(line) < 1:
continue
i_name = line.split('.')[0]
dataset.append(i_name)
return set(dataset)
def load_clean_descriptions(filename, dataset):
doc = desc_loader(filename)
descriptions = dict()
for line in doc.split('\n'):
tokens = line.split()
image_id, image_desc = tokens[0], tokens[1:]
if image_id in dataset:
if image_id not in descriptions:
descriptions[image_id] = list()
desc = '<start> ' + ' '.join(image_desc) + ' <end>'
descriptions[image_id].append(desc)
return descriptions
def caption_creator(descriptions):
captions = []
for key, val in descriptions.items():
for cap in val:
captions.append(cap)
return captions
def to_lines(descriptions):
all_desc = list()
for key in descriptions.keys():
[all_desc.append(d) for d in descriptions[key]]
return all_desc
def get_max_length(descriptions):
lines = to_lines(descriptions)
return max(len(d.split()) for d in lines)
train_features = pick_load(path_extracted_train_features)
train = load_set(path_train_set)
train_descriptions = load_clean_descriptions(path_desc, train)
print('Train Samples: %d' % len(train_descriptions))
all_train_captions = caption_creator(train_descriptions)
print('Total Captions:', len(all_train_captions))
max_length = get_max_length(train_descriptions)
print('Description Length: %d' % max_length)
###Output
Description Length: 34
###Markdown
3.2 Load Embeddings
###Code
def get_all_set(directory_path):
dataset_all = os.listdir(directory_path)
all_set = list()
for line in dataset_all:
if len(line) < 1:
continue
i_name = line.split('.')[0]
all_set.append(i_name)
return set(all_set)
def minimize_words_count(captions):
word_threshold = 10
word_counts = dict()
words_used = 0
for word in captions:
words_used += 1
for w in word.split():
word_counts[w] = word_counts.get(w, 0) + 1
vocab = [w for w in word_counts if word_counts[w] >= word_threshold]
print('Minimized Vocabulary (Words) : %d -> %d' % (len(word_counts) + 1, len(vocab) + 1))
int_to_word_mappings = dict()
word_to_int_mappings = dict()
integer = 1
for w in vocab:
word_to_int_mappings[w] = integer
int_to_word_mappings[integer] = w
integer += 1
vocab_size = len(int_to_word_mappings) + 1
data = vocab_size, word_to_int_mappings, int_to_word_mappings
save_path = 'token_mappings.tk'
dump(data, open(save_path, 'wb'))
def load_mappings():
save_path = 'token_mappings.tk'
while True:
if os.path.exists(save_path):
print('Old Word to Vector embeddings found, '
'Loading them!')
return pick_load(save_path)
else:
print('No Old Word to Vector embeddings found, '
'Creating a new one!')
all_set = get_all_set(path_dataset)
all_descriptions = load_clean_descriptions(path_desc, all_set)
all_captions = []
for key, val in all_descriptions.items():
for cap in val:
all_captions.append(cap)
minimize_words_count(all_captions)
vocab_size, word_to_int, int_to_word = load_mappings()
def emb_load(vocab_size, word_to_int):
embeddings_index = {}
f = open(path_glove_txt, encoding="utf-8")
for line in f:
values = line.split()
word = values[0]
coefficients = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefficients
f.close()
print('Found %s word vectors' % len(embeddings_index))
embedding_dim = 200
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, i in word_to_int.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
return embedding_dim, embedding_matrix
print('Loading Glove Word2Vec model, please wait...')
embedding_dim, embedding_matrix = emb_load(vocab_size, word_to_int)
###Output
Loading Glove Word2Vec model, please wait...
Found 400000 word vectors
###Markdown
3.3 Train a Model
###Code
def create_model(vocab_size, embedding_dim, embedding_matrix, max_length):
# LSTM Model
inputs_image = Input(shape=(2048,))
feature_layer_1 = Dropout(0.2)(inputs_image)
feature_layer_2 = Dense(256, activation='relu')(feature_layer_1)
inputs_sequence = Input(shape=(max_length,))
sequence_layer_1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs_sequence)
sequence_layer_2 = Dropout(0.2)(sequence_layer_1)
sequence_layer_3 = LSTM(256)(sequence_layer_2)
decoder1 = add([feature_layer_2, sequence_layer_3])
decoder2 = Dense(256, activation='relu')(decoder1)
outputs = Dense(vocab_size, activation='softmax')(decoder2)
model = Model(inputs=[inputs_image, inputs_sequence], outputs=outputs)
model.layers[2].set_weights([embedding_matrix])
model.layers[2].trainable = False
model.compile(loss='categorical_crossentropy', optimizer='adam')
return model
def data_generator(descriptions, image, word_to_int, max_length, num_photos_per_batch, vocab_size):
list_photos = list()
list_in_seq = list()
list_out_seq = list()
n = 0
while True:
for key, desc_list in descriptions.items():
n += 1
photo = image[key]
for desc in desc_list:
seq = [word_to_int[word] for word in desc.split(' ') if word in word_to_int]
for i in range(1, len(seq)):
in_seq, out_seq = seq[:i], seq[i]
in_seq = pad_sequences([in_seq], maxlen=max_length)[0]
out_seq = to_categorical([out_seq], num_classes=vocab_size)[0]
list_photos.append(photo)
list_in_seq.append(in_seq)
list_out_seq.append(out_seq)
if n == num_photos_per_batch:
yield [[array(list_photos), array(list_in_seq)], array(list_out_seq)]
list_photos, list_in_seq, list_out_seq = list(), list(), list()
n = 0
def train_model(idn, model, epochs, model_parameters_alpha, model_parameters_omega):
train_descriptions = model_parameters_alpha[0]
train_features = model_parameters_alpha[1]
word_to_int = model_parameters_alpha[2]
max_length = model_parameters_alpha[3]
vocab_size = model_parameters_alpha[4]
number_pics_per_bath = model_parameters_omega[0]
steps = model_parameters_omega[1]
if len(model_parameters_omega) == 3:
extras = model_parameters_omega[2]
set_value(model.optimizer.lr, extras[0])
for i in range(epochs):
generator = data_generator(train_descriptions,
train_features,
word_to_int,
max_length,
number_pics_per_bath,
vocab_size
)
history = model.fit_generator(generator,
epochs=1,
steps_per_epoch=steps,
verbose=1,
)
# pull out metrics from the model
loss = history.history.get('loss')[0]
# model naming
model_name = 'model_' + str(idn) + '_' + str(i) + '_(loss_%.3f' % loss + ').h5'
# saving the model to local storage
model.save(str(model_name))
print('\nModel saved : ' + model_name, end="\n\n")
use_model = create_model(vocab_size, embedding_dim, embedding_matrix, max_length)
###Output
_____no_output_____
###Markdown
Defining Training Parameters
###Code
epochs = 10
number_pics_per_bath = 3
steps = len(train_descriptions)
model_parameters_alpha = [train_descriptions, train_features, word_to_int, max_length, vocab_size]
model_parameters_omega = [number_pics_per_bath, steps]
###Output
_____no_output_____
###Markdown
The ACTUAL Training Process
###Code
train_model(1, use_model, epochs, model_parameters_alpha, model_parameters_omega)
###Output
Epoch 1/1
6000/6000 [==============================] - 242s 40ms/step - loss: 3.4276
Model saved : model_1_0_(loss_3.431).h5
Epoch 1/1
6000/6000 [==============================] - 243s 40ms/step - loss: 2.8096
Model saved : model_1_1_(loss_2.816).h5
Epoch 1/1
6000/6000 [==============================] - 241s 40ms/step - loss: 2.5633
Model saved : model_1_2_(loss_2.571).h5
Epoch 1/1
6000/6000 [==============================] - 244s 41ms/step - loss: 2.4074
Model saved : model_1_3_(loss_2.416).h5
Epoch 1/1
6000/6000 [==============================] - 246s 41ms/step - loss: 2.3031
Model saved : model_1_4_(loss_2.312).h5
Epoch 1/1
6000/6000 [==============================] - 237s 40ms/step - loss: 2.2288
Model saved : model_1_5_(loss_2.238).h5
Epoch 1/1
6000/6000 [==============================] - 240s 40ms/step - loss: 2.1760
Model saved : model_1_6_(loss_2.185).h5
Epoch 1/1
6000/6000 [==============================] - 242s 40ms/step - loss: 2.1348
Model saved : model_1_7_(loss_2.144).h5
Epoch 1/1
6000/6000 [==============================] - 238s 40ms/step - loss: 2.1015
Model saved : model_1_8_(loss_2.111).h5
Epoch 1/1
6000/6000 [==============================] - 237s 39ms/step - loss: 2.0741
Model saved : model_1_9_(loss_2.084).h5
###Markdown
Model Evaluation
###Code
def evaluate_model(eval_model, descriptions, features, max_length, word_to_int,
int_to_word):
actual, predicted = list(), list()
count = 0
for key, desc_list in descriptions.items():
# generate description
count += 1
print('Eval Progress : {}/{}'.format(count, len(descriptions)))
y_hat = pred_caption_greedy(features[key], eval_model, max_length, word_to_int, int_to_word)
# store actual and predicted
references = [d.split() for d in desc_list]
actual.append(references)
predicted.append(y_hat.split())
# calculate BLEU score
print('BLEU-1: %f' % corpus_bleu(actual, predicted, weights=(1.0, 0, 0, 0)))
print('BLEU-2: %f' % corpus_bleu(actual, predicted, weights=(0.5, 0.5, 0, 0)))
print('BLEU-3: %f' % corpus_bleu(actual, predicted, weights=(0.3, 0.3, 0.3, 0)))
print('BLEU-4: %f' % corpus_bleu(actual, predicted, weights=(0.25, 0.25, 0.25, 0.25)))
###Output
_____no_output_____
###Markdown
We use a greedy method (picking the most probable word at each step) to build the caption
###Code
def pred_caption_greedy(photo, model, max_length, word_to_int, int_to_word):
photo = np.array(photo)
photo = np.expand_dims(photo, axis=0)
in_text = '<start>'
for i in range(max_length):
sequence = [word_to_int[w] for w in in_text.split() if w in word_to_int]
sequence = pad_sequences([sequence], maxlen=max_length)
y_hat = model.predict([photo, sequence], verbose=0)
y_hat = np.argmax(y_hat)
word = int_to_word[y_hat]
in_text += ' ' + word
if word == '<end>':
break
pred_caption = in_text.split()
pred_caption = pred_caption[1:-1]
pred_caption = ' '.join(pred_caption)
return pred_caption
test_features = load(open(path_extracted_test_features, "rb"))
test = load_set(path_test_set)
test_descriptions = load_clean_descriptions(path_desc, test)
print('Test Samples: %d' % len(test_descriptions))
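# Illustrative sketch (not in the original): caption one held-out photo as a quick
# sanity check before running the full BLEU evaluation below.
sample_key = next(iter(test_features))
print(pred_caption_greedy(test_features[sample_key], use_model, max_length,
                          word_to_int, int_to_word))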
###Output
Test Samples: 1000
###Markdown
Load a model and run evaluation on it
###Code
from keras.engine.saving import load_model
from nltk.translate.bleu_score import corpus_bleu
use_model = load_model('model_1_9_(loss_2.084).h5')
evaluate_model(use_model, test_descriptions, test_features, max_length, word_to_int, int_to_word)
###Output
Eval Progress : 1/1000
Eval Progress : 2/1000
Eval Progress : 3/1000
Eval Progress : 4/1000
Eval Progress : 5/1000
Eval Progress : 6/1000
Eval Progress : 7/1000
Eval Progress : 8/1000
Eval Progress : 9/1000
Eval Progress : 10/1000
Eval Progress : 11/1000
Eval Progress : 12/1000
Eval Progress : 13/1000
Eval Progress : 14/1000
Eval Progress : 15/1000
Eval Progress : 16/1000
Eval Progress : 17/1000
Eval Progress : 18/1000
Eval Progress : 19/1000
Eval Progress : 20/1000
Eval Progress : 21/1000
Eval Progress : 22/1000
Eval Progress : 23/1000
Eval Progress : 24/1000
Eval Progress : 25/1000
Eval Progress : 26/1000
Eval Progress : 27/1000
Eval Progress : 28/1000
Eval Progress : 29/1000
Eval Progress : 30/1000
Eval Progress : 31/1000
Eval Progress : 32/1000
Eval Progress : 33/1000
Eval Progress : 34/1000
Eval Progress : 35/1000
Eval Progress : 36/1000
Eval Progress : 37/1000
Eval Progress : 38/1000
Eval Progress : 39/1000
Eval Progress : 40/1000
Eval Progress : 41/1000
Eval Progress : 42/1000
Eval Progress : 43/1000
Eval Progress : 44/1000
Eval Progress : 45/1000
Eval Progress : 46/1000
Eval Progress : 47/1000
Eval Progress : 48/1000
Eval Progress : 49/1000
Eval Progress : 50/1000
Eval Progress : 51/1000
Eval Progress : 52/1000
Eval Progress : 53/1000
Eval Progress : 54/1000
Eval Progress : 55/1000
Eval Progress : 56/1000
Eval Progress : 57/1000
Eval Progress : 58/1000
Eval Progress : 59/1000
Eval Progress : 60/1000
Eval Progress : 61/1000
Eval Progress : 62/1000
Eval Progress : 63/1000
Eval Progress : 64/1000
Eval Progress : 65/1000
Eval Progress : 66/1000
Eval Progress : 67/1000
Eval Progress : 68/1000
Eval Progress : 69/1000
Eval Progress : 70/1000
Eval Progress : 71/1000
Eval Progress : 72/1000
Eval Progress : 73/1000
Eval Progress : 74/1000
Eval Progress : 75/1000
Eval Progress : 76/1000
Eval Progress : 77/1000
Eval Progress : 78/1000
Eval Progress : 79/1000
Eval Progress : 80/1000
Eval Progress : 81/1000
Eval Progress : 82/1000
Eval Progress : 83/1000
Eval Progress : 84/1000
Eval Progress : 85/1000
Eval Progress : 86/1000
Eval Progress : 87/1000
Eval Progress : 88/1000
Eval Progress : 89/1000
Eval Progress : 90/1000
Eval Progress : 91/1000
Eval Progress : 92/1000
Eval Progress : 93/1000
Eval Progress : 94/1000
Eval Progress : 95/1000
Eval Progress : 96/1000
Eval Progress : 97/1000
Eval Progress : 98/1000
Eval Progress : 99/1000
Eval Progress : 100/1000
Eval Progress : 101/1000
Eval Progress : 102/1000
Eval Progress : 103/1000
Eval Progress : 104/1000
Eval Progress : 105/1000
Eval Progress : 106/1000
Eval Progress : 107/1000
Eval Progress : 108/1000
Eval Progress : 109/1000
Eval Progress : 110/1000
Eval Progress : 111/1000
Eval Progress : 112/1000
Eval Progress : 113/1000
Eval Progress : 114/1000
Eval Progress : 115/1000
Eval Progress : 116/1000
Eval Progress : 117/1000
Eval Progress : 118/1000
Eval Progress : 119/1000
Eval Progress : 120/1000
Eval Progress : 121/1000
Eval Progress : 122/1000
Eval Progress : 123/1000
Eval Progress : 124/1000
Eval Progress : 125/1000
Eval Progress : 126/1000
Eval Progress : 127/1000
Eval Progress : 128/1000
Eval Progress : 129/1000
Eval Progress : 130/1000
Eval Progress : 131/1000
Eval Progress : 132/1000
Eval Progress : 133/1000
Eval Progress : 134/1000
Eval Progress : 135/1000
Eval Progress : 136/1000
Eval Progress : 137/1000
Eval Progress : 138/1000
Eval Progress : 139/1000
Eval Progress : 140/1000
Eval Progress : 141/1000
Eval Progress : 142/1000
Eval Progress : 143/1000
Eval Progress : 144/1000
Eval Progress : 145/1000
Eval Progress : 146/1000
Eval Progress : 147/1000
Eval Progress : 148/1000
Eval Progress : 149/1000
Eval Progress : 150/1000
Eval Progress : 151/1000
Eval Progress : 152/1000
Eval Progress : 153/1000
Eval Progress : 154/1000
Eval Progress : 155/1000
Eval Progress : 156/1000
Eval Progress : 157/1000
Eval Progress : 158/1000
Eval Progress : 159/1000
Eval Progress : 160/1000
Eval Progress : 161/1000
Eval Progress : 162/1000
Eval Progress : 163/1000
Eval Progress : 164/1000
Eval Progress : 165/1000
Eval Progress : 166/1000
Eval Progress : 167/1000
Eval Progress : 168/1000
Eval Progress : 169/1000
Eval Progress : 170/1000
Eval Progress : 171/1000
Eval Progress : 172/1000
Eval Progress : 173/1000
Eval Progress : 174/1000
Eval Progress : 175/1000
Eval Progress : 176/1000
Eval Progress : 177/1000
Eval Progress : 178/1000
Eval Progress : 179/1000
Eval Progress : 180/1000
Eval Progress : 181/1000
Eval Progress : 182/1000
Eval Progress : 183/1000
Eval Progress : 184/1000
Eval Progress : 185/1000
Eval Progress : 186/1000
Eval Progress : 187/1000
Eval Progress : 188/1000
Eval Progress : 189/1000
Eval Progress : 190/1000
Eval Progress : 191/1000
Eval Progress : 192/1000
Eval Progress : 193/1000
Eval Progress : 194/1000
Eval Progress : 195/1000
Eval Progress : 196/1000
Eval Progress : 197/1000
Eval Progress : 198/1000
Eval Progress : 199/1000
Eval Progress : 200/1000
Eval Progress : 201/1000
Eval Progress : 202/1000
Eval Progress : 203/1000
Eval Progress : 204/1000
Eval Progress : 205/1000
Eval Progress : 206/1000
Eval Progress : 207/1000
Eval Progress : 208/1000
Eval Progress : 209/1000
Eval Progress : 210/1000
Eval Progress : 211/1000
Eval Progress : 212/1000
Eval Progress : 213/1000
Eval Progress : 214/1000
Eval Progress : 215/1000
Eval Progress : 216/1000
Eval Progress : 217/1000
Eval Progress : 218/1000
Eval Progress : 219/1000
Eval Progress : 220/1000
Eval Progress : 221/1000
Eval Progress : 222/1000
Eval Progress : 223/1000
Eval Progress : 224/1000
Eval Progress : 225/1000
Eval Progress : 226/1000
Eval Progress : 227/1000
Eval Progress : 228/1000
Eval Progress : 229/1000
Eval Progress : 230/1000
Eval Progress : 231/1000
Eval Progress : 232/1000
Eval Progress : 233/1000
Eval Progress : 234/1000
Eval Progress : 235/1000
Eval Progress : 236/1000
Eval Progress : 237/1000
Eval Progress : 238/1000
Eval Progress : 239/1000
Eval Progress : 240/1000
Eval Progress : 241/1000
Eval Progress : 242/1000
Eval Progress : 243/1000
Eval Progress : 244/1000
Eval Progress : 245/1000
Eval Progress : 246/1000
Eval Progress : 247/1000
Eval Progress : 248/1000
Eval Progress : 249/1000
Eval Progress : 250/1000
Eval Progress : 251/1000
Eval Progress : 252/1000
Eval Progress : 253/1000
Eval Progress : 254/1000
Eval Progress : 255/1000
Eval Progress : 256/1000
Eval Progress : 257/1000
Eval Progress : 258/1000
Eval Progress : 259/1000
Eval Progress : 260/1000
Eval Progress : 261/1000
Eval Progress : 262/1000
Eval Progress : 263/1000
Eval Progress : 264/1000
Eval Progress : 265/1000
Eval Progress : 266/1000
Eval Progress : 267/1000
Eval Progress : 268/1000
Eval Progress : 269/1000
Eval Progress : 270/1000
Eval Progress : 271/1000
Eval Progress : 272/1000
Eval Progress : 273/1000
Eval Progress : 274/1000
Eval Progress : 275/1000
Eval Progress : 276/1000
Eval Progress : 277/1000
Eval Progress : 278/1000
Eval Progress : 279/1000
Eval Progress : 280/1000
Eval Progress : 281/1000
Eval Progress : 282/1000
Eval Progress : 283/1000
Eval Progress : 284/1000
Eval Progress : 285/1000
Eval Progress : 286/1000
Eval Progress : 287/1000
Eval Progress : 288/1000
Eval Progress : 289/1000
Eval Progress : 290/1000
Eval Progress : 291/1000
Eval Progress : 292/1000
Eval Progress : 293/1000
Eval Progress : 294/1000
Eval Progress : 295/1000
Eval Progress : 296/1000
Eval Progress : 297/1000
Eval Progress : 298/1000
Eval Progress : 299/1000
Eval Progress : 300/1000
Eval Progress : 301/1000
Eval Progress : 302/1000
Eval Progress : 303/1000
Eval Progress : 304/1000
Eval Progress : 305/1000
Eval Progress : 306/1000
Eval Progress : 307/1000
Eval Progress : 308/1000
Eval Progress : 309/1000
Eval Progress : 310/1000
Eval Progress : 311/1000
Eval Progress : 312/1000
Eval Progress : 313/1000
Eval Progress : 314/1000
Eval Progress : 315/1000
Eval Progress : 316/1000
Eval Progress : 317/1000
Eval Progress : 318/1000
Eval Progress : 319/1000
Eval Progress : 320/1000
Eval Progress : 321/1000
Eval Progress : 322/1000
Eval Progress : 323/1000
Eval Progress : 324/1000
Eval Progress : 325/1000
Eval Progress : 326/1000
Eval Progress : 327/1000
Eval Progress : 328/1000
Eval Progress : 329/1000
Eval Progress : 330/1000
Eval Progress : 331/1000
Eval Progress : 332/1000
Eval Progress : 333/1000
###Markdown
Caption Generator
###Code
os.listdir()
def get_avg(inp):
size = len(inp)
tot = 0
for i in inp:
tot += i
return tot / size
def get_dominant_color(image):
clusters = 5
im = Image.open(image)
im = im.resize((150, 150))
ar = np.asarray(im)
shape = ar.shape
ar = ar.reshape(scipy.product(shape[:2]), shape[2]).astype(float)
codes, dist = scipy.cluster.vq.kmeans(ar, clusters)
vec, dist = scipy.cluster.vq.vq(ar, codes)
counts, bins = scipy.histogram(vec, len(codes))
index_max = scipy.argmax(counts)
peak_color = codes[index_max]
return peak_color
def process_text(text):
pro_txt = ''
word = ""
for i in range(len(text)):
word += text[i]
if i % max_length == 0 and i != 0:
pro_txt += '\n'
if text[i] == ' ':
pro_txt += word
word = ''
if word != '':
pro_txt += word
return pro_txt
def draw(image_name, text):
img = plt.imread(image_name)
fig, ax = plt.subplots()
plt.imshow(img)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
txt = process_text(text)
lines = txt.split('\n')
max_val = 0
for line in lines:
if max_val < len(line):
max_val = len(line)
plot_shape = plt.rcParams["figure.figsize"]
plot_width = plot_shape[0]
fs = int((plot_width / max_val) * 100)
if fs not in range(10, 21):
fs = 16
b_color = get_dominant_color(image_name)
b_color = [x / 255.0 for x in b_color]
f_color = get_avg(b_color)
if f_color > 0.5:
f_color = 'black'
else:
f_color = 'white'
plt.xlabel(txt,
fontsize=fs, style='italic', color=f_color,
bbox=dict(facecolor=b_color, edgecolor='white', alpha=0.9, boxstyle='round'),
labelpad=9)
plt.show()
image_name = "image_sample.JPG"
img = feature_extractor(image_name, model_popped)
pred_caption = pred_caption_greedy(img, use_model, max_length, word_to_int, int_to_word)
draw(image_name, pred_caption)
print("\nInput :", image_name)
print("Caption :", pred_caption)
###Output
D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\ipykernel_launcher.py:14: DeprecationWarning: scipy.product is deprecated and will be removed in SciPy 2.0.0, use numpy.product instead
D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\ipykernel_launcher.py:17: DeprecationWarning: scipy.histogram is deprecated and will be removed in SciPy 2.0.0, use numpy.histogram instead
D:\s0um\Softwares\Anaconda3\envs\keras-gpu\lib\site-packages\ipykernel_launcher.py:18: DeprecationWarning: scipy.argmax is deprecated and will be removed in SciPy 2.0.0, use numpy.argmax instead
|
step2_1_model1_predict_save.ipynb | ###Markdown
Split the data (articles that are too long are split up, or are 404 pages), feed it into the first model to predict names, and save the results.
###Code
import numpy as np
import pandas as pd

# train_full_content.csv was built from the data provided by the competition organizers, extended via web crawling
df = pd.read_csv('train_full_content.csv', encoding='utf-8', index_col=0)
df.head()
# hyperlink: the article link
# full text of the article
# list of AML (anti-money-laundering) person names
name_list = df.name_list
all_name_list = []
for i in range(len(name_list)):
ii = [j for j in name_list.iloc[i][1:-1].replace(" ", "").split("'") if len(j)>=2]
s = []
for k in ii:
#print(k)
s.append(k)
all_name_list.append(s)
df['name_list'] = all_name_list
def split_content(x):
if len(x)<=500:
return [x]
elif (len(x)>=500) and(len(x)<1000):
return [ x[:500+3],x[500-6:] ]
elif (len(x)>=1000) and (len(x)<1500):
return [ x[:500+3],x[500-3:1000+3] ,x[1000-3:]]
elif (len(x)>=1500) and (len(x)<2000):
return [ x[:500+3],x[500-3:1000+3] ,x[1000-3:1500+3], x[1500-3:]]
else:
return [x[:500+3],x[500-3:1000+3] ,x[1000-3:1500+3], x[1500-3:2000-3] , x[2000-3:2000-3+500] ]
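# Illustration (hypothetical input): a 1,200-character article falls in the 1000-1500 branch above and is
# returned as three chunks, x[:503], x[497:1003], x[997:], with a few characters of overlap between chunks,
# presumably so that a person's name straddling a chunk boundary is not lost.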
# Split long articles into chunks of roughly 500 characters; the resulting list holds three to five segments of varying length
df['article_split'] = df['article'].apply(lambda x :split_content(x))
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import BertTokenizer, BertConfig
from keras.preprocessing.sequence import pad_sequences #2.2.4
from sklearn.model_selection import train_test_split
from tqdm import tqdm, trange
tokenizer_chinese = BertTokenizer.from_pretrained("bert-base-chinese", do_lower_case=False)
# Define the model's classes (NER tags)
tag_values = ['O',
'B_person_name',
'M_person_name',
'E_person_name',
'PAD']
tag2idx = { 'O': 0,
'B_person_name': 1,
'M_person_name': 2,
'E_person_name': 3,
'PAD': 4}
# load model
PATH = 'step1_1output_bertmode_step1_ner.pth'#'bertmode_asia.pth'
model = torch.load(PATH)
model.eval()
# Find all the names in each article
cols_name = [ ]
for p in range(len(df)):
row_name = []
for sentence in df['article_split'].iloc[p]:
        # BERT prediction
tokenized_sentence = tokenizer_chinese.encode(sentence)
input_ids = torch.tensor([tokenized_sentence]).cuda()
with torch.no_grad():
output = model(input_ids)
label_indices = np.argmax(output[0].to('cpu').numpy(), axis=2)
tokens = tokenizer_chinese.convert_ids_to_tokens(input_ids.to('cpu').numpy()[0])
new_tokens, new_labels = [], []
for token, label_idx in zip(tokens, label_indices[0]):
if token.startswith("##"):
new_tokens[-1] = new_tokens[-1] + token[2:]
else:
new_labels.append(tag_values[label_idx])#ex:['O','O','O','O',...]
new_tokens.append(token)# ex:['[CLS]', '益', '公', '司', '債', '或', '新',...]
texto=''
for i in range(len(new_labels)):
if new_labels[i] !='O':
texto+= new_tokens[i]
else:
texto+='O' #'OOO張堯勇OOOOOOOOOOOOO'
for i in texto.split('O'):
            if len(i)>1: # e.g. ['張堯勇', '張堯勇']; single characters or blanks are dropped
row_name.append(i)
uniq_name = list(set(row_name)) #['鄭心芸', '巴菲特', '詹姆斯·西蒙斯', '堯勇', '索羅斯', '張堯勇']
cols_name.append(uniq_name)
list(set(row_name))
cols_name[:10]
df['all_name'] = cols_name
df.head()
import pickle
df.to_pickle("step2_1_output_train_full_data.pkl")
###Output
_____no_output_____ |
Final/DATA643_Final_Project.ipynb | ###Markdown
DATA 643 - Final Project
Sreejaya Nair and Suman K Polavarapu
Description: *Explore the Apache Spark Cluster Computing Framework by analysing the movielens dataset. Provide recommendations using MLLib*
###Code
import os
import sys
import urllib2
import collections
import matplotlib.pyplot as plt
import math
from time import time, sleep
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Prepare the pySpark Environment
###Code
spark_home = os.environ.get('SPARK_HOME', None)
if not spark_home:
raise ValueError("Please set SPARK_HOME environment variable!")
# Add the py4j to the path.
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'C:/spark/python/lib/py4j-0.9-src.zip'))
###Output
_____no_output_____
###Markdown
Initialize Spark Context
###Code
from pyspark.mllib.recommendation import ALS, Rating
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster("local[*]").setAppName("MovieRecommendationsALS").set("spark.executor.memory", "2g")
sc = SparkContext(conf = conf)
###Output
_____no_output_____
###Markdown
Load and Analyse Data
###Code
def loadMovieNames():
movieNames = {}
for line in urllib2.urlopen("https://raw.githubusercontent.com/psumank/DATA643/master/WK5/ml-100k/u.item"):
fields = line.split('|')
movieNames[int(fields[0])] = fields[1].decode('ascii', 'ignore')
return movieNames
print "\nLoading movie names..."
nameDict = loadMovieNames()
print "\nLoading ratings data..."
data = sc.textFile("file:///C:/Users/p_sum/.ipynb_checkpoints/ml-100k/u.data")
ratings = data.map(lambda x: x.split()[2])
#action -- just to trigger the driver [ lazy evaluation ]
rating_results = ratings.countByValue()
sortedResults = collections.OrderedDict(sorted(rating_results.items()))
for key, value in sortedResults.iteritems():
print "%s %i" % (key, value)
###Output
1 6110
2 11370
3 27145
4 34174
5 21201
###Markdown
Ratings Histogram
###Code
ratPlot = plt.bar(range(len(sortedResults)), sortedResults.values(), align='center')
plt.xticks(range(len(sortedResults)), list(sortedResults.keys()))
ratPlot[3].set_color('g')
print "Ratings Histogram"
###Output
Ratings Histogram
###Markdown
Most popular movies
###Code
movies = data.map(lambda x: (int(x.split()[1]), 1))
movieCounts = movies.reduceByKey(lambda x, y: x + y)
flipped = movieCounts.map( lambda (x, y) : (y, x))
sortedMovies = flipped.sortByKey(False)
sortedMoviesWithNames = sortedMovies.map(lambda (count, movie) : (nameDict[movie], count))
results = sortedMoviesWithNames.collect()
subset = results[0:10]
popular_movieNm = [str(i[0]) for i in subset]
popularity_strength = [int(i[1]) for i in subset]
popMovplot = plt.barh(range(len(subset)), popularity_strength, align='center')
plt.yticks(range(len(subset)), popular_movieNm)
popMovplot[0].set_color('g')
print "Most Popular Movies from the Dataset"
###Output
Most Popular Movies from the Dataset
###Markdown
Similar Movies
Find similar movies for a given movie using cosine similarity.
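For two movies $A$ and $B$, the similarity computed below is the cosine similarity of their rating vectors over users who rated both,
$$\mathrm{sim}(A,B) = \frac{\sum_{u} r_{uA}\, r_{uB}}{\sqrt{\sum_{u} r_{uA}^{2}}\,\sqrt{\sum_{u} r_{uB}^{2}}},$$
where $r_{uA}$ is user $u$'s rating of movie $A$; the number of co-rating pairs is kept alongside the score as a strength measure.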
###Code
ratingsRDD = data.map(lambda l: l.split()).map(lambda l: (int(l[0]), (int(l[1]), float(l[2]))))
ratingsRDD.takeOrdered(10, key = lambda x: x[0])
ratingsRDD.take(4)
# Movies rated by same user. ==> [ user ID ==> ( (movieID, rating), (movieID, rating)) ]
userJoinedRatings = ratingsRDD.join(ratingsRDD)
userJoinedRatings.takeOrdered(10, key = lambda x: x[0])
# Remove dups
def filterDups( (userID, ratings) ):
(movie1, rating1) = ratings[0]
(movie2, rating2) = ratings[1]
return movie1 < movie2
uniqueUserJoinedRatings = userJoinedRatings.filter(filterDups)
uniqueUserJoinedRatings.takeOrdered(10, key = lambda x: x[0])
# Now key by (movie1, movie2) pairs ==> (movie1, movie2) => (rating1, rating2)
def makeMovieRatingPairs((user, ratings)):
(movie1, rating1) = ratings[0]
(movie2, rating2) = ratings[1]
return ((movie1, movie2), (rating1, rating2))
moviePairs = uniqueUserJoinedRatings.map(makeMovieRatingPairs)
moviePairs.takeOrdered(10, key = lambda x: x[0])
#collect all ratings for each movie pair and compute similarity. (movie1, movie2) = > (rating1, rating2), (rating1, rating2) ...
moviePairRatings = moviePairs.groupByKey()
moviePairRatings.takeOrdered(10, key = lambda x: x[0])
#Compute Similarity
def cosineSimilarity(ratingPairs):
numPairs = 0
sum_xx = sum_yy = sum_xy = 0
for ratingX, ratingY in ratingPairs:
sum_xx += ratingX * ratingX
sum_yy += ratingY * ratingY
sum_xy += ratingX * ratingY
numPairs += 1
numerator = sum_xy
denominator = sqrt(sum_xx) * sqrt(sum_yy)
score = 0
if (denominator):
score = (numerator / (float(denominator)))
return (score, numPairs)
moviePairSimilarities = moviePairRatings.mapValues(cosineSimilarity).cache()
moviePairSimilarities.takeOrdered(10, key = lambda x: x[0])
###Output
_____no_output_____
###Markdown
Let's find similar movies for Toy Story (Movie ID: 1)
###Code
scoreThreshold = 0.97
coOccurenceThreshold = 50
inputMovieID = 1 #Toy Story.
# Filter for movies with this sim that are "good" as defined by our quality thresholds.
filteredResults = moviePairSimilarities.filter(lambda((pair,sim)): \
(pair[0] == inputMovieID or pair[1] == inputMovieID) and sim[0] > scoreThreshold and sim[1] > coOccurenceThreshold)
#Top 10 by quality score.
results = filteredResults.map(lambda((pair,sim)): (sim, pair)).sortByKey(ascending = False).take(10)
print "Top 10 similar movies for " + nameDict[inputMovieID]
for result in results:
(sim, pair) = result
# Display the similarity result that isn't the movie we're looking at
similarMovieID = pair[0]
if (similarMovieID == inputMovieID):
similarMovieID = pair[1]
print nameDict[similarMovieID] + "\tscore: " + str(sim[0]) + "\tstrength: " + str(sim[1])
###Output
Top 10 similar movies for Toy Story (1995)
Hamlet (1996) score: 0.974543871512 strength: 67
Raiders of the Lost Ark (1981) score: 0.974084217219 strength: 273
Cinderella (1950) score: 0.974002987747 strength: 105
Winnie the Pooh and the Blustery Day (1968) score: 0.973415495885 strength: 58
Cool Hand Luke (1967) score: 0.97334234772 strength: 98
Great Escape, The (1963) score: 0.973270581613 strength: 77
African Queen, The (1951) score: 0.973151271508 strength: 101
Apollo 13 (1995) score: 0.972395120538 strength: 207
12 Angry Men (1957) score: 0.971987295102 strength: 81
Wrong Trousers, The (1993) score: 0.971814306667 strength: 90
###Markdown
Recommender using MLLib
Training the recommendation model
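As a quick reminder of what ALS is doing: the sparse user-item rating matrix is approximated as the product of two low-rank factor matrices, found by minimizing a regularized squared error of the form
$$\min_{X,Y} \sum_{(u,i)\in\mathcal{O}} \left(r_{ui} - x_u^{\top} y_i\right)^{2} + \lambda\left(\lVert x_u\rVert^{2} + \lVert y_i\rVert^{2}\right),$$
alternating between solving for the user factors $x_u$ and the item factors $y_i$ with the other held fixed (the exact regularization details depend on the MLlib version); the `rank` argument below sets the number of latent features.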
###Code
ratings = data.map(lambda l: l.split()).map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2]))).cache()
ratings.take(3)
ratings.take(1)[0]
nratings = ratings.count()
nUsers = ratings.keys().distinct().count()
nMovies = ratings.values().distinct().count()
print "We have Got %d ratings from %d users on %d movies." % (nratings, nUsers, nMovies)
# Build the recommendation model using Alternating Least Squares
#Train a matrix factorization model given an RDD of ratings given by users to items, in the form of
#(userID, itemID, rating) pairs. We approximate the ratings matrix as the product of two lower-rank matrices
#of a given rank (number of features). To solve for these features, we run a given number of iterations of ALS.
#The level of parallelism is determined automatically based on the number of partitions in ratings.
#Our ratings are in the form of ==> [userid, (movie id, rating)] ==> [ (1, (61, 4.0)), (1, (189, 3.0)) etc. ]
start = time()
seed = 5L
iterations = 10
rank = 8
model = ALS.train(ratings, rank, iterations)
duration = time() - start
print "Model trained in %s seconds" % round(duration,3)
###Output
Model trained in 4.084 seconds
###Markdown
Recommendations
###Code
#Lets recommend movies for the user id - 2
userID = 2
print "\nTop 10 recommendations:"
recommendations = model.recommendProducts(userID, 10)
for recommendation in recommendations:
print nameDict[int(recommendation[1])] + \
" score " + str(recommendation[2])
###Output
Top 10 recommendations:
Angel Baby (1995) score 7.30157994119
Burnt By the Sun (1994) score 5.91702154482
Horseman on the Roof, The (Hussard sur le toit, Le) (1995) score 5.91615270541
Duoluo tianshi (1995) score 5.72715083338
Alphaville (1965) score 5.71454149871
Boys, Les (1997) score 5.65218523752
Whole Wide World, The (1996) score 5.57786180842
Funny Face (1957) score 5.53967043305
Ruling Class, The (1972) score 5.48367186049
Once Were Warriors (1994) score 5.48150506587
|
models/Classifiers-SBC-ICA-Dogs-3-c3.ipynb | ###Markdown
Data Preparation and loading
###Code
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion() # interactive mode
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'Stanford Dogs_3/c3'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=64,
shuffle=True, num_workers=2)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
import pandas as pd
train_results = pd.DataFrame(columns=['model', 'epoch', 'epoch_loss', 'epoch_acc'])
val_results = pd.DataFrame(columns=['model', 'epoch', 'epoch_loss', 'epoch_acc'])
model_times = pd.DataFrame(columns=['Model', 'Time'])
def train_model(model, model_name, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
if(phase == 'train'):
train_results.loc[len(train_results.index)] = [model_name, epoch, float("{:.4f}".format(epoch_loss)), float("{:.4f}".format(epoch_acc))]
elif(phase == 'val'):
val_results.loc[len(val_results.index)] = [model_name, epoch, float("{:.4f}".format(epoch_loss)), float("{:.4f}".format(epoch_acc))]
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
model_times.loc[len(model_times.index)] = [model_name, str('{:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))]
# load best model weights
model.load_state_dict(best_model_wts)
return model
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
###Output
_____no_output_____
###Markdown
VGG19 with BN
###Code
torch.cuda.empty_cache()
model_ft = models.vgg19_bn(pretrained=True)
model_name = "VGG-19"
num_ftrs = model_ft.classifier[6].in_features
# Here the size of each output sample is set to 10.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 10)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=20)
###Output
_____no_output_____
###Markdown
VGG16 with BN
###Code
torch.cuda.empty_cache()
model_ft = models.vgg16_bn(pretrained=True)
model_name = "VGG-16"
num_ftrs = model_ft.classifier[6].in_features
# Here the size of each output sample is set to 10.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 10)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=20)
###Output
_____no_output_____
###Markdown
ResNet
###Code
torch.cuda.empty_cache()
model_ft = models.resnet50(pretrained=True)
model_name = "ResNet50"
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 10.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 10)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=20)
###Output
_____no_output_____
###Markdown
ResNeXt50
###Code
torch.cuda.empty_cache()
model_ft = models.resnext50_32x4d(pretrained=True)
model_name = "ResNeXt50"
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 10.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 10)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=20)
###Output
_____no_output_____
###Markdown
AlexNet
###Code
torch.cuda.empty_cache()
model_ft = models.alexnet(pretrained=True)
model_name = "AlexNet"
num_ftrs = model_ft.classifier[6].in_features
# Here the size of each output sample is set to 10.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.classifier[6] = nn.Linear(num_ftrs, 10)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, model_name, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=20)
train_results.to_excel('train_results_dogs_3_c3.xlsx')
val_results.to_excel('val_results_dogs_3_c3.xlsx')
model_times.to_excel('training_times_dogs_3_c3.xlsx')
###Output
_____no_output_____ |
python/archive/cosyne_figures.ipynb | ###Markdown
OT-based image alignment
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import affine_transform
from scipy.stats import multivariate_normal
from scipy.io import loadmat
from otimage import readers, imagerep, imagereg
from otimage.utils import plot_maxproj
idx = range(2, 8)
img_path = '/home/mn2822/Desktop/WormOT/data/zimmer/raw/mCherry_v00065-00115.hdf5'
out_dir = '/home/mn2822/Desktop/WormOT/cosyne_figs'
with readers.ZimmerReader(img_path) as reader:
for i in idx:
img = reader.get_frame(i)
plt.figure()
plot_maxproj(img)
plt.axis('off')
plt.savefig(f'{out_dir}/frame_{i}.png')
# Select frames
t1 = 6
t2 = t1 + 1
# Load two successive frames from dataset
img_path = '/home/mn2822/Desktop/WormOT/data/zimmer/raw/mCherry_v00065-00115.hdf5'
with readers.ZimmerReader(img_path) as reader:
frame_1 = reader.get_frame(t1)
frame_2 = reader.get_frame(t2)
img_shape = frame_1.shape
# Load MP components
n_mps = 50
mp_path = '/home/mn2822/Desktop/WormOT/data/zimmer/mp_components/mp_0000_0050.mat'
mp_data = loadmat(mp_path)
cov = mp_data['cov']
pts_1 = mp_data['means'][t1, 0:n_mps, :]
pts_2 = mp_data['means'][t2, 0:n_mps, :]
wts_1 = mp_data['weights'][t1, 0:n_mps, 0]
wts_2 = mp_data['weights'][t2, 0:n_mps, 0]
alpha, beta, _ = imagereg.ot_reg_linear(pts_1, pts_2, wts_1, wts_2)
# Apply linear transform to first frame to reconstruct frame at time t
inv_beta = np.linalg.inv(beta)
inv_alpha = -inv_beta @ alpha
rec_img = affine_transform(frame_1, inv_beta, inv_alpha, mode='nearest')
# MP reconstruction
#rec_pts_t = reg_data['rec_pts'][t, :, :].astype(int)
#rec_img_t = imagerep.reconstruct_image(rec_pts_t, [cov], wts_0, img_shape)
#plt.figure(figsize=(15, 15))
#plt.subplot(131)
#plot_maxproj(frame_1)
#plt.title(f'frame {t1}')
#plt.axis('off')
#plt.subplot(132)
#plot_maxproj(frame_2)
#plt.title(f'frame {t2}')
#plt.axis('off')
#plt.subplot(133)
#plot_maxproj(rec_img)
#plt.title(f'frame {t2} (reconstruction)');
#plt.axis('off')
plt.figure()
plot_maxproj(rec_img)
plt.axis('off')
plt.savefig(f'{out_dir}/trans_{t1}_{t2}.png')
###Output
_____no_output_____ |
code/Taking_A_Step_Back.ipynb | ###Markdown
So in explore_SLFV_GP.ipynb, I tried a bunch of different things on a VERY big lightcurve. But I think I'm getting ahead of myself, so I'm gonna take a step back here...
###Code
import numpy as np
import pandas as pd
from TESStools import *
import os
import warnings
from multiprocessing import Pool, cpu_count
from scipy.stats import multivariate_normal
from tqdm.notebook import tqdm
import h5py as h5
import pymc3 as pm
import pymc3_ext as pmx
import aesara_theano_fallback.tensor as tt
from celerite2.theano import terms, GaussianProcess
from pymc3_ext.utils import eval_in_model
import arviz as az
import exoplanet
print(f"exoplanet.__version__ = '{exoplanet.__version__}'")
from aesara_theano_fallback import __version__ as tt_version
from celerite2 import __version__ as c2_version
pm.__version__, pmx.__version__, tt_version, c2_version
###Output
_____no_output_____
###Markdown
Ok, here is the example data we're going to be working with. It's almost two years of TESS observations, with a year-long gap in between them.
###Code
cool_sgs = pd.read_csv('sample.csv',index_col=0)
example = cool_sgs[cool_sgs['CommonName']=='HD 269953']
tic = example.index[0]
lc, lc_smooth = lc_extract(get_lc_from_id(tic), smooth=128)
time, flux, err = lc['Time'].values, lc['Flux'].values, lc['Err'].values
###Output
_____no_output_____
###Markdown
Let's parse the lightcurve into TESS Sectors.
###Code
orbit_times = pd.read_csv('../data/orbit_times_20210629_1340.csv',skiprows=5)
sector_group = orbit_times.groupby('Sector')
sector_starts = sector_group['Start TJD'].min()
sector_ends = sector_group['End TJD'].max()
sectors = pd.DataFrame({'Sector':sector_starts.index,'Start TJD':sector_starts.values,'End TJD':sector_ends.values})
fig = plt.figure(dpi=300)
plt.scatter(time, flux, s=1, c='k')
for i,row in sectors.iterrows():
plt.axvline(x=row['Start TJD'], c='C0')
plt.axvline(x=row['End TJD'], c='C3')
plt.text(0.5*(row['Start TJD']+row['End TJD']),1.007,int(row['Sector']))
sector_lcs = []
for i,row in sectors.iterrows():
sec_lc = lc[(lc['Time']>=row['Start TJD'])&(lc['Time']<=row['End TJD'])]
if len(sec_lc) > 0:
sec_lc.insert(3,'Sector',np.tile(int(row['Sector']),len(sec_lc)))
sector_lcs.append(sec_lc)
lc_new = pd.concat(sector_lcs)
lc_new
all_sectors = np.unique(lc_new['Sector'])
this_sector = lc_new[lc_new['Sector'] == all_sectors[0]]
this_sector
this_time, this_flux, this_err = this_sector['Time'].values, this_sector['Flux'].values, this_sector['Err'].values
pseudo_NF = 0.5 / (np.mean(np.diff(this_time)))
rayleigh = 1.0 / (this_time.max() - this_time.min())
ls = LombScargle(this_time,this_flux,dy=this_err,)
freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF)
power /= len(this_time)
fig, ax = plt.subplots(2, 1, dpi=300)
ax[0].scatter(this_time, this_flux,s=1,c='k')
ax[0].plot(lc_smooth['Time'],lc_smooth['Flux'],c='C2')
ax[0].set(xlim=(this_time.min(),this_time.max()))
ax[1].loglog(freq, power)
###Output
_____no_output_____
###Markdown
Let's fit the GP to this!
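For reference, the `SHOTerm` kernel used below is a stochastically driven, damped harmonic oscillator with power spectral density
$$S(\omega) = \sqrt{\frac{2}{\pi}}\,\frac{S_0\,\omega_0^{4}}{\left(\omega^{2}-\omega_0^{2}\right)^{2} + \omega_0^{2}\,\omega^{2}/Q^{2}},$$
and celerite2's $(\sigma, \rho, \tau)$ parameterization maps onto $(S_0, \omega_0, Q)$ via $\rho = 2\pi/\omega_0$, $\tau = 2Q/\omega_0$, and $\sigma^{2} = S_0\,\omega_0\,Q$, which is why $Q = \pi\tau/\rho$ is tracked as a derived quantity in the model.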
###Code
# Here's a cute function that does that, but the mean can be any number of sinusoids!
def pm_fit_gp_sin(time, flux, err, fs=None, amps=None, phases=None, model=None, return_var=False, thin=50):
"""
Use PyMC3 to do a maximum likelihood fit for a GP + multiple periodic signals
Inputs
------
time : array-like
Times of observations
flux : array-like
Observed fluxes
err : array-like
Observational uncertainties
fs : array-like, elements are PyMC3 distributions
Array with frequencies to fit, default None (i.e., only the GP is fit)
amps : array-like, elements are PyMC3 distributions
Array with amplitudes to fit, default None (i.e., only the GP is fit)
phases : array-like, elements are PyMC3 distributions
Array with phases to fit, default None (i.e., only the GP is fit)
model : `pymc3.model.Model`
PyMC3 Model object, will fail unless given
    return_var : bool, default False
If True, returns the variance of the GP
thin : integer, default 50
Calculate the variance of the GP every `thin` points.
Returns
-------
map_soln : dict
Contains best-fit parameters and the gp predictions
logp : float
The log-likelihood of the model
bic : float
The Bayesian Information Criterion, -2 ln P + m ln N
var : float
If `return_var` is True, returns the variance of the GP
"""
assert model is not None, "Must provide a PyMC3 model object"
#Step 1: Mean model
mean_flux = pm.Normal("mean_flux", mu = 1.0, sigma=np.std(flux))
if fs is not None:
#Making a callable for celerite
mean_model = tt.sum([a * tt.sin(2.0*np.pi*f*time + phi) for a,f,phi in zip(amps,fs,phases)],axis=0) + mean_flux
#And add it to the model
pm.Deterministic("mean", mean_model)
else:
mean_model = mean_flux
mean = pm.Deterministic("mean", mean_flux)
#Step 2: Compute Lomb-Scargle Periodogram
pseudo_NF = 0.5 / (np.mean(np.diff(time)))
rayleigh = 1.0 / (time.max() - time.min())
ls = LombScargle(time,flux)
freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF)
power /= len(time)
#Step 3: Do the basic peridogram fit to guess nu_char and alpha_0
popt, pcov, resid = fit_red_noise(freq, power)
a0, tau_char, gamma, aw = popt
nu_char = 1.0/(2*np.pi*tau_char)
# A jitter term describing excess white noise (analogous to C_w)
log_jitter = pm.Uniform("log_jitter", lower=np.log(aw)-15, upper=np.log(aw)+15, testval=np.log(np.median(np.abs(np.diff(flux)))))
# A term to describe the SLF variability
# sigma is the standard deviation of the GP, tau roughly corresponds to the
#breakoff in the power spectrum. rho and tau are related by a factor of
#pi/Q (the quality factor)
#guesses for our parameters
omega_0_guess = 2*np.pi*nu_char
Q_guess = 1/np.sqrt(2)
sigma_guess = a0 * np.sqrt(omega_0_guess*Q_guess) * np.power(np.pi/2.0, 0.25)
#sigma
logsigma = pm.Uniform("log_sigma", lower=np.log(sigma_guess)-10, upper=np.log(sigma_guess)+10)
sigma = pm.Deterministic("sigma",tt.exp(logsigma))
#rho (characteristic timescale)
logrho = pm.Uniform("log_rho", lower=np.log(0.01/nu_char), upper=np.log(100.0/nu_char))
rho = pm.Deterministic("rho", tt.exp(logrho))
nuchar = pm.Deterministic("nu_char", 1.0 / rho)
#tau (damping timescale)
logtau = pm.Uniform("log_tau", lower=np.log(0.01*2.0*Q_guess/omega_0_guess),upper=np.log(100.0*2.0*Q_guess/omega_0_guess))
tau = pm.Deterministic("tau", tt.exp(logtau))
nudamp = pm.Deterministic("nu_damp", 1.0 / tau)
#We also want to track Q, as it's a good estimate of how stochastic the
#process is.
Q = pm.Deterministic("Q", np.pi*tau/rho)
kernel = terms.SHOTerm(sigma=sigma, rho=rho, tau=tau)
gp = GaussianProcess(
kernel,
t=time,
diag=err ** 2.0 + tt.exp(2 * log_jitter),
quiet=True,
)
# Compute the Gaussian Process likelihood and add it into the
# the PyMC3 model as a "potential"
gp.marginal("gp", observed=flux-mean_model)
# Compute the mean model prediction for plotting purposes
pm.Deterministic("pred", gp.predict(flux-mean_model))
# Optimize to find the maximum a posteriori parameters
map_soln = pmx.optimize()
logp = model.logp(map_soln)
# parameters are tau, sigma, Q/rho, mean, jitter, plus 3 per frequency (rho is fixed)
if fs is not None:
n_par = 5.0 + (3.0 * len(fs))
else:
n_par = 5.0
bic = -2.0*logp + n_par * np.log(len(time))
#compute variance as well...
if return_var:
eval_in_model(gp.compute(time[::thin],yerr=err[::thin]), map_soln)
mu, var = eval_in_model(gp.predict(flux[::thin], t=time[::thin], return_var=True), map_soln)
return map_soln, logp, bic, var
return map_soln, logp, bic
with pm.Model() as model:
map_soln, logp, bic = pm_fit_gp_sin(this_time, this_flux, this_err, model=model)
fig = plt.figure(dpi=300)
plt.scatter(this_time, this_flux, c='k', s=1)
plt.plot(this_time, map_soln['pred']+map_soln['mean_flux'])
# compute the residuals after subtracting the GP + mean model, then plot them
resid_flux = this_flux - (map_soln['pred']+map_soln['mean_flux'])
plt.scatter(this_time, resid_flux,c='k',s=1)
ls_resid = LombScargle(this_time,resid_flux,dy=this_err,)
freq_r,power_r=ls_resid.autopower(normalization='psd',maximum_frequency=pseudo_NF)
power_r /= len(this_time)
fig, ax = plt.subplots(2, 1, dpi=300)
ax[0].scatter(this_time, resid_flux,s=1,c='k')
ax[0].set(xlim=(this_time.min(),this_time.max()))
ax[1].loglog(freq_r, power_r)
###Output
_____no_output_____
###Markdown
Let's try this with two sectors of data!
###Code
two_sec = lc_new[lc_new['Sector'] < 3]
two_sec
time, flux, err = lc[['Time','Flux','Err']].values.T
time
def gp_multisector(lc, fs=None, amps=None, phases=None, model=None, return_var=False, thin=50):
"""
Use PyMC3 to do a maximum likelihood fit for a GP + multiple periodic
signals, but now with a twist: handles multiple sectors!
Inputs
------
    lc : `pandas.DataFrame`
Dataframe containing the lightcurve. Must have Time, Flux, Err, and
Sector as columns.
fs : array-like, elements are PyMC3 distributions
Array with frequencies to fit, default None (i.e., only the GP is fit)
amps : array-like, elements are PyMC3 distributions
Array with amplitudes to fit, default None (i.e., only the GP is fit)
phases : array-like, elements are PyMC3 distributions
Array with phases to fit, default None (i.e., only the GP is fit)
model : `pymc3.model.Model`
PyMC3 Model object, will fail unless given
    return_var : bool, default False
If True, returns the variance of the GP
thin : integer, default 50
Calculate the variance of the GP every `thin` points.
Returns
-------
map_soln : dict
Contains best-fit parameters and the gp predictions
logp : float
The log-likelihood of the model
bic : float
The Bayesian Information Criterion, -2 ln P + m ln N
var : float
If `return_var` is True, returns the variance of the GP
"""
assert model is not None, "Must provide a PyMC3 model object"
time, flux, err, sectors = lc[['Time','Flux','Err','Sector']].values.T
#Step 1: Mean model
mean_flux = pm.Normal("mean_flux", mu = 1.0, sigma=np.std(flux))
if fs is not None:
#Making a callable for celerite
mean_model = tt.sum([a * tt.sin(2.0*np.pi*f*time + phi) for a,f,phi in zip(amps,fs,phases)],axis=0) + mean_flux
#And add it to the model
pm.Deterministic("mean", mean_model)
else:
mean_model = mean_flux
mean = pm.Deterministic("mean", mean_flux)
#Step 2: Compute Lomb-Scargle Periodogram
pseudo_NF = 0.5 / (np.mean(np.diff(time)))
rayleigh = 1.0 / (time.max() - time.min())
ls = LombScargle(time,flux)
freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF)
power /= len(time)
#Step 3: Do the basic peridogram fit to guess nu_char and alpha_0
popt, pcov, resid = fit_red_noise(freq, power)
a0, tau_char, gamma, aw = popt
nu_char = 1.0/(2*np.pi*tau_char)
# A jitter term per sector describing excess white noise (analogous to C_w)
jitters = [pm.Uniform(f"log_jitter_S{int(s)}", lower=np.log(aw)-15, upper=np.log(aw)+15, testval=np.log(np.median(np.abs(np.diff(flux))))) for s in np.unique(sectors)]
# A term to describe the SLF variability, shared across sectors
#guesses for our parameters
omega_0_guess = 2*np.pi*nu_char
Q_guess = 1/np.sqrt(2)
sigma_guess = a0 * np.sqrt(omega_0_guess*Q_guess) * np.power(np.pi/2.0, 0.25)
#sigma
logsigma = pm.Uniform("log_sigma", lower=np.log(sigma_guess)-10, upper=np.log(sigma_guess)+10)
sigma = pm.Deterministic("sigma",tt.exp(logsigma))
#rho (characteristic timescale)
logrho = pm.Uniform("log_rho", lower=np.log(0.01/nu_char), upper=np.log(100.0/nu_char))
rho = pm.Deterministic("rho", tt.exp(logrho))
nuchar = pm.Deterministic("nu_char", 1.0 / rho)
#tau (damping timescale)
logtau = pm.Uniform("log_tau", lower=np.log(0.01*2.0*Q_guess/omega_0_guess),upper=np.log(100.0*2.0*Q_guess/omega_0_guess))
tau = pm.Deterministic("tau", tt.exp(logtau))
nudamp = pm.Deterministic("nu_damp", 1.0 / tau)
#We also want to track Q, as it's a good estimate of how stochastic the
#process is.
Q = pm.Deterministic("Q", np.pi*tau/rho)
kernel = terms.SHOTerm(sigma=sigma, rho=rho, tau=tau)
#A number of GP objects with shared hyperparameters
gps = [GaussianProcess(
kernel,
t=time[sectors==s],
diag=err[sectors==s] ** 2.0 + tt.exp(2 * j),
quiet=True,)
for s,j in zip(np.unique(sectors),jitters)
]
for s,gp in zip(np.unique(sectors),gps):
# Compute the Gaussian Process likelihood and add it into the
# the PyMC3 model as a "potential"
gp.marginal(f"gp_S{int(s)}", observed=(flux-mean_model)[sectors==s])
# Compute the mean model prediction for plotting purposes
pm.Deterministic(f"pred_S{int(s)}", gp.predict((flux-mean_model)[sectors==s]))
# Optimize to find the maximum a posteriori parameters
map_soln = pmx.optimize()
logp = model.logp(map_soln)
# parameters are logtau, logsigma, logrho, mean, jitter*n_sectors, plus 3 per frequency (rho is fixed)
base_par = 4 + len(np.unique(sectors))
if fs is not None:
n_par = base_par + (3.0 * len(fs))
else:
n_par = base_par
bic = -2.0*logp + n_par * np.log(len(time))
#compute variance as well...
if return_var:
eval_in_model(gp.compute(time[::thin],yerr=err[::thin]), map_soln)
mu, var = eval_in_model(gp.predict(flux[::thin], t=time[::thin], return_var=True), map_soln)
return map_soln, logp, bic, var
return map_soln, logp, bic
with pm.Model() as model_m:
map_soln, logp, bic = gp_multisector(two_sec, model=model_m)
with pm.Model() as model_all:
map_soln, logp, bic = gp_multisector(lc_new, model=model_all)
###Output
optimizing logp for variables: [log_tau, log_rho, log_sigma, log_jitter_S39, log_jitter_S38, log_jitter_S36, log_jitter_S35, log_jitter_S34, log_jitter_S33, log_jitter_S32, log_jitter_S31, log_jitter_S30, log_jitter_S29, log_jitter_S28, log_jitter_S13, log_jitter_S12, log_jitter_S11, log_jitter_S10, log_jitter_S9, log_jitter_S8, log_jitter_S6, log_jitter_S5, log_jitter_S4, log_jitter_S3, log_jitter_S2, log_jitter_S1, mean_flux]
|
notebooks/Vera's Experiments.ipynb | ###Markdown
New Word2Vec
###Code
# print "Generating %d-dim word embedding ..." %ndim
# int2ch, ch2int = get_vocab()
# ch_lists = []
# quatrains = get_quatrains()
# for idx, poem in enumerate(quatrains):
# for sentence in poem['sentences']:
# ch_lists.append(filter(lambda ch: ch in ch2int, sentence))
# # the i-th characters in the poem, used to boost Dui Zhang
# i_characters = [[sentence[j] for sentence in poem['sentences']] for j in range(len(poem['sentences'][0]))]
# for characters in i_characters:
# ch_lists.append(filter(lambda ch: ch in ch2int, characters))
# if 0 == (idx+1)%10000:
# print "[Word2Vec] %d/%d poems have been processed." %(idx+1, len(quatrains))
# print "Hold on. This may take some time ..."
# model = models.Word2Vec(ch_lists, size = ndim, min_count = 5)
# embedding = uniform(-1.0, 1.0, [VOCAB_SIZE, ndim])
# for idx, ch in enumerate(int2ch):
# if ch in model.wv:
# embedding[idx,:] = model.wv[ch]
# np.save(_w2v_path, embedding)
# print "Word embedding is saved."
###Output
_____no_output_____ |
PythonCode/experiments/benchmark_vs_others/tax-credit-data/ipynb/runtime/compute-runtimes.ipynb | ###Markdown
Prepare the environment
-----------------------
First we'll import various functions that we'll need for generating the report and configure the environment.
###Code
from os.path import join, expandvars, abspath
from joblib import Parallel, delayed
from tax_credit.framework_functions import (runtime_make_test_data,
runtime_make_commands,
clock_runtime)
## project_dir should be the directory where you've downloaded (or cloned) the
## tax-credit repository.
project_dir = '../..'
data_dir = join(project_dir, "data")
results_dir = join(project_dir, 'temp_results_runtime')
runtime_results = join(results_dir, 'runtime_results.txt')
tmpdir = join(results_dir, 'tmp')
ref_db_dir = join(project_dir, 'data/ref_dbs/gg_13_8_otus')
ref_seqs = join(ref_db_dir, '99_otus_clean.fasta')
ref_taxa = join(ref_db_dir, '99_otu_taxonomy_clean.tsv')
num_iters = 1
sampling_depths = [1, 4000] #[1] + list(range(2000,10001,2000))
###Output
_____no_output_____
###Markdown
Generate test datasets
Subsample reference sequences to create a series of test datasets and references.
###Code
runtime_make_test_data(ref_seqs, tmpdir, sampling_depths)
###Output
_____no_output_____
###Markdown
Import to qiime for q2-feature-classifier methods and train scikit-learn classifiers. We do not include the training step in the runtime analysis, because under normal operating conditions a reference dataset will be trained once, then re-used many times for any datasets that use the same marker gene (e.g., 16S rRNA). Separating the training step from the classification step was a conscious decision on the part of the designers to make classification as quick as possible and to remove redundant training steps!
###Code
! qiime tools import --input-path {ref_taxa} --output-path {ref_taxa}.qza --type "FeatureData[Taxonomy]" --input-format HeaderlessTSVTaxonomyFormat
for depth in sampling_depths:
tmpfile = join(tmpdir, str(depth)) + '.fna'
! qiime tools import --input-path {tmpfile} --output-path {tmpfile}.qza --type "FeatureData[Sequence]"
! qiime feature-classifier fit-classifier-naive-bayes --o-classifier {tmpfile}.nb.qza --i-reference-reads {tmpfile}.qza --i-reference-taxonomy {ref_taxa}.qza
###Output
[32mImported ../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy_clean.tsv as HeaderlessTSVTaxonomyFormat to ../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy_clean.tsv.qza[0m
[32mImported ../../temp_results_runtime/tmp/1.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/1.fna.qza[0m
[32mSaved TaxonomicClassifier to: ../../temp_results_runtime/tmp/1.fna.nb.qza[0m
[32mImported ../../temp_results_runtime/tmp/2000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/2000.fna.qza[0m
[32mSaved TaxonomicClassifier to: ../../temp_results_runtime/tmp/2000.fna.nb.qza[0m
[32mImported ../../temp_results_runtime/tmp/4000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/4000.fna.qza[0m
[32mSaved TaxonomicClassifier to: ../../temp_results_runtime/tmp/4000.fna.nb.qza[0m
[32mImported ../../temp_results_runtime/tmp/6000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/6000.fna.qza[0m
[32mSaved TaxonomicClassifier to: ../../temp_results_runtime/tmp/6000.fna.nb.qza[0m
[32mImported ../../temp_results_runtime/tmp/8000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/8000.fna.qza[0m
[32mSaved TaxonomicClassifier to: ../../temp_results_runtime/tmp/8000.fna.nb.qza[0m
[32mImported ../../temp_results_runtime/tmp/10000.fna as DNASequencesDirectoryFormat to ../../temp_results_runtime/tmp/10000.fna.qza[0m
[32mSaved TaxonomicClassifier to: ../../temp_results_runtime/tmp/10000.fna.nb.qza[0m
###Markdown
Preparing the method/parameter combinations
Finally we define the method/parameter combinations that we want to test and the command templates to execute. A rendered example follows the template definitions below. Template fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = reference sequences
{3} = reference taxonomy
{4} = method name
{5} = other parameters
###Code
blast_template = ('qiime feature-classifier classify-consensus-blast --i-query {1}.qza --o-classification '
'{0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')
vsearch_template = ('qiime feature-classifier classify-consensus-vsearch --i-query {1}.qza '
'--o-classification {0}/assign.tmp --i-reference-reads {2}.qza --i-reference-taxonomy {3}.qza {5}')
naive_bayes_template = ('qiime feature-classifier classify-sklearn '
'--o-classification {0}/assign.tmp --i-classifier {2}.nb.qza --i-reads {1}.qza {5}')
mindivlp_template = ('python ../../../classify_mindivlp.py -i {1} -o {0} -r {2} -t {3} -p') # PythonCode/experiments/benchmark_vs_others
# {method: template, method-specific params}
methods = {
#'blast+' : (blast_template, '--p-evalue 0.001'),
#'vsearch' : (vsearch_template, '--p-perc-identity 0.90'),
#'naive-bayes': (naive_bayes_template, '--p-confidence 0.7'),
'mindivlp': (mindivlp_template, '-s 8 -l 12 -c 1000 -q 0.01')
}
###Output
_____no_output_____
###Markdown
Generate the list of commands and run them First we will vary the size of the reference database and search a single sequence against it.
###Code
commands_a = runtime_make_commands(tmpdir, tmpdir, methods, ref_taxa,
sampling_depths, num_iters=1, subsample_ref=True)
###Output
_____no_output_____
###Markdown
Next, we will vary the number of query seqs and keep the number of ref seqs constant.
###Code
commands_b = runtime_make_commands(tmpdir, tmpdir, methods, abspath(ref_taxa),
sampling_depths, num_iters=1, subsample_ref=False)
###Output
_____no_output_____
###Markdown
Let's look at the first command in each list and the total number of commands as a sanity check...
###Code
commands_a = [('python ../../../classify_mindivlp.py -i ../../temp_results_runtime/tmp/1.fna -o ../../temp_results_runtime/tmp -r ../../temp_results_runtime/tmp/4000.fna -t ../../data/ref_dbs/gg_13_8_otus/99_otu_taxonomy_clean.tsv -p', 'mindivlp', '1', '4000', 0)]
commands_b = []
print(len(commands_a + commands_b))
print(commands_a[0])
if commands_b:
    print(commands_b[-1])
Parallel(n_jobs=1)(delayed(clock_runtime)(command, runtime_results, force=False) for command in (list(set(commands_a + commands_b))));
###Output
_____no_output_____ |
library/model_analysis/Model_Visualization.ipynb | ###Markdown
Visualizing Model States
We often simulate a simple free recall experiment and visualize model states throughout to explore their capacity to exhibit classical patterns of primacy, recency, and temporal contiguity. Any arbitrary configuration of parameters can be specified for the model, including an `experiment_count` determining the number of simulations with the given parameters.
In each experiment:
1. A specified number of unique items are each experienced once,
2. Context is momentarily drifted toward its pre-experimental state, and
3. The model freely recalls items until it stops, with retrieval of previously experienced items disallowed.
To visualize model state, we add to our `model_analysis` submodule three basic categories of visualizations. To visualize model state throughout encoding, we track the state of `context` and the amount of `support` for recall of each item based on contextual state. We also prepare a visualization of the final state of `memory` once encoding is finished. To visualize model state throughout retrieval, we similarly track `context` and `support` at each step of recall. An additional visualization makes clearer the distribution of outcome probabilities at a particular index of recall (e.g. after a second item has been recalled). While the previous sets of analyses focus on the behavior of a particular instantiation of the model, a final set of analyses focuses on model behavior across many simulations. We track recall probability as a function of serial position, the probability of starting recall with each serial position, and conditional response probability as a function of lag.
Parameter Configuration
Pick some parameters for Instance_CMR and CMR to organize comparisons.
Encoding
First we create simulations and visualizations to track model state throughout encoding of new memories. To do this, we produce two parallel functions, `encoding_states` and `plot_states`, that collect and visualize encoding states, respectively. An additional wrapper function called `encoding_visualizations` plots these states in addition to the final overall state of model memory.
###Code
icmr_parameters = {
}
cmr_parameters = {
}
#hide
import numpy as np
def encoding_states(model):
"""
Tracks state of context, and item supports across encoding. Model is also advanced to a state of fully encoded
memories.
**Required model attributes**:
- item_count: specifies number of items encoded into memory
- context: vector representing an internal contextual state
- experience: adding a new trace to the memory model
- activations: function returning item activations given a vector probe
- outcome_probabilities: function returning item supports given a set of activations
**Returns** array representations of context and support for retrieval of each item at each increment of item
encoding. Each has shape model.item_count by model.item_count + 1.
"""
experiences = np.eye(model.item_count, model.item_count + 1, 1)
cmr_experiences = np.eye(model.item_count, model.item_count)
encoding_contexts, encoding_supports = model.context, []
# track model state across experiences
for i in range(len(experiences)):
try:
model.experience(experiences[i].reshape((1, -1)))
except ValueError:
# special case for CMR
model.experience(cmr_experiences[i].reshape((1, -1)))
# track model contexts and item supports
encoding_contexts = np.vstack((encoding_contexts, model.context))
if model.__class__.__name__ == 'CMR':
activation_cue = lambda model: model.context
else:
activation_cue = lambda model: np.hstack((np.zeros(model.item_count + 1), model.context))
if len(encoding_supports) > 0:
encoding_supports = np.vstack((encoding_supports, model.outcome_probabilities(activation_cue(model))))
else:
encoding_supports = model.outcome_probabilities(activation_cue(model))
return encoding_contexts, encoding_supports
show_doc(encoding_states, title_level=3)
# hide
# collapse_input
import seaborn as sns
import matplotlib.pyplot as plt
def plot_states(matrix, title, figsize=(15, 15), savefig=False):
"""
Plots an array of model states as a value-annotated heatmap with an arbitrary title.
**Arguments**:
- matrix: an array of model states, ideally with columns representing unique feature indices and rows
representing unique update indices
- title: a title for the generated plot, ideally conveying what array values represent at each entry
- savefig: boolean deciding whether generated figure is saved (True if Yes)
"""
plt.figure(figsize=figsize)
sns.heatmap(matrix, annot=True, linewidths=.5)
plt.title(title)
plt.xlabel('Feature Index')
plt.ylabel('Update Index')
if savefig:
plt.savefig('figures/{}.jpeg'.format(title).replace(' ', '_').lower(), bbox_inches='tight')
plt.show()
show_doc(plot_states, title_level=3)
def encoding_visualizations(model, savefig=True):
"""
Plots encoding contexts, encoding supports as heatmaps.
**Required model attributes**:
- item_count: specifies number of items encoded into memory
- context: vector representing an internal contextual state
- experience: adding a new trace to the memory model
- activations: function returning item activations given a vector probe
- outcome_probabilities: function returning item supports given a set of activations
- memory: a unitary representation of the current state of memory
**Also** requires savefig: boolean deciding if generated figure is saved
"""
encoding_contexts, encoding_supports = encoding_states(model)
plot_states(encoding_contexts, 'Encoding Contexts', savefig=savefig)
plot_states(encoding_supports, 'Supports For Each Item At Each Increment of Encoding', savefig=savefig)
try:
show_doc(encoding_visualizations, title_level=3)
except:
pass
###Output
_____no_output_____
###Markdown
Demo ICMR
###Code
from instance_cmr.models import InstanceCMR
model = InstanceCMR(**icmr_parameters)
encoding_visualizations(model)
###Output
_____no_output_____
###Markdown
 CMR
###Code
from instance_cmr.models import CMR
model = CMR(**cmr_parameters)
encoding_visualizations(model)
###Output
_____no_output_____
###Markdown
 Latent Mfc/Mcf
###Code
def latent_mfc_mcf(model):
"""
Generates the latent $M^{FC}$ and $M^{CF}$ in the specified ICMR instance.
For exploring and demonstrating model equivalence, we can calculate for any state of ICMR's dual-store memory
array $M$ a corresponding $M^{FC}$ (or $M^{CF}$) by computing for each orthogonal $f_i$ (or $c_i$) the model's
corresponding echo representation.
"""
encoding_states(model)
# start by finding latent mfc: the contextual representation cued when each orthogonal $f_i$ is cued
latent_mfc = np.zeros((model.item_count, model.item_count+1))
cue = np.zeros(model.item_count*2 + 2)
for i in range(model.item_count):
cue *= 0
cue[i+1] = 1
latent_mfc[i] = model.echo(cue)[model.item_count + 1:]
# now the latent mcf
latent_mcf = np.zeros((model.item_count+1, model.item_count))
for i in range(model.item_count+1):
cue *= 0
cue[model.item_count+1+i] = 1
latent_mcf[i] = model.echo(cue)[1:model.item_count + 1] # start at 1 due to dummy column in F
# plotting
return latent_mfc, latent_mcf
if True:
# ICMR
    model = InstanceCMR(**icmr_parameters)
latent_mfc, latent_mcf = latent_mfc_mcf(model)
print(model.__class__.__name__)
plot_states(model.memory, 'ICMR Memory')
plot_states(latent_mfc, 'ICMR Latent Mfc')
plot_states(latent_mcf, 'ICMR Latent Mcf')
# CMR
    model = CMR(**cmr_parameters)
encoding_states(model)
print(model.__class__.__name__)
plot_states(model.mfc, 'CMR Mfc')
plot_states(model.mcf, 'CMR Mcf')
###Output
_____no_output_____
###Markdown
Retrieval
Tracking model state across each step of retrieval. Since it's stochastic, these values change with each random seed. An additional optional parameter `first_recall_item` can control which item is recalled first by the model (`0` denotes termination of recall while actual items are 1-indexed); it is useful for testing hypotheses about model dynamics during recall. We leave the parameter set at `None`, for now, indicating no controlled first recall.
###Code
import numpy as np
def retrieval_states(model, first_recall_item=None):
"""
Tracks state of context, and item supports across retrieval. Model is also advanced into a state of
completed free recall.
**Required model attributes**:
- item_count: specifies number of items encoded into memory
- context: vector representing an internal contextual state
- experience: adding a new trace to the memory model
- activations: function returning item activations given a vector probe
- outcome_probabilities: function returning item supports given a set of activations
- free_recall: function that freely recalls a given number of items or until recall stops
- state: indicates whether model is encoding or engaged in recall with a string
**Also** optionally uses first_recall_item: can specify an item for first recall
**Returns** array representations of context and support for retrieval of each item at each increment of item
retrieval. Also returns recall train associated with simulation.
"""
if model.__class__.__name__ == 'CMR':
activation_cue = lambda model: model.context
else:
activation_cue = lambda model: np.hstack((np.zeros(model.item_count + 1), model.context))
# encoding items, presuming model is freshly initialized
encoding_states(model)
retrieval_contexts, retrieval_supports = model.context, model.outcome_probabilities(activation_cue(model))
# pre-retrieval distraction
model.free_recall(0)
retrieval_contexts = np.vstack((retrieval_contexts, model.context))
retrieval_supports = np.vstack((retrieval_supports, model.outcome_probabilities(activation_cue(model))))
# optional forced first item recall
if first_recall_item is not None:
model.force_recall(first_recall_item)
retrieval_contexts = np.vstack((retrieval_contexts, model.context))
retrieval_supports = np.vstack((retrieval_supports, model.outcome_probabilities(activation_cue(model))))
# actual recall
while model.retrieving:
model.free_recall(1)
retrieval_contexts = np.vstack((retrieval_contexts, model.context))
retrieval_supports = np.vstack((retrieval_supports, model.outcome_probabilities(activation_cue(model))))
return retrieval_contexts, retrieval_supports, model.recall[:model.recall_total]
try:
show_doc(retrieval_states, title_level=3)
except:
pass
def outcome_probs_at_index(model, support_index_to_plot=1, savefig=True):
"""
Plots outcome probability distribution at a specific index of free recall.
**Required model attributes**:
- item_count: specifies number of items encoded into memory
- context: vector representing an internal contextual state
- experience: adding a new trace to the memory model
- activations: function returning item activations given a vector probe
- outcome_probabilities: function returning item supports given a set of activations
- free_recall: function that freely recalls a given number of items or until recall stops
- state: indicates whether model is encoding or engaged in recall with a string
**Other arguments**:
- support_index_to_plot: index of retrieval to plot
- savefig: whether to save or display the figure of interest
**Generates** a plot of outcome probabilities as a line graph. Also returns vector representation of the
generated probabilities.
"""
retrieval_supports = retrieval_states(model)[1]
plt.plot(np.arange(model.item_count + 1), retrieval_supports[support_index_to_plot])
plt.xlabel('Choice Index')
plt.ylabel('Outcome Probability')
plt.title('Outcome Probabilities At Recall Index {}'.format(support_index_to_plot))
plt.show()
return retrieval_supports[support_index_to_plot]
try:
show_doc(outcome_probs_at_index, title_level=3)
except:
pass
def retrieval_visualizations(model, savefig=True):
"""
Plots incremental retrieval contexts and supports, as heatmaps, and prints recalled items.
**Required model attributes**:
- item_count: specifies number of items encoded into memory
- context: vector representing an internal contextual state
- experience: adding a new trace to the memory model
- activations: function returning item activations given a vector probe
- outcome_probabilities: function returning item supports given a set of activations
**Also** uses savefig: boolean deciding whether figures are saved (True) or displayed
"""
retrieval_contexts, retrieval_supports, recall = retrieval_states(model)
plot_states(retrieval_contexts, 'Retrieval Contexts', savefig=savefig)
plot_states(retrieval_supports, 'Supports For Each Item At Each Increment of Retrieval',
savefig=savefig)
return recall
try:
show_doc(retrieval_visualizations, title_level=3)
except:
pass
###Output
_____no_output_____
###Markdown
Demo ICMR
###Code
model = InstanceCMR(**icmr_parameters)
retrieval_visualizations(model)
###Output
_____no_output_____
###Markdown
Outputs can look like... CMR
###Code
model = CMR(**cmr_parameters)
retrieval_visualizations(model)
###Output
_____no_output_____
###Markdown
 Organizational AnalysesUpon completion, the `psifr` toolbox is used to generate three plots corresponding to the contents of Figure4 in Morton & Polyn, 2016:1. Recall probability as a function of serial position2. Probability of starting recall with each serial position3. Conditional response probability as a function of lagWhereas previous visualizations were based on an arbitrary model simulation, the current figures are based onaverages over a simulation of the model some specified amount of times.
###Code
import pandas as pd
from psifr import fr
def temporal_organization_analyses(model, experiment_count, savefig=False, figsize=(15, 15), first_recall_item=None):
"""
Visualization of the outcomes of a trio of organizational analyses of model performance on a free recall
task.
**Required model attributes**:
- item_count: specifies number of items encoded into memory
- context: vector representing an internal contextual state
- experience: adding a new trace to the memory model
- free_recall: function that freely recalls a given number of items or until recall stops
**Other arguments**:
- experiment_count: number of simulations to compute curves over
- savefig: whether to save or display the figure of interest
**Returns** three plots corresponding to the contents of Figure 4 in Morton & Polyn, 2016:
1. Recall probability as a function of serial position
2. Probability of starting recall with each serial position
3. Conditional response probability as a function of lag
"""
# encode items
try:
model.experience(np.eye(model.item_count, model.item_count + 1, 1))
except ValueError:
# so we can apply to CMR
model.experience(np.eye(model.item_count, model.item_count))
# simulate retrieval for the specified number of times, tracking results in df
data = []
for experiment in range(experiment_count):
data += [[experiment, 0, 'study', i + 1, i] for i in range(model.item_count)]
for experiment in range(experiment_count):
if first_recall_item is not None:
model.force_recall(first_recall_item)
data += [[experiment, 0, 'recall', i + 1, o] for i, o in enumerate(model.free_recall())]
data = pd.DataFrame(data, columns=['subject', 'list', 'trial_type', 'position', 'item'])
merged = fr.merge_free_recall(data)
# visualizations
# spc
recall = fr.spc(merged)
g = fr.plot_spc(recall)
plt.title('Serial Position Curve')
if savefig:
plt.savefig('figures/spc.jpeg', bbox_inches='tight')
else:
plt.show()
# P(Start Recall) For Each Serial Position
prob = fr.pnr(merged)
pfr = prob.query('output <= 1')
g = fr.plot_spc(pfr).add_legend()
plt.title('Probability of Starting Recall With Each Serial Position')
if savefig:
plt.savefig('figures/pfr.jpeg', bbox_inches='tight')
else:
plt.show()
# Conditional response probability as a function of lag
crp = fr.lag_crp(merged)
g = fr.plot_lag_crp(crp)
plt.title('Conditional Response Probability')
if savefig:
plt.savefig('figures/crp.jpeg', bbox_inches='tight')
else:
plt.show()
try:
show_doc(temporal_organization_analyses, title_level=3)
except:
pass
###Output
_____no_output_____
###Markdown
Demo
###Code
from instance_cmr.models import InstanceCMR
model = InstanceCMR(**icmr_parameters)
temporal_organization_analyses(model, 100, True)
from instance_cmr.models import CMR
model = CMR(**cmr_parameters)
temporal_organization_analyses(model, 100, True)
###Output
_____no_output_____ |
House Sales_in_King_Count_USA.ipynb | ###Markdown
Data Analysis with Python House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.

- id: A notation for a house
- date: Date house was sold
- price: Price is prediction target
- bedrooms: Number of bedrooms
- bathrooms: Number of bathrooms
- sqft_living: Square footage of the home
- sqft_lot: Square footage of the lot
- floors: Total floors (levels) in house
- waterfront: House which has a view to a waterfront
- view: Has been viewed
- condition: How good the condition is overall
- grade: Overall grade given to the housing unit, based on King County grading system
- sqft_above: Square footage of house apart from basement
- sqft_basement: Square footage of the basement
- yr_built: Built year
- yr_renovated: Year when house was renovated
- zipcode: Zip code
- lat: Latitude coordinate
- long: Longitude coordinate
- sqft_living15: Living room area in 2015 (implies some renovations); this might or might not have affected the lot size area
- sqft_lot15: Lot size area in 2015 (implies some renovations)

You will require the following libraries:
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
from sklearn.linear_model import LinearRegression
%matplotlib inline
###Output
_____no_output_____
###Markdown
Module 1: Importing Data Sets Load the csv:
###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output
_____no_output_____
###Markdown
We use the method head to display the first 5 rows of the dataframe.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Question 1 Display the data types of each column using the attribute dtype, then take a screenshot and submit it, include your code in the image. We use the method describe to obtain a statistical summary of the dataframe.
###Code
df.describe()
###Output
_____no_output_____
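For Question 1 above, a minimal sketch (assuming `df` is the dataframe loaded earlier):

```python
# Display the data type of each column
df.dtypes
```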
###Markdown
Module 2: Data Wrangling

Question 2: Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it; make sure the inplace parameter is set to True.

We can see we have missing values for the columns bedrooms and bathrooms.
###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
_____no_output_____
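One possible approach to Question 2 above (a minimal sketch, assuming `df` still contains the 'id' and 'Unnamed: 0' columns):

```python
# Drop the identifier columns in place and summarize the remaining features
df.drop(['id', 'Unnamed: 0'], axis=1, inplace=True)
df.describe()
```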
###Markdown
We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms' using the method replace(). Don't forget to set the inplace parameter to True
###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
###Output
_____no_output_____
###Markdown
We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace(). Don't forget to set the inplace parameter to True.
###Code
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
_____no_output_____
###Markdown
Module 3: Exploratory Data Analysis

Question 3: Use the method value_counts to count the number of houses with unique floor values, and use the method .to_frame() to convert it to a dataframe.

Question 4: Use the function boxplot in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers.

Question 5: Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.

We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
###Code
df.corr()['price'].sort_values()
###Output
_____no_output_____
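Minimal sketches for Questions 3-5 above (assuming seaborn is imported as `sns`, as in the imports cell):

```python
# Question 3: count houses per unique floor value, as a dataframe
df['floors'].value_counts().to_frame()

# Question 4: compare price outliers with and without a waterfront view
sns.boxplot(x='waterfront', y='price', data=df)

# Question 5: check how sqft_above relates to price
sns.regplot(x='sqft_above', y='price', data=df)
```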
###Markdown
Module 4: Model Development

We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
###Code
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X,Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 6: Fit a linear regression model to predict the 'price' using the feature 'sqft_living', then calculate the R^2. Take a screenshot of your code and the value of the R^2.

Question 7: Fit a linear regression model to predict the 'price' using the list of features:
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
###Output
_____no_output_____
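Minimal sketches for Questions 6 and 7 above, reusing the `features` list defined in the previous cell:

```python
from sklearn.linear_model import LinearRegression

# Question 6: single-feature model on sqft_living
lm_living = LinearRegression()
lm_living.fit(df[['sqft_living']], df['price'])
print(lm_living.score(df[['sqft_living']], df['price']))

# Question 7: multi-feature model on the features list
lm_multi = LinearRegression()
lm_multi.fit(df[features], df['price'])
print(lm_multi.score(df[features], df['price']))
```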
###Markdown
Then calculate the R^2. Take a screenshot of your code.

This will help with Question 8. Create a list of tuples; the first element in each tuple contains the name of the estimator:

- 'scale'
- 'polynomial'
- 'model'

The second element in each tuple contains the model constructor:

- StandardScaler()
- PolynomialFeatures(include_bias=False)
- LinearRegression()
###Code
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
###Output
_____no_output_____
###Markdown
Question 8: Use the list to create a pipeline object to predict the 'price', fit the object using the features in the list features, and calculate the R^2.

Module 5: Model Evaluation and Refinement

Import the necessary modules:
###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
###Output
_____no_output_____
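A minimal sketch for Question 8 above, using the `Input` list and the `features` list defined earlier:

```python
# Build the scale -> polynomial -> linear-regression pipeline and score it
pipe = Pipeline(Input)
pipe.fit(df[features], df['price'])
print(pipe.score(df[features], df['price']))
```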
###Markdown
We will split the data into training and testing sets:
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples:", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
###Output
_____no_output_____
###Markdown
Question 9: Create and fit a Ridge regression object using the training data, set the regularization parameter to 0.1, and calculate the R^2 using the test data.
###Code
from sklearn.linear_model import Ridge
###Output
_____no_output_____
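A minimal sketch for Question 9 above, fitting on the training split and scoring on the held-out test split:

```python
# Ridge regression with the regularization parameter set to 0.1
ridge = Ridge(alpha=0.1)
ridge.fit(x_train, y_train)
print(ridge.score(x_test, y_test))
```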
###Markdown
Data Analysis with Python House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.

- id: a notation for a house
- date: Date house was sold
- price: Price is prediction target
- bedrooms: Number of Bedrooms/House
- bathrooms: Number of bathrooms/bedrooms
- sqft_living: square footage of the home
- sqft_lot: square footage of the lot
- floors: Total floors (levels) in house
- waterfront: House which has a view to a waterfront
- view: Has been viewed
- condition: How good the condition is Overall
- grade: overall grade given to the housing unit, based on King County grading system
- sqft_above: square footage of house apart from basement
- sqft_basement: square footage of the basement
- yr_built: Built Year
- yr_renovated: Year when house was renovated
- zipcode: zip code
- lat: Latitude coordinate
- long: Longitude coordinate
- sqft_living15: Living room area in 2015 (implies some renovations); this might or might not have affected the lotsize area
- sqft_lot15: lotSize area in 2015 (implies some renovations)

You will require the following libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.0 Importing the Data Load the csv:
###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output
_____no_output_____
###Markdown
We use the method head to display the first 5 rows of the dataframe.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Question 1 Display the data types of each column using the attribute dtype, then take a screenshot and submit it, include your code in the image.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We use the method describe to obtain a statistical summary of the dataframe.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
2.0 Data Wrangling Question 2 Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it, make sure the inplace parameter is set to True
###Code
df1=df.drop(['id', 'Unnamed: 0'], axis=1)
df1.describe()
###Output
_____no_output_____
###Markdown
we can see we have missing values for the columns bedrooms and bathrooms
###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 10
###Markdown
We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms' using the method replace(). Don't forget to set the inplace parameter to True.
###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
###Output
_____no_output_____
###Markdown
We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace(). Don't forget to set the inplace parameter to True.
###Code
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 0
###Markdown
3.0 Exploratory data analysis Question 3Use the method value_counts to count the number of houses with unique floor values, use the method .to_frame() to convert it to a dataframe.
###Code
df1=df["floors"].value_counts()
df1.to_frame()
###Output
_____no_output_____
###Markdown
Question 4Use the function boxplot in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers .
###Code
sns.boxplot(x="waterfront",y="price",data=df)
###Output
_____no_output_____
###Markdown
Question 5Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.
###Code
sns.regplot(x="sqft_above",y="price",data=df)
plt.ylim(0,)
###Output
_____no_output_____
###Markdown
We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
###Code
df.corr()['price'].sort_values()
###Output
_____no_output_____
###Markdown
Module 4: Model Development Import libraries
###Code
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
###Code
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm
lm.fit(X,Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 6Fit a linear regression model to predict the 'price' using the feature 'sqft_living' then calculate the R^2. Take a screenshot of your code and the value of the R^2.
###Code
X=df[['sqft_living']]
Y=df[['price']]
lm1=LinearRegression()
lm1
lm1.fit(X,Y)
lm1.score(X,Y)
###Output
_____no_output_____
###Markdown
Question 7Fit a linear regression model to predict the 'price' using the list of features:
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
###Output
_____no_output_____
###Markdown
Then calculate the R^2. Take a screenshot of your code.
###Code
X=df[["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]]
Y=df[['price']]
lm2=LinearRegression()
lm2
lm2.fit(X,Y)
lm2.score(X,Y)
###Output
_____no_output_____
###Markdown
This will help with Question 8. Create a list of tuples; the first element in each tuple contains the name of the estimator:

- 'scale'
- 'polynomial'
- 'model'

The second element in each tuple contains the model constructor:

- StandardScaler()
- PolynomialFeatures(include_bias=False)
- LinearRegression()
###Code
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
###Output
_____no_output_____
###Markdown
Question 8Use the list to create a pipeline object, predict the 'price', fit the object using the features in the list features , then fit the model and calculate the R^2
###Code
pipe=Pipeline(Input)
pipe
pipe.fit(X,Y)
pipe.score(X,Y)
###Output
/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/pipeline.py:511: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
###Markdown
Module 5: MODEL EVALUATION AND REFINEMENT import the necessary modules
###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
###Output
done
###Markdown
we will split the data into training and testing set
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features ]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
###Output
number of test samples : 3242
number of training samples: 18371
###Markdown
Question 9Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1 and calculate the R^2 using the test data.
###Code
from sklearn.linear_model import Ridge
ridge = Ridge(alpha=0.1)
ridge.fit(x_train,y_train)
ridge.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Question 10Perform a second order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, setting the regularisation parameter to 0.1. Calculate the R^2 utilising the test data provided. Take a screenshot of your code and the R^2.
###Code
Input=[('polynomial', PolynomialFeatures(include_bias=False)),('model',Ridge(alpha=0.1))]
pipe=Pipeline(Input)
pipe.fit(x_train,y_train)
pipe.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Data Analysis with Python House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.

- id: a notation for a house
- date: Date house was sold
- price: Price is prediction target
- bedrooms: Number of Bedrooms/House
- bathrooms: Number of bathrooms/bedrooms
- sqft_living: square footage of the home
- sqft_lot: square footage of the lot
- floors: Total floors (levels) in house
- waterfront: House which has a view to a waterfront
- view: Has been viewed
- condition: How good the condition is Overall
- grade: overall grade given to the housing unit, based on King County grading system
- sqft_above: square footage of house apart from basement
- sqft_basement: square footage of the basement
- yr_built: Built Year
- yr_renovated: Year when house was renovated
- zipcode: zip code
- lat: Latitude coordinate
- long: Longitude coordinate
- sqft_living15: Living room area in 2015 (implies some renovations); this might or might not have affected the lotsize area
- sqft_lot15: lotSize area in 2015 (implies some renovations)

You will require the following libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.0 Importing the Data Load the csv:
###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output
_____no_output_____
###Markdown
We use the method head to display the first 5 rows of the dataframe.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Question 1 Display the data types of each column using the attribute dtype, then take a screenshot and submit it, include your code in the image.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We use the method describe to obtain a statistical summary of the dataframe.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
2.0 Data Wrangling Question 2 Drop the columns "id" and "Unnamed: 0" from axis 1 using the method drop(), then use the method describe() to obtain a statistical summary of the data. Take a screenshot and submit it, make sure the inplace parameter is set to True
###Code
df.drop(['id','Unnamed: 0'], axis =1, inplace = True)
df.describe()
###Output
_____no_output_____
###Markdown
we can see we have missing values for the columns bedrooms and bathrooms
###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 10
###Markdown
We can replace the missing values of the column 'bedrooms' with the mean of the column 'bedrooms' using the method replace(). Don't forget to set the inplace parameter to True.
###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
###Output
_____no_output_____
###Markdown
We also replace the missing values of the column 'bathrooms' with the mean of the column 'bathrooms' using the method replace(). Don't forget to set the inplace parameter to True.
###Code
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 0
number of NaN values for the column bathrooms : 0
###Markdown
3.0 Exploratory data analysis Question 3Use the method value_counts to count the number of houses with unique floor values, use the method .to_frame() to convert it to a dataframe.
###Code
df['floors'].value_counts().to_frame()
###Output
_____no_output_____
###Markdown
Question 4Use the function boxplot in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers .
###Code
sns.boxplot(x='waterfront', y='price', data=df)
###Output
_____no_output_____
###Markdown
Question 5Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.
###Code
sns.regplot(x='sqft_above', y='price', data=df)
###Output
_____no_output_____
###Markdown
We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
###Code
df.corr()['price'].sort_values()
###Output
_____no_output_____
###Markdown
Module 4: Model Development Import libraries
###Code
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
###Code
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm
lm.fit(X,Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Question 6Fit a linear regression model to predict the 'price' using the feature 'sqft_living' then calculate the R^2. Take a screenshot of your code and the value of the R^2.
###Code
X = df[['sqft_living']]
Y = df['price']
lm2 = LinearRegression()
lm2
lm2.fit(X,Y)
lm2.score(X,Y)
###Output
_____no_output_____
###Markdown
Question 7Fit a linear regression model to predict the 'price' using the list of features:
###Code
features =df[["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]]
###Output
_____no_output_____
###Markdown
Then calculate the R^2. Take a screenshot of your code.
###Code
multi_ = LinearRegression()
multi_
multi_.fit(features, df['price'])
multi_.score(features, df['price'])
###Output
_____no_output_____
###Markdown
This will help with Question 8. Create a list of tuples; the first element in each tuple contains the name of the estimator:

- 'scale'
- 'polynomial'
- 'model'

The second element in each tuple contains the model constructor:

- StandardScaler()
- PolynomialFeatures(include_bias=False)
- LinearRegression()
###Code
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
###Output
_____no_output_____
###Markdown
Question 8Use the list to create a pipeline object, predict the 'price', fit the object using the features in the list features , then fit the model and calculate the R^2
###Code
pipe=Pipeline(Input)
pipe
pipe.fit(features,Y)
pipe.score(features,Y)
###Output
/opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/pipeline.py:511: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
###Markdown
Module 5: MODEL EVALUATION AND REFINEMENT import the necessary modules
###Code
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
###Output
done
###Markdown
we will split the data into training and testing set
###Code
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features ]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
###Output
number of test samples : 3242
number of training samples: 18371
###Markdown
Question 9Create and fit a Ridge regression object using the training data, setting the regularization parameter to 0.1 and calculate the R^2 using the test data.
###Code
from sklearn.linear_model import Ridge
Ridge_Model = Ridge(alpha=0.1)
Ridge_Model.fit(x_train, y_train)
Ridge_Model.score(x_test, y_test)
###Output
_____no_output_____
###Markdown
Question 10Perform a second order polynomial transform on both the training data and testing data. Create and fit a Ridge regression object using the training data, setting the regularisation parameter to 0.1. Calculate the R^2 utilising the test data provided. Take a screenshot of your code and the R^2.
###Code
poly_trans = PolynomialFeatures(degree=2)
x_train_poly = poly_trans.fit_transform(x_train)
x_test_poly = poly_trans.fit_transform(x_test)
Ridge_Model2 = Ridge(alpha=0.1)
Ridge_Model2.fit(x_train_poly, y_train)
Ridge_Model2.score(x_test_poly, y_test)
###Output
_____no_output_____
###Markdown
House Sales in King County, USA

This dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015.

- id: A notation for a house
- date: Date house was sold
- price: Price is prediction target
- bedrooms: Number of bedrooms
- bathrooms: Number of bathrooms
- sqft_living: Square footage of the home
- sqft_lot: Square footage of the lot
- floors: Total floors (levels) in house
- waterfront: House which has a view to a waterfront
- view: Has been viewed
- condition: How good the condition is overall
- grade: Overall grade given to the housing unit, based on King County grading system
- sqft_above: Square footage of house apart from basement
- sqft_basement: Square footage of the basement
- yr_built: Built year
- yr_renovated: Year when house was renovated
- zipcode: Zip code
- lat: Latitude coordinate
- long: Longitude coordinate
- sqft_living15: Living room area in 2015 (implies some renovations); this might or might not have affected the lot size area
- sqft_lot15: Lot size area in 2015 (implies some renovations)

You will require the following libraries:
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler,PolynomialFeatures
from sklearn.linear_model import LinearRegression
%matplotlib inline
###Output
_____no_output_____
###Markdown
Importing Data Sets Load the csv:
###Code
file_name='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/coursera/project/kc_house_data_NaN.csv'
df=pd.read_csv(file_name)
###Output
_____no_output_____
###Markdown
We use the method head to display the first 5 rows of the dataframe.
###Code
df.head()
# Checking the data types
df.dtypes
###Output
_____no_output_____
###Markdown
We use the method describe to obtain a statistical summary of the dataframe.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Data Wrangling
###Code
df.drop(['id','Unnamed: 0'],axis = 1, inplace = True)
df.describe()
###Output
_____no_output_____
###Markdown
Checking for NAN values
###Code
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 13
number of NaN values for the column bathrooms : 10
###Markdown
Replacing NAN values
###Code
mean=df['bedrooms'].mean()
df['bedrooms'].replace(np.nan,mean, inplace=True)
mean=df['bathrooms'].mean()
df['bathrooms'].replace(np.nan,mean, inplace=True)
print("number of NaN values for the column bedrooms :", df['bedrooms'].isnull().sum())
print("number of NaN values for the column bathrooms :", df['bathrooms'].isnull().sum())
###Output
number of NaN values for the column bedrooms : 0
number of NaN values for the column bathrooms : 0
###Markdown
Exploratory Data Analysis
###Code
#Using the value_counts function to count the number of unique floors and .to_frame() for getting an output in a dataframe
df['floors'].value_counts().to_frame()
#Use the boxplot function in the seaborn library to determine whether houses with a waterfront view or without a waterfront view have more price outliers.
box = sns.boxplot( x="waterfront", y='price', data=df)
#Use the function regplot in the seaborn library to determine if the feature sqft_above is negatively or positively correlated with price.
x = df['sqft_above']
y = df['price']
sns.regplot(x,y)
# We can use the Pandas method corr() to find the feature other than price that is most correlated with price.
df.corr()['price'].sort_values()
###Output
_____no_output_____
###Markdown
Model Development
###Code
#We can fit a linear regression model using the longitude feature 'long' and calculate the R^2.
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X,Y)
lm.score(X, Y)
X = df[['long']]
Y = df['price']
lm = LinearRegression()
lm.fit(X,Y)
lm.score(X, Y)
#Fit a linear regression model to predict the 'price' using the list of features below, then calculate the R^2.
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features]
Y = df['price']
lm = LinearRegression()
lm
lm.fit(X,Y)
lm.score(X, Y)
###Output
_____no_output_____
###Markdown
Create a list of tuples; the first element in each tuple contains the name of the estimator:

- 'scale'
- 'polynomial'
- 'model'

The second element in each tuple contains the model constructor:

- StandardScaler()
- PolynomialFeatures(include_bias=False)
- LinearRegression()
###Code
Input=[('scale',StandardScaler()),('polynomial', PolynomialFeatures(include_bias=False)),('model',LinearRegression())]
#Use the list to create a pipeline object to predict the 'price', fit the object using the features in the list features, and calculate the R^2.
pipe=Pipeline(Input)
pipe
pipe.fit(df[features], df['price'])
prediction = pipe.predict( df[features] )
print (prediction)
pipe.score(X,Y)
###Output
[351928.15625 560712.15625 454712.15625 ... 419512.15625 458352.15625
419512.15625]
###Markdown
Model Evaluation and Refinement
###Code
#Import the necessary modules:
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
print("done")
#We will split the data into training and testing sets:
features =["floors", "waterfront","lat" ,"bedrooms" ,"sqft_basement" ,"view" ,"bathrooms","sqft_living15","sqft_above","grade","sqft_living"]
X = df[features]
Y = df['price']
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
print("number of test samples:", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
from sklearn.linear_model import Ridge
RG = Ridge (alpha = 0.1)
RG.fit (x_train, y_train)
RG.score (x_test, y_test)
P2 = PolynomialFeatures (degree = 2)
x_train_P2 = P2.fit_transform(x_train)
x_test_P2 = P2.fit_transform(x_test)
RG2 = Ridge (alpha = 0.1)
RG2.fit (x_train_P2, y_train)
RG2.score (x_test_P2, y_test)
###Output
_____no_output_____ |
textual_augmenter.ipynb | ###Markdown
Example of Textual Augmenter Usage:

* [Character Augmenter](chara_aug)
  * [OCR](ocr_aug)
  * [Keyboard](keyboard_aug)
  * [Random](random_aug)
* [Word Augmenter](word_aug)
  * [Spelling](spelling_aug)
  * [Word Embeddings](word_embs_aug)
  * [TF-IDF](tfidf_aug)
  * [Contextual Word Embeddings](context_word_embs_aug)
  * [Synonym](synonym_aug)
  * [Antonym](antonym_aug)
  * [Random Word](random_word_aug)
  * [Split](split_aug)
  * [Back Translation](back_translation_aug)
  * [Reserved Word](reserved_aug)
* [Sentence Augmenter](sent_aug)
  * [Contextual Word Embeddings for Sentence](context_word_embs_sentence_aug)
  * [Abstractive Summarization](abst_summ_aug)
###Code
import os
os.environ["MODEL_DIR"] = '../model'
###Output
_____no_output_____
###Markdown
Config
###Code
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
import nlpaug.flow as nafc
from nlpaug.util import Action
text = 'The quick brown fox jumps over the lazy dog .'
print(text)
###Output
The quick brown fox jumps over the lazy dog .
###Markdown
Character Augmenter

Augmenting data at the character level. Possible scenarios include image-to-text and chatbots. When recognizing text from an image, we need an optical character recognition (OCR) model, but OCR introduces errors such as confusing "o" and "0". `OCRAug` simulates these errors to perform the data augmentation. For chatbots, typos still occur even though most applications come with word correction. Therefore, `KeyboardAug` is introduced to simulate this kind of error.

OCR Augmenter

Substitute character by pre-defined OCR error
###Code
aug = nac.OcrAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['The quick bkown fox jumps ovek the lazy dog .', 'The quick 6rown fox jumps ovek the lazy dog .', 'The quick brown f0x jomps over the la2y dog .']
###Markdown
Keyboard Augmenter Substitute character by keyboard distance
###Code
aug = nac.KeyboardAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown Gox juJps ocer the lazy dog .
###Markdown
Random Augmenter Insert character randomly
###Code
aug = nac.RandomCharAug(action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
T3he quicNk @brown fEox juamps $over th6e la1zy d*og
###Markdown
Substitute character randomly
###Code
aug = nac.RandomCharAug(action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
ThN qDick brow0 foB jumks oveE t+e laz6 dBg
###Markdown
Swap character randomly
###Code
aug = nac.RandomCharAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
Hte quikc borwn fxo jupms ovre teh lzay dgo
###Markdown
Delete character randomly
###Code
aug = nac.RandomCharAug(action="delete")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
Te quic rown fx jump ver he laz og
###Markdown
Word Augmenter

Besides character augmentation, the word level is important as well. We make use of word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), fasttext (Joulin et al., 2016), BERT (Devlin et al., 2018) and WordNet to insert and substitute similar words. `Word2vecAug`, `GloVeAug` and `FasttextAug` use word embeddings to find the most similar group of words to replace the original word. On the other hand, `BertAug` uses language models to predict possible target words. `WordNetAug` uses a statistical approach to find similar groups of words.

Spelling Augmenter

Substitute word by spelling mistake words dictionary
###Code
aug = naw.SpellingAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
aug = naw.SpellingAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['They quick browb fox jumps over se lazy dog.', 'The quikly brown fox jumps over tge lazy dod.', 'Tha quick brown fox jumps ower their lazy dog.']
###Markdown
Word Embeddings Augmenter Insert word randomly by word embeddings similarity
###Code
# model_type: word2vec, glove or fasttext
# note: model_dir is not defined earlier in this notebook; the path below is an assumed location for the pretrained vectors
model_dir = os.environ.get("MODEL_DIR", '../model') + '/'
aug = naw.WordEmbsAug(
model_type='word2vec', model_path=model_dir+'GoogleNews-vectors-negative300.bin',
action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The quick brown fox jumps Alzeari over the lazy Superintendents dog
###Markdown
Substitute word by word2vec similarity
###Code
# model_type: word2vec, glove or fasttext
aug = naw.WordEmbsAug(
model_type='word2vec', model_path=model_dir+'GoogleNews-vectors-negative300.bin',
action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The easy brown fox jumps around the lazy dog
###Markdown
TF-IDF Augmenter Insert word by TF-IDF similarity
###Code
aug = naw.TfIdfAug(
model_path=os.environ.get("MODEL_DIR"),
action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
sinks The quick brown fox jumps over the lazy Sidney dog
###Markdown
Substitute word by TF-IDF similarity
###Code
aug = naw.TfIdfAug(
model_path=os.environ.get("MODEL_DIR"),
action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The quick brown fox Baked over the polygraphy dog
###Markdown
Contextual Word Embeddings Augmenter Insert word by contextual word embeddings (BERT, DistilBERT, RoBERTA or XLNet)
###Code
aug = naw.ContextualWordEmbsAug(
model_path='bert-base-uncased', action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
even the quick brown fox usually jumps over the lazy dog
###Markdown
Substitute word by contextual word embeddings (BERT, DistilBERT, RoBERTA or XLNet)
###Code
aug = naw.ContextualWordEmbsAug(
model_path='bert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = naw.ContextualWordEmbsAug(
model_path='distilbert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = naw.ContextualWordEmbsAug(
model_path='roberta-base', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps Into the bull dog .
###Markdown
Synonym Augmenter Substitute word by WordNet's synonym
###Code
aug = naw.SynonymAug(aug_src='wordnet')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The speedy brown fox jumps complete the lazy dog .
###Markdown
Substitute word by PPDB's synonym
###Code
aug = naw.SynonymAug(aug_src='ppdb', model_path=os.environ.get("MODEL_DIR") + 'ppdb-2.0-s-all')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox climbs over the lazy dog .
###Markdown
Antonym Augmenter Substitute word by antonym
###Code
aug = naw.AntonymAug()
_text = 'Good boy'
augmented_text = aug.augment(_text)
print("Original:")
print(_text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
Good boy
Augmented Text:
Good daughter
###Markdown
Random Word Augmenter Swap word randomly
###Code
aug = naw.RandomWordAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
Quick the brown fox jumps over the lazy dog .
###Markdown
Delete word randomly
###Code
aug = naw.RandomWordAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The brown jumps over the lazy dog
###Markdown
Randomly delete a contiguous set of words
###Code
aug = naw.RandomWordAug(action='crop')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps dog .
###Markdown
Split Augmenter Split word to two tokens randomly
###Code
aug = naw.SplitAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The q uick b rown fox jumps o ver the lazy dog .
###Markdown
Back Translation Augmenter
###Code
import nlpaug.augmenter.word as naw
text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
from_model_name='transformer.wmt19.en-de',
to_model_name='transformer.wmt19.de-en'
)
back_translation_aug.augment(text)
# Load models from local path
import nlpaug.augmenter.word as naw
from_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.en-de')
to_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.de-en')
text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
from_model_name=from_model_dir, from_model_checkpt='model1.pt',
to_model_name=to_model_dir, to_model_checkpt='model1.pt',
is_load_from_github=False)
back_translation_aug.augment(text)
###Output
_____no_output_____
###Markdown
Reserved Word Augmenter
###Code
import nlpaug.augmenter.word as naw
text = 'Fwd: Mail for solution'
reserved_tokens = [
['FW', 'Fwd', 'F/W', 'Forward'],
]
reserved_aug = naw.ReservedAug(reserved_tokens=reserved_tokens)
augmented_text = reserved_aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
_____no_output_____
###Markdown
Sentence Augmentation Contextual Word Embeddings for Sentence Augmenter Insert sentence by contextual word embeddings (GPT2 or XLNet)
###Code
# model_path: xlnet-base-cased or gpt2
aug = nas.ContextualWordEmbsForSentenceAug(model_path='xlnet-base-cased')
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='gpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='gpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='distilgpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps over the lazy dog . She keeps running around the house.
###Markdown
Abstractive Summarization Augmenter
###Code
article = """
The history of natural language processing (NLP) generally started in the 1950s, although work can be
found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and
Intelligence" which proposed what is now called the Turing test as a criterion of intelligence.
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian
sentences into English. The authors claimed that within three or five years, machine translation would
be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966,
which found that ten-year-long research had failed to fulfill the expectations, funding for machine
translation was dramatically reduced. Little further research in machine translation was conducted
until the late 1980s when the first statistical machine translation systems were developed.
"""
aug = nas.AbstSummAug(model_path='t5-base', num_beam=3)
augmented_text = aug.augment(article)
print("Original:")
print(article)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The history of natural language processing (NLP) generally started in the 1950s, although work can be
found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and
Intelligence" which proposed what is now called the Turing test as a criterion of intelligence.
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian
sentences into English. The authors claimed that within three or five years, machine translation would
be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966,
which found that ten-year-long research had failed to fulfill the expectations, funding for machine
translation was dramatically reduced. Little further research in machine translation was conducted
until the late 1980s when the first statistical machine translation systems were developed.
Augmented Text:
the history of natural language processing (NLP) generally started in the 1950s. work can be found from earlier periods, such as the Georgetown experiment in 1954. little further research in machine translation was conducted until the late 1980s
###Markdown
Example of Textual Augmenter Usage:

* [Character Augmenter](chara_aug)
  * [OCR](ocr_aug)
  * [Keyboard](keyboard_aug)
  * [Random](random_aug)
* [Word Augmenter](word_aug)
  * [Spelling](spelling_aug)
  * [Word Embeddings](word_embs_aug)
  * [TF-IDF](tfidf_aug)
  * [Contextual Word Embeddings](context_word_embs_aug)
  * [Synonym](synonym_aug)
  * [Antonym](antonym_aug)
  * [Random Word](random_word_aug)
  * [Split](split_aug)
  * [Back Translation](back_translation_aug)
  * [Reserved Word](reserved_aug)
* [Sentence Augmenter](sent_aug)
  * [Contextual Word Embeddings for Sentence](context_word_embs_sentence_aug)
  * [Abstractive Summarization](abst_summ_aug)
###Code
import os
os.environ["MODEL_DIR"] = '../model'
###Output
_____no_output_____
###Markdown
Config
###Code
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
import nlpaug.flow as nafc
from nlpaug.util import Action
text = 'The quick brown fox jumps over the lazy dog .'
print(text)
###Output
The quick brown fox jumps over the lazy dog .
###Markdown
Character Augmenter

Augmenting data at the character level. Possible scenarios include image-to-text and chatbots. When recognizing text from an image, we need an optical character recognition (OCR) model, but OCR introduces errors such as confusing "o" and "0". `OCRAug` simulates these errors to perform the data augmentation. For chatbots, typos still occur even though most applications come with word correction. Therefore, `KeyboardAug` is introduced to simulate this kind of error.

OCR Augmenter

Substitute character by pre-defined OCR error
###Code
aug = nac.OcrAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['The quick bkown fox jumps ovek the lazy dog .', 'The quick 6rown fox jumps ovek the lazy dog .', 'The quick brown f0x jomps over the la2y dog .']
###Markdown
Keyboard Augmenter Substitute character by keyboard distance
###Code
aug = nac.KeyboardAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown Gox juJps ocer the lazy dog .
###Markdown
Random Augmenter Insert character randomly
###Code
aug = nac.RandomCharAug(action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
T3he quicNk @brown fEox juamps $over th6e la1zy d*og
###Markdown
Substitute character randomly
###Code
aug = nac.RandomCharAug(action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
ThN qDick brow0 foB jumks oveE t+e laz6 dBg
###Markdown
Swap character randomly
###Code
aug = nac.RandomCharAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
Hte quikc borwn fxo jupms ovre teh lzay dgo
###Markdown
Delete character randomly
###Code
aug = nac.RandomCharAug(action="delete")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
Te quic rown fx jump ver he laz og
###Markdown
Word Augmenter

Besides character augmentation, the word level is important as well. We make use of word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), fasttext (Joulin et al., 2016), BERT (Devlin et al., 2018) and WordNet to insert and substitute similar words. `Word2vecAug`, `GloVeAug` and `FasttextAug` use word embeddings to find the most similar group of words to replace the original word. On the other hand, `BertAug` uses language models to predict possible target words. `WordNetAug` uses a statistical approach to find similar groups of words.

Spelling Augmenter

Substitute word by spelling mistake words dictionary
###Code
aug = naw.SpellingAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
aug = naw.SpellingAug()
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Texts:
['They quick browb fox jumps over se lazy dog.', 'The quikly brown fox jumps over tge lazy dod.', 'Tha quick brown fox jumps ower their lazy dog.']
###Markdown
Word Embeddings Augmenter Insert word randomly by word embeddings similarity
###Code
# model_type: word2vec, glove or fasttext
# note: model_dir is not defined earlier in this notebook; the path below is an assumed location for the pretrained vectors
model_dir = os.environ.get("MODEL_DIR", '../model') + '/'
aug = naw.WordEmbsAug(
model_type='word2vec', model_path=model_dir+'GoogleNews-vectors-negative300.bin',
action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The quick brown fox jumps Alzeari over the lazy Superintendents dog
###Markdown
Substitute word by word2vec similarity
###Code
# model_type: word2vec, glove or fasttext
aug = naw.WordEmbsAug(
model_type='word2vec', model_path=model_dir+'GoogleNews-vectors-negative300.bin',
action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The easy brown fox jumps around the lazy dog
###Markdown
TF-IDF Augmenter Insert word by TF-IDF similarity
###Code
aug = naw.TfIdfAug(
model_path=os.environ.get("MODEL_DIR"),
action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
sinks The quick brown fox jumps over the lazy Sidney dog
###Markdown
Substitute word by TF-IDF similarity
###Code
aug = naw.TfIdfAug(
model_path=os.environ.get("MODEL_DIR"),
action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The quick brown fox Baked over the polygraphy dog
###Markdown
Contextual Word Embeddings Augmenter Insert word by contextual word embeddings (BERT, DistilBERT, RoBERTA or XLNet)
###Code
aug = naw.ContextualWordEmbsAug(
model_path='bert-base-uncased', action="insert")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
even the quick brown fox usually jumps over the lazy dog
###Markdown
Substitute word by contextual word embeddings (BERT, DistilBERT, RoBERTA or XLNet)
###Code
aug = naw.ContextualWordEmbsAug(
model_path='bert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = naw.ContextualWordEmbsAug(
model_path='distilbert-base-uncased', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = naw.ContextualWordEmbsAug(
model_path='roberta-base', action="substitute")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps Into the bull dog .
###Markdown
Synonym Augmenter Substitute word by WordNet's synonym
###Code
aug = naw.SynonymAug(aug_src='wordnet')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The speedy brown fox jumps complete the lazy dog .
###Markdown
Substitute word by PPDB's synonym
###Code
aug = naw.SynonymAug(aug_src='ppdb', model_path=os.environ.get("MODEL_DIR") + 'ppdb-2.0-s-all')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox climbs over the lazy dog .
###Markdown
Antonym Augmenter Substitute word by antonym
###Code
aug = naw.AntonymAug()
_text = 'Good boy'
augmented_text = aug.augment(_text)
print("Original:")
print(_text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
Good boy
Augmented Text:
Good daughter
###Markdown
Random Word Augmenter Swap word randomly
###Code
aug = naw.RandomWordAug(action="swap")
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
Quick the brown fox jumps over the lazy dog .
###Markdown
Delete word randomly
###Code
aug = naw.RandomWordAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog
Augmented Text:
The brown jumps over the lazy dog
###Markdown
Delete a set of contiguous words randomly
###Code
aug = naw.RandomWordAug(action='crop')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps dog .
###Markdown
Split Augmenter Split a word into two tokens randomly
###Code
aug = naw.SplitAug()
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The q uick b rown fox jumps o ver the lazy dog .
###Markdown
Back Translation Augmenter
###Code
import nlpaug.augmenter.word as naw
text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
from_model_name='transformer.wmt19.en-de',
to_model_name='transformer.wmt19.de-en'
)
back_translation_aug.augment(text)
# Load models from local path
import nlpaug.augmenter.word as naw
from_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.en-de')
to_model_dir = os.path.join(os.environ["MODEL_DIR"], 'word', 'fairseq', 'wmt19.de-en')
text = 'The quick brown fox jumped over the lazy dog'
back_translation_aug = naw.BackTranslationAug(
from_model_name=from_model_dir, from_model_checkpt='model1.pt',
to_model_name=to_model_dir, to_model_checkpt='model1.pt',
is_load_from_github=False)
back_translation_aug.augment(text)
###Output
_____no_output_____
###Markdown
Reserved Word Augmenter
###Code
import nlpaug.augmenter.word as naw
text = 'Fwd: Mail for solution'
reserved_tokens = [
['FW', 'Fwd', 'F/W', 'Forward'],
]
reserved_aug = naw.ReservedAug(reserved_tokens=reserved_tokens)
augmented_text = reserved_aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
_____no_output_____
###Markdown
Sentence Augmentation Contextual Word Embeddings for Sentence Augmenter Insert sentence by contextual word embeddings (GPT2 or XLNet)
###Code
# model_path: xlnet-base-cased or gpt2
aug = nas.ContextualWordEmbsForSentenceAug(model_path='xlnet-base-cased')
augmented_texts = aug.augment(text, n=3)
print("Original:")
print(text)
print("Augmented Texts:")
print(augmented_texts)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='gpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='gpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
aug = nas.ContextualWordEmbsForSentenceAug(model_path='distilgpt2')
augmented_text = aug.augment(text)
print("Original:")
print(text)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The quick brown fox jumps over the lazy dog .
Augmented Text:
The quick brown fox jumps over the lazy dog . She keeps running around the house.
###Markdown
Abstractive Summarization Augmenter
###Code
article = """
The history of natural language processing (NLP) generally started in the 1950s, although work can be
found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and
Intelligence" which proposed what is now called the Turing test as a criterion of intelligence.
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian
sentences into English. The authors claimed that within three or five years, machine translation would
be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966,
which found that ten-year-long research had failed to fulfill the expectations, funding for machine
translation was dramatically reduced. Little further research in machine translation was conducted
until the late 1980s when the first statistical machine translation systems were developed.
"""
aug = nas.AbstSummAug(model_path='t5-base', num_beam=3)
augmented_text = aug.augment(article)
print("Original:")
print(article)
print("Augmented Text:")
print(augmented_text)
###Output
Original:
The history of natural language processing (NLP) generally started in the 1950s, although work can be
found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and
Intelligence" which proposed what is now called the Turing test as a criterion of intelligence.
The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian
sentences into English. The authors claimed that within three or five years, machine translation would
be a solved problem. However, real progress was much slower, and after the ALPAC report in 1966,
which found that ten-year-long research had failed to fulfill the expectations, funding for machine
translation was dramatically reduced. Little further research in machine translation was conducted
until the late 1980s when the first statistical machine translation systems were developed.
Augmented Text:
the history of natural language processing (NLP) generally started in the 1950s. work can be found from earlier periods, such as the Georgetown experiment in 1954. little further research in machine translation was conducted until the late 1980s
|
projects/causal moneyball/Causal-analysis-on-football-transfer-prices/Causal Model notebooks/Causal Inference, Interventions, and Counterfactuals.ipynb | ###Markdown
Pgmpy
###Code
import random
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import BayesianEstimator
import networkx as nx
import pylab as plt
random.seed(42)
###Output
_____no_output_____
###Markdown
Reading all the data to see the column headers
###Code
data = pd.read_csv("../data/modelling datasets/transfers_final.csv")
data.head()
data.describe(include='all').loc['unique']
data.describe(include='all')
###Output
_____no_output_____
###Markdown
Renaming all the columns to match the nodes of the DAG
###Code
data.rename(columns={"arrival_league": "AL", "year": "Y", "origin_league": "OL", "grouping_position": "P",
"arrival_club_tier": "AC", "origin_club_tier": "OC", "age_grouping_2": "A",
"transfer_price_group2": "T", "potential_fifa": "Pot", "overall_fifa": "Ovr",
"new_height": "H", "appearances": "App"}, inplace=True)
data = data[["A", "N", "Y", "P", "Pot", "Ovr", "App", "AL", "AC", "OL", "OC", "T"]]
data.head()
###Output
_____no_output_____
###Markdown
Using the functions in the PGMPY library to replicate the DAG from bnlearn
###Code
bn_model = BayesianModel([('OL', 'OC'), ('AL', 'AC'), ('Ovr', 'Pot'), ('A', 'App'), ('OC', 'T'),
('AC', 'T'), ('N', 'T'), ('Y', 'T'), ('Ovr', 'T'), ('Pot', 'T'),
('P', 'Ovr'), ('P', 'Pot'), ('A', 'T'), ('A', 'Ovr'), ('A', 'Pot'),
('App', 'T'), ('P', 'T')])
nx.draw(bn_model, with_labels=True)
plt.show()
###Output
_____no_output_____
###Markdown
Fitting the DAG with the data using a Bayesian Estimator
###Code
bn_model.fit(data, estimator=BayesianEstimator, prior_type="BDeu", equivalent_sample_size=10) # default equivalent_sample_size=5
###Output
_____no_output_____
###Markdown
The next step is to extract all the CPTs that the model fitting built, in order to transfer them to Pyro
###Code
# Demo of how to extract CPD
a = bn_model.get_cpds(node="Ovr")
a.state_names
a.get_evidence()
a.variables
a.values.T
###Output
_____no_output_____
###Markdown
Pyro
###Code
from statistics import mean
import torch
import numpy as np
import pyro
import pyro.distributions as dist
from pyro.infer import Importance, EmpiricalMarginal
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
pyro.set_rng_seed(101)
###Output
_____no_output_____
###Markdown
Defining the labels with the categories of all the variables
###Code
# labels
N_label = bn_model.get_cpds(node="N").state_names["N"]
print(N_label)
P_label = bn_model.get_cpds(node="P").state_names["P"]
print(P_label)
Age_label = bn_model.get_cpds(node="A").state_names["A"]
print(Age_label)
OC_label = bn_model.get_cpds(node="OC").state_names["OC"]
print(OC_label)
OL_label = bn_model.get_cpds(node="OL").state_names["OL"]
print(OL_label)
AC_label = bn_model.get_cpds(node="AC").state_names["AC"]
print(AC_label)
AL_label = bn_model.get_cpds(node="AL").state_names["AL"]
print(AL_label)
Ovr_label = bn_model.get_cpds(node="Ovr").state_names["Ovr"]
print(Ovr_label)
Pot_label = bn_model.get_cpds(node="Pot").state_names["Pot"]
print(Pot_label)
Y_label = bn_model.get_cpds(node="Y").state_names["Y"]
print(Y_label)
TP_label = bn_model.get_cpds(node="T").state_names["T"]
print(TP_label)
###Output
['AF', 'AS', 'EU', 'N_A', 'OC', 'SA']
['D', 'F', 'GK', 'M']
['Above30', 'Under23', 'Under30']
['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4']
['1 Bundesliga', 'Ligue 1', 'Other', 'Premier League', 'Primera Division', 'Serie A']
['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4']
['1 Bundesliga', 'Ligue 1', 'Other', 'Premier League', 'Primera Division', 'Serie A']
['65to74', '75to84', '85above', 'below65']
['65to74', '75to84', '85above', 'below65']
['After2016', 'Before2016']
['20Mto5M', '60Mto20M', 'Above60M']
###Markdown
Transferring the CPTs learnt by fitting the model in pgmpy over to Pyro for modelling
###Code
Age_probs = torch.tensor(bn_model.get_cpds(node="A").values.T)
Position_probs = torch.tensor(bn_model.get_cpds(node="P").values.T)
Nationality_probs = torch.tensor(bn_model.get_cpds(node="N").values.T)
year_probs = torch.tensor(bn_model.get_cpds(node="Y").values.T)
arrival_league_probs = torch.tensor(bn_model.get_cpds(node="AL").values.T)
origin_league_probs = torch.tensor(bn_model.get_cpds(node="OL").values.T)
arrival_club_probs = torch.tensor(bn_model.get_cpds(node="AC").values.T)
origin_club_probs = torch.tensor(bn_model.get_cpds(node="OC").values.T)
overall_probs = torch.tensor(bn_model.get_cpds(node="Ovr").values.T)
potential_probs = torch.tensor(bn_model.get_cpds(node="Pot").values.T)
app_probs = torch.tensor(bn_model.get_cpds(node="App").values.T)
transfer_price_probs = torch.tensor(bn_model.get_cpds(node="T").values.T)
###Output
_____no_output_____
###Markdown
Defining the pyro model that will be the base of all the experiments/interventions
###Code
def pyro_model():
Age = pyro.sample("A", dist.Categorical(probs=Age_probs))
Position = pyro.sample("P", dist.Categorical(probs=Position_probs))
Nationality = pyro.sample("N", dist.Categorical(probs=Nationality_probs))
Year = pyro.sample("Y", dist.Categorical(probs=year_probs))
Arrival_league = pyro.sample("AL", dist.Categorical(probs=arrival_league_probs))
Origin_league = pyro.sample('OL', dist.Categorical(probs=origin_league_probs))
Arrival_club = pyro.sample('AC', dist.Categorical(probs=arrival_club_probs[Arrival_league]))
Origin_club = pyro.sample('OC', dist.Categorical(probs=origin_club_probs[Origin_league]))
Overall = pyro.sample('Ovr', dist.Categorical(probs=overall_probs[Position][Age]))
Potential = pyro.sample('Pot',dist.Categorical(probs=potential_probs[Position][Overall][Age]))
Appearances = pyro.sample('App',dist.Categorical(probs=app_probs[Age]))
transfer_price = pyro.sample('TP', dist.Categorical(probs=transfer_price_probs[Year][Potential][Position][Overall][Origin_club][Nationality][Appearances][Arrival_club][Age]))
return{'A': Age,'P': Position,'N': Nationality,'Y': Year,'AL': Arrival_league,'OL':Origin_league,'AC':Arrival_club,'OC':Origin_club,'Ovr':Overall,'Pot':Potential, 'App':Appearances, 'TP':transfer_price}
print(pyro_model())
###Output
{'A': tensor(2), 'P': tensor(3), 'N': tensor(2), 'Y': tensor(0), 'AL': tensor(3), 'OL': tensor(4), 'AC': tensor(1), 'OC': tensor(0), 'Ovr': tensor(1), 'Pot': tensor(2), 'App': tensor(0), 'TP': tensor(0)}
###Markdown
Defining an importance_sampling helper that uses Importance Sampling to approximate the posterior, generates a list of samples using the EmpiricalMarginal algorithm, and outputs a histogram plot of the required variable
###Code
def importance_sampling(model, title, xlabel, ylabel, marginal_on="TP", label=TP_label):
posterior = pyro.infer.Importance(model, num_samples=5000).run()
marginal = EmpiricalMarginal(posterior, marginal_on)
samples = [marginal().item() for _ in range(5000)]
unique, counts = np.unique(samples, return_counts=True)
plt.bar(unique, counts, align='center', alpha=0.5)
plt.xticks(unique, label)
plt.ylabel(ylabel)
plt.xlabel(xlabel)
for i in range(len(label)):
plt.text(i, counts[i]+10, str(counts[i]))
plt.title(title)
###Output
_____no_output_____
###Markdown
Experiment 1: Intervention on Nationality = SA and Position = FThe first experiment intervenes on all South American forward players. The intuition is that, among forwards, South Americans tend to command a higher transfer fee. We want to see if our model can validate this intuition
###Code
# Intervening on south american fowards
do_on_SA_F = pyro.do(pyro_model, data={'N': torch.tensor(5), 'P': torch.tensor(1)})
importance_sampling(model=do_on_SA_F, title="P(TP | do(N = 'SA', P = 'F')) - Importance Sampling",
xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output
_____no_output_____
###Markdown
Experiment 2: Intervention on ArrivalLeague = Premier League and OriginLeague = Premier LeagueThe second experiment is to intervene on the Origin and Arrival Leagues both being the Premier League. The intuition here is that intra-league transfers within the Premier League command a higher average transfer fee.
###Code
# transfer between english teams
do_on_PremierL = pyro.do(pyro_model, data={'AL': torch.tensor(3), 'OL': torch.tensor(3)})
importance_sampling(model=do_on_PremierL,
title="P(TP | do(AL = 'Premier League', OL = 'Premier League') - Importance Sampling",
xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output
_____no_output_____
###Markdown
Experiment 3: Intervention on ArrivalClub = Tier1 and OriginClub = Tier1The third experiment is to intervene on Arrival and Origin clubs being Tier1. The intuition here is that transfers between Tier1 clubs extract a higher average Transfer fee
###Code
# intervening on transfers between tier 1 clubs
do_on_Tier1 = pyro.do(pyro_model, data={'AC': torch.tensor(0), 'OC': torch.tensor(0)})
importance_sampling(model=do_on_Tier1,
title="P(TP | do(AC = 'Tier 1', OC = 'Tier 1') - Importance Sampling",
xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output
_____no_output_____
###Markdown
Experiment 4: Intervention on Age = Under23 and Potential = 85aboveThe fourth experiment explores the intervention where Age is under 23 years old and player potential rating for the year of transfer is 85 and above. The intuition here is that a young player with a very high potential rating should extract a higher average transfer fee
###Code
# intervening on young, high-potential stars to test our intuition about the transfer strategy
do_on_young_stars = pyro.do(pyro_model, data={'A': torch.tensor(1), 'Pot': torch.tensor(2)})
importance_sampling(model=do_on_young_stars,
title="P(TP | do(A = 'Under23', Pot = '85above') - Importance Sampling",
xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output
_____no_output_____
###Markdown
Experiment 5: Intervening on Year = before 2016 and then on Y = after 2016This experiment tests something we want our model to capture. As mentioned earlier, we believe the inflation in transfer fees for high-potential players set in around 2016, so we run a before-and-after intervention to see whether the model captures this change
###Code
# intervening on year to see inflated probabilities for price brackets
# intervening on players for transfers before 2016
do_before2016 = pyro.do(pyro_model, data={'Y': torch.tensor(1)})
do_before2016_conditioned_model = pyro.condition(do_before2016, data={'Pot':torch.tensor(2)})
importance_sampling(model=do_before2016_conditioned_model,
title="P(TP | do(Y = 'Before2016', P = '85above') - Importance Sampling",
xlabel='Transfer Price', ylabel='count', marginal_on='TP')
# intervening on players for transfers after 2016
do_after2016 = pyro.do(pyro_model, data={'Y': torch.tensor(0)})
do_after2016_conditioned_model = pyro.condition(do_after2016, data={'Pot':torch.tensor(2)})
importance_sampling(model=do_after2016_conditioned_model,
title="P(TP | do(Y = 'After2016', P = '85above') - Importance Sampling",
xlabel='Transfer Price', ylabel='count', marginal_on='TP')
###Output
_____no_output_____
###Markdown
Finding the Causal Effect of all variables on Transfer Price above 20M
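The quantity estimated below is the interventional contrast $CE = P(TP = v \mid do(X = x_1)) - P(TP = v \mid do(X = x_2))$, where each interventional probability is approximated by the fraction of importance-sampling draws that fall into the price bracket $v$; this is exactly what the `causal_effect` helper defined in the next cell computes.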
###Code
def causal_effect(model1, model2, marginal_on, marginal_val, n_samples=5000):
posterior1 = pyro.infer.Importance(model1, num_samples=n_samples).run()
marginal1 = EmpiricalMarginal(posterior1, marginal_on)
samples1 = [marginal1().item() for _ in range(n_samples)]
unique1, counts1 = np.unique(samples1, return_counts=True)
posterior2 = pyro.infer.Importance(model2, num_samples=n_samples).run()
marginal2 = EmpiricalMarginal(posterior2, marginal_on)
samples2 = [marginal2().item() for _ in range(n_samples)]
unique2, counts2 = np.unique(samples2, return_counts=True)
return counts1[marginal_val] / n_samples - counts2[marginal_val] / n_samples
# Causal effect of year on Transfer price above 60M
do_before2016 = pyro.do(pyro_model, data={'Y': torch.tensor(1)})
do_after2016 = pyro.do(pyro_model, data={'Y': torch.tensor(0)})
# P(TP = Above60M | do(Y = Before2016)) - P(TP = Above60M | do(Y = After2016))
causal_effect(model1=do_before2016, model2=do_after2016, marginal_on='TP', marginal_val=2)
# Causal effect of age on Transfer price above 60M
# Age_Label = ['Above30', 'Under23', 'Under30']
do_above30 = pyro.do(pyro_model, data={'A': torch.tensor(0)})
do_under30 = pyro.do(pyro_model, data={'A': torch.tensor(2)})
# P(TP = Above60M | do(A = Above30)) - P(TP = Above60M | do(A = Under30))
causal_effect(model1=do_above30, model2=do_under30, marginal_on='TP', marginal_val=2)
# Causal effect of Potential Rating on Transfer price between 20-60M
# Potential_Label = ['65to74', '75to84', '85above', 'below65']
do_above85_pot = pyro.do(pyro_model, data={'Pot': torch.tensor(2)})
do_below65_pot = pyro.do(pyro_model, data={'Pot': torch.tensor(3)})
# P(TP = 60Mto20M | do(Pot = 85above)) - P(TP = 60Mto20M | do(Pot = below65))
causal_effect(model1=do_above85_pot, model2=do_below65_pot, marginal_on='TP', marginal_val=1)
# Causal effect of Overall Rating on Transfer price between 20-60M
# Ovr_Label = ['65to74', '75to84', '85above', 'below65']
do_above85_ovr = pyro.do(pyro_model, data={'Ovr': torch.tensor(2)})
do_below65_ovr = pyro.do(pyro_model, data={'Ovr': torch.tensor(3)})
# P(TP = 60Mto20M | do(Ovr = 85above)) - P(TP = 60Mto20M | do(Ovr = below65))
causal_effect(model1=do_above85_ovr, model2=do_below65_ovr, marginal_on='TP', marginal_val=1)
# Causal effect of Arrival Club on Transfer price between 20-60M
#AC['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4']
do_tier1 = pyro.do(pyro_model, data={'AC': torch.tensor(0)})
do_tier3 = pyro.do(pyro_model, data={'AC': torch.tensor(2)})
# P(TP = 60Mto20M | do(AC = Tier_1)) - P(TP = 60Mto20M | do(AC = Tier_3))
causal_effect(model1=do_tier1, model2=do_tier3, marginal_on='TP', marginal_val=1)
# Causal effect of Origin Club on Transfer price between 20 - 60M
#OC['Tier_1', 'Tier_2', 'Tier_3', 'Tier_4']
oc_do_tier1 = pyro.do(pyro_model, data={'OC': torch.tensor(0)})
oc_do_tier3 = pyro.do(pyro_model, data={'OC': torch.tensor(2)})
# P(TP = 60Mto20M | do(OC = Tier_1)) - P(TP = 60Mto20M | do(OC = Tier_3))
causal_effect(model1=oc_do_tier1, model2=oc_do_tier3, marginal_on='TP', marginal_val=1)
# Counterfactual query on Potential changing from 'below65' to '85above'
conditioned_model_for_cf = pyro.condition(pyro_model, data={'Pot':torch.tensor(3)})
cf_posterior = Importance(conditioned_model_for_cf, num_samples=1000).run()
marginal_cf = EmpiricalMarginal(cf_posterior, "TP")
samples_cf = [marginal_cf().item() for _ in range(1000)]
unique_cf, counts_cf = np.unique(samples_cf, return_counts=True)
tp_samples = []
for _ in range(1000):
trace_handler_1000 = pyro.poutine.trace(conditioned_model_for_cf)
trace = trace_handler_1000.get_trace()
N = trace.nodes["N"]['value']
A = trace.nodes["A"]['value']
P = trace.nodes["P"]['value']
Y = trace.nodes["Y"]['value']
Ovr = trace.nodes["Ovr"]['value']
AC = trace.nodes["AC"]['value']
OC = trace.nodes["OC"]['value']
AL = trace.nodes["AL"]['value']
OL = trace.nodes["OL"]['value']
App = trace.nodes["App"]['value']
intervention_model_q1_1000 = pyro.do(pyro_model, data={'Pot': torch.tensor(2)})
counterfact_model_q1_1000 = pyro.condition(intervention_model_q1_1000, data={'N': N, 'A':A, 'P': P,
"Y": Y, "Ovr": Ovr, "AC": AC,
"OC": OC, "AL": AL, "OL": OL,
"App": App})
tp_samples.append(counterfact_model_q1_1000()['TP'])
unique_tp, counts_tp = np.unique(tp_samples, return_counts=True)
# P (Y = 60Mto20M | Pot = below65) =
(counts_cf[1]) / 1000
# P (Y = 60Mto20M | do(Pot = above85)) =
(counts_tp[1]) / 1000
# Query: Are teams paying for 'X' nationality because they think they are great or are they actually better?
# Compare them to performance conditional on being Nationality={SA, EU, AF, AS}
# Nationality_Label = ['AF', 'AS', 'EU', 'N_A', 'OC', 'SA']
# TP_Label = ['20Mto5M', '60Mto20M', 'Above60M']
cond_on_N = pyro.condition(pyro_model, data={'TP': torch.tensor(2)})
importance_sampling(model=cond_on_N, title="P(N | TP = 'Above60M') - Importance Sampling",
xlabel='Overall Rating', ylabel='count', marginal_on='N', label=N_label)
# We determine X = EU
cond_on_SA = pyro.condition(pyro_model, data={'N': torch.tensor(5)})
importance_sampling(model=cond_on_SA, title="P(Ovr | N = 'SA') - Importance Sampling",
xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
(136)/5000 # good players in SA
cond_on_EU = pyro.condition(pyro_model, data={'N': torch.tensor(2)})
importance_sampling(model=cond_on_EU, title="P(Ovr | N = 'EU') - Importance Sampling",
xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
(176)/5000 # good players in EU
cond_on_AF = pyro.condition(pyro_model, data={'N': torch.tensor(0)})
importance_sampling(model=cond_on_AF, title="P(Ovr | N = 'AF') - Importance Sampling",
xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
(100)/5000 # good players in AF
cond_on_AS = pyro.condition(pyro_model, data={'N': torch.tensor(1)})
importance_sampling(model=cond_on_AS, title="P(Ovr | N = 'AS') - Importance Sampling",
xlabel='Overall Rating', ylabel='count', marginal_on='Ovr', label=Ovr_label)
(129)/5000 # good players in AS
###Output
_____no_output_____ |
natural-language-processing/word-embedding/word2vec.ipynb | ###Markdown
WORD2VECThe word2vec algorithm uses a neural network model to learn word associations from a large corpus of text. Each vector has some semantic meaning to it. It is created with a shallow 2-layer NN that reconstructs the linguistic context of words, and it helps in developing a context for each word using embeddings. Developed in either of two model architectures:1. CBOW - Continuous Bag of Words - the model predicts the current word from the surrounding words. (No order of context, faster, distant words also better) 2. Skip Gram - the model predicts the surrounding window from the current word. (order of context matters, slower, closer words more important)Hyper parameters involved:1. Training algorithm - hierarchical softmax and/or negative sampling. Hierarchical softmax works better for infrequent words, while negative sampling works better for frequent words and with low-dimensional vectors.2. Sub sampling - high-frequency words often provide little information; words with a frequency above a certain threshold may be subsampled to increase training speed3. Dimensionality - beyond a certain embedding size there is little additional gain; usually the size is 100 to 1000.4. Context window - the number of surrounding words - around 10 for skip-gram, 5 for CBOW (the mapping of these choices onto gensim's parameters is sketched in the training cell below) The exercise is to train our own word2vec model and play with a pretrained model
###Code
import nltk
from gensim.models import Word2Vec
from nltk.corpus import stopwords
import re
paragraph = """WORD2VEC
The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text. Each vector has some semantic meaning to it. Created with shallow 2 layered NN that reconstruct the context of words. Helps in developing context for each word using embeddings.
Developed in either of the two model archs:
CBOW - Continuous Bag of Words - model predicts current word from surrounding words. ( No order of context, faster, distant also better)
Skip Gram - model predicts surrounding windows from current word. (context is order, slower, closer ones more important)
Hyper Parameters involved:
Training algorithm - hierarchical softmax and/or negative sampling. hierarchical softmax works better for infrequent words while negative sampling works better for frequent words and better with low dimensional vectors.
Sub Sampling - High-frequency words often provide little information. Words with a frequency above a certain threshold may be subsampled to increase training speed
Dimensionality - After a point of increased embedding size, no point. Usually 100 to 1000 is the size.
Context Window - number of Surrounding words - 10 for skip gram, 5 for CBOW"""
#preprocess the data using regex
sentences = nltk.sent_tokenize(paragraph)
processed_sentences = []
for sentence in sentences:
print("\nSentence before processing : ", sentence)
sentence = re.sub('[^a-zA-Z0-9]', ' ',sentence)
sentence = re.sub('\s+', ' ', sentence)
sentence = sentence.lower()
words = nltk.word_tokenize(sentence)
processed_sentence = [word for word in words if word not in stopwords.words('english')]
processed_sentences.append(processed_sentence)
print("\nSentence after processing : ", processed_sentence)
model = Word2Vec(processed_sentences, min_count = 1)
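# Illustrative sketch (an addition, not part of the original exercise): the
# hyper-parameters discussed above map directly onto gensim's Word2Vec arguments
# (gensim 3.x names; in gensim >= 4 `size` is called `vector_size`).
sg_model = Word2Vec(processed_sentences,
                    sg=1,          # 1 = skip-gram, 0 = CBOW
                    size=100,      # embedding dimensionality (typically 100-1000)
                    window=10,     # context window: ~10 for skip-gram, ~5 for CBOW
                    negative=5,    # negative sampling (better for frequent words)
                    hs=0,          # hs=1 switches to hierarchical softmax (better for rare words)
                    sample=1e-3,   # sub-sample very frequent words
                    min_count=1)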
vocab = model.wv.vocab
for key, value in vocab.items():
print(key, " : ", value)
vector = model.wv['skip']
similar = model.wv.most_similar('skip')
similar
#pretrained model from the gensim repository (listing all available models)
import gensim.downloader
print(list(gensim.downloader.info()['models'].keys()))
glove_wiki = gensim.downloader.load('glove-wiki-gigaword-300')
glove_wiki.most_similar('wikipedia')
###Output
_____no_output_____ |
Fairness/error-fairness.ipynb | ###Markdown
Fair share of errors Consider three variables of interest:- $S$: a sensitive variable- $\hat{Y}$: a prediction or decision- $Y$: the ground truth (often unobserved)For example $Y$ could be the ability to pay for a mortgage, $\hat{Y}$ is a decision whether to offer a person a home loan, and $S$ is the person's race.
###Code
import pandas as pd
import numpy as np
from itertools import product
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection
import matplotlib.pyplot as plt
%matplotlib notebook
# Illustrating the use of itertools product
for ix,value in enumerate(product(range(2), repeat=3)):
print(ix, value)
type(value[0])
###Output
0 (0, 0, 0)
1 (0, 0, 1)
2 (0, 1, 0)
3 (0, 1, 1)
4 (1, 0, 0)
5 (1, 0, 1)
6 (1, 1, 0)
7 (1, 1, 1)
###Markdown
Explanation of the conditionsThe meaning of 0 and 1 for $Y$ and $\hat{Y}$ is pretty standard (negative and positive). We add the interpretation that a value of $S=0$ indicates a minority or disadvantaged part of the community, and $S=1$ otherwise.If $Y$ is the same as $\hat{Y}$, then there is no bias as the predictions are correct.- If they are both zero, then this is **true negative**, and we label them 0 and 1 based on the sensitive variable- If they are both one, then this is **true positive**, and we label them 0 and 1 based on the sensitive variable The interesting cases are the **false positive** and **false negative** cases.When the prediction is one but the ground truth is zero, this is **false positive** (predict positive but falsely)- If the sensitive variable is zero, this is **A**ffirmative action. The minority group gets a positive action even though, according to the ground truth, they could not manage it.- If the sensitive variable is one, this is **C**ronyism. The majority group benefits from positive action, even though not warranted.When the prediction is zero but the ground truth is one, this is **false negative** (predict negative but falsely)- If the sensitive variable is zero, this is **D**iscrimination. The minority group is negatively affected, since they should get positive action, but they did not.- If the sensitive variable is one, this is **B**acklash or Byproduct. The majority group is (as a side effect of decision making based on aggregate information) negatively affected.
###Code
def naming(y, yhat, s):
if y == 0 and yhat == 0 and s == 0:
return (y, yhat, s, 'TN0')
if y == 0 and yhat == 0 and s == 1:
return (y, yhat, s, 'TN1')
if y == 0 and yhat == 1 and s == 0:
return (y, yhat, s, 'A')
if y == 0 and yhat == 1 and s == 1:
return (y, yhat, s, 'C')
if y == 1 and yhat == 0 and s == 0:
return (y, yhat, s, 'D')
if y == 1 and yhat == 0 and s == 1:
return (y, yhat, s, 'B')
if y == 1 and yhat == 1 and s == 0:
return (y, yhat, s, 'TP0')
if y == 1 and yhat == 1 and s == 1:
return (y, yhat, s, 'TP1')
def name2position(variables):
ix_y = np.where(np.array(variables) == 'Y')[0][0]
ix_yhat = np.where(np.array(variables) == 'Yhat')[0][0]
ix_s = np.where(np.array(variables) == 'S')[0][0]
return (ix_y, ix_yhat, ix_s)
#variables = ['S', 'Yhat', 'Y', 'condition']
variables = ['Y', 'Yhat', 'S', 'condition']
ix_y, ix_yhat, ix_s = name2position(variables)
all_possibilities = pd.DataFrame(index=range(8), columns=variables, dtype='int')
for ix, value in enumerate(product([0,1], repeat=len(variables)-1)):
all_possibilities.iloc[ix] = naming(value[ix_y], value[ix_yhat], value[ix_s])
# Bug in pandas, creates a dataframe of floats. Workaround.
for col in all_possibilities.columns[:-1]:
all_possibilities[col] = pd.to_numeric(all_possibilities[col], downcast='integer')
all_possibilities
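# Illustrative sketch (an addition, not part of the original notebook): the same
# naming() helper can tally the eight cells for a small synthetic set of decisions.
toy = pd.DataFrame({'Y':    [1, 0, 1, 0, 1, 0],
                    'Yhat': [1, 1, 0, 0, 1, 1],
                    'S':    [0, 1, 0, 1, 1, 0]})
toy['condition'] = [naming(y, yh, s)[3] for y, yh, s in zip(toy['Y'], toy['Yhat'], toy['S'])]
print(toy['condition'].value_counts())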
def plot_cube(ax, cube_definition):
"""
From https://stackoverflow.com/questions/44881885/python-draw-3d-cube
"""
cube_definition_array = [
np.array(list(item))
for item in cube_definition
]
points = []
points += cube_definition_array
vectors = [
cube_definition_array[1] - cube_definition_array[0],
cube_definition_array[2] - cube_definition_array[0],
cube_definition_array[3] - cube_definition_array[0]
]
points += [cube_definition_array[0] + vectors[0] + vectors[1]]
points += [cube_definition_array[0] + vectors[0] + vectors[2]]
points += [cube_definition_array[0] + vectors[1] + vectors[2]]
points += [cube_definition_array[0] + vectors[0] + vectors[1] + vectors[2]]
points = np.array(points)
edges = [
[points[0], points[3], points[5], points[1]],
[points[1], points[5], points[7], points[4]],
[points[4], points[2], points[6], points[7]],
[points[2], points[6], points[3], points[0]],
[points[0], points[2], points[4], points[1]],
[points[3], points[6], points[7], points[5]]
]
faces = Poly3DCollection(edges, linewidths=1, edgecolors='k')
faces.set_facecolor((0,0,1,0.1))
ax.add_collection3d(faces)
# Plot the points themselves to force the scaling of the axes
ax.scatter(points[:,0], points[:,1], points[:,2], s=50)
ax.set_aspect('equal')
ax.set_xlabel(variables[ix_s])
ax.set_ylabel(variables[ix_yhat])
ax.set_zlabel(variables[ix_y])
ax.grid(False)
return
cube_definition = [
(0,0,0), (0,1,0), (1,0,0), (0,0,1)
]
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
plot_cube(ax, cube_definition)
for ix, row in all_possibilities.iterrows():
ax.text(row[ix_s], row[ix_yhat], row[ix_y], row[3], size=30)
###Output
_____no_output_____
###Markdown
Studying the trade offFocusing on the plane traced out by A, C, B, D, we get a two-dimensional plot which provides insight into the trade-off between 1. false positives and false negatives 2. favouritism, i.e. how much the majority group benefits
###Code
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
ax.plot([0,0,1,1], [0,1,0,1], 'bo')
ax.set_xlabel('FN -- FP')
ax.set_ylabel('favouritism')
ax.text(0, 0, naming(1, 0, 0)[3], size=30)
ax.text(0, 1, naming(1, 0, 1)[3], size=30)
ax.text(1, 0, naming(0, 1, 0)[3], size=30)
ax.text(1, 1, naming(0, 1, 1)[3], size=30)
###Output
_____no_output_____ |
assignment1-UMJCS-master/Homework1_partB(coding)/softmax.ipynb | ###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
loss: 1205.640317
sanity check: 2.302585
###Markdown
Inline Question 1:Why do we expect our loss to be close to -log(0.1)? Explain briefly.****Your answer:** *Because the weights are initialised with small random values, all class scores are roughly equal, so the softmax assigns approximately uniform probability to each of the 10 classes; the probability of the correct class is therefore about 0.1 and the loss is about -log(0.1).*
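As a quick numerical check: with $C = 10$ classes and a near-uniform prediction $p \approx 1/C$, the cross-entropy loss is $L = -\log(1/C) = \log(10) \approx 2.3026$, which matches the sanity-check value printed above.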
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifer in best_softmax. #
################################################################################
num_iters = 3000
for lr in learning_rates:
for reg in regularization_strengths:
softmax = Softmax()
set_tuple = (lr,reg)
softmax.train(X_train, y_train, lr, reg, num_iters)
train_pred = softmax.predict(X_train)
corr = np.sum(y_train == train_pred)
train_acc = corr / len(y_train)
val_pred = softmax.predict(X_val)
corr = np.sum(y_val == val_pred)
val_acc = corr / len(y_val)
if val_acc >= best_val:
best_val = val_acc
best_softmax = softmax
results[(lr, reg)] = (train_acc, val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____ |
scripts/Analysis.ipynb | ###Markdown
Who Is J? Analysing JOTB diversity network One of the main goals of the ‘Yes We Tech’ community is contributing to create an inclusive space where we can celebrate diversity, provide visibility to women-in-tech, and ensure that everybody has an equal chance to learn, share and enjoy technology-related disciplines.As co-organisers of the event, we have concentrated our efforts in getting more women speakers on board under the assumption that a more diverse panel would enrich the conversation also around technology.Certainly, we have doubled the number of women giving talks this year, but, is this diversity enough? How can we know that we have succeeded in our goal? and more importantly, what can we learn to create a more diverse event in future editions?The work that we are sharing here talks about two things: data and people. Both data and people should help us to find out some answers and understand the reasons why.Let's start with a story about data. Data is pretty simple compared with people. Just take a look at the numbers, the small ones, the ones that better describe what happened in 2016 and 2017 J On The Beach editions.
###Code
import pandas as pd
import numpy as np
import scipy as sp
import pygal
import operator
from iplotter import GCPlotter
plotter = GCPlotter()
###Output
_____no_output_____
###Markdown
Small data analysisSmall data says that last year, our 'J' engaged up to 48 speakers and 299 attendees into this big data thing. I'm not considering here any member of the organisation.
###Code
data2016 = pd.read_csv('../input/small_data_2016.csv')
data2016['Women Rate'] = pd.Series(data2016['Women']*100/data2016['Total'])
data2016['Men Rate'] = pd.Series(data2016['Men']*100/data2016['Total'])
data2016
###Output
_____no_output_____
###Markdown
This year there are 40 speakers, a few less than last year, while participation has reached 368 people (compare the increment of attendees: 368 vs 299).
###Code
data2017 = pd.read_csv('../input/small_data_2017.csv')
data2017['Women Rate'] = pd.Series(data2017['Women']*100/data2017['Total'])
data2017['Men Rate'] = pd.Series(data2017['Men']*100/data2017['Total'])
data2017
increase = 100 - 299*100.00/368
increase
###Output
_____no_output_____
###Markdown
It is noticable also, that big data is bigger than ever and this year we have included workshops and a hackathon. The more the better right? Let's continue because there are more numbers behind those ones. Numbers that will give us some signs of diversity. DiversityWhen it comes about speakers, this year we have a **27.5%** of women speaking to J, compared with a rough **10.4%** of the last year.
###Code
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
['2016', data2016['Women Rate'][0], data2016['Men Rate'][0],''],
['2017', data2017['Women Rate'][0], data2017['Men Rate'][0],''],
]
options = {
"title": 'Speakers at JOTB',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '50%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
###Output
_____no_output_____
###Markdown
However, and this is the worrying thing, the participation of women as attendees has slightly dropped from a not too ambitious **13%** to a disappointing **9.8%**. So we have x% more attendees but zero impact on a wider variety of people.
###Code
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
['2016', data2016['Women Rate'][1], data2016['Men Rate'][1],''],
['2017', data2017['Women Rate'][1], data2017['Men Rate'][1],''],
]
options = {
"title": 'Attendees at JOTB',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '55%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
###Output
_____no_output_____
###Markdown
Why did this happen? We don't really know. But we continued looking at the numbers and realised that **30** of the **45** companies that enrolled two or more people didn't include any women on their lists, meaning **31%** of the mass of attendees. Correlate team size with women percentage to validate if: the smaller the teams are, the fewer the chances of including a woman on their lists
###Code
companies_team = data2017['Total'][3] + data2017['Total'][4]
mass_represented = pd.Series(data2017['Total'][4]*100/companies_team)
women_represented = pd.Series(100 - mass_represented)
mass_represented
###Output
_____no_output_____
###Markdown
For us this is not a good sign. Despite the fact that our ability to summon people to our monthly meetups (the ones that attempt to create this culture of equality in Málaga) has increased, the engagement at other events doesn't show a big impact. Again, I'm not blaming companies here, because if we look at the participation rate of women who are not part of a team, the representation also decreased by almost **50%**.
###Code
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
[data2016['Tribe'][2], data2016['Women Rate'][2], data2016['Men Rate'][2],''],
[data2016['Tribe'][3], data2016['Women Rate'][3], data2016['Men Rate'][3],''],
[data2016['Tribe'][5], data2016['Women Rate'][5], data2016['Men Rate'][5],''],
]
options = {
"title": '2016 JOTB Edition',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '55%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
data = [
['Tribe', 'Women', 'Men', {"role": 'annotation'}],
[data2017['Tribe'][2], data2017['Women Rate'][2], data2017['Men Rate'][2],''],
[data2017['Tribe'][3], data2017['Women Rate'][3], data2017['Men Rate'][3],''],
[data2017['Tribe'][5], data2017['Women Rate'][5], data2017['Men Rate'][5],''],
]
options = {
"title": '2017 JOTB Edition',
"width": 600,
"height": 400,
"legend": {"position": 'top', "maxLines": 3},
"bar": {"groupWidth": '55%'},
"isStacked": "true",
"colors": ['#984e9e', '#ed1c40'],
}
plotter.plot(data,chart_type='ColumnChart',chart_package='corechart', options=options)
###Output
_____no_output_____
###Markdown
Before blaming anyone or falling too quickly into self-indulgence, there are still more data to play with. Social network analysisThe next story talks about people. The people around J, the ones who follow, are followed by, interact with, and create the chances of a more diverse and interesting conference. It is also a story about the people who organise this conference. Because when we started to plan a conference like this, we did nothing but think about what could be interesting for the people who come. In order to get that we used the previous knowledge that we have about cool people who do amazing things with data and JVM technologies. And this means looking into our own networks and following suggestions of the people we trust. So if we assume that we are biased by the people around us, we thought it was a good idea to first learn what the network of people around J looks like, to see the chances that we have to bring in someone different, unusual, who can add value to the conference.For the moment, since this is an experiment that wants to trigger your reaction, we will look at J's Twitter account.Indeed, a real-world network would have a larger amount of numbers and people to look at, but still a digital social network is about human interactions, conversations and knowledge sharing. For this experiment we've used the `sexmachine` Python library https://pypi.python.org/pypi/SexMachine/ and the 'Twitter Gender Distribution' project published on GitHub https://github.com/ajdavis/twitter-gender-distribution to find out the gender of a specific Twitter account.
###Code
run index.py jotb2018
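# Rough sketch (an assumption about index.py's internals, not code from the
# original project): gender is guessed from the account's first name with the
# SexMachine library mentioned above, roughly like
#   import sexmachine.detector as gender_detector
#   d = gender_detector.Detector()
#   d.get_gender("Alice")   # -> 'female'; ambiguous names come back as 'andy'/'unknown'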
###Output
_____no_output_____
###Markdown
From the modest **50%** of J's friends whose gender could be identified, the women/men distribution is **20/80**. Friends are the ones who both follow and are followed by J.
###Code
# Read the file and take some important information
whoisj = pd.read_json('../out/jotb2018.json', orient = 'columns')
people = pd.read_json(whoisj['jotb2018'].to_json())
following_total = whoisj['jotb2018']['friends_count']
followers_total = whoisj['jotb2018']['followers_count']
followers = pd.read_json(people['followers_list'].to_json(), orient = 'index')
following = pd.read_json(people['friends_list'].to_json(), orient = 'index')
whoisj
###Output
_____no_output_____
###Markdown
J follows to...
###Code
# J follows to...
following_total
###Output
_____no_output_____
###Markdown
J is followed by...
###Code
# J is followed by...
followers_total
###Output
_____no_output_____
###Markdown
Gender distribution
###Code
followers['gender'].value_counts()
following['gender'].value_counts()
followers_dist = followers['gender'].value_counts()
genders = followers['gender'].value_counts().keys()
followers_map = pygal.Pie(height=400)
followers_map.title = 'Followers Gender Map'
for i in genders:
followers_map.add(i,followers_dist[i]*100.00/followers_total)
followers_map.render_in_browser()
following_dist = following['gender'].value_counts()
genders = following['gender'].value_counts().keys()
following_map = pygal.Pie(height=400)
following_map.title = 'Following Gender Map'
for i in genders:
following_map.add(i,following_dist[i]*100.00/following_total)
following_map.render_in_browser()
###Output
file:///tmp/tmpdyrMnq.html
###Markdown
Language distribution
###Code
lang_counts = followers['lang'].value_counts()
languages = followers['lang'].value_counts().keys()
followers_dist = followers['gender'].value_counts()
lang_followers_map = pygal.Treemap(height=400)
lang_followers_map.title = 'Followers Language Map'
for i in languages:
lang_followers_map.add(i,lang_counts[i]*100.00/followers_total)
lang_followers_map.render_in_browser()
lang_counts = following['lang'].value_counts()
languages = following['lang'].value_counts().keys()
following_dist = following['gender'].value_counts()
lang_following_map = pygal.Treemap(height=400)
lang_following_map.title = 'Following Language Map'
for i in languages:
lang_following_map.add(i,lang_counts[i]*100.00/following_total)
lang_following_map.render_in_browser()
###Output
file:///tmp/tmpYEUnt2.html
###Markdown
Location distribution
###Code
followers['location'].value_counts()
following['location'].value_counts()
###Output
_____no_output_____
###Markdown
Tweets analysis
###Code
run tweets.py jotb2018 1000
j_network = pd.read_json('../out/jotb2018_tweets.json', orient = 'index')
interactions = j_network['gender'].value_counts()
genders = j_network['gender'].value_counts().keys()
j_network_map = pygal.Pie(height=400)
j_network_map.title = 'Interactions Gender Map'
for i in genders:
j_network_map.add(i,interactions[i])
j_network_map.render_in_browser()
a = j_network['hashtags']
b = j_network['gender']
say_something = [x for x in a if x != []]
tags = []
for y in say_something:
for x in pd.DataFrame(y)[0]:
tags.append(x.lower())
tags_used = pd.DataFrame(tags)[0].value_counts()
tags_keys = pd.DataFrame(tags)[0].value_counts().keys()
tags_map = pygal.Treemap(height=400)
tags_map.title = 'Hashtags Map'
for i in tags_keys:
tags_map.add(i,tags_used[i])
tags_map.render_in_browser()
pairs = []
for i in j_network['gender'].keys() :
if (j_network['hashtags'][i] != []) :
pairs.append([j_network['hashtags'][i], j_network['gender'][i]])
key_pairs = []
for i,j in pairs:
for x in i:
key_pairs.append((x,j))
key_pairs
key_pair_dist = {x: key_pairs.count(x) for x in key_pairs}
sorted_x = sorted(key_pair_dist.items(), key = operator.itemgetter(1), reverse = True)
sorted_x
###Output
_____no_output_____ |
python_lambda.ipynb | ###Markdown
###Code
(lambda first, second : first * second + 20)(10, 3)
def plus(first01, second02):
return first01 + 20
# result = first01 + 20
# return result
plus(10), type(plus)
plus(20)
plus_02 = (lambda first : first + 20)
type(plus_02)
plus_02(25)
###Output
_____no_output_____
###Markdown
Using a lambda, a function definition like the one below can be expressed concisely
###Code
(lambda first, second : first * second + 20)(10, 3)
def plus(first, second) : # 함수 정의
result = first + 20
return result
plus(10)
###Output
_____no_output_____
###Markdown
If you store a lambda in a variable, it can be reused
###Code
plus_lambda = (lambda first: first + 20) # define a lambda
plus_lambda(10)
###Output
_____no_output_____
###Markdown
###Code
(lambda first : first + 20)(10)
def plus(first01) :
    return first01 + 20 # return executes last, so first01 + 20 is evaluated first and no temporary variable is needed
#result = first01 + 20
#return result
plus(10), type(plus)
plus_02 = (lambda first : first + 20)
type(plus_02)
plus_02(10)
plus_03 = (lambda first,second : first * second + 20)
type(plus_03)
plus_03(30,20)
###Output
_____no_output_____
###Markdown
###Code
(lambda first, second : first * second + 20)(10,3)
def plus(first01, second02):
return first01 + 20 # first01 + 20, return
# result = first01 + 20
# return result
plus(10), type(plus)
plus(20)
plus_02 = (lambda first : first + 20)
type(plus_02)
plus_02(30)
###Output
_____no_output_____
###Markdown
###Code
(lambda first, second : first * second + 20)(10,3)
def sum(first01, second02):
return first01+20
# result = first01 + 20
# return result
sum(10)
sum02 = (lambda first : first+20)
type(sum02)
sum02(30)
###Output
_____no_output_____
###Markdown
###Code
(lambda first : first + 20 )(10)
def plus(first01):
result = first01+ 20
return result
plus(10)
plus_02 = (lambda first : first + 20)
type(plus_02)
plus_02(30)
###Output
_____no_output_____
###Markdown
###Code
(lambda first : first + 20)(10)
def plus(first01):
result = first01 + 20
return result
    # the following also works
# def plus(first01):
# return first01 + 20
plus(10)
plus_02 = (lambda first : first + 20)
type(plus_02), type(plus)
plus_02(10)
(lambda first, second : first + second + 20)(10,20)
def plus(first01, second02):
result = first01 + second02 + 20
return result
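# A common practical use of a lambda (an illustrative addition): a throw-away
# key function, e.g. sorting pairs by their second element.
pairs = [(1, 'b'), (2, 'a'), (3, 'c')]
sorted(pairs, key=lambda pair: pair[1])  # -> [(2, 'a'), (1, 'b'), (3, 'c')]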
###Output
_____no_output_____ |
GroupHW_1_Exposure_ForwardBond.ipynb | ###Markdown
Loading of Libraries and Classes.
###Code
%matplotlib inline
from datetime import date
import time
import pandas as pd
import numpy as np
pd.options.display.max_colwidth = 60
from Curves.Corporates.CorporateDailyVasicek import CorporateRates
from Boostrappers.CDSBootstrapper.CDSVasicekBootstrapper import BootstrapperCDSLadder
from MonteCarloSimulators.Vasicek.vasicekMCSim import MC_Vasicek_Sim
from Products.Rates.CouponBond import CouponBond
from Products.Credit.CDS import CDS
from Scheduler.Scheduler import Scheduler
import quandl
import matplotlib.pyplot as plt
from parameters import WORKING_DIR
import itertools
marker = itertools.cycle((',', '+', '.', 'o', '*'))
from IPython.core.pylabtools import figsize
figsize(15, 4)
from pandas import ExcelWriter
import numpy.random as nprnd
from pprint import pprint
###Output
_____no_output_____
###Markdown
Create forward bond future PV (Exposure) time profile Setting up parameters
###Code
t_step = 1.0 / 365.0
simNumber = 10
trim_start = date(2005,3,10)
trim_end = date(2010,12,31) # Last Date of the Portfolio
start = date(2005, 3, 10)
referenceDate = date(2005, 5, 10)
###Output
_____no_output_____
###Markdown
Data input for the CouponBond portfolio

The word portfolio is used to describe just a dict of CouponBonds. This line creates a referenceDateList:

myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate, end=trim_end, freq="1M", referencedate=referenceDate)

Create Simulator

This section creates Monte Carlo trajectories over a wide range. Notice that the BondCoupon maturities have to be inside the Monte Carlo simulation range [trim_start, trim_end]. Sigma has been artificially increased (OIS has a smaller sigma) to allow for visualization of distinct trajectories.

SDE parameters - Vasicek SDE dr(t) = k(θ − r(t))dt + σdW(t), with self.kappa = x[0], self.theta = x[1], self.sigma = x[2], self.r0 = x[3].

myVasicek = MC_Vasicek_Sim()
xOIS = [3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek.setVasicek(x=xOIS, minDay=trim_start, maxDay=trim_end, simNumber=simNumber, t_step=1/365.0)
myVasicek.getLibor()

Create Coupon Bond with several startDates.

SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]

For debugging, uncomment this to choose a single date for the forward bond:

print(startDates)
startDates = [date(2005,3,10)]
or
startDates = [date(2005,3,10) + SixMonthDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]

You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, on the first day of the CouponBond's life. Below is a way to create a random long/short bond portfolio of any size. The notional only affects the product class at the last stage of calculation. In my case, the only parameters affected are Exposure (PV on referenceDate) and pvAvg (average PV on referenceDate).

myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
    notional = (-1.0)**i
    myPortfolio[i] = CouponBond(fee=1.0, start=startDates[i], coupon=coupon, notional=notional, maturity=maturities[i], freq="3M", referencedate=referenceDate)
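For intuition, the Vasicek dynamics quoted above can also be simulated directly with a simple Euler-Maruyama scheme. The sketch below is illustrative only and assumes nothing about the internals of the MC_Vasicek_Sim class used in this notebook, whose implementation is not shown here.
###Code
# Illustrative only: a minimal Euler-Maruyama discretisation of the Vasicek SDE
# dr(t) = kappa*(theta - r(t))*dt + sigma*dW(t). This is NOT the MC_Vasicek_Sim
# implementation, just a sketch of what such a simulator does internally.
import numpy as np
def simulate_vasicek(kappa, theta, sigma, r0, t_step, n_steps, n_paths, seed=0):
    rng = np.random.default_rng(seed)
    rates = np.empty((n_steps + 1, n_paths))
    rates[0] = r0
    for t in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(t_step), size=n_paths)
        rates[t + 1] = rates[t] + kappa * (theta - rates[t]) * t_step + sigma * dW
    return rates
# e.g. with the xOIS parameters used below (kappa, theta, sigma, r0):
sketch_paths = simulate_vasicek(3.0, 0.07536509, abs(-0.208477), 0.07536509,
                                t_step=1/365.0, n_steps=365, n_paths=10)
###Output
_____no_output_____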
###Code
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
# Create Simulator
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek = MC_Vasicek_Sim(datelist = [trim_start,trim_end],x = xOIS,simNumber = simNumber,t_step =1/365.0 )
#myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
# Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
# For debugging uncomment this to choose a single date for the forward bond
# print(startDates)
startDates = [date(2005,3,10)+SixMonthDelay,date(2005,3,10)+TwoYearsDelay ]
maturities = [(x+TwoYearsDelay) for x in startDates]
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
notional=(-1.0)**i
myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional,
maturity= maturities[i], freq="3M", referencedate=referenceDate)
###Output
_____no_output_____
###Markdown
Create Libor and portfolioScheduleOfCF. This datelist contains all dates to be used in any calculation of the portfolio positions. The BondCoupon class has to have a method getScheduleComplete, which returns the fullset on [0] and the datelist on [1], calculated by BondCoupon as:

def getScheduleComplete(self):
    self.datelist = self.myScheduler.getSchedule(start=self.start, end=self.maturity, freq=self.freq, referencedate=self.referencedate)
    self.ntimes = len(self.datelist)
    fullset = sorted(set(self.datelist)
                     .union([self.referencedate])
                     .union([self.start])
                     .union([self.maturity])
                     )
    return fullset, self.datelist

portfolioScheduleOfCF is the concatenation of all fullsets. It defines the set of all dates for which Libor should be known.
###Code
portfolioScheduleOfCF = set(ReferenceDateList)
for i in range(len(myPortfolio)):
portfolioScheduleOfCF=portfolioScheduleOfCF.union(myPortfolio[i].getScheduleComplete()[0]
)
portfolioScheduleOfCF = sorted(portfolioScheduleOfCF.union(ReferenceDateList))
OIS = myVasicek.getSmallLibor(datelist=portfolioScheduleOfCF)
# at this point OIS contains all dates for which the discount curve should be known.
# If the OIS doesn't contain a required date, the cashflows cannot be discounted and the calculation would fail.
print(OIS)
pvs={}
for t in portfolioScheduleOfCF:
pvs[t] = np.zeros([1,simNumber])
for i in range(len(myPortfolio)):
myPortfolio[i].setLibor(OIS)
pvs[t] = pvs[t] + myPortfolio[i].getExposure(referencedate=t).values
#print(portfolioScheduleOfCF)
#print(pvs)
pvsPlot = pd.DataFrame.from_dict(list(pvs.items()))
pvsPlot.index= list(pvs.keys())
pvs1={}
for i,t in zip(pvsPlot.values,pvsPlot.index):
pvs1[t]=i[1][0]
pvs = pd.DataFrame.from_dict(data=pvs1,orient="index")
ax=pvs.plot(legend=False)
ax.set_xlabel("Year")
ax.set_ylabel("Coupon Bond Exposure")
###Output
_____no_output_____ |
06-Data-Ingestion/06-02-Exercise-STRING-AGG.ipynb | ###Markdown
Practice on STRING_AGG() & ARRAY_AGG(). We will use the Google Analytics dataset `data-to-insights.ecommerce.all_sessions_raw`, which is also used in the [upcoming Qwiklab](https://google.qwiklabs.com/focuses/3638?parent=catalog).
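Before the exercises, here is a small reference sketch (not part of the original lab) showing how the two aggregate functions in the title differ; it uses the same table and columns as the queries below.
###Code
%%bigquery
SELECT
  productSKU,
  STRING_AGG(DISTINCT v2ProductName, '; ' LIMIT 3) AS names_as_string,
  ARRAY_AGG(DISTINCT v2ProductName LIMIT 3) AS names_as_array
FROM `data-to-insights.ecommerce.all_sessions_raw`
GROUP BY productSKU
LIMIT 10
###Output
_____no_output_____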
###Code
# This cell is to enable the "hint" functionality. After each question there is a cell with either a hint about the correct answer or the solution.
from IPython.display import Pretty as disp
hint = 'https://raw.githubusercontent.com/soltaniehha/Business-Analytics-Toolbox/master/docs/hints/' # path to hints on GitHub
###Output
_____no_output_____
###Markdown
Question: Find out how many product names and product SKUs are on the website?
###Code
%%bigquery
SELECT COUNT(*) FROM
(
SELECT DISTINCT productSKU, v2ProductName
FROM `data-to-insights.ecommerce.all_sessions_raw`
)
###Output
_____no_output_____
###Markdown
Now find the number of distinct SKUs:
###Code
%%bigquery
SELECT COUNT(DISTINCT productSKU)
FROM `data-to-insights.ecommerce.all_sessions_raw`
###Output
_____no_output_____
###Markdown
Obviously these numbers do not match, which indicates that there are duplicate records. Let's determine which products have more than one SKU and which SKUs have more than one product name. First, let's check whether some product names have more than one SKU:
###Code
%%bigquery
SELECT
v2ProductName,
COUNT(DISTINCT productSKU) AS SKU_count,
STRING_AGG(DISTINCT productSKU LIMIT 5) AS SKU
FROM `data-to-insights.ecommerce.all_sessions_raw`
WHERE productSKU IS NOT NULL
GROUP BY v2ProductName
HAVING SKU_count > 1
ORDER BY SKU_count DESC
###Output
_____no_output_____
###Markdown
We can see that 493 products fall into this category, along with the SKUs that each of these product names is related to. Your turn: find the SKUs that have multiple product names:
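One possible shape for that query, sketched by mirroring the previous one with the roles of SKU and product name swapped (the hint cell below points to the official solution):
###Code
%%bigquery
SELECT
  productSKU,
  COUNT(DISTINCT v2ProductName) AS name_count,
  STRING_AGG(DISTINCT v2ProductName LIMIT 5) AS product_names
FROM `data-to-insights.ecommerce.all_sessions_raw`
WHERE v2ProductName IS NOT NULL
GROUP BY productSKU
HAVING name_count > 1
ORDER BY name_count DESC
###Output
_____no_output_____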
###Code
# SOLUTION: Uncomment and execute the cell below to get help
#disp(hint + '06-02-products')
###Output
_____no_output_____ |
ipynb/Namibia.ipynb | ###Markdown
Namibia* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Namibia");
# load the data
cases, deaths, region_label = get_country_data("Namibia")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Namibia* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Namibia", weeks=5);
overview("Namibia");
compare_plot("Namibia", normalise=True);
# load the data
cases, deaths = get_country_data("Namibia")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Namibia* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Namibia", weeks=5);
overview("Namibia");
compare_plot("Namibia", normalise=True);
# load the data
cases, deaths = get_country_data("Namibia")
# get population of the region for future normalisation:
inhabitants = population("Namibia")
print(f'Population of "Namibia": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Namibia.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
code/simulation/calibration.ipynb | ###Markdown
Calibration
###Code
import numpy as np
import pandas as pd
import numpy as np
from os.path import join
import json
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.patches as patches
# custom functions to run the calibration simulations
import calibration_functions as cf
# parallelisation functionality
from multiprocess import Pool
import psutil
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Empirical outbreak data
###Code
empirical_data_src = '../../data/school_data/empirical_observations/'
# distribution of outbreak sizes by school type
outbreak_sizes = pd.read_csv(\
join(empirical_data_src, 'empirical_outbreak_sizes.csv'))
# ratio of infections in the student and teacher groups
group_distributions = pd.read_csv(\
join(empirical_data_src, 'empirical_group_distributions.csv'))
# note: these are the number of clusters per school type from the slightly older
# data version (November 2020).
counts = pd.DataFrame({'type':['upper_secondary', 'secondary'],
'count':[116, 70]})
counts.index = counts['type']
counts = counts.drop(columns=['type'])
# The cluster counts are used to weigh the respective school type in the
# calibration process.
counts['weight'] = counts['count'] / counts['count'].sum()
###Output
_____no_output_____
###Markdown
Simulation data Simulation parameters
###Code
# school types over which the calibration was run
school_types = ['upper_secondary', 'secondary']
# the way the simulation framework is set up, it works with a "base transmission risk"
# for a household contact, that is then multiplied by a modifier for a different contact
# setting. What we calibrate is this modifier.
base_transmission_risk = 0.16598
transmission_risk_modifier = np.asarray([0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29,
0.30, 0.31, 0.32, 0.33, 0.34, 0.35])
# For the school simulations, we also calibrated a modifier for student age
# (the "age transmission discount"). The calibration showed no age dependence
# but since we re-use data from the calibration that was done for the school
# simulations, we carry these parameter values with us and use them to access
# the simulation result files.
# The age_transmission_discount sets the slope of the age-dependence of the
# transmission risk. Transmission risk for adults (age 18+) is always base
# transmission risk. For every year an agent is younger than 18 years, the
# transmission risk is reduced. Parameter values are chosen around the optimum
# from the previous random sampling search
age_transmission_discounts = [0.00, -0.0025, -0.005, -0.0075, -0.01,
-0.0125, -0.015, -0.0175, -0.02,
-0.0225, -0.025, -0.0275, -0.03]
# list of all possible parameter combinations from the grid
screening_params = [(i, j, j, k) for i in school_types \
for j in transmission_risk_modifier \
for k in age_transmission_discounts]
print('values for the base transmission risk rescaled by the modifier [%]:')
print(transmission_risk_modifier * base_transmission_risk * 100)
print()
print('parameter value step rescaled by the modifier [%]: {:1.2f}'\
.format(transmission_risk_modifier[1] * base_transmission_risk * 100 -\
transmission_risk_modifier[0] * base_transmission_risk * 100))
###Output
values for the base transmission risk rescaled by the modifier [%]:
[3.81754 3.98352 4.1495 4.31548 4.48146 4.64744 4.81342 4.9794 5.14538
5.31136 5.47734 5.64332 5.8093 ]
parameter value step rescaled by the modifier [%]: 0.17
###Markdown
Load data for upper secondary and secondary schools
###Code
# calculate the various distribution distances between the simulated and
# observed outbreak size distributions
src = '../../data/school_data/simulation_results'
results_fine = pd.DataFrame()
for i, ep in enumerate(screening_params):
school_type, icw, fcw, atd = ep
if i % 100 == 0:
print('{}/{}'.format(i, len(screening_params)))
fname = 'school_type-{}_icw-{:1.2f}_fcw-{:1.2f}_atd-{:1.4f}_infected.csv'\
.format(school_type, icw, fcw, atd)
ensemble_results = pd.read_csv(join(src, fname),
dtype={'infected_students':int, 'infected_teachers':int,
'infected_total':int, 'run':int})
row = cf.calculate_distances(ensemble_results, school_type, icw, fcw, atd,
outbreak_sizes, group_distributions)
results_fine = results_fine.append(row, ignore_index=True)
print('number of runs per ensemble: {}'.format(len(ensemble_results)))
###Output
number of runs per ensemble: 4000
###Markdown
Calculate distances between empirical and simulated data
###Code
# collection of different distance metrics to try
distance_cols = [
'sum_of_squares',
'chi2_distance',
'bhattacharyya_distance',
'spearmanr_difference',
'pearsonr_difference',
'pp_difference',
'qq_difference',
]
results_fine = results_fine.sort_values(by=['school_type',
'intermediate_contact_weight', 'age_transmission_discount'])
results_fine = results_fine.reset_index(drop=True)
for col in distance_cols:
results_fine[col + '_total'] = results_fine[col + '_size'] + \
results_fine['sum_of_squares_distro']
results_fine[col + '_total_weighted'] = results_fine[col + '_total']
for i, row in results_fine.iterrows():
st = row['school_type']
weight = counts.loc[st, 'weight']
error = row[col + '_total']
results_fine.loc[i, col + '_total_weighted'] = error * weight
###Output
_____no_output_____
###Markdown
Find optimal parameter values
###Code
agg_results_fine = results_fine\
.drop(columns=['far_contact_weight'])\
.rename(columns={'intermediate_contact_weight':'contact_weight'})\
.groupby(['contact_weight',
'age_transmission_discount'])\
.sum()
for col in distance_cols:
print(col)
opt_fine = agg_results_fine.loc[\
agg_results_fine[col + '_total_weighted'].idxmin()].name
opt_contact_weight_fine = opt_fine[0]
opt_age_transmission_discount_fine = opt_fine[1]
print('optimal grid search parameter combination:')
print('\t contact weight: {:1.3f}'\
.format(opt_contact_weight_fine))
print('\t age transmission discount: {:1.4f}'\
.format(opt_age_transmission_discount_fine))
print()
###Output
sum_of_squares
optimal grid search parameter combination:
contact weight: 0.260
age transmission discount: 0.0000
chi2_distance
optimal grid search parameter combination:
contact weight: 0.260
age transmission discount: 0.0000
bhattacharyya_distance
optimal grid search parameter combination:
contact weight: 0.240
age transmission discount: -0.0300
spearmanr_difference
optimal grid search parameter combination:
contact weight: 0.230
age transmission discount: -0.0075
pearsonr_difference
optimal grid search parameter combination:
contact weight: 0.250
age transmission discount: 0.0000
pp_difference
optimal grid search parameter combination:
contact weight: 0.240
age transmission discount: -0.0050
qq_difference
optimal grid search parameter combination:
contact weight: 0.320
age transmission discount: -0.0225
###Markdown
Visualise distances
###Code
# compose matrices of the distance measurements for all different distance
# metrics which are calculated as sum between the first component (ratio of
# infected students and teachers) and the second component (outbreak size
# distribution)
distance_images = {}
for col in distance_cols:
img_fine = np.zeros((len(contact_weights_fine),
len(age_transmission_discounts_fine)))
for i, cw in enumerate(contact_weights_fine):
for j, atd in enumerate(age_transmission_discounts_fine):
cw = round(cw, 2)
atd = round(atd, 4)
try:
img_fine[i, j] = agg_results_fine\
.loc[cw, atd][col + '_total_weighted']
except KeyError:
print(atd)
img_fine[i, j] = np.nan
distance_images[col] = img_fine
# qq and spearman are super noisy, exclude them for further analysis
distance_col_names = {
'sum_of_squares':'Sum of squares',
'chi2_distance':'$\\chi^2$',
'bhattacharyya_distance':'Bhattacharyya',
'spearmanr_difference': 'Spearman correlation',
'pearsonr_difference':'Pearson correlation',
'pp_difference':'pp-slope',
'qq_difference':'qq-slope'
}
fig, axes = plt.subplots(2, 4, figsize=(15, 6))
for ax, col in zip(axes.flatten(), distance_col_names.keys()):
img_fine = distance_images[col]
im = ax.imshow(img_fine)
ax.set_yticks(range(len(contact_weights_fine))[::2])
ax.set_yticklabels(['{:1.2f}'.format(cw) for \
cw in contact_weights_fine[::2]])
#ax.set_xticks(range(len(age_transmission_discounts_fine))[::2])
ax.set_xticks([0, 4, 8, 12])
ax.set_xticklabels(['0.00', '-0.01', '-0.02', '-0.03'])
#ax.set_xticklabels(['{:1.4f}'.format(atd) for \
# atd in age_transmission_discounts_fine[::2]],
# fontsize=8)
ax.set_title(distance_col_names[col], fontsize=16)
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
cbar = fig.colorbar(im, cax=cax, orientation='vertical', format='%.0e')
cbar.ax.tick_params(labelsize=8)
cbar.set_label('$E$', fontsize=12)
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_ylabel('$c_\\mathrm{contact}$', fontsize=16)
ax.set_xlabel('$c_\\mathrm{age}$', fontsize=16)
axes[1, 3].axis('off')
fig.text(0.061, 0.875, 'A', color='w', fontsize=20)
fig.text(0.312, 0.875, 'B', color='w', fontsize=20)
fig.text(0.56, 0.875, 'C', color='w', fontsize=20)
fig.text(0.808, 0.875, 'D', color='w', fontsize=20)
fig.text(0.061, 0.395, 'E', color='w', fontsize=20)
fig.text(0.312, 0.395, 'F', color='w', fontsize=20)
fig.text(0.56, 0.395, 'G', color='w', fontsize=20)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Confidence intervals for the optimum values
###Code
def run_bootstrap(params):
src = '../../data/calibration/simulation_results/ensembles_fine_ensemble_distributions'
ensemble_results, st, icw, fcw, atd, outbreak_sizes, \
group_distributions, bootstrap_run = params
row = cf.calculate_distances(ensemble_results, st, icw, fcw, atd,
outbreak_sizes, group_distributions)
row.update({'bootstrap_run':bootstrap_run})
return row
# calculate the various distribution distances between the simulated and
# observed outbreak size distributions. Note:
dst = '../../data/school_data/simulation_results/'
N_bootstrap = 1000 # number of subsamplings per parameter combination
number_of_cores = 10
bootstrapping_results = pd.DataFrame()
for i, ep in enumerate(screening_params):
school_type, icw, fcw, atd = ep
if i % 100 == 0:
print('{}/{}'.format(i, len(screening_params)))
fname = 'school_type-{}_icw-{:1.2f}_fcw-{:1.2f}_atd-{:1.4f}_infected.csv'\
.format(school_type, icw, fcw, atd)
ensemble_results = pd.read_csv(join(src, fname),
dtype={'infected_students':int, 'infected_teachers':int,
'infected_total':int, 'run':int})
bootstrap_params = [(ensemble_results.sample(2000), school_type, icw, fcw, \
atd, outbreak_sizes, group_distributions, j) \
for j in range(N_bootstrap)]
number_of_cores = number_of_cores = psutil.cpu_count(logical=True) - 2
pool = Pool(number_of_cores)
for res in tqdm(pool.imap_unordered(func=run_bootstrap,
iterable=bootstrap_params), total=len(bootstrap_params)):
bootstrapping_results = bootstrapping_results\
.append(res, ignore_index=True)
bootstrapping_results.to_csv(join(dst, 'bootstrapping_results_{}.csv'\
.format(N_bootstrap)), index=False)
dst = '../../data/school_data/simulation_results/'
N_bootstrap = 1000
bs_results = pd.read_csv(join(dst, 'bootstrapping_results_{}.csv'\
.format(N_bootstrap)))
bs_results = bs_results\
.rename(columns={'intermediate_contact_weight':'contact_weight'})\
.drop(columns=['far_contact_weight'])
# calculated the weighted sum of error terms for all distance measures
for col in distance_cols:
bs_results[col + '_total'] = \
bs_results[col + '_size'] + bs_results['sum_of_squares_distro']
bs_results[col + '_total_weighted'] = bs_results[col + '_total']
for st in school_types:
weight = counts.loc[st, 'weight']
st_indices = bs_results[bs_results['school_type'] == st].index
bs_results.loc[st_indices, col + '_total_weighted'] = \
bs_results.loc[st_indices, col + '_total'] * weight
agg_bs_results = bs_results\
.groupby(['contact_weight',
'age_transmission_discount',
'bootstrap_run'])\
.sum()
opt_bs = pd.DataFrame()
for i in range(N_bootstrap):
run_data = agg_bs_results.loc[:, :, i]
row = {'bootstrap_run':i}
for col in distance_cols:
opt = run_data.loc[\
run_data[col + '_total_weighted'].idxmin()].name
opt_contact_weight_bs = opt[0]
opt_age_transmission_discount_bs = opt[1]
row.update({
'contact_weight_' + col:opt_contact_weight_bs,
'age_transmission_discount_' + col:opt_age_transmission_discount_bs
})
opt_bs = opt_bs.append(row, ignore_index=True)
uncertainties_cw = []
medians_cw = []
uncertainties_atd = []
medians_atd = []
for col in distance_cols:
median = opt_bs['contact_weight_' + col].median() * base_transmission_risk
mean = opt_bs['contact_weight_' + col].mean() * base_transmission_risk
low = opt_bs['contact_weight_' + col].quantile(0.025) * base_transmission_risk
high = opt_bs['contact_weight_' + col].quantile(0.975) * base_transmission_risk
atd_median = opt_bs['age_transmission_discount_' + col].median()
atd_mean = opt_bs['age_transmission_discount_' + col].mean()
atd_low = opt_bs['age_transmission_discount_' + col].quantile(0.025)
atd_high = opt_bs['age_transmission_discount_' + col].quantile(0.975)
print('{}: contact weight {} [{}; {}] (mean {:1.4f}), atd {} [{}; {}]'\
.format(col, median, low, high, mean, atd_median, atd_low, atd_high, atd_mean))
uncertainties_cw.append(high - low)
uncertainties_atd.append(atd_high - atd_low)
medians_cw.append(median)
medians_atd.append(atd_median)
###Output
sum_of_squares: contact weight 0.041495 [0.0381754; 0.0464744] (mean 0.0416), atd -0.0025 [-0.0225; 0.0]
chi2_distance: contact weight 0.0431548 [0.0381754; 0.0464744] (mean 0.0423), atd 0.0 [-0.015; 0.0]
bhattacharyya_distance: contact weight 0.039835199999999994 [0.0381754; 0.0448146] (mean 0.0406), atd -0.02 [-0.03; 0.0]
spearmanr_difference: contact weight 0.0381754 [0.0381754; 0.041495] (mean 0.0389), atd -0.025 [-0.03; -0.0025]
pearsonr_difference: contact weight 0.0448146 [0.039835199999999994; 0.048134199999999995] (mean 0.0439), atd -0.005 [-0.025; 0.0]
pp_difference: contact weight 0.039835199999999994 [0.0381754; 0.041495] (mean 0.0392), atd -0.0075 [-0.03; -0.0025]
qq_difference: contact weight 0.0547734 [0.048134199999999995; 0.05809299999999999] (mean 0.0547), atd -0.0075 [-0.0225; 0.0]
|
solutions_do_not_open/Lab_08_ML Improving performance_solution.ipynb | ###Markdown
Improving performance
###Code
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Load the data
df = pd.read_csv('../data/new_titanic_features.csv')
# Create Features and Labels
X = df[['Male', 'Family',
'Pclass2_one', 'Pclass2_two', 'Pclass2_three',
'Embarked_C', 'Embarked_Q', 'Embarked_S',
'Age2', 'Fare3_Fare11to50', 'Fare3_Fare51+', 'Fare3_Fare<=10']]
y = df['Survived']
X.describe()
from sklearn.model_selection import train_test_split
# Train test split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=.2, random_state=0)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X_train, y_train)
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
print('Train Accuracy: {:0.3}'.format(accuracy_score(y_train, pred_train)))
print('Test Accuracy: {:0.3}'.format(accuracy_score(y_test, pred_test)))
confusion_matrix(y_test, pred_test)
print(classification_report(y_test, pred_test))
###Output
_____no_output_____
###Markdown
Feature importances (wrong! see exercise 1)
###Code
coeffs = pd.Series(model.coef_.ravel(), index=X.columns)
coeffs
coeffs.plot(kind='barh')
###Output
_____no_output_____
###Markdown
Cross Validation
###Code
from sklearn.model_selection import cross_val_score, ShuffleSplit
cv = ShuffleSplit(n_splits=5, test_size=.4, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
scores
'Crossval score: %0.3f +/- %0.3f ' % (scores.mean(), scores.std())
###Output
_____no_output_____
###Markdown
Learning curve
###Code
from sklearn.model_selection import learning_curve
tsz = np.linspace(0.1, 1, 10)
train_sizes, train_scores, test_scores = learning_curve(model, X, y, train_sizes=tsz)
fig = plt.figure()
plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores")
plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores")
plt.title('Learning Curve: Logistic Regression')
plt.ylim((0.5, 1.0))
plt.legend()
plt.draw()
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 1: Try rescaling the Age feature with [`preprocessing.StandardScaler`](http://scikit-learn.org/stable/modules/preprocessing.html) so that it will have comparable size to the other features.- Do the model predictions change?- Does the performance of the model change?- Do the feature importances change?- How can you explain what you've observed?
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train[['Age2']])
X_train_sc = X_train.copy()
X_test_sc = X_test.copy()
X_train_sc['Age2'] = sc.transform(X_train[['Age2']])
X_test_sc['Age2'] = sc.transform(X_test[['Age2']])
model = LogisticRegression()
model.fit(X_train, y_train)
print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train))))
print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test))))
coeffs = pd.Series(model.coef_.ravel(), index=X.columns)
model.fit(X_train_sc, y_train)
print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc))))
print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc))))
coeffs_sc = pd.Series(model.coef_.ravel(), index=X.columns)
plt.figure(figsize=(15, 5))
plt.subplot(121)
coeffs.plot(kind='barh', title='Unscaled Age2')
plt.subplot(122)
coeffs_sc.plot(kind='barh', title='Scaled Age2')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Only the coefficients of the rescaled features can be interpreted as feature importances. Exercise 2: Experiment with another classifier, for example `DecisionTreeClassifier`, `RandomForestClassifier`, `SVC`, `MLPClassifier`, `SGDClassifier` or any other classifier of choice you can find here: http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html. - Train the model on both the scaled data and on the unscaled data- Compare the score for the scaled and unscaled data- How can you get the feature importances for tree based models? Check [here](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html) for some help.- Which classifiers are impacted by the age rescale? Why?
###Code
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)
print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train))))
print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test))))
coeffs = pd.Series(model.feature_importances_, index=X.columns)
coeffs.plot(kind='barh')
model.fit(X_train_sc, y_train)
print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc))))
print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc))))
coeffs = pd.Series(model.feature_importances_, index=X.columns)
coeffs.plot(kind='barh')
###Output
_____no_output_____
###Markdown
Exercise 3: Pick your preferred classifier from Exercise 2 and search for the best hyperparameters. You can read about hyperparameter search [here](http://scikit-learn.org/stable/modules/grid_search.html)- Decide the range of hyperparameters you intend to explore- Try using [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) to perform brute force search- Try using [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV) for a random search- Once you've chosen the best classifier and the best hyperparameter set, redo the learning curve. Do you need more data or a better model?
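For the brute-force option mentioned above, a minimal `GridSearchCV` sketch over a deliberately small grid could look like the following (illustrative only; the provided solution below uses `RandomizedSearchCV`):
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
# a small, explicit grid: every combination is evaluated with 3-fold CV
param_grid = {"max_depth": [3, 5, None],
              "min_samples_split": [2, 5, 10],
              "criterion": ["gini", "entropy"]}
grid = GridSearchCV(RandomForestClassifier(n_estimators=20),
                    param_grid=param_grid, cv=3, n_jobs=-1)
grid.fit(X_train, y_train)
grid.best_params_, grid.best_score_
###Output
_____no_output_____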
###Code
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint
param_dist = {"max_depth": [3, None],
"max_features": sp_randint(1, 11),
"min_samples_split": sp_randint(2, 11),
"min_samples_leaf": sp_randint(1, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
clf = RandomForestClassifier(n_estimators=20)
model = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=40, n_jobs=-1)
model.fit(X_train, y_train)
model.best_score_
model.score(X_test, y_test)
best = model.best_estimator_
best.fit(X_train, y_train)
best.score(X_test, y_test)
train_sizes, train_scores, test_scores = learning_curve(best, X, y, train_sizes=tsz)
fig = plt.figure()
plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores")
plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores")
plt.title('Learning Curve: Best Random Forest Model')
plt.ylim((0.5, 1.0))
plt.legend()
plt.draw()
plt.show()
###Output
_____no_output_____
###Markdown
Improving performance
###Code
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Load the data
df = pd.read_csv('../data/new_titanic_features.csv')
# Create Features and Labels
X = df[['Male', 'Family',
'Pclass2_one', 'Pclass2_two', 'Pclass2_three',
'Embarked_C', 'Embarked_Q', 'Embarked_S',
'Age2', 'Fare3_Fare11to50', 'Fare3_Fare51+', 'Fare3_Fare<=10']]
y = df['Survived']
X.describe()
from sklearn.model_selection import train_test_split
# Train test split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=.2, random_state=0)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='liblinear')
model.fit(X_train, y_train)
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
print('Train Accuracy: {:0.3}'.format(accuracy_score(y_train, pred_train)))
print('Test Accuracy: {:0.3}'.format(accuracy_score(y_test, pred_test)))
confusion_matrix(y_test, pred_test)
print(classification_report(y_test, pred_test))
###Output
_____no_output_____
###Markdown
Feature importances (wrong! see exercise 1)
###Code
coeffs = pd.Series(model.coef_.ravel(), index=X.columns)
coeffs
coeffs.plot(kind='barh')
###Output
_____no_output_____
###Markdown
Cross Validation
###Code
from sklearn.model_selection import cross_val_score, ShuffleSplit
cv = ShuffleSplit(n_splits=5, test_size=.4, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
scores
'Crossval score: %0.3f +/- %0.3f ' % (scores.mean(), scores.std())
###Output
_____no_output_____
###Markdown
Learning curve
###Code
from sklearn.model_selection import learning_curve
tsz = np.linspace(0.1, 1, 10)
train_sizes, train_scores, test_scores = learning_curve(model, X, y, train_sizes=tsz, cv=3)
fig = plt.figure()
plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores")
plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores")
plt.title('Learning Curve: Logistic Regression')
plt.ylim((0.5, 1.0))
plt.legend()
plt.draw()
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 1: Try rescaling the Age feature with [`preprocessing.StandardScaler`](http://scikit-learn.org/stable/modules/preprocessing.html) so that it will have comparable size to the other features.- Do the model predictions change?- Does the performance of the model change?- Do the feature importances change?- How can you explain what you've observed?
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train[['Age2']])
X_train_sc = X_train.copy()
X_test_sc = X_test.copy()
X_train_sc['Age2'] = sc.transform(X_train[['Age2']])
X_test_sc['Age2'] = sc.transform(X_test[['Age2']])
model = LogisticRegression(solver='liblinear')
model.fit(X_train, y_train)
print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train))))
print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test))))
coeffs = pd.Series(model.coef_.ravel(), index=X.columns)
model.fit(X_train_sc, y_train)
print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc))))
print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc))))
coeffs_sc = pd.Series(model.coef_.ravel(), index=X.columns)
plt.figure(figsize=(15, 5))
plt.subplot(121)
coeffs.plot(kind='barh', title='Unscaled Age2')
plt.subplot(122)
coeffs_sc.plot(kind='barh', title='Scaled Age2')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Only the coefficients of the rescaled features can be interpreted as feature importances. Exercise 2: Experiment with another classifier, for example `DecisionTreeClassifier`, `RandomForestClassifier`, `SVC`, `MLPClassifier`, `SGDClassifier` or any other classifier of choice you can find here: http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html. - Train the model on both the scaled data and on the unscaled data- Compare the score for the scaled and unscaled data- How can you get the feature importances for tree based models? Check [here](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html) for some help.- Which classifiers are impacted by the age rescale? Why?
###Code
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=30)
model.fit(X_train, y_train)
print('Train Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train))))
print('Test Accuracy (not scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test))))
coeffs = pd.Series(model.feature_importances_, index=X.columns)
coeffs.plot(kind='barh')
model.fit(X_train_sc, y_train)
print('Train Accuracy (scaled): {:0.3}'.format(accuracy_score(y_train, model.predict(X_train_sc))))
print('Test Accuracy (scaled): {:0.3}'.format(accuracy_score(y_test, model.predict(X_test_sc))))
coeffs = pd.Series(model.feature_importances_, index=X.columns)
coeffs.plot(kind='barh')
###Output
_____no_output_____
###Markdown
Exercise 3: Pick your preferred classifier from Exercise 2 and search for the best hyperparameters. You can read about hyperparameter search [here](http://scikit-learn.org/stable/modules/grid_search.html)- Decide the range of hyperparameters you intend to explore- Try using [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV) to perform brute force search- Try using [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html#sklearn.model_selection.RandomizedSearchCV) for a random search- Once you've chosen the best classifier and the best hyperparameter set, redo the learning curve. Do you need more data or a better model?
###Code
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint as sp_randint
param_dist = {"max_depth": [3, None],
"max_features": sp_randint(1, 11),
"min_samples_split": sp_randint(2, 11),
"min_samples_leaf": sp_randint(1, 11),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
clf = RandomForestClassifier(n_estimators=20)
model = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=40, n_jobs=-1, cv=3)
model.fit(X_train, y_train)
model.best_score_
model.score(X_test, y_test)
best = model.best_estimator_
best.fit(X_train, y_train)
best.score(X_test, y_test)
train_sizes, train_scores, test_scores = learning_curve(best, X, y, train_sizes=tsz, cv=3)
fig = plt.figure()
plt.plot(train_sizes, train_scores.mean(axis=1), 'ro-', label="Train Scores")
plt.plot(train_sizes, test_scores.mean(axis=1), 'go-', label="Test Scores")
plt.title('Learning Curve: Best Random Forest Model')
plt.ylim((0.5, 1.0))
plt.legend()
plt.draw()
plt.show()
###Output
_____no_output_____ |
Hyperparameter_Generated/Hyperparameter_Simple_With_Project_Scope.ipynb | ###Markdown
Hyperparameter Database (Submitted to RISE: Approved)

Hyperparameters are parameters that are specified prior to running machine learning algorithms and that have a large effect on the predictive power of statistical models. Knowledge of the relative importance of a hyperparameter to an algorithm and its range of values is crucial to hyperparameter tuning and creating effective models. To either experts or non-experts, determining hyperparameters that optimize model performance can be a tedious and difficult task. Therefore, we develop a hyperparameter database that allows users to visualize and understand how to choose hyperparameters that maximize the predictive power of their models. The database is created by running millions of hyperparameter values over thousands of public datasets and calculating the individual conditional expectation of every hyperparameter on the quality of a model. We analyze the effect of hyperparameters on algorithms such as Distributed Random Forest (DRF), Generalized Linear Model (GLM), Gradient Boosting Machine (GBM), and several more. Consequently, the database attempts to provide a one-stop platform for data scientists to identify the hyperparameters that have the most effect on their models, in order to speed up the process of developing effective predictive models. Moreover, the database will also use these public datasets to build models that can predict hyperparameters without search, and for visualizing and teaching concepts such as statistical power and the bias/variance tradeoff. The raw data will also be publicly available for the research community.

What are the hyperparameters?

Hyperparameters are parameters that are specified prior to running machine learning algorithms and that have a large effect on the predictive power of statistical models. They are specified for tuning purposes, for example:
* learningrate - Learning rate
* n_layers - Number of layers
* n_neurons - Number of neurons
* Hidden layers - Number of layers and the size of each layer

Hyperparameters are important because they directly control the behaviour of the training algorithm and have a significant impact on the performance of the model that is being trained.
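As a concrete illustration (not part of the original notebook flow, which relies on AutoML below): explicitly setting a few hyperparameters on a single H2O GBM looks like the sketch below. The names `ntrees`, `max_depth` and `learn_rate` are the same hyperparameters inspected on the leaderboard models later on; the training call is commented out because `X`, `y` and `df` are only defined further down.
###Code
from h2o.estimators.gbm import H2OGradientBoostingEstimator
# hyperparameters are fixed *before* training and control how the algorithm learns
gbm = H2OGradientBoostingEstimator(ntrees=50,      # number of boosting rounds
                                   max_depth=5,    # maximum depth of each tree
                                   learn_rate=0.1) # shrinkage applied to each tree
# gbm.train(x=X, y=y, training_frame=df)  # same X, y and df as defined below
###Output
_____no_output_____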
###Code
import h2o
from h2o.automl import H2OAutoML
import random, os, sys
from datetime import datetime
import pandas as pd
import logging
import csv
import optparse
import time
import json
from distutils.util import strtobool
import psutil
import warnings
warnings.filterwarnings('ignore')
port_no=random.randint(5555,55555)
h2o.init(strict_version_check=False,min_mem_size_GB=min_mem_size,port=port_no)
#importing data to the server
df = h2o.import_file(path="./Dataset/loan.csv")
###Output
Parse progress: |█████████████████████████████████████████████████████████| 100%
###Markdown
We try to predict if it is a bad loan, by taking Loan dataset as an example
###Code
#Checking the heads
df.head()
# Assume the following are passed by the user from the web interface
'''
Need a user id and project id?
'''
target='bad_loan'
data_file='loan.csv'
run_time=333
run_id='SOME_ID_20180617_221529' # Just some arbitrary ID
server_path='./Dataset/'
classification=True
scale=False
max_models=None
balance_y=False # balance_classes=balance_y
balance_threshold=0.2
project ="automl_test" # project_name = project
###Output
_____no_output_____
###Markdown
All that we need is the `target`, and our AI software does the rest.
###Code
# assign target and inputs for logistic regression
y = target
X = [name for name in df.columns if name != y]
print(y)
print(X)
# impute missing values
_ = df[reals].impute(method='mean')
_ = df[ints].impute(method='median')
if scale:
df[reals] = df[reals].scale()
df[ints] = df[ints].scale()
# set target to factor for classification by default or if user specifies classification
if classification:
df[y] = df[y].asfactor()
df[y].levels()
# Use local data file or download from some type of bucket
import os
data_path=os.path.join(server_path,data_file)
data_path
if classification:
class_percentage = y_balance=df[y].mean()[0]/(df[y].max()-df[y].min())
if class_percentage < balance_threshold:
balance_y=True
print(run_time)
type(run_time)
# automl
# runs for run_time seconds then builds a stacked ensemble
aml = H2OAutoML(max_runtime_secs=run_time,project_name = project) # init automl, run for 300 seconds
aml.train(x=X,
y=y,
training_frame=df)
###Output
AutoML progress: |████████████████████████████████████████████████████████| 100%
Parse progress: |█████████████████████████████████████████████████████████| 100%
###Markdown
We run thousands of hyperparamter combinations and select the best out of it.
###Code
# view leaderboard
lb = aml.leaderboard
lb
aml_leaderboard_df=aml.leaderboard.as_data_frame()
model_set=aml_leaderboard_df['model_id']
mod_best=h2o.get_model(model_set[3])
mod_best.params
###Output
_____no_output_____
###Markdown
'ntrees': {'default': 50, 'actual': 33}, 'max_depth': {'default': 5, 'actual': 4}, 'learn_rate': {'default': 0.1, 'actual': 0.8}

We check the plot of each hyperparameter against its values to learn the best value range, and we do the same for the other hyperparameters we have. Not just that, we also see the importance of hyperparameters through the plots. We will develop novel hyperparameter interpretability metrics, inspired by model interpretability metrics, such as:
* Global surrogate models
* Word embeddings
* Individual conditional expectation (ICE) plots
* K-local interpretable model-agnostic explanations (K-LIME)
* Leave-one-covariance (LOCO)
* Local feature importance
* Partial dependency plots
* Random forest feature importance
* Standardized coefficient importance
* Visualization of neural network layers
* Generalized low rank estimators
* Feature extraction and ranking
* Accumulated local effects (ALE)
* Shapley values

Currently the hyperparameter database analyzes the effect of hyperparameters on the following algorithms:
* Distributed Random Forest (DRF)
* Generalized Linear Model (GLM)
* Gradient Boosting Machine (GBM)
* Naïve Bayes Classifier
* Stacked Ensembles
* XGBoost
* Deep Learning Models (Neural Networks)

Data dump for hyperparameter researchers and Kaggle competition
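A sketch (illustrative only) of what plotting a hyperparameter against model quality can look like with the objects created above: it walks the leaderboard in `model_set`, collects the actual `ntrees` value of every model that exposes it, and scatters it against the training AUC.
###Code
import matplotlib.pyplot as plt
ntrees_vals, aucs = [], []
for model_id in model_set:
    mdl = h2o.get_model(model_id)
    # only models that expose an 'ntrees' hyperparameter (e.g. GBM, DRF, XGBoost)
    if 'ntrees' in mdl.params:
        ntrees_vals.append(mdl.params['ntrees']['actual'])
        aucs.append(mdl.auc())
plt.scatter(ntrees_vals, aucs)
plt.xlabel('ntrees (actual value)')
plt.ylabel('training AUC')
plt.show()
###Output
_____no_output_____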
###Code
from IPython.display import IFrame
###Output
_____no_output_____
###Markdown
Database - UML Diagram
###Code
IFrame(src='./HP_Database_UML_Diagram.html', width=900, height=700)
###Output
_____no_output_____ |
lectures/Yaml.ipynb | ###Markdown
Configuring with YAML

This example shows how a report can be configured with a `YAML` file. As the report-producing factories we take the factories created in the previous lessons, but change them so that the report is built by loading a `yaml` file.

The report's YAML file

Let's define the string variables `yml_MD` and `yml_HTML`, which hold the contents of the configuration files for the `Markdown` and `HTML` reports respectively. For the `Markdown` report:
###Code
yml_MD = '''
--- !MDreport # indicates that the structure below is of type MDreport
objects: # holds the anchors
  - &img !img # the img anchor stores an object of type img
    alt_text: coursera # image description
    src: "https://blog.coursera.org/wp-content/uploads/2017/07/coursera-fb.png" # image address
report: !report # contains the report itself
  filename: report_yaml.md # name of the report file
  title: !!str Report # report title - a string parameter (!!str) "Report"
  parts: # report contents - a list of parts (each part starts with "-")
    - !chapter # first part of the report - an object of type "chapter"
      caption: "chapter one" # heading of the first part
      parts: # contents of the first part - the list below
        # the first item is plain text.
        # a '>' symbol here would make the whole block below the content with line breaks ignored;
        # to preserve line breaks the '|' symbol is used
        - |
          chapter
          1
          text
        - !link # then a link
          obj: coursera # link text
          href: "https://ru.coursera.org" # link target
    - !chapter # second part of the report - an object of type "chapter"
      caption: "chapter two" # heading of the second part
      parts: # contents of the second part - the list below
        - "Chapter 2 header" # text first
        - !link # then a link
          obj: *img # the object stored under the img anchor (an image) is used as the link
          href: "https://ru.coursera.org" # link target
        - "Chapter 2 footer" # text at the end'''
###Output
_____no_output_____
###Markdown
For the `HTML` report only one thing changes, the report type:
###Code
yml_HTML = '''
--- !HTMLreport # indicates that the structure below is of type HTMLreport
objects:
- &img !img
alt_text: google
src: "https://blog.coursera.org/wp-content/uploads/2017/07/coursera-fb.png"
report: !report
filename: report_yaml.html
title: Report
parts:
- !chapter
caption: "chapter one"
parts:
- "chapter 1 text"
- !link
obj: coursera
href: "https://ru.coursera.org"
- !chapter
caption: "chapter two"
parts:
- "Chapter 2 header"
- !link
obj: *img
href: "https://ru.coursera.org"
- "Chapter 2 footer"'''
###Output
_____no_output_____
###Markdown
Next, let's modify the abstract factory `ReportFactory`.
###Code
import yaml # for working with PyYAML
# ReportFactory is now a subclass of yaml.YAMLObject.
# This is done so that the yaml loader knows about the new data type specified in yaml_tag
# (the tag itself is defined in the factory subclasses)
class ReportFactory(yaml.YAMLObject):
    # the yaml file data (the report structure) is the same for all subclasses.
    # Because of this, building the report from the yaml file is a class method with the special name from_yaml
@classmethod
def from_yaml(Class, loader, node):
        # first, define a handler function for each new type
        # the loader.construct_mapping() method turns the contents of node into a dict
        # handler that creates the !report object
def get_report(loader, node):
data = loader.construct_mapping(node)
rep = Class.make_report(data["title"])
rep.filename = data["filename"]
            # at this point data["parts"] is empty; it will be filled later by the corresponding handler,
            # so keep a reference to it, extending it right away with the parts from rep.parts
data["parts"].extend(rep.parts)
rep.parts = data["parts"]
return rep
        # handler that creates a !chapter part
def get_chapter(loader, node):
data = loader.construct_mapping(node)
ch = Class.make_chapter(data["caption"])
            # same idea as in the previous handler
data["parts"].extend(ch.objects)
ch.objects = data["parts"]
return ch
        # handler that creates a !link
def get_link(loader, node):
data = loader.construct_mapping(node)
lnk = Class.make_link(data["obj"], data["href"])
return lnk
        # handler that creates an !img image
def get_img(loader, node):
data = loader.construct_mapping(node)
img = Class.make_img(data["alt_text"], data["src"])
return img
        # register the handlers
loader.add_constructor(u"!report", get_report)
loader.add_constructor(u"!chapter", get_chapter)
loader.add_constructor(u"!link", get_link)
loader.add_constructor(u"!img", get_img)
        # return the result of the yaml loader - the report
return loader.construct_mapping(node)['report']
    # everything below is unchanged
@classmethod
def make_report(Class, title):
return Class.Report(title)
@classmethod
def make_chapter(Class, caption):
return Class.Chapter(caption)
@classmethod
def make_link(Class, obj, href):
return Class.Link(obj, href)
@classmethod
def make_img(Class, alt_text, src):
return Class.Img(alt_text, src)
###Output
_____no_output_____
###Markdown
Next we take the concrete factories that produce the report elements, and add the mapping between each factory and its `yaml` type.
###Code
class MDreportFactory(ReportFactory):
    yaml_tag = u'!MDreport' # declare which yaml tag this factory handles
class Report:
def __init__(self, title):
self.parts = []
self.parts.append("# "+title+"\n\n")
def add(self, part):
self.parts.append(part)
        def save(self): # changed: the report file name is now taken from the yaml file
try:
file = open(self.filename, "w", encoding="utf-8")
print('\n'.join(map(str, self.parts)), file=file)
finally:
if isinstance(self.filename, str) and file is not None:
file.close()
class Chapter:
def __init__(self, caption):
self.caption = caption
self.objects = []
def add(self, obj):
print(obj)
self.objects.append(obj)
def __str__(self):
return f'## {self.caption}\n\n' + ''.join(map(str, self.objects))
class Link:
def __init__(self, obj, href):
self.obj = obj
self.href = href
def __str__(self):
return f'[{self.obj}]({self.href})'
class Img:
def __init__(self, alt_text, src):
self.alt_text = alt_text
self.src = src
def __str__(self):
return f''
class HTMLreportFactory(ReportFactory):
yaml_tag = u'!HTMLreport'
class Report:
def __init__(self, title):
self.title = title
self.parts = []
self.parts.append("<html>")
self.parts.append("<head>")
self.parts.append("<title>" + title + "</title>")
self.parts.append("<meta charset=\"utf-8\">")
self.parts.append("</head>")
self.parts.append("<body>")
def add(self, part):
self.parts.append(part)
def save(self):
try:
file = open(self.filename, "w", encoding="utf-8")
print('\n'.join(map(str, self.parts)), file=file)
finally:
if isinstance(self.filename, str) and file is not None:
file.close()
class Chapter:
def __init__(self, caption):
self.caption = caption
self.objects = []
def add(self, obj):
self.objects.append(obj)
def __str__(self):
ch = f'<h1>{self.caption}</h1>'
return ch + ''.join(map(str, self.objects))
class Link:
def __init__(self, obj, href):
self.obj = obj
self.href = href
def __str__(self):
return f'<a href="{self.href}">{self.obj}</a>'
class Img:
def __init__(self, alt_text, src):
self.alt_text = alt_text
self.src = src
def __str__(self):
return f'<img alt = "{self.alt_text}", sr c ="{self.src}"/>'
###Output
_____no_output_____
###Markdown
All that's left is to load the `yaml` files and display the result.
###Code
from IPython.display import display, Markdown, HTML
txtreport = yaml.load(yml_MD) # load the yaml config of the markdown report
txtreport.save() # save it
print("Saved:", txtreport.filename) # output
HTMLreport = yaml.load(yml_HTML) # load the yaml config of the HTML report
HTMLreport.save() # save it
print("Saved:", HTMLreport.filename) # output
# Display the results in the Jupyter notebook
display(Markdown('# <span style="color:red">report.md</span>'))
display(Markdown(filename="report_yaml.md"))
display(Markdown('# <span style="color:red">report.html</span>'))
display(HTML(filename="report_yaml.html"))
###Output
Saved: report_yaml.md
Saved: report_yaml.html
|
notebooks/3.0-jmk-scraping_stream_titles.ipynb | ###Markdown
1. Libraries, Configuration, and Importing Queries 1.1 Libraries
###Code
# selenium specific imports
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException
# other imports
import configparser
import time
import pandas as pd
import numpy as np
from datetime import datetime
###Output
_____no_output_____
###Markdown
1.2 Configuration
###Code
# configuration parser initialization
config = configparser.ConfigParser()
config.read('../config.ini')
delay = 10 # waits for 10 seconds for the correct element to appear
###Output
_____no_output_____
###Markdown
1.3 Load csv of Brand Names Search Queries- Brand queries in conjunction with slight modifications were systematically created by Catherine C. Pollack at Dartmouth College.
###Code
query_df = pd.read_csv("../data/queries/Final_Words_List.csv")
query_df.describe()
###Output
_____no_output_____
###Markdown
2. Custom Functions 2.1 Login
###Code
def login_streamhatchet():
driver.get("https://app.streamhatchet.com/")
driver.find_element_by_id("cookiesAccepted").click()
username = driver.find_element_by_name("loginEmail")
username.clear()
username.send_keys(config['login_credentials']['email'])
password = driver.find_element_by_name("loginPassword")
password.clear()
password.send_keys(config['login_credentials']['password'])
driver.find_element_by_xpath("//button[contains(text(),'Login')]").click()
time.sleep(3) # sleep for 3 seconds to let the page load
###Output
_____no_output_____
###Markdown
2.2 Stream Title Search
###Code
def stream_title_search(query, incomplete_queries_list, df):
driver.get("https://app.streamhatchet.com/search/toolstatus")
time.sleep(1)
# Enters query into 'Stream title query'
stream_title_query_input = WebDriverWait(driver, delay).until(EC.element_to_be_clickable((By.XPATH,"//input[@id='status-query']")))
stream_title_query_input.send_keys(query)
# Makes twitch the only platform to search
platform_input = WebDriverWait(driver, delay).until(EC.element_to_be_clickable((By.XPATH,"//input[@class='search']")))
platform_input.click()
platform_input.send_keys(Keys.BACKSPACE)
platform_input.send_keys(Keys.BACKSPACE)
platform_input.send_keys(Keys.BACKSPACE)
# Click to Expand Date Options
driver.find_element_by_xpath("//div[@id='NewRangePicker']").click()
# change the hours and minutes to 0:00 for date from and to
driver.find_element_by_xpath("//div[@class='calendar left']//select[@class='hourselect']//option[1]").click()
driver.find_element_by_xpath("//div[@class='calendar left']//option[contains(text(),'00')]").click()
driver.find_element_by_xpath("//div[@class='calendar right']//select[@class='hourselect']//option[1]").click()
driver.find_element_by_xpath("//div[@class='calendar right']//option[contains(text(),'00')]").click()
# Keep clicking on right_arrow
while driver.find_element_by_xpath("//i[@id='icon-down-New']").is_displayed() == True:
try:
driver.find_element_by_xpath("//i[@class='fa fa-chevron-right glyphicon glyphicon-chevron-right']").click()
except:
break
# Click on first day of the month:
time.sleep(5)
driver.find_element_by_xpath("//div[@class='calendar left']//td[contains(text(), '1')]").click()
time.sleep(5)
driver.find_element_by_xpath("//div[@class='calendar right']//td[contains(text(), '1')]").click()
time.sleep(5)
# Runs the search
driver.find_element_by_xpath("//button[@class='applyBtn btn btn-sm btn-success ui google plus button']").click()
run_button = WebDriverWait(driver, delay).until(EC.element_to_be_clickable((By.XPATH,"//button[@class='medium ui google plus submit button']")))
run_button.click()
# Scrape the Number of Titles
num_titles = WebDriverWait(driver, delay).until(EC.visibility_of_element_located((By.XPATH,"//p[@id='messages-count']")))
num_titles = num_titles.text
# create a row_dict and append it to the df
row_dict = {
'query': query,
'month': "Fill in after, the date selection works properly",
'num_titles':num_titles
}
df = df.append(row_dict, ignore_index = True)
incomplete_queries_list.append(query)
return df
###Output
_____no_output_____
###Markdown
3. Run Stream Titles Search
###Code
df = pd.DataFrame(columns=['query', 'month', 'num_titles'])
incomplete_queries_list = []
driver = webdriver.Chrome()
login_streamhatchet()
stream_title_search("test", incomplete_queries_list, df)
###Output
_____no_output_____ |
notebooks/rain_in_spain.ipynb | ###Markdown
The Rain in Spain - the last 100 years. Data source: The data comes from a subset of The National Centers for Environmental Information (NCEI) [Daily Global Historical Climatology Network](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt) (GHCN-Daily). The GHCN-Daily is comprised of daily climate records from thousands of land surface stations across the globe.
###Code
import pandas as pd
import ipywidgets as widgets
import matplotlib.pyplot as plt
import mplleaflet
# countries and their codes
path='https://rain-in-spain-data.s3.us-east-2.amazonaws.com/'
ctry_df=pd.read_fwf(path+'ghcnd-countries.txt', widths=[2,1,46], header=None, encoding='utf8')
ctry_df.columns=['CODE','NO1','NAME']
ctry_df=ctry_df.drop(columns='NO1')
# metadata for all stations
meta_df = pd.read_fwf(path+'ghcnd-stations.txt', widths=[11,1,8,1,9,1,6,1,2,1,30,1,3,1,3,1,5], header=None, encoding='utf8')
meta_df.columns = ['ID','NO1','LATITUDE','NO2','LONGITUDE','NO3','ELEVATION','NO4','STATE','NO5','NAME',
'NO6','GSN FLAG','NO7','HCN/CRN FLAG','NO8','WMO ID']
meta_df = meta_df.drop(columns=['NO1','NO2','NO3','NO4','NO5','NO6','NO7','NO8'])
meta_df['COUNTRY']=[row[:2] for row in meta_df.ID]
# only stations for Spain
ctry_code='SP'
meta_df = meta_df[meta_df.COUNTRY==ctry_code]
print(f'Number of stations in {ctry_df[ctry_df.CODE==ctry_code].NAME.item()}: {len(meta_df)}')
meta_df.head()
def leaflet_plot_stations(df):
"Map of stations in Spain"
lats, lons = df.LATITUDE.tolist(), df.LONGITUDE.tolist()
plt.figure(figsize=(8,8))
plt.scatter(lons, lats, c='r', alpha=0.7, s=20)
return mplleaflet.display()
leaflet_plot_stations(meta_df)
# select weather station
stations=[*zip(meta_df.NAME,meta_df.ID)]
w=widgets.Dropdown(
options=stations[:500],
value=stations[0][1],
description='Station:',
disabled=False,
)
def on_change_stn(change):
"Code changed from default"
if change['type'] == 'change' and change['name'] == 'value':
print (f"station code {change['new']}")
w.observe(on_change_stn)
display(w)
def gen_col_names(lst):
"select columns of interest"
for i in range(31):
lst=lst+['VALUE'+str(i+1),'MFLAG'+str(i+1),'QFLAG'+str(i+1),'SFLAG'+str(i+1)]
return lst
def gen_drop_col_names(lst):
"drop columns of no interest"
for i in range(31):
lst=lst+['MFLAG'+str(i+1),'QFLAG'+str(i+1),'SFLAG'+str(i+1)]
return lst
# rainfall data for a single station
# PRCP = Precipitation (tenths of mm)
station=w.value
print('Daily',meta_df[meta_df.ID==station].NAME.item())
df=pd.read_fwf(path+'SP_dly/'+station+'.dly', widths=[11,4,2,4,]+[5,1,1,1]*31, header=None)
df.columns=gen_col_names(['ID','YEAR','MONTH','ELEMENT'])
df=df[df.ELEMENT=='PRCP'].drop(columns=gen_drop_col_names(['ID','ELEMENT']))
df=df.melt(['YEAR','MONTH']).sort_values(['YEAR','MONTH']).reset_index(drop=True)
df=df[df.value>-9999]
df['variable'] = [row[5:] for row in df.variable]
df['date']=pd.to_datetime(df[['YEAR', 'MONTH', 'variable']].rename(columns={'YEAR': 'year', 'MONTH': 'month', 'variable': 'day'}))
df=df.drop(columns=['YEAR','MONTH','variable']).rename(columns={'value':'PRECP'})
df.set_index('date', inplace=True)
df.tail()
mthly_df=df.resample('MS').sum()
print ('Monthly')
mthly_df.head()
plt.scatter(mthly_df.index[-60:],mthly_df.PRECP[-60:])
plt.title('monthly rainfall last 5 years');
plt.scatter(mthly_df.index[-120:-60],mthly_df.PRECP[-120:-60])
plt.title('monthly rainfall previous 5 years');
yrly_df=df.resample('YS').sum()
print ('Yearly')
yrly_df.head()
plt.scatter(yrly_df.index,yrly_df.PRECP);
plt.scatter(yrly_df.index[-60:],yrly_df.PRECP[-60:]);
plt.scatter(mthly_df.index[-60*12:],mthly_df.PRECP[-60*12:]);
qtly_df=df.resample('QS').sum()
qtly_df.head()
plt.scatter(qtly_df.index,qtly_df.PRECP);
%load_ext watermark
%watermark --iversions -p matplotlib,mplleaflet,watermark,pylint
#!git add .
#!git commit -m 'updated notebook'
!jupyter nbconvert --to=script --output-dir=/tmp/converted-notebooks/ ./rain_in_spain.ipynb
!pylint /tmp/converted-notebooks/rain_in_spain.py --disable=C,E0602,W0301,W0621
###Output
[NbConvertApp] Converting notebook ./rain_in_spain.ipynb to script
[NbConvertApp] Writing 4573 bytes to /tmp/converted-notebooks/rain_in_spain.py
--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)
|
2_pytorch/convnet-classifier.ipynb | ###Markdown
PyTorch data. PyTorch comes with a nice paradigm for dealing with data which we'll use here. A PyTorch [`Dataset`](http://pytorch.org/docs/master/data.html#torch.utils.data.Dataset) knows where to find data in its raw form (files on disk) and how to load individual examples into Python data structures. A PyTorch [`DataLoader`](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader) takes a dataset and offers a variety of ways to sample batches from that dataset. Take a moment to browse through the `CIFAR10` `Dataset` in `2_pytorch/cifar10.py`, read the `DataLoader` documentation linked above, and see how these are used in the section of `train.py` that loads data. Note that in the first part of the homework we subtracted a mean CIFAR10 image from every image before feeding it into our models. Here we subtract a constant color instead. Both methods are seen in practice and work equally well. PyTorch provides lots of vision datasets which can be imported directly from [`torchvision.datasets`](http://pytorch.org/docs/master/torchvision/datasets.html). Also see [`torchtext`](https://github.com/pytorch/text#datasets) for natural language datasets. ConvNet Classifier in PyTorch. In PyTorch, Deep Learning building blocks are implemented in the neural network module [`torch.nn`](http://pytorch.org/docs/master/nn.html) (usually imported as `nn`). A PyTorch model is typically a subclass of [`nn.Module`](http://pytorch.org/docs/master/nn.html#torch.nn.Module) and thereby gains a multitude of features. Because your logistic regressor is an `nn.Module`, all of its parameters and sub-modules are accessible through the `.parameters()` and `.modules()` methods. Now implement a ConvNet classifier by filling in the marked sections of `models/convnet.py`. The main driver for this question is `train.py`. It reads arguments and model hyperparameters from the command line, and loads CIFAR10 data and the specified model (in this case, the ConvNet). Using the optimizer initialized with appropriate hyperparameters, it trains the model and reports performance on test data. Complete the following couple of sections in `train.py`: 1. Initialize an optimizer from the torch.optim package 2. Update the parameters in the model using the optimizer initialized above. At this point all of the components required to train the classifier are complete. Now run `$ run_convnet.sh` to train a model and save it to `convnet.pt`. This will also produce a `convnet.log` file which contains training details which we will visualize below. **Note**: You may want to adjust the hyperparameters specified in `run_convnet.sh` to get reasonable performance. Visualizing the PyTorch model
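A quick aside before the visualization: the following is a minimal, hedged sketch of the `Dataset`/`DataLoader` and optimizer-update pattern described above. It uses torchvision's built-in CIFAR10 and a placeholder linear model rather than the homework's `cifar10.py` and `models/convnet.py`, so treat it as an illustration of the pattern, not as the assignment code.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Dataset + DataLoader (torchvision's CIFAR10 here, not the homework's cifar10.py)
train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True,
                                          transform=T.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # placeholder model
criterion = nn.CrossEntropyLoss()

# 1. Initialize an optimizer from the torch.optim package
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for images, labels in train_loader:
    optimizer.zero_grad()                     # clear old gradients
    loss = criterion(model(images), labels)   # forward pass + loss
    loss.backward()                           # backpropagate
    optimizer.step()                          # 2. update the parameters using the optimizer
    break                                     # one batch is enough for this illustration
```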
###Code
# Assuming that you have completed training the classifier, let us plot the training loss vs. iteration. This is an
# example to show a simple way to log and plot data from PyTorch.
# we need matplotlib to plot the graphs for us!
import matplotlib
# This is needed to save images
matplotlib.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
# Parse the train and val losses one line at a time.
import re
# regexes to find train and val losses on a line
float_regex = r'[-+]?(\d+(\.\d*)?|\.\d+)([eE][-+]?\d+)?'
train_loss_re = re.compile('.*Train Loss: ({})'.format(float_regex))
val_loss_re = re.compile('.*Val Loss: ({})'.format(float_regex))
val_acc_re = re.compile('.*Val Acc: ({})'.format(float_regex))
# extract one loss for each logged iteration
train_losses = []
val_losses = []
val_accs = []
# NOTE: You may need to change this file name.
with open('convnet.log', 'r') as f:
for line in f:
train_match = train_loss_re.match(line)
val_match = val_loss_re.match(line)
val_acc_match = val_acc_re.match(line)
if train_match:
train_losses.append(float(train_match.group(1)))
if val_match:
val_losses.append(float(val_match.group(1)))
if val_acc_match:
val_accs.append(float(val_acc_match.group(1)))
fig = plt.figure()
plt.plot(train_losses, label='train')
plt.plot(val_losses, label='val')
plt.title('ConvNet Learning Curve')
plt.ylabel('loss')
plt.legend()
fig.savefig('convnet_lossvstrain.png')
fig = plt.figure()
plt.plot(val_accs, label='val')
plt.title('ConvNet Validation Accuracy During Training')
plt.ylabel('accuracy')
plt.legend()
fig.savefig('convnet_valaccuracy.png')
###Output
_____no_output_____ |
2020WinterIPS-Tech/.ipynb_checkpoints/PythonBasic-01-checkpoint.ipynb | ###Markdown
---- The classic star-printing game
###Code
print("*")
print("**")
print("***")
print("*")
print("**")
print("***")
print("*")
print("**")
print("***")
print("****")
print("*****")
print("*****")
print("*****")
print("*****")
print("*****")
print("*****")
# print a line of stars, using a for loop
for i in range(20):
print('*')
for i in range(20):
print('*',end='')
# target pattern:
# *****
# *****
# *****
# *****
# *****
for rowIndex in range(5): # rows
    for columnIndex in range(5): # columns
        print("*",end='') # this print handles the columns
    print('')
# Homework 1: print the pattern
# *
# **
# ***
# ****
# *****
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(5):  # rows
    for j in range(i+1):  # columns
        print("*",end='')
    print()
# Homework 2: print the pattern
# *
# ***
# *****
# *******
# *********
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(5):  # rows
    for j in range(2*i+1):  # columns
        print("*",end='')
    print()
# Homework 3: print a centred pyramid
# *
# ***
# *****
# *******
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(4):  # rows
    for j in range(3-i):  # leading spaces
        print(" ",end='')
    for j in range(2*i+1):  # stars
        print("*",end='')
    print()
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(5):  # rows
    for j in range(2*i+1):  # columns
        print("*",end='')
    print()
# Homework 4: print the pattern
# *
# **
# ***
# ****
# *****
# ******
# Homework 5: print a diamond
# *
# ***
# *****
# *******
# *****
# ***
# *
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(4):  # rows (upper half of the diamond)
    for j in range(3-i):  # leading spaces
        print(" ",end='')
    for j in range(2*i+1):  # stars
        print("*",end='')
    print()
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(3):  # rows (lower half of the diamond)
    for j in range(1+i):  # leading spaces
        print(" ",end='')
    for j in range(5-2*i):  # stars
        print("*",end='')
    print()
# Loops and conditionals
# target pattern:
# *******
# *****
# ***
# *
# All star-printing problems come down to finding how the column count changes with the row index
for i in range(4):  # rows
    for j in range(i):  # leading spaces
        print(" ",end='')
    for j in range(7-2*i):  # stars
        print("*",end='')
    print()
###Output
_____no_output_____
###Markdown
---
###Code
import math
math.ceil(4.1)
import random
random.random()
random.random()
math.pi * 2
print('Monday')
print("Monday")
print('''
Line1
Line2
Line3
''')
print('Today is '+'Saturday')
day = 'Sunday'
print('Today is '+day)
day*2
###Output
_____no_output_____ |
Mall_customers.ipynb | ###Markdown
Mall Customers
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings('ignore')
mall = pd.read_csv("Mall_Customers.csv")
mall.info()
mall.describe()
mall.head()
###Output
_____no_output_____
###Markdown
It is interesting to know how the features are distributed according to gender. Below, we define a function for that purpose
###Code
def mall_chart(feature):
male=mall[mall["Gender"]=="Male"][feature]
female=mall[mall["Gender"]=="Female"][feature]
df = pd.DataFrame([male,female])
df.index = ['Male','Female']
plt.figure(figsize=(10,5))
sns.distplot(female,bins=30,kde=True,color="red")
plt.title("Female")
plt.figure(figsize=(10,5))
sns.distplot(male,bins=30,kde=True,color="blue")
plt.title("Male")
mall_chart("Age")
mall_chart("Annual Income (k$)")
mall_chart("Spending Score (1-100)")
mall.drop("CustomerID",axis=1,inplace=True)
#plt.figure(figsize=(10,5))
sns.pairplot(mall, hue="Gender")
mall.head()
###Output
_____no_output_____
###Markdown
Clustering: the elbow method. K-means is a simple unsupervised machine learning algorithm that groups a dataset into a user-specified number (k) of clusters. The algorithm is somewhat naive: it clusters the data into k clusters, even if k is not the right number of clusters to use. Therefore, when using k-means clustering, users need some way to determine whether they are using the right number of clusters. One method to validate the number of clusters is the elbow method. The idea of the elbow method is to run k-means clustering on the dataset for a range of values of k (say, k from 1 to 10), compute the sum of squared errors (SSE) for each k, and pick the value of k at which the decrease in SSE levels off (the "elbow").
###Code
from sklearn.cluster import KMeans
X=mall[['Annual Income (k$)','Spending Score (1-100)']].values
sse=[]
# range(1,30) is an arbitrary upper bound; we assume our dataset needs no more than 30 clusters
for i in range(1,30):
kmeans = KMeans(n_clusters= i, init='k-means++', random_state=0)
kmeans.fit(X)
sse.append(kmeans.inertia_)
plt.plot(range(1,30), sse)
plt.title('The Elbow Method')
plt.xlabel('number of clusters (k)')
plt.ylabel('Sum of squared errors')
plt.show()
###Output
_____no_output_____
###Markdown
According to the plot above, one finds that the "elbow" occurs at $n=5$
###Code
kmeansmodel = KMeans(n_clusters= 5, init='k-means++', random_state=0)
y_kmeans= kmeansmodel.fit_predict(X)
plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_kmeans == 3, 0], X[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_kmeans == 4, 0], X[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Conclusion. We can see from the plot above that Cluster 3 is our target set; that is, the customers with a high Spending Score and a high Annual Income. Classification after clustering
###Code
mall["label_kmeans"] = y_kmeans
mall.head()
###Output
_____no_output_____
###Markdown
We have to encode Gender as numeric labels, e.g. male=1 and female=0. Our target is Cluster 3; the others are not of interest to us. Hence, let us label cluster 3 = 1 and the others = 0. We also have to normalize the data
###Code
gender_01=[1 if each=="Male" else 0 for each in mall["Gender"]]#converting male=1 and female=0.
gender_01_df=pd.DataFrame(data=gender_01,columns=["Gender"])
mall["Gender"]=gender_01_df["Gender"]
label_kmeans_01=[1 if each==3 else 0 for each in mall["label_kmeans"]]#converting cluster3=1 others=0.
label_kmeans_01_df=pd.DataFrame(data=label_kmeans_01,columns=["label_kmeans"])
mall["label_kmeans"]=label_kmeans_01_df["label_kmeans"]
mall.head()
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
y = mall["label_kmeans"].values
x = mall.drop(["label_kmeans"],axis=1)
X_train,X_test,y_train,y_test = train_test_split(x,y,test_size=.2,random_state=42)
scaler = MinMaxScaler()  # scales the features between 0 and 1.
X_train_scaled = scaler.fit_transform(X_train)
X_train = pd.DataFrame(X_train_scaled)
X_test_scaled = scaler.fit_transform(X_test)
X_test = pd.DataFrame(X_test_scaled)
seed = 7
scoring = 'accuracy'
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))
results = []
names = []
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state=seed)
cv_results = model_selection.cross_val_score(model, X_train, y_train,cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
from sklearn import metrics
lr=LogisticRegression().fit(X_train,y_train)
prob_lr=lr.predict_proba(X_train)
lda=LinearDiscriminantAnalysis().fit(X_train,y_train)
prob_lda=lda.predict_proba(X_train)
knn=KNeighborsClassifier().fit(X_train,y_train)
prob_knn=knn.predict_proba(X_train)
cart=DecisionTreeClassifier().fit(X_train,y_train)
prob_cart=cart.predict_proba(X_train)
gnb=GaussianNB().fit(X_train,y_train)
prob_gnb=gnb.predict_proba(X_train)
svm=SVC(probability=True).fit(X_train,y_train)
prob_svm=svm.predict_proba(X_train)
#Compute the ROC curve: true positives/false positives
tpr_lr,fpr_lr,thresh_lr=metrics.roc_curve(y_train,prob_lr[:,0])
tpr_lda,fpr_lda,thresh_lda=metrics.roc_curve(y_train,prob_lda[:,0])
tpr_knn,fpr_knn,thresh_knn=metrics.roc_curve(y_train,prob_knn[:,0])
tpr_cart,fpr_cart,thresh_cart=metrics.roc_curve(y_train,prob_cart[:,0])
tpr_gnb,fpr_gnb,thresh_gnb=metrics.roc_curve(y_train,prob_gnb[:,0])
tpr_svm,fpr_svm,thresh_svm=metrics.roc_curve(y_train,prob_svm[:,0])
#Area under Curve (AUC)
from sklearn.metrics import auc
roc_auc_lr = auc(fpr_lr, tpr_lr)
roc_auc_lda = auc(fpr_lda, tpr_lda)
roc_auc_knn = auc(fpr_knn, tpr_knn)
roc_auc_cart = auc(fpr_cart, tpr_cart)
roc_auc_gnb = auc(fpr_gnb, tpr_gnb)
roc_auc_svm = auc(fpr_svm, tpr_svm)
#Plotting the ROC curves
plt.figure()
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_lr, tpr_lr, label='LR, ROC curve (area = %0.2f)' % roc_auc_lr)
plt.plot(fpr_lda, tpr_lda, label='LDA, ROC curve (area = %0.2f)' % roc_auc_lda)
plt.plot(fpr_knn, tpr_knn, label='KNN, ROC curve (area = %0.2f)' % roc_auc_knn)
plt.plot(fpr_cart, tpr_cart, label='CART, ROC curve (area = %0.2f)' % roc_auc_cart)
plt.plot(fpr_gnb, tpr_gnb, label='NB, ROC curve (area = %0.2f)' % roc_auc_gnb)
plt.plot(fpr_svm, tpr_svm, label='SVC, ROC curve (area = %0.2f)' % roc_auc_svm)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
# Make predictions on validation dataset
print("--------------------------")
print("LogisticRegression Report")
print("--------------------------")
predictions_lr = lr.predict(X_test)
print("accuracy =",accuracy_score(y_test, predictions_lr))
print("confusion matrix",confusion_matrix(y_test, predictions_lr))
print(classification_report(y_test, predictions_lr))
print("--------------------------")
print("LinearDiscriminantAnalysis Report")
print("--------------------------")
predictions_lda = lda.predict(X_test)
print("accuracy =",accuracy_score(y_test, predictions_lda))
print("confusion matrix",confusion_matrix(y_test, predictions_lda))
print(classification_report(y_test, predictions_lda))
print("--------------------------")
print("KNeighborsClassifier Report")
print("--------------------------")
predictions_knn = knn.predict(X_test)
print("accuracy =",accuracy_score(y_test, predictions_knn))
print("confusion matrix",confusion_matrix(y_test, predictions_knn))
print(classification_report(y_test, predictions_knn))
print("--------------------------")
print("DecisionTreeClassifier Report")
print("--------------------------")
predictions = cart.predict(X_test)
print("accuracy =",accuracy_score(y_test, predictions))
print("confusion matrix",confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
print("--------------------------")
print("GaussianNB Report")
print("--------------------------")
predictions_gnb = gnb.predict(X_test)
print("accuracy =",accuracy_score(y_test, predictions_gnb))
print("confusion matrix",confusion_matrix(y_test, predictions_gnb))
print(classification_report(y_test, predictions_gnb))
print("--------------------------")
print("SVC Report")
print("--------------------------")
predictions_svm = svm.predict(X_test)
print("accuracy =",accuracy_score(y_test, predictions_svm))
print("confusion matrix",confusion_matrix(y_test, predictions_svm))
print(classification_report(y_test, predictions_svm))
import numpy as np
y = np.array([accuracy_score(y_test, predictions_lr),accuracy_score(y_test, predictions_lda),accuracy_score(y_test, predictions_knn),accuracy_score(y_test, predictions),accuracy_score(y_test, predictions_gnb),accuracy_score(y_test, predictions_svm)])
x = ['LogisticRegression','LinearDiscriminantAnalysis','KNeighborsClassifier','DecisionTreeClassifier','GaussianNB','SVM']
plt.bar(x,y)
plt.title("Comparison of Regression Algorithms")
plt.xticks(rotation=90)
plt.xlabel("Classifier")
plt.ylabel("accuracy score")
plt.show()
###Output
_____no_output_____ |
Information_Retreival_System.ipynb | ###Markdown
Scraper Ready
###Code
# Query handling
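# The query is turned into a binary bag-of-words vector over the vocabulary (w2n, built earlier),
# and each document is scored by the dot product of that vector with the term matrix (doc_matrix),
# i.e. by how much it overlaps with the query terms; documents are then ranked by that score.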
while True:
query = input("\nWhat do you want to buy? ")
query = query.lower().split()
query = str(query).translate(str.maketrans(string.punctuation,
" " * len(string.punctuation))) # de-contaminated STRING
query = query.split() # de-contaminated LIST
# Creating query matrix
query_matrix = np.zeros((cols))
# Obtaining id of the queried word from w2n dictionary
count = 0
for token in query:
if token in w2n:
uid = w2n[token]
query_matrix[uid] = 1
count += 1
if count == 0:
print("Your search ", query, "did not match any documents.")
else:
# Dot Product
transpose = doc_matrix.T
dot_prod = query_matrix.dot(transpose)
# Used in elimination
descending_scores = np.sort(dot_prod)[::-1]
# Ranking the pages
descending_filenos = np.argsort(dot_prod)[::-1][:no_of_ads_to_be_fetched]
# Eliminating files with 0 matches
count = 0
for score in descending_scores:
if score < 1:
break
else:
count += 1
# Printing the matched results
print("Your results were matched in following files:")
for i in range(0, count):
filename = str(descending_filenos[i] + 1) + ".txt"
print(filename)
again = ""
again = input("\n**Search again? [y / any key]: ")
if again.lower() == 'y':
continue
else:
sys.exit(0)
###Output
What do you want to buy? refrigerated box
Your results were matched in following files:
3.txt
**Search again? [y / any key]: y
What do you want to buy? _+)*(*&^^%%^#@[]refrigerated+_))*()*(^box../';'
Your results were matched in following files:
3.txt
**Search again? [y / any key]: y
What do you want to buy? chAiRs
Your results were matched in following files:
2.txt
**Search again? [y / any key]: y
What do you want to buy? jumperoo
Your results were matched in following files:
1.txt
**Search again? [y / any key]: n
|
Notebooks/TP1.POC/TP1.reg2.ipynb | ###Markdown
Something is still needed to indicate which environment we are going to work with. Import what is needed
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import re
data_url = "../Data/properatti.csv"
data = pd.read_csv(data_url, encoding="utf-8")
# drop the rows with NaN in the price
data = data.dropna(axis=0, how='any', subset=['price_aprox_usd'])
# function to remove outliers.
def borrar_outliers(data, columnas):
    """Only accepts columns with numeric values.
    The columns are passed as a tuple"""
    cols_limpiar = columnas
    mask=np.ones(shape=(data.shape[0]), dtype=bool)
    for i in cols_limpiar:
        # compute the quartiles and the cutoff values
        Q1=data[i].quantile(0.25)
        Q3=data[i].quantile(0.75)
        RSI=Q3-Q1
        max_value=Q3+1.5*RSI
        min_value=Q1-1.5*RSI
        # adjust the min value by hand... it cannot be negative.
        min_value=10
        # filter by max and min
        mask=np.logical_and(mask, np.logical_and(data[i]>=min_value, data[i]<=max_value))
    return data[mask]
def regex_to_bool(col, reg) :
u"""Returns a series with boolean mask result of apply the regular expresion to the column
col : column where to apply regular expresion
reg : regular expresion compiled
"""
serie = col.apply(lambda x : x if x is np.NaN else reg.search(x))
serie = serie.apply(lambda x : x is not None)
return serie
def regex_to_ones(col, reg, fill = 0) :
u"""Returns a series with ones or other value result of apply the regular expresion to the column
the value of one will be when the regular expression search() method found a match
the fill value (default to 0) will be when the regular expression serach() method did not found a match
col : column where to apply regular expresion
reg : regular expresion compiled
"""
serie = col.apply(lambda x : x if x is np.NaN else reg.search(x))
serie = serie.apply(lambda x : 1 if x is not None else fill)
return serie
_pattern = 'cochera|garage|auto'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_ones(data['description'], _express)
data['cochera'] = work
_pattern = 'piscina|pileta'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_ones(data['description'], _express)
data['pileta'] = work
_pattern = 'parrilla'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_ones(data['description'], _express)
data['parrilla'] = work
_pattern = 'balcon'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_ones(data['description'], _express)
data['balcon'] = work
# Create a numeric category (base 2) from the individual flags (it is position dependent)
data['amenities'] = data['cochera']*8 + data['pileta']*4 + data['parrilla']*2 + data['balcon']
data[['cochera', 'pileta', 'parrilla', 'balcon', 'amenities']]
data['amenities'].describe()
data['amenities'].value_counts()
###Output
_____no_output_____ |
notebooks/Experiment.ipynb | ###Markdown
Challenge 1: Missing values. Strategies: - fill missing values with the mean of the training data - drop THC, CH4 and NMHC and fill the other columns with the last valid value - drop THC, CH4 and NMHC and fill the other columns with the mean
###Code
# Preprocessing for the experiments
def preprocessing(df, grouped=False, category_cols= []):
# time_step for lstm
time_step = 1
if not grouped:
df = df.groupby(["station", pd.Grouper(freq="D")]).mean()
# shift labels for predictions
for station, d in df.groupby(["station"]):
df.loc[station, "target"] = d["PM2.5"].shift(periods=-1).values
# for every station drop the first values
df.dropna(how='any',axis=0,inplace=True)
# encode rainfall if it is out of Q3 + 1.5 x IQR
Q1 = df.loc[df.index.get_level_values('time').year==2018, "RAINFALL"].quantile(0.25)
Q3 = df.loc[df.index.get_level_values('time').year==2018, "RAINFALL"].quantile(0.75)
IQR = Q3 - Q1
df["RAINFALL"] = np.where(df["RAINFALL"] < (Q3 + 1.5 * IQR), 1, 0)
# Normalization of features
scaler = MinMaxScaler(feature_range=(-1,1))
scaler.fit(
df.loc[df.index.get_level_values('time').year==2018, :]
.drop(["longitude", "latitude", "RAINFALL",'PM2.5',"target"]+category_cols, axis=1)
.values
)
df[
df.drop(["longitude", "latitude", "RAINFALL",'PM2.5',"target"]+category_cols, axis=1).columns
] = scaler.transform(df.drop(["longitude", "latitude", "RAINFALL",'PM2.5',"target"]+category_cols, axis=1).values)
# Normalization of labels
label_scaler = MinMaxScaler(feature_range=(-1,1))
label_scaler.fit(df.loc[df.index.get_level_values('time').year==2018, "target"].values.reshape(-1, 1))
df['target'] = label_scaler.transform(df['target'].values.reshape(-1, 1))
train_X, train_Y = [], []
validation_X, validation_Y = [], []
test_X, test_Y = [], []
for station, d in df.groupby('station'):
# find the index of first day and last day of each years
first_train_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2018].index[0])
last_train_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2018].index[-1])
first_val_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2019].index[0])
last_val_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2019].index[-1])
first_test_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2020].index[0])
last_test_date = d.index.get_loc(d.loc[d.index.get_level_values('time').year==2020].index[-1])
# append previous time step values to fit lstm input format
for i in range(first_train_date + time_step, last_train_date+1):
indices = range(i - time_step, i, 1)
train_X.append(d.reset_index(drop=True).loc[indices].drop(['PM2.5','target'],axis=1).values)
train_Y.append(d.reset_index(drop=True).loc[i-1,'target'])
for i in range(first_val_date + time_step, last_val_date):
indices = range(i - time_step, i, 1)
validation_X.append(d.reset_index(drop=True).loc[indices].drop(['PM2.5','target'],axis=1).values)
validation_Y.append(d.reset_index(drop=True).loc[i-1,'target'])
for i in range(first_test_date + time_step, last_test_date):
indices = range(i - time_step, i, 1)
test_X.append(d.reset_index(drop=True).loc[indices].drop(['PM2.5','target'],axis=1))
test_Y.append(d.reset_index(drop=True).loc[i-1,'target'])
return np.array(train_X), np.array(train_Y), np.array(validation_X), np.array(validation_Y), np.array(test_X), np.array(test_Y), label_scaler # return label scaler for recover result
def train_lstm(train_X, train_Y, validation_X, validation_Y):
# building models
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='Adam')
# if out of memory, lower the batch_size
history = model.fit(train_X, train_Y, epochs=200, batch_size=256, validation_data=(validation_X, validation_Y), verbose=2, shuffle=True)
return model, history
# helper function for plot experiment result
def plot_result(X, True_Y, model, title):
pred = model.predict(X)
pred = label_scaler.inverse_transform(pred.reshape(-1,1))
True_Y = label_scaler.inverse_transform(True_Y.reshape(-1,1))
plt.figure(figsize=(15,12))
plt.plot(pred[:200], label='prediction')
plt.plot(True_Y[:200], label='True label')
plt.title(title)
plt.legend()
plt.show()
return mean_absolute_error(True_Y, pred)
# helper function for plot training loss
def plot_loss(history, title):
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.title(title)
plt.legend()
plt.show()
# Method 1: fill missing values by mean of training data
df1 = df.copy()
for col in numerical_columns:
train_mean = df1.loc["2018", col].mean()
df1.fillna(train_mean,inplace=True)
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df1)
# training
model, history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'fill_by_mean_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'fill_by_mean_validation')
test_mae = plot_result(test_X, test_Y, model, 'fill_by_mean_testset')
result = pd.DataFrame({},columns=['Validation_MAE','test_MAE'])
result.loc['fill_by_mean'] = [valid_mae,test_mae]
result
# Method 2: fill missing values by last valid values
df2 = df.copy()
df2.drop(['THC','CH4','NMHC'],axis=1,inplace=True)
for station, d in df2.groupby('station'):
d.fillna(method='ffill',inplace=True)
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df2)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'drop_and_ffill_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'drop_and_ffill_validation')
test_mae = plot_result(test_X, test_Y, model, 'drop_and_ffill_test')
result.loc['drop_and_ffill'] = [valid_mae,test_mae]
result
# Method 3: drop THC, CH4, NMHC and fill by mean of training data
df3 = df.copy()
df3.drop(['THC','CH4','NMHC'],axis=1,inplace=True)
for col in numerical_columns:
if col in df3.columns:
train_mean = df3.loc["2018", col].mean()
df3.fillna(train_mean,inplace=True)
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df3)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'drop_and_fill_by_mean_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'drop_and_fill_by_mean_validation')
test_mae = plot_result(test_X, test_Y, model, 'drop_and_fill_by_mean_test')
result.loc['drop_and_fill_by_mean'] = [valid_mae,test_mae]
result
df4 = df.copy()
df4 = df4.groupby(["station", pd.Grouper(freq="D")]).mean()
for station, d in df4.groupby(["station"]):
# df4.loc[station, "previous"] = d.loc[station].groupby([d.loc[station].index.month,d.loc[station].index.day])['PM2.5'].shift().values
df4.loc[station, 'previous'] = d.loc[station,"PM2.5"].shift().values
df4_2019 = df4.loc[df4.index.get_level_values('time').year==2019].dropna(how='any',subset=['PM2.5','previous'])
valid_mae = mean_absolute_error(df4_2019['PM2.5'],df4_2019['previous'])
df4_2020 = df4.loc[df4.index.get_level_values('time').year==2020].dropna(how='any',subset=['PM2.5','previous'])
test_mae = mean_absolute_error(df4_2020['PM2.5'],df4_2020['previous'])
result.loc['previous_day'] = [valid_mae,test_mae]
result.style.highlight_min(color="green", axis=0)
###Output
_____no_output_____
###Markdown
Challenge 2: Temporal data representation. Strategies: - add new columns for year, month and day - add new features holding the previous 7 days of the target - add new features that represent the statistics of the last week
###Code
def fill_na(df):
for col in numerical_columns:
train_mean = df.loc["2018", col].mean()
df.fillna(train_mean,inplace=True)
return df
# Method 1: Add new columns of year, month, day.
df1 = df.copy()
df1 = fill_na(df1)
df1['year'] = df1.index.year
df1['month'] = df1.index.month
df1['day'] = df1.index.day
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df1)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'add_time_columns_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'add_time_columns_validation')
test_mae = plot_result(test_X, test_Y, model, 'add_time_columns_test')
temporal_result = pd.DataFrame({},columns=['Validation_MAE','test_MAE'])
temporal_result.loc['add_time_columns'] = [valid_mae,test_mae]
temporal_result
# Method 2: add new features of previous 7 days target.
df2 = df.copy()
df2 = fill_na(df2)
df2 = df2.groupby(["station", pd.Grouper(freq="D")]).mean()
for t in range(7):
df2[f't-{t+1}'] = df2['PM2.5'].shift(periods=t+1)
df2.fillna(0,inplace=True)
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df2,grouped=True)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'add_prev_t_columns_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'add_prev_t_columns_validation')
test_mae = plot_result(test_X, test_Y, model, 'add_prev_t_columns_test')
temporal_result.loc['add_prev_t_columns'] = [valid_mae,test_mae]
temporal_result
# Method 3: add new features that represent the statistics of last week
df3 = df.copy()
df3 = fill_na(df3)
df3 = df3.groupby(["station", pd.Grouper(freq="D")]).mean()
df3['last_week_mean'] = df3['PM2.5'].rolling(7).mean()
df3['last_week_min'] = df3['PM2.5'].rolling(7).min()
df3['last_week_max'] = df3['PM2.5'].rolling(7).max()
df3['diff'] = df3['PM2.5'].diff(periods=1)
df3['last_week_diff_mean'] = df3['diff'].rolling(7).mean()
df3['last_week_diff_min'] = df3['diff'].rolling(7).min()
df3['last_week_diff_max'] = df3['diff'].rolling(7).max()
df3.fillna(0)
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df3,grouped=True)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'add_last_week_statistics_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'add_last_week_statistics_validation')
test_mae = plot_result(test_X, test_Y, model, 'add_last_week_statistics_test')
temporal_result.loc['add_last_week_statistics'] = [valid_mae,test_mae]
temporal_result
temporal_result.loc['drop_and_fill_by_mean'] = result.loc['drop_and_fill_by_mean']
temporal_result.loc['previous_day'] = result.loc['previous_day']
temporal_result.style.highlight_min(color="green", axis=0)
###Output
_____no_output_____
###Markdown
Challenge 3: Spatial data representation. Strategies: - use k-means to separate the stations into 5 groups - apply a one-hot representation to counties (22 counties) - factorize county into a numeric representation
###Code
# Method 1: Use kmeans to separate into 5 groups
df1 = df.copy()
df1 = fill_na(df1)
kmeans = KMeans(n_clusters=5, random_state=0).fit(df1[['longitude','latitude']].values)
df1['geo_group'] = kmeans.labels_
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df1,category_cols=['geo_group'])
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'kmeans_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'kmeans_validation')
test_mae = plot_result(test_X, test_Y, model, 'kmeans_test')
spatial_result = pd.DataFrame({},columns=['Validation_MAE','test_MAE'])
spatial_result.loc['kmeans'] = [valid_mae,test_mae]
spatial_result
# Method 2: Apply one hot representation to counties(22 counties)
df2 = df.copy()
df2 = fill_na(df2)
new_geo = geo.copy()
df2['county'] = pd.merge(df2,new_geo, left_on= ['station'],
right_on= ['siteengname'],
how = 'left')['county'].values
df2 = pd.concat([df2.drop('county',axis=1), pd.get_dummies(df2['county'])], axis=1)
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df2)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'one_hot_county_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'one_hot_county_validation')
test_mae = plot_result(test_X, test_Y, model, 'one_hot_county_test')
spatial_result.loc['one_hot_county'] = [valid_mae,test_mae]
spatial_result
# Method 3: Factorize county to numeric representation
df3 = df.copy()
df3 = fill_na(df3)
new_geo = geo.copy()
df3['county'] = pd.merge(df3,geo, left_on= ['station'],
right_on= ['siteengname'],
how = 'left')['county'].values
df3['county'] = pd.factorize(df3['county'])[0]
(
train_X,
train_Y,
validation_X,
validation_Y,
test_X,
test_Y,
label_scaler
) = preprocessing(df3)
model,history = train_lstm(train_X,train_Y,validation_X,validation_Y)
plot_loss(history, 'categorize_county_loss')
valid_mae = plot_result(validation_X, validation_Y, model, 'categorize_county_validation')
test_mae = plot_result(test_X, test_Y, model, 'categorize_county_test')
spatial_result.loc['categorize_county'] = [valid_mae,test_mae]
spatial_result.loc['drop_and_fill_by_mean'] = result.loc['drop_and_fill_by_mean']
spatial_result.loc['previous_day'] = result.loc['previous_day']
spatial_result.style.highlight_min(color="green", axis=0)
###Output
_____no_output_____
###Markdown
Load the data
###Code
data = pd.read_csv("../data/skcm_vaf.csv").drop(['Unnamed: 0', 'Tumor_Sample_Barcode'], axis=1)
data.fillna(0, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Filter out passenger genes
###Code
driver = 'BRAF NRAS TP53 CDKN2A PTEN IDH1 MAP2K1 NF1 ARID2 RAC1 CTNNB1 CDK4 PPP6C KIT DDX3X RB1 GNA11 KRAS HRAS'.split()
data = data[driver]
###Output
_____no_output_____
###Markdown
Parameters
###Code
n_bootstrap = 100
lambda1 = 0.002
alpha = 0.05
pos_threshold = 0.09
neg_threshold = -0.01
###Output
_____no_output_____
###Markdown
Bootstrap
###Code
Ws = []
for i in tqdm(range(n_bootstrap)):
np.random.seed(42 + i)
bootstrap_indices = np.random.choice(data.index, size=data.shape[0], replace=True)
bootstrap_data = data.loc[bootstrap_indices]
W = notears_linear(bootstrap_data.values, lambda1=lambda1, loss_type='l2', w_threshold=0.0, max_iter=300)
Ws.append(W.T)
Ws = np.array(Ws)
###Output
100%|██████████| 100/100 [01:00<00:00, 1.41it/s]
###Markdown
Remove non-significant edges
###Code
t_pos, p_pos = stats.ttest_1samp(Ws, pos_threshold, axis=0)
t_pos = pd.DataFrame(t_pos, columns=data.columns, index=data.columns)
p_pos = pd.DataFrame(p_pos, columns=data.columns, index=data.columns)
t_neg, p_neg = stats.ttest_1samp(Ws, neg_threshold, axis=0)
t_neg = pd.DataFrame(t_neg, columns=data.columns, index=data.columns)
p_neg = pd.DataFrame(p_neg, columns=data.columns, index=data.columns)
W = Ws.mean(axis=0)
W_pos = ((t_pos > 0) & (p_pos < alpha)) * W
W_neg = ((t_neg < 0) & (p_neg < alpha)) * W
W = W_pos + W_neg
G1 = nx.from_pandas_adjacency(W[W > 0].fillna(0), create_using=nx.DiGraph)
G2 = nx.from_pandas_adjacency(W[W < 0].fillna(0), create_using=nx.DiGraph)
plot_graph(G1, G2)
plt.figure(figsize=(15, 5));
plt.subplot(1, 2, 1);
sns.heatmap(W_pos != 0);
plt.title('Positive Edges');
plt.subplot(1, 2, 2);
sns.heatmap(W_neg != 0);
plt.title('Negative Edges');
plt.figure(figsize=(10, 6));
sns.heatmap(W, cmap='RdBu_r', vmin=-np.max([W.max(), W.min()]), vmax=np.max([W.max(), W.min()]), annot=True, fmt='.2f', mask=np.isclose(W, 0));
plt.title('Adjacency Matrix');
###Output
_____no_output_____
###Markdown
Test pipeline. 1. Create dataset: a sequence of preprocessed examples ready to feed to the neural net. 2. Create dataloader: define how the dataset is loaded into the neural net (batch size, order, computation optimization ...). 3. Create model: a stack of matrix operations that transforms an input tensor into an output tensor. 4. Training loop (sketched below): + Forward + Calculate loss + Backward + Monitoring: + Evaluate metrics + Logger, back and forth + Visualize Import necessary packages
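The training loop itself (step 4) lives in `src.models.util.pipeline`, which is imported in the next cell rather than shown here. As a rough sketch only, assuming a standard supervised setup and not the exact `pipeline.train_val()` implementation, one training epoch looks roughly like this:

```python
import torch

def train_one_epoch(model, loader, loss_fn, optimizer, device):
    """One pass over the data: forward -> loss -> backward -> update, with a running loss for monitoring."""
    model.train()
    running_loss = 0.0
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)   # forward pass + loss
        loss.backward()                 # backward pass
        optimizer.step()                # parameter update
        running_loss += loss.item() * xb.size(0)
    return running_loss / len(loader.dataset)
```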
###Code
import os
import glob
import sys
import random
import matplotlib.pylab as plt
from PIL import Image, ImageDraw
import torch
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF
import numpy as np
from sklearn.model_selection import ShuffleSplit
torch.manual_seed(0)
np.random.seed(0)
random.seed(0)
%matplotlib inline
sys.path.insert(0, '..')
from src.models.util import pipeline, Cornell_Grasp_dataset
###Output
_____no_output_____
###Markdown
Create a transformer
###Code
def resize_img_label(image,label,target_size=(256,256)):
w_orig,h_orig = image.size
w_target,h_target = target_size
label = label.view(-1,2)
# resize image and label
image_new = TF.resize(image,target_size)
for i in range(len(label)):
x, y = label[i]
label[i][0] = x/w_orig*w_target
label[i][1] = y/h_orig*h_target
label = label.view(-1,8)
return image_new,label
def transformer(image, label, params):
image,label=resize_img_label(image,label,params["target_size"])
if params["sample_output"]:
# randoom choose a grasp to be the ground truth
index = random.randint(0, len(label) -1)
label = label[index]
image=TF.to_tensor(image)
return image, label
###Output
_____no_output_____
###Markdown
Create Data loader
###Code
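# collate_fn below stacks the batch images into one tensor and concatenates all label rows;
# each label row gets its sample's index within the batch prepended as column 0, so images
# with different numbers of grasp rectangles can still be collated into a single target tensor.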
def collate_fn(batch):
imgs, labels = list(zip(*batch))
targets = []
for i in range(len(labels)):
label = labels[i]
target = torch.zeros(label.shape[0], label.shape[1] + 1)
target[:,0] = i
target[:, 1:] = label
targets.append(target)
targets = torch.cat(targets, 0)
imgs = torch.stack([img for img in imgs])
return imgs, targets,
trans_params_train={
"target_size" : (256, 256),
"sample_output" : True
}
trans_params_val={
"target_size" : (256, 256),
"sample_output" : False
}
path2data = "../data/processed/grasp.csv"
# create data set
train_ds = Cornell_Grasp_dataset(path2data,transformer,trans_params_train)
val_ds = Cornell_Grasp_dataset(path2data,transformer,trans_params_val)
sss = ShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
indices=range(len(train_ds))
for train_index, val_index in sss.split(indices):
print(len(train_index))
print("-"*10)
print(len(val_index))
from torch.utils.data import Subset
train_ds = Subset(train_ds,train_index)
print(len(train_ds))
val_ds = Subset(val_ds,val_index)
print(len(val_ds))
import matplotlib.pyplot as plt
def show(img,label=None):
npimg = img.numpy().transpose((1,2,0))
plt.imshow(npimg)
if label is not None:
label = label.view(-1,2)
for point in label:
x,y= point
plt.plot(x,y,'b+',markersize=10)
plt.figure(figsize=(10,10))
for img,label in train_ds:
show(img,label)
break
plt.figure(figsize=(10,10))
for img,label in val_ds:
show(img,label)
break
from torch.utils.data import DataLoader
train_dl = DataLoader(train_ds, batch_size=16, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32, shuffle=False, collate_fn=collate_fn)
for img_b, label_b in train_dl:
print(img_b.shape,img_b.dtype)
print(label_b.shape)
break
for img, label in val_dl:
print(label.shape)
break
###Output
torch.Size([160, 9])
###Markdown
Create Model
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self, params):
super(Net, self).__init__()
def forward(self, x):
return x
def __init__(self, params):
super(Net, self).__init__()
C_in,H_in,W_in=params["input_shape"]
init_f=params["initial_filters"]
num_outputs=params["num_outputs"]
self.conv1 = nn.Conv2d(C_in, init_f, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(init_f+C_in, 2*init_f, kernel_size=3, stride=1, padding=1)
self.conv3 = nn.Conv2d(3*init_f+C_in, 4*init_f, kernel_size=3, padding=1)
self.conv4 = nn.Conv2d(7*init_f+C_in, 8*init_f, kernel_size=3, padding=1)
self.conv5 = nn.Conv2d(15*init_f+C_in, 16*init_f, kernel_size=3, padding=1)
self.fc1 = nn.Linear(16*init_f, num_outputs)
def forward(self, x):
identity=F.avg_pool2d(x,4,4)
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = torch.cat((x, identity), dim=1)
identity=F.avg_pool2d(x,2,2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = torch.cat((x, identity), dim=1)
identity=F.avg_pool2d(x,2,2)
x = F.relu(self.conv3(x))
x = F.max_pool2d(x, 2, 2)
x = torch.cat((x, identity), dim=1)
identity=F.avg_pool2d(x,2,2)
x = F.relu(self.conv4(x))
x = F.max_pool2d(x, 2, 2)
x = torch.cat((x, identity), dim=1)
x = F.relu(self.conv5(x))
x=F.adaptive_avg_pool2d(x,1)
x = x.reshape(x.size(0), -1)
x = self.fc1(x)
return x
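# Attach the __init__ and forward defined above to the Net class declared earlier,
# replacing its placeholder methods (an incremental, cell-by-cell notebook pattern).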
Net.__init__= __init__
Net.forward = forward
params_model={
"input_shape": (3,256,256),
"initial_filters": 16,
"num_outputs": 5,
}
model = Net(params_model)
device = torch.device("cuda")
model=model.to(device)
###Output
_____no_output_____
###Markdown
Create optimizer
###Code
from torch import optim
from torch.optim.lr_scheduler import ReduceLROnPlateau
opt = optim.Adam(model.parameters(), lr=1e-3)
lr_scheduler = ReduceLROnPlateau(opt, mode='min',factor=0.5, patience=20,verbose=1)
###Output
_____no_output_____
###Markdown
Training
###Code
path2models= "../models/"
mse_loss = nn.MSELoss(reduction="sum")
params_loss={
"mse_loss": mse_loss,
"gama": 5.0,
}
params_train={
"num_epochs": 10,
"optimizer": opt,
"params_loss": params_loss,
"train_dl": train_dl,
"val_dl": val_dl,
"sanity_check": True,
"lr_scheduler": lr_scheduler,
"path2weights": path2models+"weights.pt",
}
pline = pipeline(model, params_train, device)
model,loss_hist, metric_history = pline.train_val()
# Train-Validation Progress
num_epochs=params_train["num_epochs"]
# plot loss progress
plt.title("Train-Val Loss")
plt.plot(range(1,num_epochs+1),loss_hist["train"],label="train")
plt.plot(range(1,num_epochs+1),loss_hist["val"],label="val")
plt.ylabel("Loss")
plt.xlabel("Training Epochs")
plt.legend()
plt.show()
# plot accuracy progress
plt.title("Train-Val Accuracy")
plt.plot(range(1,num_epochs+1),metric_history["train"],label="train")
plt.plot(range(1,num_epochs+1),metric_history["val"],label="val")
plt.ylabel("Accuracy")
plt.xlabel("Training Epochs")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Prepare data Read data
###Code
with open("small", "r") as f:
text = f.read()
text = re.sub(r"\([^\)]+\)", "", text).strip()  # drop parenthesized spans
text = '\n'.join([ t for t in text.split('\n') if re.search(r'[一-龯ぁ-んァ-ン]+', t)==None])  # drop lines containing CJK ideographs or Japanese kana
print("Text Sample >>\n{}\n".format(text[:200]))
print("Length >>\n{}".format(len(text)))
###Output
Text Sample >>
T-50 골든이글
KAI T-50 골든이글()은 대한민국이 제작한 초음속 고등 훈련기이다. 2005년 10월부터 제작사인 한국항공우주산업에서 양산에 들어가, 2005년 12월에 1호기가 납품되었다. 2008년 3월 25일 초도분량 25대 도입이 모두 완료되어 기존의 T-38 탤론의 역할을 대체하였다. 현재 납품된 기체는 대한민국 공군 1 전투비행단소속 18
Length >>
7641228
###Markdown
Preprocess text data. Load data and tokenize. Convert to id
###Code
def encode_data(text, tokenize, vocab_size=None):
tokens = []
for line in text.split("\n"):
if len(line.strip())==0: continue;
tokens.extend(tokenize(line))
print("Tokenization done.")
c = Counter(tokens)
if vocab_size:
vocabs = ['UNK'] + [ word for word, cnt in c.most_common(vocab_size - 1) ]
else:
vocabs = [ word for word, cnt in c.most_common() ]
vocab_size = len(vocabs)
print(f"Total number of tokens: {len(tokens)}. Vocab size: {vocab_size}")
word2id = { word: idx for idx, word in enumerate(vocabs)}
text_encoded = np.array([word2id.get(t,0) for t in tokens])
return text_encoded, vocab_size
###Output
_____no_output_____
###Markdown
Make tensorflow dataset. Make language model data (input, target) and convert to tensorflow batch data
###Code
# embedding dimension
embedding_dim = 64
# number of RNN units
rnn_units = 256
batch_size = 64
buffer_size = 1000
seq_length = 100
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
def batch_dataset(text_encoded):
token_dataset = tf.data.Dataset.from_tensor_slices(text_encoded)
sequences = token_dataset.batch(seq_length+1, drop_remainder=True)
dataset = sequences.map(split_input_target)
dataset = dataset.shuffle(buffer_size).batch(batch_size, drop_remainder=True)
dataset = dataset.repeat()
return dataset
###Output
_____no_output_____
###Markdown
Language model neural network Define model
###Code
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim,
batch_input_shape=[batch_size, None]),
tf.keras.layers.LSTM(rnn_units,
return_sequences=True,
stateful=False,
recurrent_initializer='glorot_uniform'),
tf.keras.layers.Dense(vocab_size)
])
return model
###Output
_____no_output_____
###Markdown
TrainLog perplexity
###Code
from collections import defaultdict
class LossCallback(tf.keras.callbacks.Callback):
def __init__(self, name, log_dict):
super(LossCallback, self).__init__()
self.name = name
self.writer = None
self.log_dict = log_dict
def on_train_batch_end(self, batch, logs=None):
if self.writer is None:
self.writer = open(f"{self.name}.log", "w")
self.writer.write("{}\t{:.4f}\n".format(batch, logs["loss"]))
self.log_dict[self.name].append((batch, logs["loss"]))
def on_train_end(self, logs=None):
self.writer.close()
def run_experiment(text, tokenize, name, log_dict):
text_encoded, vocab_size = encode_data(text, tokenize, 30000)
dataset = batch_dataset(text_encoded)
model = build_model(vocab_size, embedding_dim, rnn_units, batch_size)
    model.compile(optimizer='sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))  # the final Dense layer outputs logits (no softmax)
examples_per_epoch = len(text_encoded)//seq_length
steps_per_epoch = examples_per_epoch // batch_size
logger = LossCallback(name, log_dict)
model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=10, verbose=1, callbacks=[logger])
log = defaultdict(list)
t1 = Tokenizer(decompose=True)
tokenize = lambda x:t1.tokenize(x, as_id=True)
run_experiment(text, tokenize, "bpe_with_decomposition", log)
t2 = Tokenizer(decompose=False)
tokenize = lambda x:t2.tokenize(x, as_id=True)
run_experiment(text, tokenize, "bpe_no_decomposition", log)
from konlpy.tag import Komoran
k = Komoran()
def tokenize(text):
poses = k.pos(text)
return [ a for a,b in poses ]
run_experiment(text, tokenize, "morph_analyzer_komoran", log)
import matplotlib.pyplot as plt
from scipy.interpolate import make_interp_spline, BSpline
def get_smooth(data):
step = np.arange(len(data))
loss = data[:,1]
xnew = np.linspace(step.min(), step.max(), 300)
spl = make_interp_spline(step, loss, k=3) # type: BSpline
smooth = spl(xnew)
return xnew, smooth
for name, data in log.items():
data = np.array(data)
x, y = get_smooth(data)
plt.plot(x, y, label=name)
plt.xlabel("step")
plt.ylabel("perplexity")
plt.legend()
plt.show()
###Output
_____no_output_____ |
tutorial-contents-notebooks/502_GPU.ipynb | ###Markdown
502 GPU. View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/ My Youtube Channel: https://www.youtube.com/user/MorvanZhou
###Code
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision
torch.manual_seed(1)
import matplotlib.pyplot as plt
%matplotlib inline
EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = True
train_data = torchvision.datasets.MNIST(
root='./mnist/',
train=True,
transform=torchvision.transforms.ToTensor(),
download=DOWNLOAD_MNIST,)
train_loader = Data.DataLoader(
dataset=train_data,
batch_size=BATCH_SIZE,
shuffle=True)
test_data = torchvision.datasets.MNIST(
root='./mnist/', train=False)
# !!!!!!!! Change in here !!!!!!!!! #
test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000].cuda()/255. # Tensor on GPU
test_y = test_data.test_labels[:2000].cuda()
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(2),)
self.out = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1)
output = self.out(x)
return output
cnn = CNN()
# !!!!!!!! Change in here !!!!!!!!! #
cnn.cuda() # Moves all model parameters and buffers to the GPU.
optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()
losses_his = []
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(EPOCH):
for step, (x, y) in enumerate(train_loader):
# !!!!!!!! Change in here !!!!!!!!! #
b_x = Variable(x).cuda() # Tensor on GPU
b_y = Variable(y).cuda() # Tensor on GPU
output = cnn(b_x)
loss = loss_func(output, b_y)
losses_his.append(loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step % 50 == 0:
test_output = cnn(test_x)
# !!!!!!!! Change in here !!!!!!!!! #
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
accuracy = sum(pred_y == test_y).item() / test_y.size(0)
print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)
plt.plot(losses_his, label='loss')
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 1))
plt.show()
###Output
_____no_output_____
###Markdown
Test
###Code
# !!!!!!!! Change in here !!!!!!!!! #
test_output = cnn(test_x[:10])
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
print(pred_y, 'prediction number')
print(test_y[:10], 'real number')
###Output
tensor([ 7, 2, 1, 0, 4, 1, 4, 9, 5, 9], device='cuda:0') prediction number
tensor([ 7, 2, 1, 0, 4, 1, 4, 9, 5, 9], device='cuda:0') real number
###Markdown
502 GPU. View more, visit my tutorial page: https://mofanpy.com/tutorials/ My Youtube Channel: https://www.youtube.com/user/MorvanZhou Dependencies: * torch: 0.1.11 * torchvision
###Code
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision
torch.manual_seed(1)
import matplotlib.pyplot as plt
%matplotlib inline
EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = False
train_data = torchvision.datasets.MNIST(
root='./mnist/',
train=True,
transform=torchvision.transforms.ToTensor(),
download=DOWNLOAD_MNIST,)
train_loader = Data.DataLoader(
dataset=train_data,
batch_size=BATCH_SIZE,
shuffle=True)
test_data = torchvision.datasets.MNIST(
root='./mnist/', train=False)
# !!!!!!!! Change in here !!!!!!!!! #
test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000].cuda()/255. # Tensor on GPU
test_y = test_data.test_labels[:2000].cuda()
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(2),)
self.out = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1)
output = self.out(x)
return output
cnn = CNN()
# !!!!!!!! Change in here !!!!!!!!! #
cnn.cuda() # Moves all model parameters and buffers to the GPU.
optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()
losses_his = []
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(EPOCH):
for step, (x, y) in enumerate(train_loader):
# !!!!!!!! Change in here !!!!!!!!! #
b_x = Variable(x).cuda() # Tensor on GPU
b_y = Variable(y).cuda() # Tensor on GPU
output = cnn(b_x)
loss = loss_func(output, b_y)
losses_his.append(loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step % 50 == 0:
test_output = cnn(test_x)
# !!!!!!!! Change in here !!!!!!!!! #
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
accuracy = torch.sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0)
print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy)
plt.plot(losses_his, label='loss')
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 1))
plt.show()
###Output
_____no_output_____
###Markdown
Test
###Code
# !!!!!!!! Change in here !!!!!!!!! #
test_output = cnn(test_x[:10])
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
print(pred_y, 'prediction number')
print(test_y[:10], 'real number')
###Output
7
2
1
0
4
1
4
9
5
9
[torch.cuda.LongTensor of size 10 (GPU 0)]
prediction number
7
2
1
0
4
1
4
9
5
9
[torch.cuda.LongTensor of size 10 (GPU 0)]
real number
###Markdown
502 GPU. View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/ My Youtube Channel: https://www.youtube.com/user/MorvanZhou Dependencies: * torch: 0.1.11 * torchvision
###Code
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision
torch.manual_seed(1)
import matplotlib.pyplot as plt
%matplotlib inline
EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = False
train_data = torchvision.datasets.MNIST(
root='./mnist/',
train=True,
transform=torchvision.transforms.ToTensor(),
download=DOWNLOAD_MNIST,)
train_loader = Data.DataLoader(
dataset=train_data,
batch_size=BATCH_SIZE,
shuffle=True)
test_data = torchvision.datasets.MNIST(
root='./mnist/', train=False)
# !!!!!!!! Change in here !!!!!!!!! #
test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000].cuda()/255. # Tensor on GPU
test_y = test_data.test_labels[:2000].cuda()
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(2),)
self.out = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1)
output = self.out(x)
return output
cnn = CNN()
# !!!!!!!! Change in here !!!!!!!!! #
cnn.cuda() # Moves all model parameters and buffers to the GPU.
optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()
losses_his = []
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(EPOCH):
for step, (x, y) in enumerate(train_loader):
# !!!!!!!! Change in here !!!!!!!!! #
b_x = Variable(x).cuda() # Tensor on GPU
b_y = Variable(y).cuda() # Tensor on GPU
output = cnn(b_x)
loss = loss_func(output, b_y)
losses_his.append(loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step % 50 == 0:
test_output = cnn(test_x)
# !!!!!!!! Change in here !!!!!!!!! #
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
accuracy = torch.sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0)
print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy)
plt.plot(losses_his, label='loss')
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 1))
plt.show()
###Output
_____no_output_____
###Markdown
Test
###Code
# !!!!!!!! Change in here !!!!!!!!! #
test_output = cnn(test_x[:10])
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
print(pred_y, 'prediction number')
print(test_y[:10], 'real number')
###Output
7
2
1
0
4
1
4
9
5
9
[torch.cuda.LongTensor of size 10 (GPU 0)]
prediction number
7
2
1
0
4
1
4
9
5
9
[torch.cuda.LongTensor of size 10 (GPU 0)]
real number
###Markdown
502 GPU. View more, visit my tutorial page: https://morvanzhou.github.io/tutorials/ My Youtube Channel: https://www.youtube.com/user/MorvanZhou Dependencies: * torch: 0.1.11 * torchvision
###Code
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision
torch.manual_seed(1)
import matplotlib.pyplot as plt
%matplotlib inline
EPOCH = 1
BATCH_SIZE = 50
LR = 0.001
DOWNLOAD_MNIST = False
train_data = torchvision.datasets.MNIST(
root='./mnist/',
train=True,
transform=torchvision.transforms.ToTensor(),
download=DOWNLOAD_MNIST,)
train_loader = Data.DataLoader(
dataset=train_data,
batch_size=BATCH_SIZE,
shuffle=True)
test_data = torchvision.datasets.MNIST(
root='./mnist/', train=False)
# !!!!!!!! Change in here !!!!!!!!! #
test_x = Variable(torch.unsqueeze(test_data.test_data, dim=1)).type(torch.FloatTensor)[:2000].cuda()/255. # Tensor on GPU
test_y = test_data.test_labels[:2000].cuda()
print(test_data.test_data.shape, test_x.shape)
print(test_data.test_data.type(), test_x.type())
print(test_data.test_data.max(), test_x.max())
print(test_data.test_data.is_cuda, test_x.is_cuda)
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2,),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2),)
self.conv2 = nn.Sequential(
nn.Conv2d(16, 32, 5, 1, 2),
nn.ReLU(),
nn.MaxPool2d(2),)
self.out = nn.Linear(32 * 7 * 7, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = x.view(x.size(0), -1)
output = self.out(x)
return output
cnn = CNN()
cnn
print(next(cnn.parameters()).is_cuda)
# !!!!!!!! Change in here !!!!!!!!! #
cnn.cuda() # Moves all model parameters and buffers to the GPU.
print(next(cnn.parameters()).is_cuda)
optimizer = torch.optim.Adam(cnn.parameters(), lr=LR)
loss_func = nn.CrossEntropyLoss()
losses_his = []
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(EPOCH):
for step, (x, y) in enumerate(train_loader):
# !!!!!!!! Change in here !!!!!!!!! #
b_x = Variable(x).cuda() # Tensor on GPU
b_y = Variable(y).cuda() # Tensor on GPU
output = cnn(b_x)
loss = loss_func(output, b_y)
losses_his.append(loss.item()) #data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
if step % 50 == 0:
test_output = cnn(test_x)
# !!!!!!!! Change in here !!!!!!!!! #
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
"""
sum() returns different result as torch.sum!!!
giving a example:
sum() : 176
torch.sum(): 1968
"""
# without .type(torch.FloatTensor), accuracy will always be 0
accuracy = torch.sum(pred_y==test_y).type(torch.FloatTensor) / test_y.size(0)
# accuracy = sum(pred_y == test_y) / test_y.size(0)
# print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy)
print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)
# print(sum(pred_y == test_y), test_y.size(0))
# print(sum(pred_y == test_y), sum(pred_y == test_y).type(torch.FloatTensor) / test_y.size(0))
# print(torch.sum(pred_y==test_y), torch.sum(pred_y==test_y).type(torch.FloatTensor) / test_y.size(0))
plt.plot(losses_his, label='loss')
plt.legend(loc='best')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.ylim((0, 1))
plt.show()
###Output
_____no_output_____
###Markdown
Test
###Code
# !!!!!!!! Change in here !!!!!!!!! #
test_output = cnn(test_x[:10])
pred_y = torch.max(test_output, 1)[1].cuda().data.squeeze() # move the computation in GPU
print(pred_y, 'prediction number')
# if without [:10]
# print(test_y, 'real number') #tensor([7, 2, 1, ..., 3, 9, 5], device='cuda:0') real number
print(test_y[:10], 'real number')
###Output
tensor([7, 2, 1, 0, 4, 1, 4, 9, 5, 9], device='cuda:0') prediction number
tensor([7, 2, 1, 0, 4, 1, 4, 9, 5, 9], device='cuda:0') real number
|
CIFAR10.ipynb | ###Markdown
Download the CIFAR10 dataset
###Code
import torch
import torchvision
# Batch sizes larger than the dataset, so a single batch returns the entire split at once
batch_size_train = 5000000
batch_size_test = 1000000
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10('/files/', train=True, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10('/files/', train=False, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.1307,), (0.3081,))
])),
batch_size=batch_size_test, shuffle=True)
examples = enumerate(train_loader)
batch_idx, (example_data, example_targets) = next(examples)
print(example_data.shape)
examples_t = enumerate(test_loader)
batch_idx_t, (example_data_t, example_targets_t) = next(examples_t)
print(example_data_t.shape)
sorted(list(set((example_targets_t).numpy().tolist())))
###Output
_____no_output_____
###Markdown
We convert the dataset into five two-class tasks and store them in JSON files so that we can reuse them again and again without re-downloading the dataset; later we can simply load them from the drive to train and test models whenever we need the data. However, since we also provide a link to a zip file containing these JSON files, you can skip the next few steps if you download it.
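For reference, `get_indices` below groups the ten CIFAR-10 labels into five two-class tasks. A small sketch of the resulting mapping (the `task_classes` name is ours, introduced only for illustration):

```python
# Label pairs per task, as implied by get_indices below
task_classes = {0: (0, 1), 1: (2, 3), 2: (4, 5), 3: (6, 7), 4: (8, 9)}
```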
###Code
def get_indices(example_targets):
indices_list = [[] for i in range(5)]
for i in range(example_targets.shape[0]):
if example_targets[i].item() == 0 or example_targets[i].item() == 1:
indices_list[0].append(i)
elif example_targets[i].item() == 2 or example_targets[i].item() == 3:
indices_list[1].append(i)
elif example_targets[i].item() == 4 or example_targets[i].item() == 5:
indices_list[2].append(i)
elif example_targets[i].item() == 6 or example_targets[i].item() == 7:
indices_list[3].append(i)
elif example_targets[i].item() == 8 or example_targets[i].item() == 9:
indices_list[4].append(i)
return indices_list
import json
train_indices = get_indices(example_targets)
test_indices = get_indices(example_targets_t)
traindata_list =[]
trainlabels_list = []
testdata_list = []
testlabels_list = []
for j in range(5):
traindata = example_data[train_indices[j]]
trainlabels = example_targets[train_indices[j]]
testdata = example_data_t[test_indices[j]]
testlabels = example_targets_t[test_indices[j]]
testdata_list.append(testdata.numpy().tolist())
testlabels_list.append(testlabels.numpy().tolist())
traindata_list.append(traindata.detach().numpy().tolist())
trainlabels_list.append(trainlabels.numpy().tolist())
with open('traindata.json', 'w') as jsonfile:
json.dump(traindata_list, jsonfile)
with open('trainlabels.json', 'w') as jsonfile:
json.dump(trainlabels_list, jsonfile)
with open('testdata.json', 'w') as jsonfile:
json.dump(testdata_list, jsonfile)
with open('testlabels.json', 'w') as jsonfile:
json.dump(testlabels_list, jsonfile)
indices_list_t = [[] for i in range(5)]
for i in range(example_data_t.shape[0]):
if example_targets_t[i].item() == 0 or example_targets_t[i].item() == 1:
indices_list_t[0].append(i)
testdata = example_data_t[indices_list_t[0]]
testlabels = example_targets_t[indices_list_t[0]]
examples_test = enumerate(test_loader)
batch_idx_t, (example_data_t, example_targets_t) = next(examples_test)
traindata_t = [[] for i in range(10)]
indices_list_t = [[] for i in range(10)]
for j in range(10):
indices_t = torch.where(example_targets_t == j)
indices_list_t[j].append(indices_t)
traindata_t[j].append((example_data_t[indices_t], example_targets_t[indices_t]))
examples = enumerate(train_loader)
batch_idx, (example_data, example_targets) = next(examples)
traindata = [[] for i in range(10)]
indices_list = [[] for i in range(10)]
for j in range(10):
indices = torch.where(example_targets == j)
indices_list[j].append(indices)
traindata[j].append((example_data[indices], example_targets[indices]))
import matplotlib.pyplot as plt
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
plt.title("Ground Truth: {}".format(example_targets[i]))
plt.xticks([])
plt.yticks([])
fig
###Output
_____no_output_____
###Markdown
If you downloaded the zip file, start from here
Please download the zip file, extract the four files, and upload them to your drive (your_path). Please click [here](https://drive.google.com/drive/folders/1tPBCC8DKl-uz3tixvRcQOpLk_FKDDdo9?usp=sharing) to download the data.
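If you are running this notebook in Google Colab (the `/content/drive/MyDrive/` path used later suggests so), a minimal sketch for mounting your Drive before setting `your_path` could look like this:

```python
# Mount Google Drive in Colab so the extracted JSON files are reachable
from google.colab import drive

drive.mount('/content/drive')
your_path = '/content/drive/MyDrive'  # adjust to wherever you uploaded the files
```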
###Code
import torch
import torch.nn as nn
class encoder(nn.Module):
def __init__(self):
super(encoder, self).__init__()
self.nc_mnist = 1
self.nc_cifar10 = 3
self.conv1 = nn.Conv2d(self.nc_cifar10, 3, 3, 1, 1)
self.conv2 = nn.Conv2d(3, 6, 2, 2, 0)
self.conv3 = nn.Conv2d(6, 12, 2, 2, 0)
self.conv4 = nn.Conv2d(12, 24, 2, 2, 0)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.conv4(x)
return x
class decoder(nn.Module):
def __init__(self):
super(decoder, self).__init__()
self.nc_mnist = 118
self.nc_cifar10 = 202
self.nk_mnist = 3
self.nk_cifar10 = 4
self.decon1 = nn.ConvTranspose2d(self.nc_cifar10, 24, 3, 1, 0)
self.decon2 = nn.ConvTranspose2d(24, 12, self.nk_cifar10, 2, 0)
self.decon3 = nn.ConvTranspose2d(12, 6, 2, 2, 0)
self.decon4 = nn.ConvTranspose2d(6, 3, 2, 2, 0)
def forward(self, x):
x = x.view(x.shape[0], self.nc_cifar10, 1, 1)
x = self.decon1(x)
x = self.decon2(x)
x = self.decon3(x)
x = self.decon4(x)
return x
class VAE(nn.Module):
def __init__(self, eps):
super(VAE, self).__init__()
self.en = encoder()
self.de = decoder()
self.eps = eps
self.mnist_z = 108
self.cifar10_z = 192
def forward(self, x, one_hot):
x = self.en(x)
x = x.view(x.shape[0], -1)
mu = x[:, :self.cifar10_z]
logvar = x[:, self.cifar10_z:]
std = torch.exp(0.5 * logvar)
z = mu + self.eps * std
#print(z.shape, 'aaa', one_hot.shape)
z1 = torch.cat((z, one_hot), axis = 1)
#print(z1.shape, 'bbb')
return self.de(z1), mu, logvar
class private(nn.Module):
def __init__(self, eps):
super(private, self).__init__()
self.task = torch.nn.ModuleList()
self.eps = eps
for _ in range(5):
self.task.append(VAE(self.eps))
def forward(self, x, one_hot, task_id):
return self.task[task_id].forward(x, one_hot)
class NET(nn.Module):
def __init__(self, eps):
super(NET, self).__init__()
self.eps = eps
self.shared = VAE(self.eps)
self.private = private(self.eps)
self.head = torch.nn.ModuleList()
self.mnist = 216
self.cifar10 = 384
self.in_mnist = 2
self.in_cifar10 = 6
for _ in range(5):
self.head.append(
nn.Sequential(
nn.Conv2d(self.in_cifar10, 12, 3, 1, 1),
nn.Conv2d(12, 24, 2, 2, 0),
nn.Flatten(1, -1),
nn.Linear(24*16*16, 100),
nn.Linear(100, 10)
)
)
def forward(self, x, one_hot, task_id):
s_x, s_mu, s_logvar = self.shared(x, one_hot)
p_x, p_mu, p_logvar = self.private(x, one_hot, task_id)
x = torch.cat([s_x, p_x], dim = 1)
return self.head[task_id].forward(x), (s_x, s_mu, s_logvar), (p_x, p_mu, p_logvar)
###Output
_____no_output_____
###Markdown
Number of epochs and synthetic data
If you wish to change the number of epochs or the amount of synthetic data used for generative replay, adjust the `num_epochs` variable in `train_task` (line 113) and the `num_gen_samples` variable in `train` (line 64), respectively. Change them according to your requirements.
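Concretely, the two settings are plain variables inside the training code below, reproduced here for convenience:

```python
# In CL_VAE.train(): synthetic samples generated per replayed class
num_gen_samples = 4
# In CL_VAE.train_task(): training epochs per task
num_epochs = 50
```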
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
from collections import deque
from torch.autograd import grad as torch_grad
import torchvision.utils as vutils
import os
import os.path
class CL_VAE():
def __init__(self):
super(CL_VAE, self).__init__()
self.batch_size = 64
self.mnist_z = 108
self.cifar10_z = 192
self.build_model()
self.set_cuda()
self.criterion = torch.nn.CrossEntropyLoss()
self.recon = torch.nn.MSELoss()
self.net_path = 'path/CIFAR10.pth' # give your preferred path where you want to save the model
self.accuracy_matrix = [[] for kk in range(5)]
self.acc_25 = []
self.acc_50 = []
def build_model(self):
self.eps = torch.randn(self.batch_size, self.cifar10_z)
self.eps = self.eps.cuda()
self.net = NET(self.eps)
pytorch_total_params = sum(p.numel() for p in self.net.parameters() if p.requires_grad)
print('pytorch_total_params:', pytorch_total_params)
def set_cuda(self):
self.net.cuda()
def VAE_loss(self, recon, mu, sigma):
kl_div = -0.5 * torch.sum(1 + sigma - mu.pow(2) - sigma.exp())
#print('kl_div', kl_div.item())
return recon + kl_div
def train(self, all_traindata, all_trainlabels, all_testdata, all_testlabels, total_tasks):
replay_classes = []
for i in range(total_tasks):
traindata = torch.tensor(all_traindata[i])
trainlabels = torch.tensor(all_trainlabels[i])
testdata = torch.tensor(all_testdata[i])
testlabels = torch.tensor(all_testlabels[i])
#print(trainlabels, 'avfr')
replay_classes.append(sorted(list(set(trainlabels.numpy().tolist()))))
if i + 1 == 1:
self.train_task(traindata, trainlabels, testdata, testlabels, i)
#replay_classes.append(sorted(list(set(trainlabels.detach().numpy().tolist()))))
else:
num_gen_samples = 4
#z_dim = 108
for m in range(i):
#print(replay_classes, 'replay_classes')
replay_trainlabels = []
for ii in replay_classes[m]:
for j in range(num_gen_samples):
replay_trainlabels.append(ii)
replay_trainlabels = torch.tensor(replay_trainlabels)
replay_trainlabels_onehot = self.one_hot(replay_trainlabels)
z = torch.randn(2 * num_gen_samples, self.cifar10_z)
z_one_hot = torch.cat((z, replay_trainlabels_onehot), axis = 1)
z_one_hot = z_one_hot.cuda()
replay_data = self.net.private.task[m].de(z_one_hot).detach().cpu()
traindata = torch.cat((replay_data, traindata), axis = 0)
trainlabels = torch.cat((replay_trainlabels, trainlabels))
testdata = torch.cat((testdata, torch.tensor(all_testdata[m])), axis = 0)
testlabels = torch.cat((testlabels, torch.tensor(all_testlabels[m])))
#print(sorted(list(set(trainlabels.detach().numpy().tolist()))), 'aaa', i + 1)
self.train_task(traindata, trainlabels, testdata, testlabels, i)
self.acc_mat(all_testdata, all_testlabels, total_tasks, i)
#print(sorted(list(set(trainlabels.detach().numpy()))), '/n', sorted(list(set(testlabels.detach().numpy()))))
self.forgetting_measure(self.accuracy_matrix, total_tasks)
print(self.acc_25, 'acc_25', np.mean(self.acc_25))
print(self.acc_50, 'acc_50', np.mean(self.acc_50))
def one_hot(self, labels):
matrix = torch.zeros(len(labels), 10)
rows = np.arange(len(labels))
matrix[rows, labels] = 1
return matrix
def model_save(self):
torch.save(self.net.state_dict(), os.path.join(self.net_path))
def train_task(self, traindata, trainlabels, testdata, testlabels, task_id):
net_opti = torch.optim.Adam(self.net.parameters(), lr = 1e-4)
#data, label = traindata
#batch_size = 64
num_iterations = int(traindata.shape[0]/self.batch_size)
num_epochs = 50
for e in range(num_epochs):
for i in range(num_iterations):
self.net.zero_grad()
self.net.train()
batch_data = traindata[i * self.batch_size : (i + 1)*self.batch_size]
#print(batch_data.shape, '41')
batch_label = trainlabels[i * self.batch_size : (i + 1)*self.batch_size]
batch_label_one_hot = self.one_hot(batch_label)
batch_data = batch_data.cuda()
batch_label = batch_label.cuda()
batch_label_one_hot = batch_label_one_hot.cuda()
out, shared_out, private_out = self.net(batch_data, batch_label_one_hot, task_id)
s_x, s_mu, s_logvar = shared_out
p_x, p_mu, p_logvar = private_out
#print(out.shape, '12', batch_label.shape, s_x.shape)
cross_en_loss = self.criterion(out, batch_label)
s_recon = self.recon(batch_data, s_x)
p_recon = self.recon(batch_data, p_x)
s_VAE_loss = self.VAE_loss(s_recon, s_mu, s_logvar)
p_VAE_loss = self.VAE_loss(p_recon, p_mu, p_logvar)
all_loss = cross_en_loss + s_VAE_loss + p_VAE_loss
all_loss.backward(retain_graph=True)
net_opti.step()
#print('epoch:', e + 1, 'task_loss', cross_en_loss.item(), 's_VAE:', s_VAE_loss.item(), 'p_VAE', p_VAE_loss.item())
if (e + 1) % 25 == 0:
acc1, _ = self.evall(testdata, testlabels, task_id)
print('Task:', task_id + 1, 'acc', acc1)
if task_id + 1 == 5:
self.model_save()
def evall(self, testdata, testlabels, task_id):
self.net.eval()
num_iterations = int(testdata.shape[0]/self.batch_size)
pred_labels_list = []
acc = []
for i in range(num_iterations):
batch_data = testdata[i * self.batch_size : (i + 1) * self.batch_size]
batch_labels = testlabels[i * self.batch_size : (i + 1) * self.batch_size]
batch_label_one_hot = self.one_hot(batch_labels)
batch_data = batch_data.cuda()
batch_labels = batch_labels.cuda()
batch_label_one_hot = batch_label_one_hot.cuda()
out, _, _ = self.net(batch_data, batch_label_one_hot, task_id)
pred_labels = torch.argmax(out, axis = 1)
pred_labels_list.append(pred_labels.detach().cpu().numpy().tolist())
#print(pred_labels, 'aa')
#print(pred_labels.shape, '1452', batch_labels)
acc.append((torch.sum(batch_labels == pred_labels)/batch_data.shape[0] * 100).detach().cpu().numpy().tolist())
#print('acc:', acc)
return np.mean(np.array(acc)), np.array(pred_labels_list).flatten()
def forgetting_measure(self, accuracy_matrix, num_tasks):
forgetting_measures = []
accuracy_matrix = np.array(accuracy_matrix)
#print(accuracy_matrix, 'aa')
for after_task_idx in range(1, num_tasks):
after_task_num = after_task_idx + 1
#print(accuracy_matrix, 'accuracy_matrix')
prev_acc = accuracy_matrix[:after_task_num - 1, :after_task_num - 1]
forgettings = prev_acc.max(axis=0) - accuracy_matrix[after_task_num - 1, :after_task_num - 1]
forgetting_measures.append(np.mean(forgettings).item())
#print('forgetting_measures', forgetting_measures)
#print("the forgetting measure is...", np.mean(np.array(forgetting_measures)))
def acc_mat(self, testData1, testLabels1, num_tasks, t):
for kk in range(num_tasks):
testData_tw = torch.tensor(testData1[kk])
testLabels_tw = torch.tensor(testLabels1[kk])
testLabels_tw_classes = sorted(list(set(testLabels_tw.detach().numpy().tolist())))
#pred_tw = (class_appr.test(testData_tw)).cpu() #classifier.predict(testData_tw)
_, pred_tw = self.evall(testData_tw, testLabels_tw, kk)
#pred_tw = torch.argmax(pred_tw, dim = 1)
#pred_tw = pred_tw.cpu()
testLabels_tw = testLabels_tw.detach().numpy()[:pred_tw.shape[0]]
#print(pred_tw[0], '12', testLabels_tw[0])
dict_correct_tw = {}
dict_total_tw = {}
for ii in testLabels_tw_classes:
dict_total_tw[ii] = 0
dict_correct_tw[ii] = 0
for ii in range(0, testLabels_tw.shape[0]):
#print(testLabels_tw[ii],'aaa', pred_tw[ii])
if(testLabels_tw[ii] == pred_tw[ii]):
dict_correct_tw[testLabels_tw[ii].item()] = dict_correct_tw[testLabels_tw[ii].item()] + 1
#print(testLabels_tw[ii], '1', dict_total_tw[testLabels_tw[ii]], '2', dict_total_tw[testLabels_tw[ii]])
dict_total_tw[testLabels_tw[ii].item()] = dict_total_tw[testLabels_tw[ii].item()] + 1
avgAcc_tw = 0.0
num_seen_tw = 0.0
for ii in testLabels_tw_classes:
avgAcc_tw = avgAcc_tw + (dict_correct_tw[ii]*1.0)/(dict_total_tw[ii])
num_seen_tw = num_seen_tw + 1
avgAcc_tw = avgAcc_tw/num_seen_tw
#testData_tw[jj].append(avgAcc_tw)
self.accuracy_matrix[t].append(avgAcc_tw)
###Output
_____no_output_____
###Markdown
Check your_path
Run `%ls` to see which directory you are currently in. You can change the directory with `%cd dir_name`.
###Code
%ls
%cd dir_name
your_path = '/content/drive/MyDrive/' #change this path
import json
traindata_path = your_path + '/traindata.json'
trainlabels_path = your_path + '/trainlabels.json'
testdata_path = your_path + '/testdata.json'
testlabels_path = your_path + '/testlabels.json'
with open(traindata_path) as f:
traindata = json.load(f)
with open(trainlabels_path) as f:
trainlabels = json.load(f)
with open(testdata_path) as f:
testdata = json.load(f)
with open(testlabels_path) as f:
testlabels = json.load(f)
import time
model = CL_VAE()
st = time.time()
model.train(traindata, trainlabels, testdata, testlabels, 5)
fn = time.time()
#print("time:", fn - st)
###Output
pytorch_total_params: 3388428
###Markdown
CIFAR10 with Daisy and ResNet features (pytorch / skimage / sklearn)
###Code
# Remember to select a GPU runtime when setting this to True
USE_CUDA = False
###Output
_____no_output_____
###Markdown
Download and uncompress the dataset
###Code
import numpy as np
import torch
import torchvision
import matplotlib.pyplot as plt
%matplotlib inline
from skimage.feature import daisy
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import pickle
from tqdm import tqdm
import wget
import tarfile
import os
data_url = r'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
download_path = './data/cifar-10-python.tar.gz'
uncompressed_path = './data/cifar-10-python'
batches_subdir = 'cifar-10-batches-py'
batches_path = os.path.join(uncompressed_path, batches_subdir)
os.makedirs(os.path.dirname(download_path), exist_ok=True)
print('Downloading...')
wget.download(data_url, download_path)
print('Uncompressing...')
with tarfile.open(download_path, "r:gz") as tar:
tar.extractall(uncompressed_path)
print('Uncompressed batches: {}'.format(', '.join(os.listdir(batches_path))))
print('Data ready!')
###Output
Downloading...
Uncompressing...
Uncompressed batches: data_batch_1, readme.html, batches.meta, data_batch_2, data_batch_5, test_batch, data_batch_4, data_batch_3
Data ready!
###Markdown
Load the data into memory and format it as a stack of RGB images
###Code
def load_batch(batches_path, batch_name):
with open(os.path.join(batches_path, batch_name), 'rb') as f:
data_batch = pickle.load(f, encoding='bytes')
return data_batch
def data_to_images(data):
data_reshp = np.reshape(data, (-1, 3, 32, 32))
imgs = np.moveaxis(data_reshp, (1, 2, 3), (3, 1, 2))
return imgs
def batches_to_images_with_labels(batches):
data_table = np.concatenate([batch[b'data'] for batch in batches], axis=0)
labels = np.concatenate([np.asarray(batch[b'labels']) for batch in batches])
images = data_to_images(data_table)
return images, labels
train_batch_names = ['data_batch_{}'.format(i) for i in range(1, 5)]
val_batch_names = ['data_batch_5']
test_batch_names = ['test_batch']
train_batches = [load_batch(batches_path, batch_name) for batch_name in train_batch_names]
train_imgs, train_labels = batches_to_images_with_labels(train_batches)
print('Training set: Images shape = {}'.format(train_imgs.shape))
print('Training set: Labels shape = {}'.format(train_labels.shape))
print()
val_batches = [load_batch(batches_path, batch_name) for batch_name in val_batch_names]
val_imgs, val_labels = batches_to_images_with_labels(val_batches)
print('Validation set: Images shape = {}'.format(val_imgs.shape))
print('Validation set: Labels shape = {}'.format(val_labels.shape))
print()
test_batches = [load_batch(batches_path, batch_name) for batch_name in test_batch_names]
test_imgs, test_labels = batches_to_images_with_labels(test_batches)
print('Test set: Images shape = {}'.format(test_imgs.shape))
print('Test set: Labels shape = {}'.format(test_labels.shape))
###Output
Training set: Images shape = (40000, 32, 32, 3)
Training set: Labels shape = (40000,)
Validation set: Images shape = (10000, 32, 32, 3)
Validation set: Labels shape = (10000,)
Test set: Images shape = (10000, 32, 32, 3)
Test set: Labels shape = (10000,)
###Markdown
Display some random images from the training set
###Code
grid_num_rows = 10
grid_num_cols = 10
num_random_samples = grid_num_rows * grid_num_cols
def display_random_subset(imgs, labels, grid_num_rows=6, grid_num_cols=6, figsize=(12, 12)):
fig, ax_objs = plt.subplots(nrows=grid_num_rows, ncols=grid_num_cols, figsize=figsize)
for ax in np.ravel(ax_objs):
rnd_id = np.random.randint(labels.shape[0])
img = train_imgs[rnd_id]
label = labels[rnd_id]
ax.imshow(img)
ax.axis('off')
ax.set_title('{}'.format(label))
display_random_subset(train_imgs, train_labels)
###Output
_____no_output_____
###Markdown
Shallow baseline
###Code
def extract_daisy_features(images):
feature_vecs = []
for img in tqdm(images):
img_grayscale = np.mean(img, axis=2)
fvec = daisy(img_grayscale, step=4, radius=9).reshape(1, -1)
feature_vecs.append(fvec)
return np.concatenate(feature_vecs)
train_daisy_feature_vecs = extract_daisy_features(train_imgs)
val_daisy_feature_vecs = extract_daisy_features(val_imgs)
test_daisy_feature_vecs = extract_daisy_features(test_imgs)
print('Training set: Daisy features shape = {}'.format(train_daisy_feature_vecs.shape))
print('Training set: labels shape = {}'.format(train_labels.shape))
print()
print('Validation set: Daisy features shape = {}'.format(val_daisy_feature_vecs.shape))
print('Validation set: labels shape = {}'.format(val_labels.shape))
print()
print('Test set: Daisy features shape = {}'.format(test_daisy_feature_vecs.shape))
print('Test set: labels shape = {}'.format(test_labels.shape))
daisy_svm_param_grid = {'C': np.logspace(-4, 4, num=9, endpoint=True, base=10)}
daisy_clf = GridSearchCV(estimator=LinearSVC(), param_grid=daisy_svm_param_grid, cv=3, n_jobs=4, verbose=10)
daisy_clf.fit(X=train_daisy_feature_vecs, y=train_labels)
val_daisy_predictions = daisy_clf.predict(X=val_daisy_feature_vecs)
val_daisy_accuracy = np.mean(val_daisy_predictions == val_labels)
print('Validation Daisy accuracy = {} (use this to tune params)'.format(val_daisy_accuracy))
test_daisy_predictions = daisy_clf.predict(X=test_daisy_feature_vecs)
test_daisy_accuracy = np.mean(test_daisy_predictions == test_labels)
print('Test Daisy accuracy = {} (DO NOT use this to tune params!)'.format(test_daisy_accuracy))
###Output
Test Daisy accuracy = 0.5953 (DO NOT use this to tune params!)
###Markdown
ResNet feature extraction
###Code
r50 = torchvision.models.resnet50(pretrained=True)
# Throw away the classification layer and pooling layer before it (not needed
# because images are small anyway)
r50_fx_layers = list(r50.children())[:-2]
r50_fx = torch.nn.Sequential(*r50_fx_layers)
def extract_deep_features(images, model, use_cuda, batch_size=128):
# Normalize for torchvision
torchvision_mean = np.array([0.485, 0.456, 0.406])
torchvision_std = np.array([0.229, 0.224, 0.225])  # ImageNet std; the original cell reused the mean values here by mistake
images_norm = (images / 255. - torchvision_mean) / torchvision_std
images_norm_tensor = torch.from_numpy(images_norm.astype(np.float32)).permute((0, 3, 1, 2))  # NHWC -> NCHW ((0, 3, 2, 1) would also swap H and W)
dset = torch.utils.data.TensorDataset(images_norm_tensor)
dataloader = torch.utils.data.DataLoader(dset, batch_size, shuffle=False, drop_last=False)
model.eval()
if use_cuda:
model.cuda()
feature_vec_batches = []
with tqdm(total=len(dataloader)) as pbar:
for data_batch in dataloader:
img_batch = data_batch[0] # We get a tuple so have to unpack it
if use_cuda:
img_batch = img_batch.cuda()
fvec_batch = model(img_batch)
if use_cuda:
fvec_batch = fvec_batch.cpu()
fvec_batch_cl = fvec_batch.detach().clone()
fvec_batch_np = fvec_batch_cl.view(img_batch.size(0), -1).numpy()
feature_vec_batches.append(fvec_batch_np)
pbar.update(1)
if use_cuda:
model.cpu() # cleanup
return np.concatenate(feature_vec_batches, axis=0)
train_resnet_feature_vecs = extract_deep_features(train_imgs, r50_fx, use_cuda=USE_CUDA)
val_resnet_feature_vecs = extract_deep_features(val_imgs, r50_fx, use_cuda=USE_CUDA)
test_resnet_feature_vecs = extract_deep_features(test_imgs, r50_fx, use_cuda=USE_CUDA)
print('Training set: ResNet features shape = {}'.format(train_resnet_feature_vecs.shape))
print('Training set: labels shape = {}'.format(train_labels.shape))
print()
print('Validation set: ResNet features shape = {}'.format(val_resnet_feature_vecs.shape))
print('Validation set: labels shape = {}'.format(val_labels.shape))
print()
print('Test set: ResNet features shape = {}'.format(test_resnet_feature_vecs.shape))
print('Test set: labels shape = {}'.format(test_labels.shape))
###Output
Training set: ResNet features shape = (40000, 2048)
Training set: labels shape = (40000,)
Validation set: ResNet features shape = (10000, 2048)
Validation set: labels shape = (10000,)
Test set: ResNet features shape = (10000, 2048)
Test set: labels shape = (10000,)
###Markdown
Visualize the ResNet features
###Code
pca = PCA(n_components=50)
embed = TSNE(n_components=2, init='pca')
dim_red = Pipeline([('pca', pca), ('embed', embed)])
dim_red_subset = np.random.choice(np.arange(train_resnet_feature_vecs.shape[0]), size=5000, replace=False)
train_resnet_feature_vecs_subset_dimred = dim_red.fit_transform(train_resnet_feature_vecs[dim_red_subset])
train_labels_subset = train_labels[dim_red_subset]
fig, ax = plt.subplots(figsize=(10, 10))
sc = ax.scatter(train_resnet_feature_vecs_subset_dimred[:, 0],
train_resnet_feature_vecs_subset_dimred[:, 1],
c=train_labels_subset,
marker='.',
cmap='nipy_spectral')
plt.colorbar(sc)
###Output
_____no_output_____
###Markdown
Fit an SVM to ResNet features
###Code
resnet_svm_param_grid = {'C': np.logspace(-4, 4, num=9, endpoint=True, base=10)}
resnet_clf = GridSearchCV(estimator=LinearSVC(), param_grid=resnet_svm_param_grid, cv=3, n_jobs=4, verbose=10)
resnet_clf.fit(X=train_resnet_feature_vecs, y=train_labels)
val_resnet_predictions = resnet_clf.predict(X=val_resnet_feature_vecs)
val_resnet_accuracy = np.mean(val_resnet_predictions == val_labels)
print('Validation Resnet accuracy = {} (use this to tune params)'.format(val_resnet_accuracy))
test_resnet_predictions = resnet_clf.predict(X=test_resnet_feature_vecs)
test_resnet_accuracy = np.mean(test_resnet_predictions == test_labels)
print('Test Resnet accuracy = {} (DO NOT use this to tune params!)'.format(test_resnet_accuracy))
###Output
Test Resnet accuracy = 0.6337 (DO NOT use this to tune params!)
###Markdown
###Code
import tensorflow as tf
from tensorflow import keras  # needed for the keras.* calls below (keras.datasets, keras.utils, keras.Sequential)
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
tf.config.experimental.list_physical_devices()
(X_train,y_train), (X_test,y_test) = keras.datasets.cifar10.load_data()
X_train.shape
index = 5
plt.imshow(X_train[index])
X_train.shape
#X_train_flatten = X_train.reshape(X_train.shape[0],-1).T
#y_train_flatten = y_train.reshape(y_train.shape[0],-1).T
#X_test_flatten = X_test.reshape(X_test.shape[0],-1).T
#y_test_flatten = y_test.reshape(y_test.shape[0],-1).T
y_train.shape
X_train_final = X_train/255
#y_train_final = y_train_flatten/255
X_test_final = X_test/255
#y_test_final = y_test_flatten/255
def relu(z):
return max(0,z)
y_train_categorical = keras.utils.to_categorical(
y_train, num_classes=10, dtype='float32'
)
y_test_categorical = keras.utils.to_categorical(
y_test, num_classes=10, dtype='float32'
)
y_train[0:5]
model = keras.Sequential([
keras.layers.Flatten(input_shape=(32,32,3)),
keras.layers.Dense(3000, activation='relu'),
keras.layers.Dense(1000, activation='relu'),
keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='SGD',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train_final, y_train_categorical, epochs=1)
def get_model():
model = keras.Sequential([
keras.layers.Flatten(input_shape=(32,32,3)),
keras.layers.Dense(3000, activation='relu'),
keras.layers.Dense(1000, activation='relu'),
keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='SGD',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
prediction = model.predict(X_test_final)
%%timeit -n1 -r1
with tf.device('/CPU:0'):
cpu_model = get_model()
cpu_model.fit(X_train_final, y_train_categorical, epochs=1)
np.argmax(prediction[9])
y_test[9]
###Output
_____no_output_____
###Markdown
CNN Implementation
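As a quick sanity check on the architecture below (our arithmetic, assuming the default valid padding): a 3×3 convolution shrinks the 32×32 input to 30×30, max pooling halves it to 15×15, the second convolution gives 13×13, pooling gives 6×6, so the flattened vector entering the dense layers has 6 × 6 × 64 = 2304 values.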
###Code
cnn = models.Sequential([
layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3)),
layers.MaxPooling2D((2, 2)),
layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
layers.MaxPooling2D((2, 2)),
layers.Flatten(),
layers.Dense(64, activation='relu'),
layers.Dense(10, activation='softmax')
])
cnn.summary()
cnn.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
cnn.fit(X_train, y_train, epochs=10)
###Output
Epoch 1/10
1563/1563 [==============================] - 57s 36ms/step - loss: 3.2933 - accuracy: 0.2945
Epoch 2/10
1563/1563 [==============================] - 57s 36ms/step - loss: 1.3525 - accuracy: 0.5197
Epoch 3/10
1563/1563 [==============================] - 57s 36ms/step - loss: 1.1631 - accuracy: 0.5923
Epoch 4/10
1563/1563 [==============================] - 56s 36ms/step - loss: 1.0229 - accuracy: 0.6470
Epoch 5/10
1563/1563 [==============================] - 57s 37ms/step - loss: 0.9365 - accuracy: 0.6783
Epoch 6/10
1563/1563 [==============================] - 57s 36ms/step - loss: 0.8721 - accuracy: 0.6983
Epoch 7/10
1563/1563 [==============================] - 57s 36ms/step - loss: 0.7981 - accuracy: 0.7231
Epoch 8/10
1563/1563 [==============================] - 58s 37ms/step - loss: 0.7493 - accuracy: 0.7417
Epoch 9/10
1563/1563 [==============================] - 58s 37ms/step - loss: 0.6969 - accuracy: 0.7558
Epoch 10/10
1563/1563 [==============================] - 58s 37ms/step - loss: 0.6580 - accuracy: 0.7745
###Markdown
###Code
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # .next() was removed from newer DataLoader iterators; next() works everywhere
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 48, 3, 1)
self.conv2 = nn.Conv2d(48, 96, 3, 1)
self.conv3 = nn.Conv2d(96, 192, 3, 1)
self.conv4 = nn.Conv2d(192, 256, 3, 1)
self.pool = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(5*5*256, 512)
self.fc2 = nn.Linear(512, 64)
self.fc3 = nn.Linear(64, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = self.pool(x)
x = F.relu(self.conv3(x))
x = F.relu(self.conv4(x))
x = self.pool(x)
x = x.view(-1, 5*5*256)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
net = net.to(device)
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(15): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs, labels = data[0].to(device), data[1].to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
dataiter = iter(testloader)
images, labels = next(dataiter)  # .next() was removed from newer DataLoader iterators; next() works everywhere
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
net = Net()
net.load_state_dict(torch.load(PATH))
outputs = net(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
average_accuracy = 100 * correct / total
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
class_vals=[]
perAccuracy=[]
for i in range(10):
print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
class_vals.append(classes[i])
perAccuracy.append(100 * class_correct[i] / class_total[i])
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
plt.xlabel('Class')
plt.ylabel('Percentage')
ax.bar(class_vals,perAccuracy)
###Output
Accuracy of plane : 81 %
Accuracy of car : 87 %
Accuracy of bird : 67 %
Accuracy of cat : 59 %
Accuracy of deer : 77 %
Accuracy of dog : 69 %
Accuracy of frog : 84 %
Accuracy of horse : 80 %
Accuracy of ship : 87 %
Accuracy of truck : 83 %
###Markdown
Building an Artificial Neural Network (**ANN**) first to check the performance
###Code
ann = models.Sequential([
layers.Flatten(input_shape=(32,32,3)),
layers.Dense(3000, activation='relu'),
layers.Dense(1000, activation='relu'),
layers.Dense(10, activation='softmax')
])
ann.compile(optimizer='SGD',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
ann.fit(X_train, y_train, epochs=10)
ann.evaluate(X_test, y_test)
from sklearn.metrics import confusion_matrix , classification_report
import numpy as np
y_pred = ann.predict(X_test)
y_pred_classes = [np.argmax(element) for element in y_pred]
print("Classification Report: \n\n", classification_report(y_test, y_pred_classes))
###Output
Classification Report:
precision recall f1-score support
0 0.72 0.24 0.36 1000
1 0.82 0.37 0.51 1000
2 0.22 0.77 0.34 1000
3 0.38 0.27 0.32 1000
4 0.33 0.47 0.39 1000
5 0.50 0.27 0.35 1000
6 0.55 0.54 0.55 1000
7 0.83 0.28 0.42 1000
8 0.72 0.53 0.61 1000
9 0.57 0.58 0.58 1000
accuracy 0.43 10000
macro avg 0.57 0.43 0.44 10000
weighted avg 0.57 0.43 0.44 10000
###Markdown
Building a Convolutional Neural Network **(CNN)**
###Code
cnn = models.Sequential([
#cnn layers
layers.Conv2D(filters=32, kernel_size=(3,3), activation ='relu', input_shape=(32,32,3)),
layers.MaxPooling2D((2,2)),
layers.Conv2D(filters=32, kernel_size=(3,3), activation ='relu') ,
layers.MaxPooling2D((2,2)),
#dense layers
layers.Flatten(),
layers.Dense(64, activation='relu'),
layers.Dense(10, activation='softmax')
])
cnn.compile(optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics = ["accuracy"])
cnn.fit(X_train, y_train, epochs = 10)
cnn.evaluate(X_test, y_test)
y_test[:5] #2 dimensional array
y_test = y_test.reshape(-1) #converting to 1 dimensional array
y_test[:5]
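# NOTE (assumption): plot_sample and classes are used below but are not defined
# anywhere in this excerpt; the original notebook presumably defined them in an
# earlier cell. A minimal sketch consistent with how they are called:
import matplotlib.pyplot as plt
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
def plot_sample(X, y, index):
    # show a single image together with its ground-truth class name
    plt.figure(figsize=(2, 2))
    plt.imshow(X[index])
    plt.xlabel(classes[int(y[index])])
    plt.show()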
plot_sample(X_test, y_test, 1)
###Output
_____no_output_____
###Markdown
Predicting with the model and checking its performance
###Code
y_pred = cnn.predict(X_test)
y_pred[:5]
y_classes = [np.argmax(element) for element in y_pred]
y_classes[:5]
y_test[:5]
###Output
_____no_output_____
###Markdown
As you can see above, the model predicts the correct class for all of these test samples except one, for which it predicts 6. All the other values are predicted successfully, in line with the roughly 82% accuracy that was achieved.
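A quick way to check the overall test accuracy being quoted (a sketch using the `y_classes` and `y_test` variables from the surrounding cells):

```python
import numpy as np

# Fraction of test images whose predicted class index matches the label
print("CNN test accuracy:", np.mean(np.array(y_classes) == y_test))
```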
###Code
plot_sample(X_test, y_test, 1)
classes[y_classes[1]]
###Output
_____no_output_____
###Markdown
Here (above), the model has predicted correctly.
###Code
plot_sample(X_test, y_test, 4)
classes[y_classes[4]]
###Output
_____no_output_____
###Markdown
Here (above), the model has predicted incorrectly.
###Code
from sklearn.metrics import confusion_matrix , classification_report
import numpy as np
y_pred = cnn.predict(X_test)
y_pred_classes = [np.argmax(element) for element in y_pred]
print("Classification Report: \n", classification_report(y_test, y_pred_classes))
###Output
Classification Report:
precision recall f1-score support
0 0.68 0.73 0.70 1000
1 0.85 0.75 0.80 1000
2 0.56 0.59 0.57 1000
3 0.50 0.45 0.48 1000
4 0.64 0.63 0.63 1000
5 0.57 0.67 0.62 1000
6 0.80 0.71 0.75 1000
7 0.68 0.78 0.73 1000
8 0.77 0.77 0.77 1000
9 0.80 0.74 0.76 1000
accuracy 0.68 10000
macro avg 0.69 0.68 0.68 10000
weighted avg 0.69 0.68 0.68 10000
###Markdown
Training a network on CIFAR10. Downloading the dataset using torchvision
###Code
from torchvision import datasets
cifar10 = datasets.CIFAR10("./", train=True, download=True)
cifar10
cifar10_val = datasets.CIFAR10("./", train=False, download=True)
cifar10_val
len(cifar10)
cifar10[80]
###Output
_____no_output_____
###Markdown
Accessing data
###Code
import matplotlib.pyplot as plt
class_names = ['airplane','automobile','bird','cat','deer',
'dog','frog','horse','ship','truck']
fig = plt.figure(figsize=(8,3))
num_classes = 10
for i in range(num_classes):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
ax.set_title(class_names[i])
img = next(img for img, label in cifar10 if label == i)
plt.imshow(img)
plt.show()
img, label = cifar10[80]
class_names[label]
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Transforms
###Code
from torchvision import transforms
dir(transforms)
###Output
_____no_output_____
###Markdown
Converting the images to a tensor
###Code
to_tensor = transforms.ToTensor()
img_t = to_tensor(img)
img_t
img_t.shape
###Output
_____no_output_____
###Markdown
Directly getting the transformed dataset
###Code
tensor_cifar10 = datasets.CIFAR10("./", train=True, download=False,
transform=transforms.ToTensor())
img_t, _ = tensor_cifar10[80]
img_t
img_t.max(), img_t.min(), img_t.shape, type(img_t)
plt.imshow(img_t.permute(1, 2, 0))
plt.show()
###Output
_____no_output_____
###Markdown
Normalizing data
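Normalization rescales each channel to roughly zero mean and unit variance, $x'_c = (x_c - \mu_c) / \sigma_c$, using the per-channel mean and standard deviation computed from the training images below.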
###Code
import torch
imgs = torch.stack([img_t for img_t, _ in tensor_cifar10], dim=3)
imgs.shape
imgs.view(3, -1).mean(dim=1)
imgs.view(3, -1).std(dim=1)
transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))
transformed_cifar10 = datasets.CIFAR10("./", train=True, download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468), (0.2470, 0.2435, 0.2616))
]))
###Output
_____no_output_____
###Markdown
Making a birds v/s planes classifier
###Code
cifar10 = datasets.CIFAR10(
"./", train=True, download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468),
(0.2470, 0.2435, 0.2616))
]))
cifar10_val = datasets.CIFAR10(
"./", train=False, download=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.4915, 0.4823, 0.4468),
(0.2470, 0.2435, 0.2616))
]))
label_map = {0: 0, 2: 1}
class_names = ['airplane', 'bird']
cifar2 = [(img, label_map[label])
for img, label in cifar10
if label in [0, 2]]
cifar2_val = [(img, label_map[label])
for img, label in cifar10_val
if label in [0, 2]]
###Output
_____no_output_____
###Markdown
Defining the model
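The first linear layer takes 3072 inputs because each CIFAR-10 image is flattened: $32 \times 32 \times 3 = 3072$ values.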
###Code
import torch.nn as nn
model = nn.Sequential(
nn.Linear(3072,
512),
nn.Tanh(),
nn.Linear(512, 2)
)
###Output
_____no_output_____
###Markdown
To turn the network's outputs into class probabilities, we add a `Softmax` layer.
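Concretely, softmax turns the two output scores $z_1, z_2$ into probabilities $p_i = e^{z_i} / (e^{z_1} + e^{z_2})$, so both outputs are non-negative and sum to 1.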
###Code
model = nn.Sequential(
nn.Linear(3072,
512),
nn.Tanh(),
nn.Linear(512, 2),
nn.Softmax(dim=1)
)
###Output
_____no_output_____
###Markdown
Let's try to run the model without even training it
###Code
img, _ = cifar2[0]
type(img)
plt.imshow(img.permute(1, 2, 0))
plt.show()
img.shape
batch_img = img.view(-1).unsqueeze(0)
batch_img.shape
model(batch_img)
###Output
_____no_output_____
###Markdown
The model needs to be penalized when it makes incorrect predictions, so we switch the output to `LogSoftmax` and pair it with the negative log-likelihood loss (`NLLLoss`).
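With `LogSoftmax` outputs, `NLLLoss` computes $\ell = -\log p_{\text{correct}}$, so confident wrong predictions incur a large loss. In PyTorch this pairing is exactly what `nn.CrossEntropyLoss` does on raw logits; a tiny sketch to convince yourself:

```python
import torch
import torch.nn as nn

logits = torch.randn(1, 2)
target = torch.tensor([1])
# CrossEntropyLoss on raw logits == NLLLoss on LogSoftmax outputs
a = nn.CrossEntropyLoss()(logits, target)
b = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
assert torch.isclose(a, b)
```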
###Code
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
loss = nn.NLLLoss()
img, label = cifar2[0]
out = model(img.view(-1).unsqueeze(0))
loss(out, torch.tensor([label]))
###Output
_____no_output_____
###Markdown
Training the classifier
###Code
torch.cuda.set_device(0)
torch.cuda.get_device_name(0)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # note: the training loop below never moves the model or tensors to this device, so it runs on the CPU
import torch
import torch.nn as nn
from torch import optim
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1)
)
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.NLLLoss()
n_epochs = 100
for epoch in range(0, n_epochs):
for img, label in cifar2:
pred = model(img.view(-1).unsqueeze(0))
loss = loss_fn(pred, torch.tensor([label]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
###Output
Epoch: 0, Loss: 4.885203
Epoch: 1, Loss: 8.398516
Epoch: 2, Loss: 12.194094
Epoch: 3, Loss: 8.400795
Epoch: 4, Loss: 7.035946
Epoch: 5, Loss: 6.672678
Epoch: 6, Loss: 14.702289
Epoch: 7, Loss: 2.806485
Epoch: 8, Loss: 9.093428
Epoch: 9, Loss: 1.937908
Epoch: 10, Loss: 0.096189
Epoch: 11, Loss: 7.063046
Epoch: 12, Loss: 10.701083
Epoch: 13, Loss: 8.897695
Epoch: 14, Loss: 10.900598
Epoch: 15, Loss: 3.986428
Epoch: 16, Loss: 0.029039
Epoch: 17, Loss: 1.960492
Epoch: 18, Loss: 10.484950
Epoch: 19, Loss: 1.311135
Epoch: 20, Loss: 7.059778
Epoch: 21, Loss: 8.214798
Epoch: 22, Loss: 11.473054
Epoch: 23, Loss: 4.624972
Epoch: 24, Loss: 0.464035
Epoch: 25, Loss: 5.108599
Epoch: 26, Loss: 0.656461
Epoch: 27, Loss: 4.596004
Epoch: 28, Loss: 1.365604
Epoch: 29, Loss: 3.978047
Epoch: 30, Loss: 13.991315
Epoch: 31, Loss: 0.012959
Epoch: 32, Loss: 9.526802
Epoch: 33, Loss: 5.412449
Epoch: 34, Loss: 5.310781
Epoch: 35, Loss: 7.506864
Epoch: 36, Loss: 7.706320
Epoch: 37, Loss: 13.320793
Epoch: 38, Loss: 8.017707
Epoch: 39, Loss: 14.925833
Epoch: 40, Loss: 19.361202
Epoch: 41, Loss: 11.212448
Epoch: 42, Loss: 15.842257
Epoch: 43, Loss: 9.839250
Epoch: 44, Loss: 4.075000
Epoch: 45, Loss: 6.826460
Epoch: 46, Loss: 8.869939
Epoch: 47, Loss: 11.255908
Epoch: 48, Loss: 1.516616
Epoch: 49, Loss: 3.173884
Epoch: 50, Loss: 15.334185
Epoch: 51, Loss: 17.397785
Epoch: 52, Loss: 4.067145
Epoch: 53, Loss: 13.176816
Epoch: 54, Loss: 0.064727
Epoch: 55, Loss: 17.370178
Epoch: 56, Loss: 7.199696
Epoch: 57, Loss: 20.260681
Epoch: 58, Loss: 18.920212
Epoch: 59, Loss: 12.034130
Epoch: 60, Loss: 13.000215
Epoch: 61, Loss: 12.489149
Epoch: 62, Loss: 12.272771
Epoch: 63, Loss: 12.774753
Epoch: 64, Loss: 10.542746
Epoch: 65, Loss: 15.277123
Epoch: 66, Loss: 3.086051
Epoch: 67, Loss: 16.968788
Epoch: 68, Loss: 14.771244
Epoch: 69, Loss: 3.802227
Epoch: 70, Loss: 10.204108
Epoch: 71, Loss: 5.821189
Epoch: 72, Loss: 7.695176
Epoch: 73, Loss: 1.036054
Epoch: 74, Loss: 2.437079
Epoch: 75, Loss: 9.829401
Epoch: 76, Loss: 0.046492
Epoch: 77, Loss: 9.588590
Epoch: 78, Loss: 10.307067
Epoch: 79, Loss: 6.878657
Epoch: 80, Loss: 1.898898
Epoch: 81, Loss: 0.028771
Epoch: 82, Loss: 0.000025
Epoch: 83, Loss: 4.961995
Epoch: 84, Loss: 10.551287
Epoch: 85, Loss: 1.690318
Epoch: 86, Loss: 0.106867
Epoch: 87, Loss: 3.279341
Epoch: 88, Loss: 4.938417
Epoch: 89, Loss: 16.231289
Epoch: 90, Loss: 11.445296
Epoch: 91, Loss: 17.599178
Epoch: 92, Loss: 20.918266
Epoch: 93, Loss: 19.979803
Epoch: 94, Loss: 18.109711
Epoch: 95, Loss: 16.496416
Epoch: 96, Loss: 12.549117
Epoch: 97, Loss: 15.137671
Epoch: 98, Loss: 15.985045
Epoch: 99, Loss: 6.681483
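###Markdown
The loss printed above is the loss of a single sample, so it jumps around a lot and does not tell us much by itself. As a minimal check that the model learned anything, assuming `cifar2_val` built earlier, we can count how often the most probable class matches the label on the validation set:
###Code
# Fraction of validation images classified correctly by the model trained above
correct = 0
with torch.no_grad():
    for img, label in cifar2_val:
        out = model(img.view(-1).unsqueeze(0))
        _, predicted = torch.max(out, dim=1)
        correct += int(predicted == label)
print("Validation accuracy: %f" % (correct / len(cifar2_val)))
###Output
_____no_output_____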
###Markdown
Using a `DataLoader` to form minibatches of training data instead of feeding one sample at a time
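Before training on minibatches, it helps to peek at one of them to see what the loader hands us. A quick sketch (`peek_loader` is a throwaway name used only for this check; the batch size of 64 matches the training cell below):
###Code
# Draw a single minibatch and inspect its shape: 64 images of 3x32x32 pixels
peek_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
imgs, labels = next(iter(peek_loader))
imgs.shape, labels.shape
###Output
_____no_output_____
###Markdown
The training loop below consumes the same kind of minibatches directly.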
###Code
import torch
import torch.nn as nn
from torch import optim
torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64,
shuffle=True)
model = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1)
)
learning_rate = 1e-2
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
loss_fn = nn.NLLLoss()
n_epochs = 100
for epoch in range(0, n_epochs):
    for imgs, labels in train_loader:
        batch_size = imgs.shape[0]
        # Flatten the whole minibatch at once; the loss is now averaged over the batch
        preds = model(imgs.view(batch_size, -1))
        loss = loss_fn(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Printing inside the inner loop reports the loss of every minibatch
        print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
###Output
Streaming output truncated to the last 5000 lines.
[... per-minibatch loss lines for epochs 68 through 82 omitted; by this stage the printed losses mostly sit between 0.01 and 0.1, with occasional spikes up to about 0.19 ...]
Epoch: 76, Loss: 0.028041
Epoch: 76, Loss: 0.020062
Epoch: 76, Loss: 0.021335
Epoch: 76, Loss: 0.022245
Epoch: 76, Loss: 0.014035
Epoch: 76, Loss: 0.028279
Epoch: 76, Loss: 0.027886
Epoch: 76, Loss: 0.030423
Epoch: 76, Loss: 0.026669
Epoch: 76, Loss: 0.027546
Epoch: 76, Loss: 0.024662
Epoch: 76, Loss: 0.021761
Epoch: 76, Loss: 0.028683
Epoch: 76, Loss: 0.041864
Epoch: 76, Loss: 0.024908
Epoch: 76, Loss: 0.034272
Epoch: 76, Loss: 0.086297
Epoch: 76, Loss: 0.127354
Epoch: 76, Loss: 0.067788
Epoch: 76, Loss: 0.028466
Epoch: 76, Loss: 0.045639
Epoch: 76, Loss: 0.036330
Epoch: 76, Loss: 0.021019
Epoch: 76, Loss: 0.024526
Epoch: 76, Loss: 0.024432
Epoch: 76, Loss: 0.026193
Epoch: 76, Loss: 0.029053
Epoch: 76, Loss: 0.023954
Epoch: 76, Loss: 0.045688
Epoch: 76, Loss: 0.038527
Epoch: 76, Loss: 0.025283
Epoch: 76, Loss: 0.026898
Epoch: 76, Loss: 0.017390
Epoch: 76, Loss: 0.032244
Epoch: 76, Loss: 0.050444
Epoch: 76, Loss: 0.044087
Epoch: 76, Loss: 0.024961
Epoch: 76, Loss: 0.026428
Epoch: 76, Loss: 0.044820
Epoch: 76, Loss: 0.063160
Epoch: 76, Loss: 0.052642
Epoch: 76, Loss: 0.023339
Epoch: 76, Loss: 0.045275
Epoch: 76, Loss: 0.054115
Epoch: 76, Loss: 0.046135
Epoch: 76, Loss: 0.016485
Epoch: 76, Loss: 0.027904
Epoch: 76, Loss: 0.023074
Epoch: 76, Loss: 0.032942
Epoch: 76, Loss: 0.087063
Epoch: 76, Loss: 0.053957
Epoch: 76, Loss: 0.027029
Epoch: 76, Loss: 0.024625
Epoch: 76, Loss: 0.021041
Epoch: 76, Loss: 0.017186
Epoch: 76, Loss: 0.021198
Epoch: 76, Loss: 0.018983
Epoch: 76, Loss: 0.047745
Epoch: 76, Loss: 0.025657
Epoch: 76, Loss: 0.026987
Epoch: 76, Loss: 0.035261
Epoch: 76, Loss: 0.019071
Epoch: 76, Loss: 0.032061
Epoch: 76, Loss: 0.020638
Epoch: 76, Loss: 0.026601
Epoch: 76, Loss: 0.044773
Epoch: 76, Loss: 0.025796
Epoch: 76, Loss: 0.049252
Epoch: 76, Loss: 0.030570
Epoch: 76, Loss: 0.019685
Epoch: 76, Loss: 0.052914
Epoch: 76, Loss: 0.054830
Epoch: 76, Loss: 0.020941
Epoch: 76, Loss: 0.032904
Epoch: 76, Loss: 0.030039
Epoch: 76, Loss: 0.029192
Epoch: 76, Loss: 0.031760
Epoch: 76, Loss: 0.028460
Epoch: 76, Loss: 0.020555
Epoch: 76, Loss: 0.036808
Epoch: 76, Loss: 0.039029
Epoch: 76, Loss: 0.017642
Epoch: 76, Loss: 0.040211
Epoch: 76, Loss: 0.012794
Epoch: 76, Loss: 0.025280
Epoch: 76, Loss: 0.037215
Epoch: 76, Loss: 0.034033
Epoch: 76, Loss: 0.023183
Epoch: 76, Loss: 0.036298
Epoch: 76, Loss: 0.032391
Epoch: 76, Loss: 0.033803
Epoch: 76, Loss: 0.036618
Epoch: 76, Loss: 0.028582
Epoch: 76, Loss: 0.017723
Epoch: 76, Loss: 0.016848
Epoch: 76, Loss: 0.044920
Epoch: 76, Loss: 0.029529
Epoch: 76, Loss: 0.028862
Epoch: 76, Loss: 0.039363
Epoch: 76, Loss: 0.040640
Epoch: 76, Loss: 0.012437
Epoch: 76, Loss: 0.054595
Epoch: 76, Loss: 0.023156
Epoch: 76, Loss: 0.021309
Epoch: 76, Loss: 0.030862
Epoch: 76, Loss: 0.029403
Epoch: 76, Loss: 0.031486
Epoch: 76, Loss: 0.013413
Epoch: 76, Loss: 0.029637
Epoch: 76, Loss: 0.029639
Epoch: 76, Loss: 0.022111
Epoch: 76, Loss: 0.026412
Epoch: 76, Loss: 0.023110
Epoch: 76, Loss: 0.015918
Epoch: 76, Loss: 0.020720
Epoch: 76, Loss: 0.045543
Epoch: 76, Loss: 0.028694
Epoch: 76, Loss: 0.032151
Epoch: 76, Loss: 0.065409
Epoch: 76, Loss: 0.066806
Epoch: 76, Loss: 0.066857
Epoch: 76, Loss: 0.030190
Epoch: 76, Loss: 0.050100
Epoch: 76, Loss: 0.039507
Epoch: 76, Loss: 0.016896
Epoch: 76, Loss: 0.033150
Epoch: 76, Loss: 0.031973
Epoch: 76, Loss: 0.025469
Epoch: 76, Loss: 0.019789
Epoch: 76, Loss: 0.022075
Epoch: 76, Loss: 0.022420
Epoch: 76, Loss: 0.022726
Epoch: 76, Loss: 0.023428
Epoch: 76, Loss: 0.049005
Epoch: 76, Loss: 0.021597
Epoch: 76, Loss: 0.043078
Epoch: 76, Loss: 0.056186
Epoch: 76, Loss: 0.031417
Epoch: 76, Loss: 0.024216
Epoch: 77, Loss: 0.023071
Epoch: 77, Loss: 0.033633
Epoch: 77, Loss: 0.030777
Epoch: 77, Loss: 0.046545
Epoch: 77, Loss: 0.020971
Epoch: 77, Loss: 0.047033
Epoch: 77, Loss: 0.026673
Epoch: 77, Loss: 0.041180
Epoch: 77, Loss: 0.041262
Epoch: 77, Loss: 0.035697
Epoch: 77, Loss: 0.027157
Epoch: 77, Loss: 0.026967
Epoch: 77, Loss: 0.028150
Epoch: 77, Loss: 0.022891
Epoch: 77, Loss: 0.030014
Epoch: 77, Loss: 0.014373
Epoch: 77, Loss: 0.025653
Epoch: 77, Loss: 0.039450
Epoch: 77, Loss: 0.030450
Epoch: 77, Loss: 0.027472
Epoch: 77, Loss: 0.023456
Epoch: 77, Loss: 0.025120
Epoch: 77, Loss: 0.022941
Epoch: 77, Loss: 0.028367
Epoch: 77, Loss: 0.026780
Epoch: 77, Loss: 0.028920
Epoch: 77, Loss: 0.027233
Epoch: 77, Loss: 0.032428
Epoch: 77, Loss: 0.031645
Epoch: 77, Loss: 0.026327
Epoch: 77, Loss: 0.033054
Epoch: 77, Loss: 0.032511
Epoch: 77, Loss: 0.039747
Epoch: 77, Loss: 0.035308
Epoch: 77, Loss: 0.036155
Epoch: 77, Loss: 0.023532
Epoch: 77, Loss: 0.026479
Epoch: 77, Loss: 0.032855
Epoch: 77, Loss: 0.022262
Epoch: 77, Loss: 0.029715
Epoch: 77, Loss: 0.031182
Epoch: 77, Loss: 0.022714
Epoch: 77, Loss: 0.021331
Epoch: 77, Loss: 0.024131
Epoch: 77, Loss: 0.021023
Epoch: 77, Loss: 0.038197
Epoch: 77, Loss: 0.026070
Epoch: 77, Loss: 0.036699
Epoch: 77, Loss: 0.030622
Epoch: 77, Loss: 0.036525
Epoch: 77, Loss: 0.054599
Epoch: 77, Loss: 0.016312
Epoch: 77, Loss: 0.011640
Epoch: 77, Loss: 0.031101
Epoch: 77, Loss: 0.033208
Epoch: 77, Loss: 0.028918
Epoch: 77, Loss: 0.031929
Epoch: 77, Loss: 0.027265
Epoch: 77, Loss: 0.018209
Epoch: 77, Loss: 0.029139
Epoch: 77, Loss: 0.017464
Epoch: 77, Loss: 0.025292
Epoch: 77, Loss: 0.044260
Epoch: 77, Loss: 0.017067
Epoch: 77, Loss: 0.029672
Epoch: 77, Loss: 0.031421
Epoch: 77, Loss: 0.028405
Epoch: 77, Loss: 0.023135
Epoch: 77, Loss: 0.028315
Epoch: 77, Loss: 0.053940
Epoch: 77, Loss: 0.032788
Epoch: 77, Loss: 0.032399
Epoch: 77, Loss: 0.041711
Epoch: 77, Loss: 0.024475
Epoch: 77, Loss: 0.035773
Epoch: 77, Loss: 0.021617
Epoch: 77, Loss: 0.093308
Epoch: 77, Loss: 0.052525
Epoch: 77, Loss: 0.028828
Epoch: 77, Loss: 0.036618
Epoch: 77, Loss: 0.034109
Epoch: 77, Loss: 0.064136
Epoch: 77, Loss: 0.032957
Epoch: 77, Loss: 0.026771
Epoch: 77, Loss: 0.020335
Epoch: 77, Loss: 0.042653
Epoch: 77, Loss: 0.032126
Epoch: 77, Loss: 0.040708
Epoch: 77, Loss: 0.033115
Epoch: 77, Loss: 0.027369
Epoch: 77, Loss: 0.031452
Epoch: 77, Loss: 0.029131
Epoch: 77, Loss: 0.019786
Epoch: 77, Loss: 0.019768
Epoch: 77, Loss: 0.014415
Epoch: 77, Loss: 0.020638
Epoch: 77, Loss: 0.023076
Epoch: 77, Loss: 0.020640
Epoch: 77, Loss: 0.025827
Epoch: 77, Loss: 0.035197
Epoch: 77, Loss: 0.016474
Epoch: 77, Loss: 0.027302
Epoch: 77, Loss: 0.029093
Epoch: 77, Loss: 0.032265
Epoch: 77, Loss: 0.012019
Epoch: 77, Loss: 0.022449
Epoch: 77, Loss: 0.023450
Epoch: 77, Loss: 0.023202
Epoch: 77, Loss: 0.012804
Epoch: 77, Loss: 0.036616
Epoch: 77, Loss: 0.026102
Epoch: 77, Loss: 0.024914
Epoch: 77, Loss: 0.055509
Epoch: 77, Loss: 0.060121
Epoch: 77, Loss: 0.033743
Epoch: 77, Loss: 0.034930
Epoch: 77, Loss: 0.020234
Epoch: 77, Loss: 0.026819
Epoch: 77, Loss: 0.022684
Epoch: 77, Loss: 0.013468
Epoch: 77, Loss: 0.034716
Epoch: 77, Loss: 0.018627
Epoch: 77, Loss: 0.022424
Epoch: 77, Loss: 0.036888
Epoch: 77, Loss: 0.043379
Epoch: 77, Loss: 0.019996
Epoch: 77, Loss: 0.030616
Epoch: 77, Loss: 0.031794
Epoch: 77, Loss: 0.022811
Epoch: 77, Loss: 0.031210
Epoch: 77, Loss: 0.022489
Epoch: 77, Loss: 0.030394
Epoch: 77, Loss: 0.091034
Epoch: 77, Loss: 0.021043
Epoch: 77, Loss: 0.034311
Epoch: 77, Loss: 0.022608
Epoch: 77, Loss: 0.018990
Epoch: 77, Loss: 0.034003
Epoch: 77, Loss: 0.039045
Epoch: 77, Loss: 0.022802
Epoch: 77, Loss: 0.027325
Epoch: 77, Loss: 0.015368
Epoch: 77, Loss: 0.050984
Epoch: 77, Loss: 0.045637
Epoch: 77, Loss: 0.026564
Epoch: 77, Loss: 0.045182
Epoch: 77, Loss: 0.028129
Epoch: 77, Loss: 0.027726
Epoch: 77, Loss: 0.044792
Epoch: 77, Loss: 0.021146
Epoch: 77, Loss: 0.023825
Epoch: 77, Loss: 0.027616
Epoch: 77, Loss: 0.020029
Epoch: 77, Loss: 0.033643
Epoch: 77, Loss: 0.021138
Epoch: 77, Loss: 0.028242
Epoch: 77, Loss: 0.018470
Epoch: 78, Loss: 0.035702
Epoch: 78, Loss: 0.021533
Epoch: 78, Loss: 0.019558
Epoch: 78, Loss: 0.058816
Epoch: 78, Loss: 0.037740
Epoch: 78, Loss: 0.023480
Epoch: 78, Loss: 0.024070
Epoch: 78, Loss: 0.027747
Epoch: 78, Loss: 0.034359
Epoch: 78, Loss: 0.019695
Epoch: 78, Loss: 0.048446
Epoch: 78, Loss: 0.020159
Epoch: 78, Loss: 0.025717
Epoch: 78, Loss: 0.032718
Epoch: 78, Loss: 0.033765
Epoch: 78, Loss: 0.017021
Epoch: 78, Loss: 0.020928
Epoch: 78, Loss: 0.029389
Epoch: 78, Loss: 0.022266
Epoch: 78, Loss: 0.024482
Epoch: 78, Loss: 0.025516
Epoch: 78, Loss: 0.025284
Epoch: 78, Loss: 0.021845
Epoch: 78, Loss: 0.050106
Epoch: 78, Loss: 0.027284
Epoch: 78, Loss: 0.018522
Epoch: 78, Loss: 0.018970
Epoch: 78, Loss: 0.019622
Epoch: 78, Loss: 0.022168
Epoch: 78, Loss: 0.020195
Epoch: 78, Loss: 0.027021
Epoch: 78, Loss: 0.015817
Epoch: 78, Loss: 0.025020
Epoch: 78, Loss: 0.054775
Epoch: 78, Loss: 0.036754
Epoch: 78, Loss: 0.038682
Epoch: 78, Loss: 0.023349
Epoch: 78, Loss: 0.015584
Epoch: 78, Loss: 0.016871
Epoch: 78, Loss: 0.017969
Epoch: 78, Loss: 0.019136
Epoch: 78, Loss: 0.028490
Epoch: 78, Loss: 0.018467
Epoch: 78, Loss: 0.034796
Epoch: 78, Loss: 0.041118
Epoch: 78, Loss: 0.039459
Epoch: 78, Loss: 0.042624
Epoch: 78, Loss: 0.042766
Epoch: 78, Loss: 0.029555
Epoch: 78, Loss: 0.031685
Epoch: 78, Loss: 0.081661
Epoch: 78, Loss: 0.068193
Epoch: 78, Loss: 0.034210
Epoch: 78, Loss: 0.034615
Epoch: 78, Loss: 0.049375
Epoch: 78, Loss: 0.027369
Epoch: 78, Loss: 0.053373
Epoch: 78, Loss: 0.072650
Epoch: 78, Loss: 0.073426
Epoch: 78, Loss: 0.077898
Epoch: 78, Loss: 0.107802
Epoch: 78, Loss: 0.038240
Epoch: 78, Loss: 0.066511
Epoch: 78, Loss: 0.032434
Epoch: 78, Loss: 0.026746
Epoch: 78, Loss: 0.023351
Epoch: 78, Loss: 0.041317
Epoch: 78, Loss: 0.053556
Epoch: 78, Loss: 0.025693
Epoch: 78, Loss: 0.023411
Epoch: 78, Loss: 0.040614
Epoch: 78, Loss: 0.014984
Epoch: 78, Loss: 0.025814
Epoch: 78, Loss: 0.020531
Epoch: 78, Loss: 0.037895
Epoch: 78, Loss: 0.019822
Epoch: 78, Loss: 0.021838
Epoch: 78, Loss: 0.032496
Epoch: 78, Loss: 0.029439
Epoch: 78, Loss: 0.016992
Epoch: 78, Loss: 0.041484
Epoch: 78, Loss: 0.023934
Epoch: 78, Loss: 0.026261
Epoch: 78, Loss: 0.016220
Epoch: 78, Loss: 0.023369
Epoch: 78, Loss: 0.017850
Epoch: 78, Loss: 0.021486
Epoch: 78, Loss: 0.024502
Epoch: 78, Loss: 0.027306
Epoch: 78, Loss: 0.028314
Epoch: 78, Loss: 0.021343
Epoch: 78, Loss: 0.037491
Epoch: 78, Loss: 0.026718
Epoch: 78, Loss: 0.029046
Epoch: 78, Loss: 0.022663
Epoch: 78, Loss: 0.040218
Epoch: 78, Loss: 0.038127
Epoch: 78, Loss: 0.035646
Epoch: 78, Loss: 0.018805
Epoch: 78, Loss: 0.026508
Epoch: 78, Loss: 0.030841
Epoch: 78, Loss: 0.030878
Epoch: 78, Loss: 0.019452
Epoch: 78, Loss: 0.029641
Epoch: 78, Loss: 0.033521
Epoch: 78, Loss: 0.024390
Epoch: 78, Loss: 0.015438
Epoch: 78, Loss: 0.035379
Epoch: 78, Loss: 0.025372
Epoch: 78, Loss: 0.029129
Epoch: 78, Loss: 0.024562
Epoch: 78, Loss: 0.017905
Epoch: 78, Loss: 0.017415
Epoch: 78, Loss: 0.031114
Epoch: 78, Loss: 0.029875
Epoch: 78, Loss: 0.035511
Epoch: 78, Loss: 0.043871
Epoch: 78, Loss: 0.026204
Epoch: 78, Loss: 0.041917
Epoch: 78, Loss: 0.067197
Epoch: 78, Loss: 0.040102
Epoch: 78, Loss: 0.017230
Epoch: 78, Loss: 0.020809
Epoch: 78, Loss: 0.034969
Epoch: 78, Loss: 0.020872
Epoch: 78, Loss: 0.022804
Epoch: 78, Loss: 0.025811
Epoch: 78, Loss: 0.022242
Epoch: 78, Loss: 0.030012
Epoch: 78, Loss: 0.022970
Epoch: 78, Loss: 0.019004
Epoch: 78, Loss: 0.023064
Epoch: 78, Loss: 0.021283
Epoch: 78, Loss: 0.028819
Epoch: 78, Loss: 0.041741
Epoch: 78, Loss: 0.044308
Epoch: 78, Loss: 0.021193
Epoch: 78, Loss: 0.028327
Epoch: 78, Loss: 0.014566
Epoch: 78, Loss: 0.042085
Epoch: 78, Loss: 0.019113
Epoch: 78, Loss: 0.055333
Epoch: 78, Loss: 0.040152
Epoch: 78, Loss: 0.046577
Epoch: 78, Loss: 0.025686
Epoch: 78, Loss: 0.018889
Epoch: 78, Loss: 0.026123
Epoch: 78, Loss: 0.038181
Epoch: 78, Loss: 0.041655
Epoch: 78, Loss: 0.031683
Epoch: 78, Loss: 0.035920
Epoch: 78, Loss: 0.031226
Epoch: 78, Loss: 0.021042
Epoch: 78, Loss: 0.023990
Epoch: 78, Loss: 0.026806
Epoch: 78, Loss: 0.043859
Epoch: 78, Loss: 0.029039
Epoch: 79, Loss: 0.030665
Epoch: 79, Loss: 0.019594
Epoch: 79, Loss: 0.031890
Epoch: 79, Loss: 0.026515
Epoch: 79, Loss: 0.018597
Epoch: 79, Loss: 0.017360
Epoch: 79, Loss: 0.024780
Epoch: 79, Loss: 0.028416
Epoch: 79, Loss: 0.031669
Epoch: 79, Loss: 0.016833
Epoch: 79, Loss: 0.015698
Epoch: 79, Loss: 0.018830
Epoch: 79, Loss: 0.045630
Epoch: 79, Loss: 0.017258
Epoch: 79, Loss: 0.017537
Epoch: 79, Loss: 0.024419
Epoch: 79, Loss: 0.025602
Epoch: 79, Loss: 0.021385
Epoch: 79, Loss: 0.021552
Epoch: 79, Loss: 0.029905
Epoch: 79, Loss: 0.029747
Epoch: 79, Loss: 0.014425
Epoch: 79, Loss: 0.026719
Epoch: 79, Loss: 0.028642
Epoch: 79, Loss: 0.022294
Epoch: 79, Loss: 0.034625
Epoch: 79, Loss: 0.015681
Epoch: 79, Loss: 0.082662
Epoch: 79, Loss: 0.035699
Epoch: 79, Loss: 0.031454
Epoch: 79, Loss: 0.023470
Epoch: 79, Loss: 0.029066
Epoch: 79, Loss: 0.036740
Epoch: 79, Loss: 0.026734
Epoch: 79, Loss: 0.032371
Epoch: 79, Loss: 0.018516
Epoch: 79, Loss: 0.023901
Epoch: 79, Loss: 0.027211
Epoch: 79, Loss: 0.018783
Epoch: 79, Loss: 0.034517
Epoch: 79, Loss: 0.026012
Epoch: 79, Loss: 0.035641
Epoch: 79, Loss: 0.028837
Epoch: 79, Loss: 0.020539
Epoch: 79, Loss: 0.019945
Epoch: 79, Loss: 0.027498
Epoch: 79, Loss: 0.059355
Epoch: 79, Loss: 0.019549
Epoch: 79, Loss: 0.016673
Epoch: 79, Loss: 0.015869
Epoch: 79, Loss: 0.021467
Epoch: 79, Loss: 0.043096
Epoch: 79, Loss: 0.023246
Epoch: 79, Loss: 0.019774
Epoch: 79, Loss: 0.021306
Epoch: 79, Loss: 0.023949
Epoch: 79, Loss: 0.045051
Epoch: 79, Loss: 0.031815
Epoch: 79, Loss: 0.027073
Epoch: 79, Loss: 0.029337
Epoch: 79, Loss: 0.022333
Epoch: 79, Loss: 0.027299
Epoch: 79, Loss: 0.016165
Epoch: 79, Loss: 0.023122
Epoch: 79, Loss: 0.036501
Epoch: 79, Loss: 0.015835
Epoch: 79, Loss: 0.046966
Epoch: 79, Loss: 0.038912
Epoch: 79, Loss: 0.030052
Epoch: 79, Loss: 0.047189
Epoch: 79, Loss: 0.077168
Epoch: 79, Loss: 0.025150
Epoch: 79, Loss: 0.051018
Epoch: 79, Loss: 0.031952
Epoch: 79, Loss: 0.043377
Epoch: 79, Loss: 0.021611
Epoch: 79, Loss: 0.054197
Epoch: 79, Loss: 0.050701
Epoch: 79, Loss: 0.039976
Epoch: 79, Loss: 0.063528
Epoch: 79, Loss: 0.047068
Epoch: 79, Loss: 0.022223
Epoch: 79, Loss: 0.023317
Epoch: 79, Loss: 0.019627
Epoch: 79, Loss: 0.028576
Epoch: 79, Loss: 0.029449
Epoch: 79, Loss: 0.027169
Epoch: 79, Loss: 0.020973
Epoch: 79, Loss: 0.019508
Epoch: 79, Loss: 0.020740
Epoch: 79, Loss: 0.018064
Epoch: 79, Loss: 0.017435
Epoch: 79, Loss: 0.030720
Epoch: 79, Loss: 0.015472
Epoch: 79, Loss: 0.044273
Epoch: 79, Loss: 0.044529
Epoch: 79, Loss: 0.045795
Epoch: 79, Loss: 0.033422
Epoch: 79, Loss: 0.026032
Epoch: 79, Loss: 0.030694
Epoch: 79, Loss: 0.021999
Epoch: 79, Loss: 0.020037
Epoch: 79, Loss: 0.056458
Epoch: 79, Loss: 0.045569
Epoch: 79, Loss: 0.097101
Epoch: 79, Loss: 0.040532
Epoch: 79, Loss: 0.041946
Epoch: 79, Loss: 0.039393
Epoch: 79, Loss: 0.027276
Epoch: 79, Loss: 0.031539
Epoch: 79, Loss: 0.020563
Epoch: 79, Loss: 0.022341
Epoch: 79, Loss: 0.029417
Epoch: 79, Loss: 0.032921
Epoch: 79, Loss: 0.021077
Epoch: 79, Loss: 0.039230
Epoch: 79, Loss: 0.020430
Epoch: 79, Loss: 0.029015
Epoch: 79, Loss: 0.021860
Epoch: 79, Loss: 0.030590
Epoch: 79, Loss: 0.028875
Epoch: 79, Loss: 0.034882
Epoch: 79, Loss: 0.022374
Epoch: 79, Loss: 0.028005
Epoch: 79, Loss: 0.024535
Epoch: 79, Loss: 0.013023
Epoch: 79, Loss: 0.025436
Epoch: 79, Loss: 0.014804
Epoch: 79, Loss: 0.027417
Epoch: 79, Loss: 0.034931
Epoch: 79, Loss: 0.045470
Epoch: 79, Loss: 0.020157
Epoch: 79, Loss: 0.038656
Epoch: 79, Loss: 0.045090
Epoch: 79, Loss: 0.018582
Epoch: 79, Loss: 0.030723
Epoch: 79, Loss: 0.038463
Epoch: 79, Loss: 0.026430
Epoch: 79, Loss: 0.027831
Epoch: 79, Loss: 0.025225
Epoch: 79, Loss: 0.029930
Epoch: 79, Loss: 0.027660
Epoch: 79, Loss: 0.020125
Epoch: 79, Loss: 0.030434
Epoch: 79, Loss: 0.034816
Epoch: 79, Loss: 0.025824
Epoch: 79, Loss: 0.024471
Epoch: 79, Loss: 0.020835
Epoch: 79, Loss: 0.018217
Epoch: 79, Loss: 0.024883
Epoch: 79, Loss: 0.022086
Epoch: 79, Loss: 0.028947
Epoch: 79, Loss: 0.017213
Epoch: 79, Loss: 0.029890
Epoch: 79, Loss: 0.014413
Epoch: 79, Loss: 0.030152
Epoch: 79, Loss: 0.025668
Epoch: 80, Loss: 0.028182
Epoch: 80, Loss: 0.043164
Epoch: 80, Loss: 0.020797
Epoch: 80, Loss: 0.029385
Epoch: 80, Loss: 0.012761
Epoch: 80, Loss: 0.022143
Epoch: 80, Loss: 0.017971
Epoch: 80, Loss: 0.027402
Epoch: 80, Loss: 0.018882
Epoch: 80, Loss: 0.017083
Epoch: 80, Loss: 0.019520
Epoch: 80, Loss: 0.023578
Epoch: 80, Loss: 0.031487
Epoch: 80, Loss: 0.022054
Epoch: 80, Loss: 0.022136
Epoch: 80, Loss: 0.041777
Epoch: 80, Loss: 0.024218
Epoch: 80, Loss: 0.020696
Epoch: 80, Loss: 0.028747
Epoch: 80, Loss: 0.023406
Epoch: 80, Loss: 0.018677
Epoch: 80, Loss: 0.023158
Epoch: 80, Loss: 0.026106
Epoch: 80, Loss: 0.024747
Epoch: 80, Loss: 0.035668
Epoch: 80, Loss: 0.032986
Epoch: 80, Loss: 0.026218
Epoch: 80, Loss: 0.013628
Epoch: 80, Loss: 0.022723
Epoch: 80, Loss: 0.044397
Epoch: 80, Loss: 0.020594
Epoch: 80, Loss: 0.044047
Epoch: 80, Loss: 0.033784
Epoch: 80, Loss: 0.034725
Epoch: 80, Loss: 0.036876
Epoch: 80, Loss: 0.047489
Epoch: 80, Loss: 0.023948
Epoch: 80, Loss: 0.019170
Epoch: 80, Loss: 0.023157
Epoch: 80, Loss: 0.022106
Epoch: 80, Loss: 0.025123
Epoch: 80, Loss: 0.032826
Epoch: 80, Loss: 0.018452
Epoch: 80, Loss: 0.023429
Epoch: 80, Loss: 0.013801
Epoch: 80, Loss: 0.024951
Epoch: 80, Loss: 0.024664
Epoch: 80, Loss: 0.021166
Epoch: 80, Loss: 0.031467
Epoch: 80, Loss: 0.023325
Epoch: 80, Loss: 0.027581
Epoch: 80, Loss: 0.024683
Epoch: 80, Loss: 0.020601
Epoch: 80, Loss: 0.021431
Epoch: 80, Loss: 0.043550
Epoch: 80, Loss: 0.027550
Epoch: 80, Loss: 0.015750
Epoch: 80, Loss: 0.022639
Epoch: 80, Loss: 0.016660
Epoch: 80, Loss: 0.018773
Epoch: 80, Loss: 0.032814
Epoch: 80, Loss: 0.023292
Epoch: 80, Loss: 0.021572
Epoch: 80, Loss: 0.025029
Epoch: 80, Loss: 0.025745
Epoch: 80, Loss: 0.020238
Epoch: 80, Loss: 0.023956
Epoch: 80, Loss: 0.024583
Epoch: 80, Loss: 0.023269
Epoch: 80, Loss: 0.021380
Epoch: 80, Loss: 0.026917
Epoch: 80, Loss: 0.018747
Epoch: 80, Loss: 0.031050
Epoch: 80, Loss: 0.014160
Epoch: 80, Loss: 0.032063
Epoch: 80, Loss: 0.026551
Epoch: 80, Loss: 0.024234
Epoch: 80, Loss: 0.038256
Epoch: 80, Loss: 0.020715
Epoch: 80, Loss: 0.035436
Epoch: 80, Loss: 0.024685
Epoch: 80, Loss: 0.047389
Epoch: 80, Loss: 0.073081
Epoch: 80, Loss: 0.032810
Epoch: 80, Loss: 0.023436
Epoch: 80, Loss: 0.021992
Epoch: 80, Loss: 0.017634
Epoch: 80, Loss: 0.059348
Epoch: 80, Loss: 0.058196
Epoch: 80, Loss: 0.044848
Epoch: 80, Loss: 0.043653
Epoch: 80, Loss: 0.017248
Epoch: 80, Loss: 0.030855
Epoch: 80, Loss: 0.045246
Epoch: 80, Loss: 0.094294
Epoch: 80, Loss: 0.030830
Epoch: 80, Loss: 0.024178
Epoch: 80, Loss: 0.020950
Epoch: 80, Loss: 0.028191
Epoch: 80, Loss: 0.024405
Epoch: 80, Loss: 0.021318
Epoch: 80, Loss: 0.027478
Epoch: 80, Loss: 0.022230
Epoch: 80, Loss: 0.026314
Epoch: 80, Loss: 0.038950
Epoch: 80, Loss: 0.025416
Epoch: 80, Loss: 0.063130
Epoch: 80, Loss: 0.047659
Epoch: 80, Loss: 0.020589
Epoch: 80, Loss: 0.033031
Epoch: 80, Loss: 0.022765
Epoch: 80, Loss: 0.040852
Epoch: 80, Loss: 0.018290
Epoch: 80, Loss: 0.019081
Epoch: 80, Loss: 0.045626
Epoch: 80, Loss: 0.071960
Epoch: 80, Loss: 0.042506
Epoch: 80, Loss: 0.027203
Epoch: 80, Loss: 0.032923
Epoch: 80, Loss: 0.024469
Epoch: 80, Loss: 0.035348
Epoch: 80, Loss: 0.023363
...
[Per-batch training log, epochs 80–96: loss values fluctuate roughly between 0.005 and 0.14 and trend slowly downward as training converges.]
...
Epoch: 96, Loss: 0.024970
Epoch: 96, Loss: 0.028615
Epoch: 96, Loss: 0.012769
Epoch: 96, Loss: 0.025303
Epoch: 96, Loss: 0.015319
Epoch: 96, Loss: 0.012320
Epoch: 96, Loss: 0.020323
Epoch: 96, Loss: 0.016398
Epoch: 96, Loss: 0.011734
Epoch: 96, Loss: 0.022713
Epoch: 96, Loss: 0.019605
Epoch: 96, Loss: 0.009580
Epoch: 96, Loss: 0.012709
Epoch: 96, Loss: 0.009868
Epoch: 96, Loss: 0.013404
Epoch: 96, Loss: 0.009620
Epoch: 96, Loss: 0.029080
Epoch: 96, Loss: 0.015938
Epoch: 96, Loss: 0.023501
Epoch: 96, Loss: 0.013653
Epoch: 96, Loss: 0.012426
Epoch: 96, Loss: 0.021723
Epoch: 96, Loss: 0.019157
Epoch: 96, Loss: 0.030100
Epoch: 96, Loss: 0.015235
Epoch: 96, Loss: 0.020639
Epoch: 96, Loss: 0.017326
Epoch: 96, Loss: 0.016927
Epoch: 96, Loss: 0.048656
Epoch: 96, Loss: 0.063780
Epoch: 96, Loss: 0.010277
Epoch: 96, Loss: 0.039127
Epoch: 96, Loss: 0.014726
Epoch: 96, Loss: 0.012187
Epoch: 96, Loss: 0.021035
Epoch: 96, Loss: 0.022710
Epoch: 96, Loss: 0.018820
Epoch: 96, Loss: 0.015985
Epoch: 96, Loss: 0.017261
Epoch: 96, Loss: 0.015355
Epoch: 96, Loss: 0.020924
Epoch: 96, Loss: 0.016462
Epoch: 96, Loss: 0.014426
Epoch: 96, Loss: 0.019964
Epoch: 96, Loss: 0.008687
Epoch: 96, Loss: 0.029900
Epoch: 96, Loss: 0.012725
Epoch: 96, Loss: 0.022768
Epoch: 96, Loss: 0.009891
Epoch: 96, Loss: 0.019981
Epoch: 96, Loss: 0.010853
Epoch: 96, Loss: 0.016382
Epoch: 96, Loss: 0.021130
Epoch: 96, Loss: 0.023085
Epoch: 96, Loss: 0.036066
Epoch: 96, Loss: 0.017015
Epoch: 96, Loss: 0.016982
Epoch: 96, Loss: 0.007790
Epoch: 96, Loss: 0.044060
Epoch: 96, Loss: 0.018555
Epoch: 96, Loss: 0.023328
Epoch: 96, Loss: 0.017756
Epoch: 96, Loss: 0.012841
Epoch: 96, Loss: 0.018247
Epoch: 96, Loss: 0.014729
Epoch: 96, Loss: 0.015082
Epoch: 96, Loss: 0.011431
Epoch: 96, Loss: 0.016210
Epoch: 96, Loss: 0.014734
Epoch: 96, Loss: 0.012531
Epoch: 96, Loss: 0.019007
Epoch: 96, Loss: 0.010795
Epoch: 96, Loss: 0.016185
Epoch: 96, Loss: 0.022182
Epoch: 96, Loss: 0.023730
Epoch: 96, Loss: 0.014284
Epoch: 96, Loss: 0.019425
Epoch: 96, Loss: 0.016873
Epoch: 96, Loss: 0.014935
Epoch: 96, Loss: 0.008648
Epoch: 96, Loss: 0.014221
Epoch: 96, Loss: 0.064157
Epoch: 96, Loss: 0.034460
Epoch: 96, Loss: 0.018093
Epoch: 96, Loss: 0.025506
Epoch: 97, Loss: 0.022696
Epoch: 97, Loss: 0.016716
Epoch: 97, Loss: 0.012311
Epoch: 97, Loss: 0.017062
Epoch: 97, Loss: 0.012937
Epoch: 97, Loss: 0.014768
Epoch: 97, Loss: 0.024087
Epoch: 97, Loss: 0.022406
Epoch: 97, Loss: 0.014386
Epoch: 97, Loss: 0.017035
Epoch: 97, Loss: 0.013705
Epoch: 97, Loss: 0.013104
Epoch: 97, Loss: 0.014037
Epoch: 97, Loss: 0.019602
Epoch: 97, Loss: 0.013078
Epoch: 97, Loss: 0.013187
Epoch: 97, Loss: 0.019837
Epoch: 97, Loss: 0.032392
Epoch: 97, Loss: 0.014495
Epoch: 97, Loss: 0.017850
Epoch: 97, Loss: 0.010790
Epoch: 97, Loss: 0.017129
Epoch: 97, Loss: 0.018769
Epoch: 97, Loss: 0.013044
Epoch: 97, Loss: 0.012331
Epoch: 97, Loss: 0.013491
Epoch: 97, Loss: 0.017904
Epoch: 97, Loss: 0.022350
Epoch: 97, Loss: 0.009163
Epoch: 97, Loss: 0.008668
Epoch: 97, Loss: 0.019482
Epoch: 97, Loss: 0.017116
Epoch: 97, Loss: 0.013838
Epoch: 97, Loss: 0.021820
Epoch: 97, Loss: 0.023436
Epoch: 97, Loss: 0.020765
Epoch: 97, Loss: 0.020111
Epoch: 97, Loss: 0.015691
Epoch: 97, Loss: 0.015637
Epoch: 97, Loss: 0.016707
Epoch: 97, Loss: 0.021434
Epoch: 97, Loss: 0.019110
Epoch: 97, Loss: 0.017253
Epoch: 97, Loss: 0.018803
Epoch: 97, Loss: 0.016312
Epoch: 97, Loss: 0.015778
Epoch: 97, Loss: 0.026598
Epoch: 97, Loss: 0.012789
Epoch: 97, Loss: 0.019476
Epoch: 97, Loss: 0.013914
Epoch: 97, Loss: 0.015182
Epoch: 97, Loss: 0.016536
Epoch: 97, Loss: 0.015590
Epoch: 97, Loss: 0.016266
Epoch: 97, Loss: 0.020379
Epoch: 97, Loss: 0.009028
Epoch: 97, Loss: 0.014701
Epoch: 97, Loss: 0.014952
Epoch: 97, Loss: 0.011835
Epoch: 97, Loss: 0.026077
Epoch: 97, Loss: 0.019896
Epoch: 97, Loss: 0.011944
Epoch: 97, Loss: 0.007175
Epoch: 97, Loss: 0.014290
Epoch: 97, Loss: 0.026091
Epoch: 97, Loss: 0.020644
Epoch: 97, Loss: 0.015506
Epoch: 97, Loss: 0.014453
Epoch: 97, Loss: 0.021990
Epoch: 97, Loss: 0.019962
Epoch: 97, Loss: 0.030771
Epoch: 97, Loss: 0.017921
Epoch: 97, Loss: 0.026114
Epoch: 97, Loss: 0.013501
Epoch: 97, Loss: 0.017888
Epoch: 97, Loss: 0.026238
Epoch: 97, Loss: 0.014783
Epoch: 97, Loss: 0.013401
Epoch: 97, Loss: 0.029140
Epoch: 97, Loss: 0.030007
Epoch: 97, Loss: 0.014879
Epoch: 97, Loss: 0.024721
Epoch: 97, Loss: 0.013231
Epoch: 97, Loss: 0.015334
Epoch: 97, Loss: 0.017545
Epoch: 97, Loss: 0.009129
Epoch: 97, Loss: 0.014838
Epoch: 97, Loss: 0.010520
Epoch: 97, Loss: 0.015832
Epoch: 97, Loss: 0.024203
Epoch: 97, Loss: 0.016616
Epoch: 97, Loss: 0.022916
Epoch: 97, Loss: 0.011320
Epoch: 97, Loss: 0.016418
Epoch: 97, Loss: 0.017278
Epoch: 97, Loss: 0.015007
Epoch: 97, Loss: 0.018313
Epoch: 97, Loss: 0.027513
Epoch: 97, Loss: 0.017125
Epoch: 97, Loss: 0.032487
Epoch: 97, Loss: 0.019060
Epoch: 97, Loss: 0.026423
Epoch: 97, Loss: 0.020373
Epoch: 97, Loss: 0.021533
Epoch: 97, Loss: 0.019981
Epoch: 97, Loss: 0.015895
Epoch: 97, Loss: 0.014971
Epoch: 97, Loss: 0.018178
Epoch: 97, Loss: 0.010533
Epoch: 97, Loss: 0.015350
Epoch: 97, Loss: 0.011457
Epoch: 97, Loss: 0.010092
Epoch: 97, Loss: 0.016310
Epoch: 97, Loss: 0.013097
Epoch: 97, Loss: 0.016937
Epoch: 97, Loss: 0.016169
Epoch: 97, Loss: 0.035593
Epoch: 97, Loss: 0.014165
Epoch: 97, Loss: 0.008483
Epoch: 97, Loss: 0.023487
Epoch: 97, Loss: 0.012574
Epoch: 97, Loss: 0.020017
Epoch: 97, Loss: 0.016322
Epoch: 97, Loss: 0.014757
Epoch: 97, Loss: 0.011305
Epoch: 97, Loss: 0.016056
Epoch: 97, Loss: 0.011922
Epoch: 97, Loss: 0.009409
Epoch: 97, Loss: 0.032185
Epoch: 97, Loss: 0.009648
Epoch: 97, Loss: 0.020904
Epoch: 97, Loss: 0.014568
Epoch: 97, Loss: 0.019892
Epoch: 97, Loss: 0.027724
Epoch: 97, Loss: 0.022552
Epoch: 97, Loss: 0.025851
Epoch: 97, Loss: 0.026445
Epoch: 97, Loss: 0.022102
Epoch: 97, Loss: 0.015065
Epoch: 97, Loss: 0.034423
Epoch: 97, Loss: 0.015723
Epoch: 97, Loss: 0.018394
Epoch: 97, Loss: 0.022962
Epoch: 97, Loss: 0.011147
Epoch: 97, Loss: 0.014007
Epoch: 97, Loss: 0.021359
Epoch: 97, Loss: 0.015828
Epoch: 97, Loss: 0.008494
Epoch: 97, Loss: 0.009072
Epoch: 97, Loss: 0.008002
Epoch: 97, Loss: 0.012263
Epoch: 97, Loss: 0.018240
Epoch: 97, Loss: 0.016115
Epoch: 97, Loss: 0.013427
Epoch: 97, Loss: 0.062443
Epoch: 97, Loss: 0.017716
Epoch: 97, Loss: 0.009457
Epoch: 98, Loss: 0.018222
Epoch: 98, Loss: 0.015249
Epoch: 98, Loss: 0.014896
Epoch: 98, Loss: 0.014144
Epoch: 98, Loss: 0.010955
Epoch: 98, Loss: 0.014428
Epoch: 98, Loss: 0.024322
Epoch: 98, Loss: 0.031838
Epoch: 98, Loss: 0.011454
Epoch: 98, Loss: 0.029963
Epoch: 98, Loss: 0.031002
Epoch: 98, Loss: 0.014111
Epoch: 98, Loss: 0.017919
Epoch: 98, Loss: 0.020904
Epoch: 98, Loss: 0.023660
Epoch: 98, Loss: 0.017508
Epoch: 98, Loss: 0.011544
Epoch: 98, Loss: 0.015266
Epoch: 98, Loss: 0.013238
Epoch: 98, Loss: 0.014058
Epoch: 98, Loss: 0.014406
Epoch: 98, Loss: 0.043025
Epoch: 98, Loss: 0.039561
Epoch: 98, Loss: 0.025899
Epoch: 98, Loss: 0.020078
Epoch: 98, Loss: 0.011918
Epoch: 98, Loss: 0.023460
Epoch: 98, Loss: 0.017439
Epoch: 98, Loss: 0.011747
Epoch: 98, Loss: 0.017482
Epoch: 98, Loss: 0.012466
Epoch: 98, Loss: 0.013590
Epoch: 98, Loss: 0.011639
Epoch: 98, Loss: 0.016226
Epoch: 98, Loss: 0.010331
Epoch: 98, Loss: 0.013125
Epoch: 98, Loss: 0.015639
Epoch: 98, Loss: 0.025731
Epoch: 98, Loss: 0.045690
Epoch: 98, Loss: 0.016573
Epoch: 98, Loss: 0.012821
Epoch: 98, Loss: 0.015303
Epoch: 98, Loss: 0.021602
Epoch: 98, Loss: 0.008689
Epoch: 98, Loss: 0.043371
Epoch: 98, Loss: 0.011807
Epoch: 98, Loss: 0.015546
Epoch: 98, Loss: 0.021092
Epoch: 98, Loss: 0.013959
Epoch: 98, Loss: 0.015624
Epoch: 98, Loss: 0.023825
Epoch: 98, Loss: 0.012651
Epoch: 98, Loss: 0.016923
Epoch: 98, Loss: 0.017841
Epoch: 98, Loss: 0.012324
Epoch: 98, Loss: 0.015504
Epoch: 98, Loss: 0.016934
Epoch: 98, Loss: 0.028041
Epoch: 98, Loss: 0.014135
Epoch: 98, Loss: 0.015046
Epoch: 98, Loss: 0.015078
Epoch: 98, Loss: 0.013659
Epoch: 98, Loss: 0.015550
Epoch: 98, Loss: 0.008513
Epoch: 98, Loss: 0.054300
Epoch: 98, Loss: 0.027485
Epoch: 98, Loss: 0.014100
Epoch: 98, Loss: 0.026612
Epoch: 98, Loss: 0.024021
Epoch: 98, Loss: 0.015514
Epoch: 98, Loss: 0.011791
Epoch: 98, Loss: 0.021379
Epoch: 98, Loss: 0.010149
Epoch: 98, Loss: 0.014573
Epoch: 98, Loss: 0.015340
Epoch: 98, Loss: 0.020873
Epoch: 98, Loss: 0.035375
Epoch: 98, Loss: 0.016918
Epoch: 98, Loss: 0.013932
Epoch: 98, Loss: 0.013721
Epoch: 98, Loss: 0.011859
Epoch: 98, Loss: 0.014729
Epoch: 98, Loss: 0.014313
Epoch: 98, Loss: 0.012853
Epoch: 98, Loss: 0.011811
Epoch: 98, Loss: 0.026145
Epoch: 98, Loss: 0.025705
Epoch: 98, Loss: 0.015119
Epoch: 98, Loss: 0.013106
Epoch: 98, Loss: 0.017321
Epoch: 98, Loss: 0.015563
Epoch: 98, Loss: 0.013784
Epoch: 98, Loss: 0.015237
Epoch: 98, Loss: 0.012983
Epoch: 98, Loss: 0.017587
Epoch: 98, Loss: 0.010856
Epoch: 98, Loss: 0.009534
Epoch: 98, Loss: 0.005964
Epoch: 98, Loss: 0.015613
Epoch: 98, Loss: 0.020925
Epoch: 98, Loss: 0.010287
Epoch: 98, Loss: 0.027722
Epoch: 98, Loss: 0.019072
Epoch: 98, Loss: 0.011905
Epoch: 98, Loss: 0.027069
Epoch: 98, Loss: 0.014934
Epoch: 98, Loss: 0.010968
Epoch: 98, Loss: 0.015282
Epoch: 98, Loss: 0.018512
Epoch: 98, Loss: 0.012184
Epoch: 98, Loss: 0.012610
Epoch: 98, Loss: 0.013018
Epoch: 98, Loss: 0.028345
Epoch: 98, Loss: 0.025648
Epoch: 98, Loss: 0.024041
Epoch: 98, Loss: 0.014649
Epoch: 98, Loss: 0.010459
Epoch: 98, Loss: 0.015473
Epoch: 98, Loss: 0.016236
Epoch: 98, Loss: 0.016763
Epoch: 98, Loss: 0.011368
Epoch: 98, Loss: 0.026180
Epoch: 98, Loss: 0.018375
Epoch: 98, Loss: 0.019317
Epoch: 98, Loss: 0.011221
Epoch: 98, Loss: 0.021541
Epoch: 98, Loss: 0.029505
Epoch: 98, Loss: 0.019174
Epoch: 98, Loss: 0.017781
Epoch: 98, Loss: 0.037199
Epoch: 98, Loss: 0.028059
Epoch: 98, Loss: 0.020887
Epoch: 98, Loss: 0.017214
Epoch: 98, Loss: 0.015356
Epoch: 98, Loss: 0.016771
Epoch: 98, Loss: 0.013383
Epoch: 98, Loss: 0.019264
Epoch: 98, Loss: 0.009693
Epoch: 98, Loss: 0.023385
Epoch: 98, Loss: 0.019407
Epoch: 98, Loss: 0.011683
Epoch: 98, Loss: 0.008698
Epoch: 98, Loss: 0.022170
Epoch: 98, Loss: 0.015574
Epoch: 98, Loss: 0.017409
Epoch: 98, Loss: 0.018103
Epoch: 98, Loss: 0.018276
Epoch: 98, Loss: 0.015098
Epoch: 98, Loss: 0.018760
Epoch: 98, Loss: 0.009648
Epoch: 98, Loss: 0.016149
Epoch: 98, Loss: 0.011563
Epoch: 98, Loss: 0.015992
Epoch: 98, Loss: 0.009974
Epoch: 98, Loss: 0.012930
Epoch: 98, Loss: 0.010980
Epoch: 98, Loss: 0.004651
Epoch: 99, Loss: 0.011637
Epoch: 99, Loss: 0.009384
Epoch: 99, Loss: 0.019328
Epoch: 99, Loss: 0.014601
Epoch: 99, Loss: 0.016661
Epoch: 99, Loss: 0.010429
Epoch: 99, Loss: 0.017080
Epoch: 99, Loss: 0.012255
Epoch: 99, Loss: 0.008027
Epoch: 99, Loss: 0.009618
Epoch: 99, Loss: 0.014289
Epoch: 99, Loss: 0.015280
Epoch: 99, Loss: 0.017019
Epoch: 99, Loss: 0.021009
Epoch: 99, Loss: 0.010158
Epoch: 99, Loss: 0.024153
Epoch: 99, Loss: 0.014805
Epoch: 99, Loss: 0.023935
Epoch: 99, Loss: 0.011321
Epoch: 99, Loss: 0.011667
Epoch: 99, Loss: 0.009536
Epoch: 99, Loss: 0.020308
Epoch: 99, Loss: 0.026156
Epoch: 99, Loss: 0.016639
Epoch: 99, Loss: 0.016139
Epoch: 99, Loss: 0.015701
Epoch: 99, Loss: 0.029213
Epoch: 99, Loss: 0.010985
Epoch: 99, Loss: 0.023748
Epoch: 99, Loss: 0.014928
Epoch: 99, Loss: 0.012796
Epoch: 99, Loss: 0.013291
Epoch: 99, Loss: 0.012893
Epoch: 99, Loss: 0.020918
Epoch: 99, Loss: 0.013402
Epoch: 99, Loss: 0.015281
Epoch: 99, Loss: 0.014606
Epoch: 99, Loss: 0.010546
Epoch: 99, Loss: 0.019568
Epoch: 99, Loss: 0.011360
Epoch: 99, Loss: 0.012113
Epoch: 99, Loss: 0.022881
Epoch: 99, Loss: 0.015194
Epoch: 99, Loss: 0.021612
Epoch: 99, Loss: 0.020167
Epoch: 99, Loss: 0.019655
Epoch: 99, Loss: 0.013513
Epoch: 99, Loss: 0.010905
Epoch: 99, Loss: 0.031515
Epoch: 99, Loss: 0.010175
Epoch: 99, Loss: 0.015861
Epoch: 99, Loss: 0.020818
Epoch: 99, Loss: 0.022639
Epoch: 99, Loss: 0.013082
Epoch: 99, Loss: 0.018008
Epoch: 99, Loss: 0.027293
Epoch: 99, Loss: 0.013689
Epoch: 99, Loss: 0.011023
Epoch: 99, Loss: 0.024847
Epoch: 99, Loss: 0.026152
Epoch: 99, Loss: 0.019093
Epoch: 99, Loss: 0.017093
Epoch: 99, Loss: 0.018940
Epoch: 99, Loss: 0.009954
Epoch: 99, Loss: 0.017948
Epoch: 99, Loss: 0.017851
Epoch: 99, Loss: 0.014041
Epoch: 99, Loss: 0.015731
Epoch: 99, Loss: 0.017708
Epoch: 99, Loss: 0.011058
Epoch: 99, Loss: 0.018846
Epoch: 99, Loss: 0.030035
Epoch: 99, Loss: 0.011401
Epoch: 99, Loss: 0.016191
Epoch: 99, Loss: 0.015654
Epoch: 99, Loss: 0.008613
Epoch: 99, Loss: 0.014419
Epoch: 99, Loss: 0.022223
Epoch: 99, Loss: 0.014064
Epoch: 99, Loss: 0.016122
Epoch: 99, Loss: 0.015334
Epoch: 99, Loss: 0.011989
Epoch: 99, Loss: 0.021343
Epoch: 99, Loss: 0.012928
Epoch: 99, Loss: 0.017504
Epoch: 99, Loss: 0.014271
Epoch: 99, Loss: 0.016422
Epoch: 99, Loss: 0.020805
Epoch: 99, Loss: 0.019434
Epoch: 99, Loss: 0.018530
Epoch: 99, Loss: 0.014325
Epoch: 99, Loss: 0.009998
Epoch: 99, Loss: 0.009624
Epoch: 99, Loss: 0.014841
Epoch: 99, Loss: 0.013415
Epoch: 99, Loss: 0.019336
Epoch: 99, Loss: 0.022252
Epoch: 99, Loss: 0.013507
Epoch: 99, Loss: 0.014855
Epoch: 99, Loss: 0.013026
Epoch: 99, Loss: 0.017579
Epoch: 99, Loss: 0.011165
Epoch: 99, Loss: 0.011545
Epoch: 99, Loss: 0.013549
Epoch: 99, Loss: 0.038742
Epoch: 99, Loss: 0.041424
Epoch: 99, Loss: 0.017415
Epoch: 99, Loss: 0.018242
Epoch: 99, Loss: 0.019576
Epoch: 99, Loss: 0.017466
Epoch: 99, Loss: 0.022713
Epoch: 99, Loss: 0.021368
Epoch: 99, Loss: 0.014318
Epoch: 99, Loss: 0.021256
Epoch: 99, Loss: 0.020139
Epoch: 99, Loss: 0.026513
Epoch: 99, Loss: 0.014317
Epoch: 99, Loss: 0.016646
Epoch: 99, Loss: 0.023694
Epoch: 99, Loss: 0.011014
Epoch: 99, Loss: 0.023808
Epoch: 99, Loss: 0.023353
Epoch: 99, Loss: 0.026599
Epoch: 99, Loss: 0.026251
Epoch: 99, Loss: 0.016918
Epoch: 99, Loss: 0.015216
Epoch: 99, Loss: 0.013455
Epoch: 99, Loss: 0.015124
Epoch: 99, Loss: 0.010177
Epoch: 99, Loss: 0.009918
Epoch: 99, Loss: 0.015579
Epoch: 99, Loss: 0.040073
Epoch: 99, Loss: 0.018768
Epoch: 99, Loss: 0.011658
Epoch: 99, Loss: 0.007179
Epoch: 99, Loss: 0.012262
Epoch: 99, Loss: 0.026884
Epoch: 99, Loss: 0.024127
Epoch: 99, Loss: 0.013085
Epoch: 99, Loss: 0.016700
Epoch: 99, Loss: 0.012060
Epoch: 99, Loss: 0.016036
Epoch: 99, Loss: 0.017610
Epoch: 99, Loss: 0.011167
Epoch: 99, Loss: 0.019217
Epoch: 99, Loss: 0.015670
Epoch: 99, Loss: 0.015200
Epoch: 99, Loss: 0.018902
Epoch: 99, Loss: 0.015284
Epoch: 99, Loss: 0.008313
Epoch: 99, Loss: 0.025930
Epoch: 99, Loss: 0.009416
Epoch: 99, Loss: 0.012116
Epoch: 99, Loss: 0.055932
Epoch: 99, Loss: 0.033490
Epoch: 99, Loss: 0.021114
Epoch: 99, Loss: 0.035522
###Markdown
Performing validation
###Code
val_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64,
shuffle=False)
correct = 0
total = 0
with torch.no_grad():
for imgs, labels in val_loader:
batch_size = imgs.shape[0]
outputs = model(imgs.view(batch_size, -1))
_, predicted = torch.max(outputs, dim=1)
total += labels.shape[0]
correct += int((predicted == labels).sum())
print("Accuracy:", correct / total)
###Output
Accuracy: 0.815
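###Markdown
As an optional extension (a minimal sketch, not part of the original notebook), the same loop can be adapted to report per-class accuracy for the two classes in `cifar2_val`, reusing the `model` and `val_loader` defined above:
###Code
# Per-class accuracy over the two-class validation set.
correct_per_class = [0, 0]
total_per_class = [0, 0]
with torch.no_grad():
    for imgs, labels in val_loader:
        outputs = model(imgs.view(imgs.shape[0], -1))
        _, predicted = torch.max(outputs, dim=1)
        for label, pred in zip(labels, predicted):
            total_per_class[int(label)] += 1
            correct_per_class[int(label)] += int(pred == label)
print("Per-class accuracy:",
      [c / t for c, t in zip(correct_per_class, total_per_class)])
###Output
_____no_output_____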
###Markdown
CIFAR10 with Keras and CNN

Testing Keras' CNNs on CIFAR10 with a fairly typical layer arrangement.

Data Setup
###Code
from keras.datasets import cifar10
(x_train, y_train_), (x_test, y_test_) = cifar10.load_data()
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
from keras.utils import to_categorical
y_train = to_categorical(y_train_)
y_test = to_categorical(y_test_)
###Output
_____no_output_____
###Markdown
Model Definition
###Code
from keras.models import Sequential
model = Sequential()
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense
model.add(Conv2D(filters=32,
kernel_size=(3, 3),
activation='relu',
input_shape=(32, 32, 3)))
model.add(MaxPool2D())
model.add(Conv2D(filters=64,
kernel_size=(3, 3),
activation='relu'))
model.add(MaxPool2D())
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
print(model.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 30, 30, 32) 896
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 15, 15, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 13, 13, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 6, 6, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 2304) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 23050
=================================================================
Total params: 42,442
Trainable params: 42,442
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Fitting
###Code
history = model.fit(x_train, y_train, batch_size=50, epochs=15, verbose=1, validation_data=(x_test, y_test))
import matplotlib.pyplot as plt
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.plot(epochs, loss_values, 'bo', label='Training loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
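###Markdown
For completeness, the trained CNN's held-out accuracy can also be checked directly; a minimal sketch (not part of the original run), assuming the `model`, `x_test`, and `y_test` defined above:
###Code
# Evaluate the small Keras CNN on the test split (returns loss and accuracy).
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print("Test accuracy:", test_acc)
###Output
_____no_output_____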
###Markdown
Convolutional Neural Network for CIFAR-10 dataset image classification:
* 1- Import libraries
* 2- Load dataset
  - shape of dataset
  - Output classes
  - Visualization of input data
* 3- Simple CNN model (First implementation)
  - CNN model
  - Building the model
  - Training (learning)
  - Evaluation
  - Accuracy of training data
  - Accuracy of test data
* 4- Second CNN implementation
  - Normalization
  - New visualization
  - Data Augmentation
  - Xavier initialization
  - CNN model
  - Evaluation
* 5- Prediction
* 6- Import ResNet50
  - Transfer Learning
  - Evaluation

1- Import libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
from keras import datasets
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.optimizers import Adam
from keras.regularizers import l2
from random import randint
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from keras.utils import np_utils
from keras.datasets import cifar10
from keras.preprocessing import image
from PIL import Image
import cv2
###Output
_____no_output_____
###Markdown
2- Load dataset

From the Keras datasets module, we load the CIFAR-10 image dataset.
###Code
(X_train, y_train), (X_test, y_test) = datasets.cifar10.load_data()
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 3s 0us/step
170508288/170498071 [==============================] - 3s 0us/step
###Markdown
- Dataset information
###Code
print("trining X shape:" + str(X_train.shape))
print("trining y shape:" + str(y_train.shape))
print("testing X shape:" + str(X_test.shape))
print("trining y shape:" + str(y_test.shape))
###Output
training X shape:(50000, 32, 32, 3)
training y shape:(50000, 1)
testing X shape:(10000, 32, 32, 3)
testing y shape:(10000, 1)
###Markdown
- Output classes
###Code
num_classes=10
classes = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"]
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
###Output
_____no_output_____
###Markdown
- Data visualization before processing

Showing one random image from the training set.
###Code
img = randint(0, len(X_train) - 1)  # randint is inclusive on both ends
plt.imshow(X_train[img])
plt.show()
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
3- CNN model (raw implementation)

Based on VGG and without any additional processing (batch normalization, augmentation), we implement the model.
- Build model
###Code
# Input size:
img_rows = 32
img_cols = 32
channels = 3
# Regularization:
reg=None
# Initial number of filters:
num_filters=32
# Activation function:
ac='relu'
# Optimizer (Adam)
adm=Adam(lr=0.001,decay=0, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
opt=adm
# Drop-out:
drop_dense=0.5
drop_conv=0
model = Sequential()
model.add(Conv2D(num_filters, (3, 3), activation=ac, kernel_regularizer=reg, input_shape=(img_rows, img_cols, channels),padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 16x16x3xnum_filters
model.add(Dropout(drop_conv))
model.add(Conv2D(2*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(2*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 8x8x3x(2*num_filters)
model.add(Dropout(drop_conv))
model.add(Conv2D(4*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(Conv2D(4*num_filters, (3, 3), activation=ac,kernel_regularizer=reg,padding='same'))
model.add(BatchNormalization(axis=-1))
model.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 4x4x3x(4*num_filters)
model.add(Dropout(drop_conv))
model.add(Flatten())
model.add(Dense(512, activation=ac,kernel_regularizer=reg))
model.add(BatchNormalization())
model.add(Dropout(drop_dense))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer=opt)
###Output
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
###Markdown
Number of parameters:
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 32, 32, 32) 896
_________________________________________________________________
batch_normalization (BatchNo (None, 32, 32, 32) 128
_________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 32) 9248
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 32, 32) 128
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 32) 0
_________________________________________________________________
dropout (Dropout) (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 16, 16, 64) 18496
_________________________________________________________________
batch_normalization_2 (Batch (None, 16, 16, 64) 256
_________________________________________________________________
conv2d_3 (Conv2D) (None, 16, 16, 64) 36928
_________________________________________________________________
batch_normalization_3 (Batch (None, 16, 16, 64) 256
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 64) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 8, 8, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 8, 8, 128) 73856
_________________________________________________________________
batch_normalization_4 (Batch (None, 8, 8, 128) 512
_________________________________________________________________
conv2d_5 (Conv2D) (None, 8, 8, 128) 147584
_________________________________________________________________
batch_normalization_5 (Batch (None, 8, 8, 128) 512
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 128) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 4, 4, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 2048) 0
_________________________________________________________________
dense (Dense) (None, 512) 1049088
_________________________________________________________________
batch_normalization_6 (Batch (None, 512) 2048
_________________________________________________________________
dropout_3 (Dropout) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 5130
=================================================================
Total params: 1,345,066
Trainable params: 1,343,146
Non-trainable params: 1,920
_________________________________________________________________
###Markdown
Diagram of the model architecture:
###Code
tf.keras.utils.plot_model(model, to_file="model.png")
###Output
_____no_output_____
###Markdown
- Train model **Batch size: 128, Epochs: 100**
###Code
history = model.fit(X_train, y_train, batch_size=128, epochs=100, validation_data=(X_test, y_test))
###Output
Epoch 1/100
391/391 [==============================] - 17s 37ms/step - loss: 1.5035 - accuracy: 0.5122 - val_loss: 1.3294 - val_accuracy: 0.5573
Epoch 2/100
391/391 [==============================] - 14s 35ms/step - loss: 0.9139 - accuracy: 0.6845 - val_loss: 0.9305 - val_accuracy: 0.6830
Epoch 3/100
391/391 [==============================] - 14s 35ms/step - loss: 0.6982 - accuracy: 0.7582 - val_loss: 0.8087 - val_accuracy: 0.7243
Epoch 4/100
391/391 [==============================] - 14s 35ms/step - loss: 0.5702 - accuracy: 0.8008 - val_loss: 0.7081 - val_accuracy: 0.7574
Epoch 5/100
391/391 [==============================] - 14s 35ms/step - loss: 0.4745 - accuracy: 0.8359 - val_loss: 0.7023 - val_accuracy: 0.7678
Epoch 6/100
391/391 [==============================] - 14s 35ms/step - loss: 0.3993 - accuracy: 0.8607 - val_loss: 0.6951 - val_accuracy: 0.7757
Epoch 7/100
391/391 [==============================] - 14s 35ms/step - loss: 0.3280 - accuracy: 0.8840 - val_loss: 0.7011 - val_accuracy: 0.7820
Epoch 8/100
391/391 [==============================] - 14s 35ms/step - loss: 0.2731 - accuracy: 0.9045 - val_loss: 0.7750 - val_accuracy: 0.7758
Epoch 9/100
391/391 [==============================] - 14s 35ms/step - loss: 0.2283 - accuracy: 0.9190 - val_loss: 0.7398 - val_accuracy: 0.7936
Epoch 10/100
391/391 [==============================] - 14s 35ms/step - loss: 0.1874 - accuracy: 0.9345 - val_loss: 0.9847 - val_accuracy: 0.7463
Epoch 11/100
391/391 [==============================] - 14s 35ms/step - loss: 0.1513 - accuracy: 0.9461 - val_loss: 0.8872 - val_accuracy: 0.7870
Epoch 12/100
391/391 [==============================] - 14s 35ms/step - loss: 0.1453 - accuracy: 0.9487 - val_loss: 0.9534 - val_accuracy: 0.7793
Epoch 13/100
391/391 [==============================] - 14s 35ms/step - loss: 0.1166 - accuracy: 0.9597 - val_loss: 0.8475 - val_accuracy: 0.8003
Epoch 14/100
391/391 [==============================] - 14s 35ms/step - loss: 0.1120 - accuracy: 0.9602 - val_loss: 1.0408 - val_accuracy: 0.7727
Epoch 15/100
391/391 [==============================] - 14s 35ms/step - loss: 0.1005 - accuracy: 0.9651 - val_loss: 0.9832 - val_accuracy: 0.7820
Epoch 16/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0896 - accuracy: 0.9692 - val_loss: 0.9513 - val_accuracy: 0.7963
Epoch 17/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0896 - accuracy: 0.9685 - val_loss: 0.9583 - val_accuracy: 0.7953
Epoch 18/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0916 - accuracy: 0.9681 - val_loss: 0.9913 - val_accuracy: 0.7916
Epoch 19/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0805 - accuracy: 0.9720 - val_loss: 0.9916 - val_accuracy: 0.7903
Epoch 20/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0797 - accuracy: 0.9720 - val_loss: 1.0206 - val_accuracy: 0.7808
Epoch 21/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0693 - accuracy: 0.9754 - val_loss: 1.0696 - val_accuracy: 0.7841
Epoch 22/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0542 - accuracy: 0.9805 - val_loss: 0.9777 - val_accuracy: 0.8068
Epoch 23/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0714 - accuracy: 0.9756 - val_loss: 1.0927 - val_accuracy: 0.7868
Epoch 24/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0631 - accuracy: 0.9780 - val_loss: 1.1878 - val_accuracy: 0.7752
Epoch 25/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0685 - accuracy: 0.9754 - val_loss: 1.0782 - val_accuracy: 0.7990
Epoch 26/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0457 - accuracy: 0.9848 - val_loss: 1.0334 - val_accuracy: 0.8055
Epoch 27/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0510 - accuracy: 0.9824 - val_loss: 1.1227 - val_accuracy: 0.7933
Epoch 28/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0479 - accuracy: 0.9826 - val_loss: 1.1104 - val_accuracy: 0.7954
Epoch 29/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0530 - accuracy: 0.9810 - val_loss: 1.1311 - val_accuracy: 0.7984
Epoch 30/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0480 - accuracy: 0.9826 - val_loss: 1.0391 - val_accuracy: 0.8100
Epoch 31/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0552 - accuracy: 0.9814 - val_loss: 1.0742 - val_accuracy: 0.8072
Epoch 32/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0459 - accuracy: 0.9841 - val_loss: 1.1048 - val_accuracy: 0.7995
Epoch 33/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0401 - accuracy: 0.9852 - val_loss: 1.1516 - val_accuracy: 0.7922
Epoch 34/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0469 - accuracy: 0.9838 - val_loss: 1.0878 - val_accuracy: 0.8040
Epoch 35/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0456 - accuracy: 0.9843 - val_loss: 1.2495 - val_accuracy: 0.7909
Epoch 36/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0349 - accuracy: 0.9879 - val_loss: 1.1021 - val_accuracy: 0.8124
Epoch 37/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0326 - accuracy: 0.9886 - val_loss: 1.1721 - val_accuracy: 0.8036
Epoch 38/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0407 - accuracy: 0.9869 - val_loss: 1.1667 - val_accuracy: 0.8047
Epoch 39/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0439 - accuracy: 0.9848 - val_loss: 1.1386 - val_accuracy: 0.8030
Epoch 40/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0343 - accuracy: 0.9882 - val_loss: 1.2436 - val_accuracy: 0.7880
Epoch 41/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0432 - accuracy: 0.9853 - val_loss: 1.1782 - val_accuracy: 0.8029
Epoch 42/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0242 - accuracy: 0.9913 - val_loss: 1.1192 - val_accuracy: 0.8145
Epoch 43/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0275 - accuracy: 0.9911 - val_loss: 1.1788 - val_accuracy: 0.8108
Epoch 44/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0399 - accuracy: 0.9868 - val_loss: 1.2830 - val_accuracy: 0.7909
Epoch 45/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0387 - accuracy: 0.9866 - val_loss: 1.2780 - val_accuracy: 0.7980
Epoch 46/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0355 - accuracy: 0.9874 - val_loss: 1.2352 - val_accuracy: 0.8039
Epoch 47/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0262 - accuracy: 0.9914 - val_loss: 1.2076 - val_accuracy: 0.7989
Epoch 48/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0298 - accuracy: 0.9896 - val_loss: 1.1743 - val_accuracy: 0.8107
Epoch 49/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0270 - accuracy: 0.9904 - val_loss: 1.1919 - val_accuracy: 0.8085
Epoch 50/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0255 - accuracy: 0.9913 - val_loss: 1.1761 - val_accuracy: 0.8140
Epoch 51/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0305 - accuracy: 0.9894 - val_loss: 1.2419 - val_accuracy: 0.8069
Epoch 52/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0336 - accuracy: 0.9885 - val_loss: 1.2388 - val_accuracy: 0.8036
Epoch 53/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0310 - accuracy: 0.9893 - val_loss: 1.2214 - val_accuracy: 0.8128
Epoch 54/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0259 - accuracy: 0.9910 - val_loss: 1.2563 - val_accuracy: 0.8073
Epoch 55/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0281 - accuracy: 0.9903 - val_loss: 1.3035 - val_accuracy: 0.7974
Epoch 56/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0229 - accuracy: 0.9923 - val_loss: 1.2904 - val_accuracy: 0.8080
Epoch 57/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0224 - accuracy: 0.9920 - val_loss: 1.2884 - val_accuracy: 0.8098
Epoch 58/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0261 - accuracy: 0.9912 - val_loss: 1.3058 - val_accuracy: 0.8036
Epoch 59/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0221 - accuracy: 0.9927 - val_loss: 1.1905 - val_accuracy: 0.8190
Epoch 60/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0251 - accuracy: 0.9913 - val_loss: 1.2612 - val_accuracy: 0.8080
Epoch 61/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0240 - accuracy: 0.9919 - val_loss: 1.2729 - val_accuracy: 0.7999
Epoch 62/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0262 - accuracy: 0.9914 - val_loss: 1.2271 - val_accuracy: 0.8140
Epoch 63/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0258 - accuracy: 0.9912 - val_loss: 1.3934 - val_accuracy: 0.8013
Epoch 64/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0235 - accuracy: 0.9923 - val_loss: 1.2781 - val_accuracy: 0.8094
Epoch 65/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0209 - accuracy: 0.9925 - val_loss: 1.2417 - val_accuracy: 0.8181
Epoch 66/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0212 - accuracy: 0.9925 - val_loss: 1.2880 - val_accuracy: 0.8099
Epoch 67/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0225 - accuracy: 0.9923 - val_loss: 1.3255 - val_accuracy: 0.8095
Epoch 68/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0235 - accuracy: 0.9921 - val_loss: 1.3492 - val_accuracy: 0.8070
Epoch 69/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0214 - accuracy: 0.9925 - val_loss: 1.3266 - val_accuracy: 0.8119
Epoch 70/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0186 - accuracy: 0.9934 - val_loss: 1.3326 - val_accuracy: 0.8137
Epoch 71/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0190 - accuracy: 0.9935 - val_loss: 1.2055 - val_accuracy: 0.8176
Epoch 72/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0208 - accuracy: 0.9932 - val_loss: 1.3183 - val_accuracy: 0.8110
Epoch 73/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0229 - accuracy: 0.9919 - val_loss: 1.3162 - val_accuracy: 0.8090
Epoch 74/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0211 - accuracy: 0.9930 - val_loss: 1.2677 - val_accuracy: 0.8140
Epoch 75/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0187 - accuracy: 0.9935 - val_loss: 1.3902 - val_accuracy: 0.8002
Epoch 76/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0176 - accuracy: 0.9939 - val_loss: 1.2830 - val_accuracy: 0.8071
Epoch 77/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0214 - accuracy: 0.9928 - val_loss: 1.2419 - val_accuracy: 0.8191
Epoch 78/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0186 - accuracy: 0.9939 - val_loss: 1.2832 - val_accuracy: 0.8131
Epoch 79/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0179 - accuracy: 0.9939 - val_loss: 1.3228 - val_accuracy: 0.8131
Epoch 80/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0154 - accuracy: 0.9945 - val_loss: 1.2993 - val_accuracy: 0.8151
Epoch 81/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0168 - accuracy: 0.9942 - val_loss: 1.2962 - val_accuracy: 0.8169
Epoch 82/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0198 - accuracy: 0.9934 - val_loss: 1.3340 - val_accuracy: 0.8097
Epoch 83/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0161 - accuracy: 0.9947 - val_loss: 1.3283 - val_accuracy: 0.8104
Epoch 84/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0178 - accuracy: 0.9941 - val_loss: 1.3074 - val_accuracy: 0.8122
Epoch 85/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0179 - accuracy: 0.9943 - val_loss: 1.3557 - val_accuracy: 0.8136
Epoch 86/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0175 - accuracy: 0.9940 - val_loss: 1.2966 - val_accuracy: 0.8182
Epoch 87/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0183 - accuracy: 0.9936 - val_loss: 1.3410 - val_accuracy: 0.8152
Epoch 88/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0164 - accuracy: 0.9943 - val_loss: 1.3201 - val_accuracy: 0.8202
Epoch 89/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0153 - accuracy: 0.9950 - val_loss: 1.2950 - val_accuracy: 0.8191
Epoch 90/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0147 - accuracy: 0.9952 - val_loss: 1.4269 - val_accuracy: 0.8091
Epoch 91/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0156 - accuracy: 0.9947 - val_loss: 1.3328 - val_accuracy: 0.8133
Epoch 92/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0156 - accuracy: 0.9945 - val_loss: 1.3320 - val_accuracy: 0.8198
Epoch 93/100
391/391 [==============================] - 14s 36ms/step - loss: 0.0174 - accuracy: 0.9938 - val_loss: 1.3392 - val_accuracy: 0.8126
Epoch 94/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0141 - accuracy: 0.9947 - val_loss: 1.4214 - val_accuracy: 0.8045
Epoch 95/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0132 - accuracy: 0.9955 - val_loss: 1.3792 - val_accuracy: 0.8171
Epoch 96/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0214 - accuracy: 0.9929 - val_loss: 1.3372 - val_accuracy: 0.8132
Epoch 97/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0149 - accuracy: 0.9952 - val_loss: 1.4022 - val_accuracy: 0.8171
Epoch 98/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0150 - accuracy: 0.9947 - val_loss: 1.3535 - val_accuracy: 0.8157
Epoch 99/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0164 - accuracy: 0.9946 - val_loss: 1.3515 - val_accuracy: 0.8135
Epoch 100/100
391/391 [==============================] - 14s 35ms/step - loss: 0.0155 - accuracy: 0.9949 - val_loss: 1.3446 - val_accuracy: 0.8182
###Markdown
- Training accuracy
###Code
train_acc = model.evaluate(X_train,y_train,batch_size=128)
train_acc
###Output
391/391 [==============================] - 5s 12ms/step - loss: 0.0071 - accuracy: 0.9976
###Markdown
- Test accuracy
###Code
test_acc = model.evaluate(X_test, y_test, batch_size=128)
test_acc
###Output
79/79 [==============================] - 1s 11ms/step - loss: 1.3446 - accuracy: 0.8182
###Markdown
Evaluation
###Code
# Loss curves
plt.plot(history.history['loss'], label='Train_loss')
plt.plot(history.history['val_loss'], label = 'val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.ylim([0, 2])
plt.legend(loc='lower right')
plt.show()
# Accuracy curves
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
4- Second implementation

In this implementation, we use **Data Augmentation**, **Data Normalization**, **Regularization**, and finally apply **Parameter initialization**.

- Normalizing input
1- convert the images to float type
2- compute the mean of the training data
3- compute the standard deviation
4- normalize (subtract the mean and divide by the standard deviation)
###Code
# convert to float
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
# compute mean
mean = np.mean(X_train)
# standard deviation
std = np.std(X_train)
# normalization
X_test = (X_test-mean)/std
X_train=(X_train-mean)/std
###Output
_____no_output_____
###Markdown
- Data visualization after normalization

Let's look at the same image after normalization.
###Code
plt.imshow(X_train[img])
plt.show()
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
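###Markdown
The clipping warning appears because the standardized pixel values fall outside the [0, 1] range `imshow` expects for floats. As an optional illustration (not part of the original notebook), the image can be rescaled back to [0, 1] just for display:
###Code
# Rescale the standardized image to [0, 1] purely for visualization.
img_std = X_train[img]
img_disp = (img_std - img_std.min()) / (img_std.max() - img_std.min())
plt.imshow(img_disp)
plt.show()
###Output
_____no_output_____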
###Markdown
- Augmentation setup
###Code
datagen = ImageDataGenerator(
    rotation_range = 25,      # rotate images by up to 25 degrees
    shear_range = 0.2,        # shear angle
    horizontal_flip = True,   # horizontal flipping
    width_shift_range = 0.2,  # width shift
    height_shift_range = 0.2, # height shift
    zoom_range = 0.1          # zoom >> [1-0.1, 1+0.1]
)
datagen.fit(X_train)
# some data visualization after Augmentation
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=9):
for i in range(0, 9):
plt.subplot(330 + 1 + i)
plt.imshow(X_batch[i].astype(np.uint8))
plt.show()
break
###Output
_____no_output_____
###Markdown
- CNN model

We implement the same model, except for the regularization parameters.
###Code
# L2 or "ridge" regularisation:
reg2=l2(1e-4)
num_filters2=32
ac2='relu'
adm2=Adam(lr=0.001,decay=0, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
opt2=adm2
drop_dense2=0.2
drop_conv2=0.1
# Define Xavier initialization method:
initializer = tf.keras.initializers.GlorotNormal()
model2 = Sequential()
model2.add(Conv2D(num_filters2, (3, 3), activation=ac2, kernel_regularizer=reg2, input_shape=(img_rows, img_cols, channels),padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(Conv2D(num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 16x16x3xnum_filters
model2.add(Dropout(drop_conv2))
model2.add(Conv2D(2*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(Conv2D(2*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 8x8x3x(2*num_filters)
model2.add(Dropout(drop_conv2))
model2.add(Conv2D(4*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(Conv2D(4*num_filters2, (3, 3), activation=ac2,kernel_regularizer=reg2,padding='same'))
model2.add(BatchNormalization(axis=-1))
model2.add(MaxPooling2D(pool_size=(2, 2))) # reduces to 4x4x3x(4*num_filters)
model2.add(Dropout(drop_conv2))
model2.add(Flatten())
model2.add(Dense(512, activation=ac2,kernel_regularizer=reg2,kernel_initializer=initializer)) # Add Xavier initialization to the Dense layer
model2.add(BatchNormalization())
model2.add(Dropout(drop_dense2))
model2.add(Dense(num_classes, activation='softmax'))
model2.compile(loss='categorical_crossentropy', metrics=['accuracy'],optimizer=opt2)
###Output
/usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/optimizer_v2.py:356: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
"The `lr` argument is deprecated, use `learning_rate` instead.")
###Markdown
- Train model with "Data Augmentation"
###Code
history2=model2.fit_generator(datagen.flow(X_train, y_train, batch_size=128),steps_per_epoch = len(X_train) / 128, epochs=100, validation_data=(X_test, y_test))
###Output
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1972: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
warnings.warn('`Model.fit_generator` is deprecated and '
###Markdown
- Test accuracy
###Code
model2_test_acc=model2.evaluate(X_test,y_test,batch_size=128)
model2_test_acc
###Output
79/79 [==============================] - 1s 15ms/step - loss: 0.6127 - accuracy: 0.8716
###Markdown
- Training accuracy
###Code
model2_train_acc=model2.evaluate(X_train,y_train,batch_size=128)
model2_train_acc
###Output
391/391 [==============================] - 5s 13ms/step - loss: 0.4914 - accuracy: 0.9090
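###Markdown
Beyond a single accuracy number, a confusion matrix shows which CIFAR-10 classes the augmented model still confuses; a minimal sketch (not in the original notebook, and using scikit-learn's `confusion_matrix`, an extra import), assuming `model2`, `X_test`, and `y_test` as defined above:
###Code
from sklearn.metrics import confusion_matrix
# Rows are true classes, columns are predicted classes.
y_true = np.argmax(y_test, axis=1)
y_hat = np.argmax(model2.predict(X_test), axis=1)
print(confusion_matrix(y_true, y_hat))
###Output
_____no_output_____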
###Markdown
Evaluation
###Code
# Accuracy curves
plt.plot(history2.history['accuracy'], label='accuracy')
plt.plot(history2.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.show()
# Loss curves
plt.plot(history2.history['loss'], label='loss')
plt.plot(history2.history['val_loss'], label = 'val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.ylim([0, 1])
plt.legend(loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
5- Prediction

Select a random image from the test set and predict its class.
###Code
image = randint(0, len(X_test) - 1)  # randint is inclusive on both ends
y_pred = model2.predict(X_test)
y_classes = [np.argmax(element) for element in y_pred]
plt.imshow(X_test[image])  # show the same test image that is being classified
plt.show()
classes[y_classes[image]]
###Output
_____no_output_____
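###Markdown
To sanity-check the prediction, it can be compared against the ground-truth label of the same test image; a short sketch using the variables defined above (`y_test` is one-hot encoded, so `argmax` recovers the class index):
###Code
# Compare the predicted class with the true label for the selected test image.
true_class = classes[np.argmax(y_test[image])]
print("Predicted:", classes[y_classes[image]], "| True:", true_class)
###Output
_____no_output_____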
###Markdown
Define callback
###Code
# Adding callbacks.
from keras.callbacks import EarlyStopping, TerminateOnNaN
callbacks = [
    EarlyStopping(monitor='acc', patience=3, restore_best_weights=True),
    TerminateOnNaN()
]
###Output
_____no_output_____
###Markdown
6- Import ResNet50 and pre-trained weights
###Code
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
conv_base = ResNet50(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.summary()
###Output
_____no_output_____
###Markdown
Transfer learning

We change the classifier head to fit our dataset.
###Code
model = models.Sequential()
model.add(conv_base) # import and use ResNet50 model
model.add(layers.Flatten())
model.add(layers.BatchNormalization())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(layers.BatchNormalization())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dropout(0.3))
model.add(layers.BatchNormalization())
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer=opt2, loss='categorical_crossentropy', metrics=['acc'])
model.summary()
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
resnet50 (Functional) (None, 1, 1, 2048) 23587712
_________________________________________________________________
flatten_2 (Flatten) (None, 2048) 0
_________________________________________________________________
batch_normalization_14 (Batc (None, 2048) 8192
_________________________________________________________________
dense_4 (Dense) (None, 128) 262272
_________________________________________________________________
dropout_8 (Dropout) (None, 128) 0
_________________________________________________________________
batch_normalization_15 (Batc (None, 128) 512
_________________________________________________________________
dense_5 (Dense) (None, 64) 8256
_________________________________________________________________
dropout_9 (Dropout) (None, 64) 0
_________________________________________________________________
batch_normalization_16 (Batc (None, 64) 256
_________________________________________________________________
dense_6 (Dense) (None, 10) 650
=================================================================
Total params: 23,867,850
Trainable params: 23,810,250
Non-trainable params: 57,600
_________________________________________________________________
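###Markdown
Note that in the setup above the whole ResNet50 base is left trainable, so the network is fine-tuned end to end (as the trainable parameter count shows). A common alternative, shown here only as a sketch and not used in the run below, is to freeze the pre-trained base and train just the new classifier head:
###Code
# Optional variant (illustration only): freeze the pre-trained base so only the
# new head is trained. Re-compiling is required for the change to take effect.
conv_base.trainable = False
model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
              loss='categorical_crossentropy',
              metrics=['acc'])
model.summary()  # Trainable params should now cover only the new dense layers.
###Output
_____no_output_____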
###Markdown
Training model
###Code
history = model.fit(X_train, y_train, epochs=100, batch_size=256, validation_data=(X_test, y_test), callbacks=callbacks)
history_dict = history.history
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1, len(loss_values) + 1)
plt.figure(figsize=(14, 4))
plt.subplot(1,2,1)
plt.plot(epochs, loss_values, 'bo', label='Training Loss')
plt.plot(epochs, val_loss_values, 'b', label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
acc = history_dict['acc']
val_acc = history_dict['val_acc']
epochs = range(1, len(loss_values) + 1)
plt.subplot(1,2,2)
plt.plot(epochs, acc, 'bo', label='Training Accuracy', c='orange')
plt.plot(epochs, val_acc, 'b', label='Validation Accuracy', c='orange')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
###Output
_____no_output_____
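###Markdown
The transfer-learning model can also be scored on the held-out test set; a minimal sketch (not part of the original run), assuming the `model`, `X_test`, and `y_test` defined above:
###Code
# Evaluate the fine-tuned ResNet50 model on the test split.
test_loss, test_acc = model.evaluate(X_test, y_test, batch_size=256, verbose=0)
print("ResNet50 transfer model test accuracy:", test_acc)
###Output
_____no_output_____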
###Markdown
CIFAR10
- 10 categories of 32 x 32 sized color images
- 50000 training and 10000 testing samples

The full CIFAR dataset contains 80 million tiny colored images.
- The main page: https://www.cs.toronto.edu/%7Ekriz/cifar.html
- About CIFAR: https://www.cs.toronto.edu/%7Ekriz/learning-features-2009-TR.pdf
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
np.random.seed(42)
import tensorflow as tf
tf.random.set_seed(42)
import tensorflow.keras as keras
import os
from functools import partial
from sklearn.model_selection import StratifiedShuffleSplit
from tensorflow.keras.datasets.cifar10 import load_data
from tensorflow.keras import Sequential
from tensorflow.keras.layers import InputLayer, Dense, BatchNormalization, Activation, \
Dropout, AlphaDropout
from tensorflow.keras.optimizers import Nadam, SGD
from tensorflow.keras.callbacks import EarlyStopping
import tensorflow.keras.backend as K
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
You can download the data from the original link above and load it like this ...
###Code
def unpickle(file):
import pickle
with open(file, 'rb') as fo:
dict = pickle.load(fo, encoding='bytes')
return dict
file_dicts = {}
for i in range(1, 6):
batch = f'data_batch_{i}'
filename = os.path.join('.', 'data', 'cifar', 'cifar-10-batches-py', batch)
file_dicts[i-1] = unpickle(filename)
def append_data(data, type_):
a = data[0][type_]
for i in range(1, 5):
a = np.r_[a, data[i][type_]]
return a
X_full = append_data(file_dicts, b'data')
y_full = append_data(file_dicts, b'labels')
X_full.shape, y_full.shape
test_file = os.path.join('.', 'data', 'cifar', 'cifar-10-batches-py', 'test_batch')
test_file_dict = unpickle(test_file)
X_test = test_file_dict[b'data']
y_test = test_file_dict[b'labels']
len(X_test), len(y_test)
# Use StratifiedShuffleSplit to split training data into training and validation.
# This will ensure that the training and validation data has an equal proportion of classes.
#
split = StratifiedShuffleSplit(n_splits=1, train_size=0.8, test_size=0.2) # We don't need to specify both test/train.
# sizes, but it is good for clarity.
for train_idx, test_idx in split.split(X_full, y_full):
X_train, X_val = X_full[train_idx], X_full[test_idx]
y_train, y_val = y_full[train_idx], y_full[test_idx]
X_train.shape, len(y_train), X_test.shape, len(y_test)
# Validate that the split shows the correct proportion of classes
pd.Series(y_train).value_counts(normalize=True), pd.Series(y_val).value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
... or an easier way is to use Tensorflow's load_data() function
###Code
(X_train, y_train), (X_test, y_test) = load_data()
X_train = X_train.reshape(X_train.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
y_train = y_train.flatten()
y_test = y_test.flatten()
X_train.shape, y_train.shape, X_test.shape, y_test.shape
# Validate that the split shows the correct proportion of classes
pd.Series(y_train).value_counts(normalize=True), pd.Series(y_test).value_counts(normalize=True)
# Use StratifiedShuffleSplit to split training data into training and validation.
# This will ensure that the training and validation data has an equal proportion of classes.
#
split = StratifiedShuffleSplit(n_splits=1, train_size=0.8, test_size=0.2) # We don't need to specify both test/train.
# sizes, but it is good for clarity.
for train_idx, test_idx in split.split(X_train, y_train):
X_train_1, X_val = X_train[train_idx], X_train[test_idx]
y_train_1, y_val = y_train[train_idx], y_train[test_idx]
X_train = X_train_1
y_train = y_train_1
X_train.shape, y_train.shape, X_val.shape, y_val.shape
###Output
_____no_output_____
###Markdown
Create a model with Batch Normalization layers
###Code
class MCDropout(Dropout):
def call(self, inputs):
return super().call(inputs, training=True) # When training = True, the Dropout class from which we inherit
# MCDropout drops some of the cells in the layer.
# When cells are dropped, the model we're training is different.
# We get the benefit of running thousands of models on the data.
# The final model is also more robust to small changes in input.
# MC Dropout acts as a regularizer.
def create_model(with_bn=False,
initialization='he_normal',
hidden_activation='elu',
dropout_rate=None,
mc_dropout=False):
def add_dropout_layer(layer_num=None, dropout_rate=None, mc_dropout=False):
assert(layer_num is not None)
if dropout_rate is not None:
if layer_num > 16: # For the last 3 layers, add AlphaDropout layer
if mc_dropout == True:
model.add(MCDropout(dropout_rate))
else:
model.add(AlphaDropout(dropout_rate))
model = Sequential([
InputLayer(input_shape=[3072])
])
if with_bn:
model.add(BatchNormalization()) # Add BN layer after input
NormalDense = partial(Dense, # Put all your common init here.
kernel_initializer=initialization,
use_bias=False if with_bn else True) # BN has bias, so remove it
# from the Dense layer.
for layer_num in range(20):
model.add(NormalDense(100))
if with_bn: # Add a BatchNormalization layer after each
model.add(BatchNormalization()) # Dense layer
model.add(Activation(hidden_activation)) # Add an activation function. This is needed
# because we did not add it when we created the
# partial dense layer
add_dropout_layer(layer_num, dropout_rate, mc_dropout) # Add dropout layer
model.add(Dense(10, activation='softmax')) # Output layer
return model
model = create_model(with_bn=False)
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
early_stopping_cb = EarlyStopping(patience=10,
restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_val, y_val),
callbacks=[early_stopping_cb])
model = create_model(with_bn=True)
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
early_stopping_cb = EarlyStopping(patience=10,
restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_val, y_val),
callbacks=[early_stopping_cb])
# Standardize data so you can use it with SELU and
# get a net that self-normalizes
X_train = (X_train - np.mean(X_train)) / np.std(X_train)
X_val = (X_val - np.mean(X_val)) / np.std(X_val)
X_test = (X_test - np.mean(X_test)) / np.std(X_test)
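# Note: strictly, the validation and test sets should be standardized with the
# training set's mean and std; using each set's own statistics is nearly identical
# here because the sets are large.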
# Verify data is standardized
X_train[:1], X_val[:1], X_test[:1]
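# A more direct (optional) check: after standardization the mean should be ~0 and the std ~1
print(np.mean(X_train), np.std(X_train))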
model = create_model(with_bn=False,
initialization='lecun_normal',
hidden_activation='selu')
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
early_stopping_cb = EarlyStopping(patience=10,
restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_val, y_val),
callbacks=[early_stopping_cb])
y_pred = model.predict(X_val)
y_pred_classes = np.argmax(y_pred, axis=1)
print(f'accuracy for model: {accuracy_score(y_val, y_pred_classes)}')
models = []
for i in range(4):
dropout_rate = 0.1 * (i + 1) # Try different dropout rates for your model.
# For a self-normalizing model:
# - use only Dense layers
# - standardize the input features
# - use LeCun initialization + SELU activation
model = create_model(with_bn=False,
initialization='lecun_normal',
hidden_activation='selu',
dropout_rate=dropout_rate)
# model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
early_stopping_cb = EarlyStopping(patience=10,
restore_best_weights=True)
print(f' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
print(f' >>>>>>>>>>>>>>>>> For dropout_rate: {dropout_rate}')
print(f' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_val, y_val),
callbacks=[early_stopping_cb])
models.append(model)
for i in range(4):
model = models[i]
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)
print(f'accuracy for model {i+1}: {accuracy_score(y_test, y_pred_classes)}')
###Output
accuracy for model 1: 0.4238
accuracy for model 2: 0.3692
accuracy for model 3: 0.329
accuracy for model 4: 0.3258
###Markdown
Without dropout, with LeCun initialization and SELU activation, we get an accuracy of 0.48. With dropout, the best model gives us an accuracy of 0.42. Now let's try retraining the same model with MC Dropout to see if we get better results
###Code
model = models[0]
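# With training=True the dropout layers stay active at inference time, so each of the
# 100 forward passes below samples a different stochastic sub-network; averaging their
# predicted probabilities gives the Monte Carlo estimate.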
y_probas = np.stack([model(X_test, training=True) for _ in range(100)])
y_proba = np.mean(y_probas, axis=0)
y_pred = np.argmax(y_proba, axis=1)
print(f'accuracy for model with dropout: {accuracy_score(y_test, y_pred)}')
###Output
accuracy for model with dropout: 0.4209
###Markdown
MC Dropout does not give us any better results here, but the model is regularized, so it should hold up better on other test sets. Now let's rebuild the top of the network with MCDropout layers and retrain
###Code
model.layers
model_mc_dropout = tf.keras.models.clone_model(model) # Everything in the model except the weights are cloned.
model_mc_dropout.set_weights(model.get_weights()) # Cloning does not clone weights, so set them instead
model_mc_dropout.layers[0].get_weights()
# We remove the last 10 model layers including the output layer.
# The output layer is a single layer. Each of the 3 hidden blocks before it
# consists of a Dense, an Activation and an AlphaDropout layer, so the output
# layer plus 3 x (Dense + Activation + AlphaDropout) gives the 10 layers.
# We found this out by looking at the model.layers
print(f'Number of layers in model: {len(model_mc_dropout.layers)}')
for _ in range(10):
model_mc_dropout.pop()
print(f'Number of layers in model: {len(model_mc_dropout.layers)}')
# Add (3 Dense layers + MC Dropout) layers
for i in range(3):
model_mc_dropout.add(Dense(100,
kernel_initializer='lecun_normal',
activation='selu'))
model_mc_dropout.add(MCDropout(0.1))
# Add output layer
model_mc_dropout.add(Dense(10, activation='softmax'))
model_mc_dropout.summary()
model = model_mc_dropout
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_val, y_val),
callbacks=[early_stopping_cb])
y_probas = np.stack([model(X_test, training=True) for _ in range(100)])
y_proba = np.mean(y_probas, axis=0)
y_pred = np.argmax(y_proba, axis=1)
print(f'accuracy for model with dropout: {accuracy_score(y_test, y_pred)}')
###Output
accuracy for model with dropout: 0.4929
###Markdown
With MC dropout, accuracy improved from 0.42 to 0.49, an increase of 0.07. This means the error decreased from 0.58 to 0.51, a drop of 7 percentage points (roughly a 12% relative reduction) - not bad for a small change in the model
###Code
class ExponentialLearningRate(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
self.rates = []
self.losses = []
def on_batch_end(self, batch, logs):
self.rates.append(K.get_value(self.model.optimizer.lr))
self.losses.append(logs["loss"])
K.set_value(self.model.optimizer.lr, self.model.optimizer.lr * self.factor)
def find_learning_rate(model, X, y, epochs=1, batch_size=32, min_rate=10**-5, max_rate=10):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
init_lr = K.get_value(model.optimizer.lr)
K.set_value(model.optimizer.lr, min_rate)
exp_lr = ExponentialLearningRate(factor)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr])
K.set_value(model.optimizer.lr, init_lr)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
def plot_lr_vs_loss(rates, losses):
plt.plot(rates, losses)
plt.gca().set_xscale('log')
plt.hlines(min(losses), min(rates), max(rates))
plt.axis([min(rates), max(rates), min(losses), (losses[0] + min(losses)) / 2])
plt.xlabel("Learning rate")
plt.ylabel("Training Loss")
# These two classes can be used when your optimizer uses momentum.
# In this case, when your learning rate is going down,
# momentum should be going up, and vice versa.
#
class LinearMomentum(keras.callbacks.Callback):
def __init__(self, factor):
self.factor = factor
def on_batch_end(self, batch, logs):
K.set_value(self.model.optimizer.momentum,
K.get_value(self.model.optimizer.momentum) + self.factor)
# The sweep below ramps the learning rate up exponentially, so the momentum
# is nudged in the opposite direction on every batch
def find_learning_rate_with_momentum(model, X, y, epochs=1, batch_size=32,
min_rate=10**-5, max_rate=10,
start_mom=0.85, end_mom=0.95):
init_weights = model.get_weights()
iterations = len(X) // batch_size * epochs
factor = np.exp(np.log(max_rate / min_rate) / iterations)
factor_mom = (start_mom - end_mom) / iterations
init_lr = K.get_value(model.optimizer.lr)
init_mom = K.get_value(model.optimizer.momentum)
K.set_value(model.optimizer.lr, min_rate)
K.set_value(model.optimizer.momentum, start_mom)
exp_lr = ExponentialLearningRate(factor)
lin_mom = LinearMomentum(factor_mom)
history = model.fit(X, y, epochs=epochs, batch_size=batch_size,
callbacks=[exp_lr, lin_mom])
K.set_value(model.optimizer.lr, init_lr)
K.set_value(model.optimizer.momentum, init_mom)
model.set_weights(init_weights)
return exp_lr.rates, exp_lr.losses
model = create_model(with_bn=False,
initialization='lecun_normal',
hidden_activation='selu')
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer=SGD(),
metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
###Output
Train on 40000 samples
40000/40000 [==============================] - 9s 229us/sample - loss: nan - accuracy: 0.1650
###Markdown
The loss in the plot is lowest around a learning rate of 3e-3, so we pick 5e-3 as the maximum rate for the 1cycle schedule below
###Code
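# A quick numeric cross-check of the plot above (a minimal sketch reusing the `rates`
# and `losses` lists returned by find_learning_rate; nan losses from the diverging end
# of the sweep are ignored):
print(f"loss is lowest near lr = {rates[int(np.nanargmin(losses))]:.2e}")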
# This shows how you can ramp up on the LR rate and ramp down on it at the halfway point.
#
class OneCycleScheduler(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.iteration = 0
self.rates = []
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (iter2 - self.iteration)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
rate = max(rate, self.last_rate)
self.iteration += 1
self.rates.append(rate)
K.set_value(self.model.optimizer.lr, rate)
class OneCycleSchedulerWithMomentum(keras.callbacks.Callback):
def __init__(self, iterations, max_rate, start_rate=None,
last_iterations=None, last_rate=None,
start_momentum=0.95, end_momentum=0.85):
self.iterations = iterations
self.max_rate = max_rate
self.start_rate = start_rate or max_rate / 10
self.last_iterations = last_iterations or iterations // 10 + 1
self.half_iteration = (iterations - self.last_iterations) // 2
self.last_rate = last_rate or self.start_rate / 1000
self.start_momentum = start_momentum
self.end_momentum = end_momentum
self.iteration = 0
self.rates = []
self.momentums = []
def _interpolate(self, iter1, iter2, rate1, rate2):
return ((rate2 - rate1) * (iter2 - self.iteration)
/ (iter2 - iter1) + rate1)
def on_batch_begin(self, batch, logs):
if self.iteration < self.half_iteration:
rate = self._interpolate(0, self.half_iteration, self.start_rate, self.max_rate)
momentum = self._interpolate(0, self.half_iteration,
self.start_momentum, self.end_momentum)
elif self.iteration < 2 * self.half_iteration:
rate = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.max_rate, self.start_rate)
momentum = self._interpolate(self.half_iteration, 2 * self.half_iteration,
self.end_momentum, self.start_momentum)
else:
rate = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_rate, self.last_rate)
momentum = self._interpolate(2 * self.half_iteration, self.iterations,
self.start_momentum, self.end_momentum)
rate = max(rate, self.last_rate)
momentum = self.start_momentum
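# Note: this overrides the momentum interpolated just above, so the final
# annihilation phase runs with momentum fixed at start_momentum.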
self.iteration += 1
self.rates.append(rate)
self.momentums.append(momentum)
K.set_value(self.model.optimizer.lr, rate)
K.set_value(self.model.optimizer.momentum, momentum)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=5e-3)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_val, y_val),
callbacks=[onecycle])
###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/25
40000/40000 [==============================] - 9s 227us/sample - loss: 1.8345 - accuracy: 0.3392 - val_loss: 1.7438 - val_accuracy: 0.3762
Epoch 2/25
40000/40000 [==============================] - 9s 216us/sample - loss: 1.6224 - accuracy: 0.4205 - val_loss: 1.6452 - val_accuracy: 0.4029
Epoch 3/25
40000/40000 [==============================] - 9s 221us/sample - loss: 1.5203 - accuracy: 0.4575 - val_loss: 1.5755 - val_accuracy: 0.4413
Epoch 4/25
40000/40000 [==============================] - 9s 220us/sample - loss: 1.4461 - accuracy: 0.4864 - val_loss: 1.5515 - val_accuracy: 0.4521
Epoch 5/25
40000/40000 [==============================] - 9s 221us/sample - loss: 1.3826 - accuracy: 0.5072 - val_loss: 1.5309 - val_accuracy: 0.4605
Epoch 6/25
40000/40000 [==============================] - 9s 230us/sample - loss: 1.3250 - accuracy: 0.5287 - val_loss: 1.5138 - val_accuracy: 0.4670
Epoch 7/25
40000/40000 [==============================] - 9s 224us/sample - loss: 1.2740 - accuracy: 0.5500 - val_loss: 1.4984 - val_accuracy: 0.4741
Epoch 8/25
40000/40000 [==============================] - 9s 225us/sample - loss: 1.2236 - accuracy: 0.5643 - val_loss: 1.4970 - val_accuracy: 0.4774
Epoch 9/25
40000/40000 [==============================] - 9s 219us/sample - loss: 1.1754 - accuracy: 0.5847 - val_loss: 1.5019 - val_accuracy: 0.4795
Epoch 10/25
40000/40000 [==============================] - 9s 227us/sample - loss: 1.1315 - accuracy: 0.6028 - val_loss: 1.5027 - val_accuracy: 0.4806
Epoch 11/25
40000/40000 [==============================] - 9s 220us/sample - loss: 1.0888 - accuracy: 0.6183 - val_loss: 1.5068 - val_accuracy: 0.4870
Epoch 12/25
40000/40000 [==============================] - 9s 224us/sample - loss: 1.0589 - accuracy: 0.6309 - val_loss: 1.5139 - val_accuracy: 0.4822
Epoch 13/25
40000/40000 [==============================] - 9s 221us/sample - loss: 1.0662 - accuracy: 0.6267 - val_loss: 1.5312 - val_accuracy: 0.4799
Epoch 14/25
40000/40000 [==============================] - 9s 225us/sample - loss: 1.0727 - accuracy: 0.6229 - val_loss: 1.5415 - val_accuracy: 0.4771
Epoch 15/25
40000/40000 [==============================] - 9s 222us/sample - loss: 1.0797 - accuracy: 0.6179 - val_loss: 1.5594 - val_accuracy: 0.4761
Epoch 16/25
40000/40000 [==============================] - 9s 222us/sample - loss: 1.0859 - accuracy: 0.6151 - val_loss: 1.5717 - val_accuracy: 0.4787
Epoch 17/25
40000/40000 [==============================] - 9s 224us/sample - loss: 1.0894 - accuracy: 0.6148 - val_loss: 1.5577 - val_accuracy: 0.4774
Epoch 18/25
40000/40000 [==============================] - 9s 223us/sample - loss: 1.0916 - accuracy: 0.6143 - val_loss: 1.5642 - val_accuracy: 0.4780
Epoch 19/25
40000/40000 [==============================] - 9s 222us/sample - loss: 1.0878 - accuracy: 0.6163 - val_loss: 1.5834 - val_accuracy: 0.4722
Epoch 20/25
40000/40000 [==============================] - 9s 221us/sample - loss: 1.0883 - accuracy: 0.6143 - val_loss: 1.5824 - val_accuracy: 0.4692
Epoch 21/25
40000/40000 [==============================] - 9s 221us/sample - loss: 1.0850 - accuracy: 0.6136 - val_loss: 1.5825 - val_accuracy: 0.4689
Epoch 22/25
40000/40000 [==============================] - 9s 224us/sample - loss: 1.0788 - accuracy: 0.6149 - val_loss: 1.5868 - val_accuracy: 0.4723
Epoch 23/25
40000/40000 [==============================] - 9s 223us/sample - loss: 1.0459 - accuracy: 0.6288 - val_loss: 1.5373 - val_accuracy: 0.4885
Epoch 24/25
40000/40000 [==============================] - 9s 220us/sample - loss: 0.8814 - accuracy: 0.6924 - val_loss: 1.5288 - val_accuracy: 0.4958
Epoch 25/25
40000/40000 [==============================] - 9s 224us/sample - loss: 0.8252 - accuracy: 0.7157 - val_loss: 1.5543 - val_accuracy: 0.4997
###Markdown
If you compare the losses for each epoch above with the earlier plot where we scanned the learning rate, you will see that we should get a loss of around 1.6, and this is indeed what we get when we use the 1cycle scheduler. What works, and what does not, with 1cycle learning-rate scheduling: - Cannot use dropout, since dropout keeps changing the network for each batch, while here we want to cycle through the learning rates in a particular sequence for the same network - Cannot use early stopping, since our run through the learning rates would not be complete
###Code
min(onecycle.rates), max(onecycle.rates)
plt.scatter(range(len(onecycle.rates)), onecycle.rates)
plt.axis([0, 40000, -0.001, 0.001])
###Output
_____no_output_____
###Markdown
Let's see if we can use He initialization and ELU activation to get good results with 1cycle learning schedule
###Code
model = create_model(with_bn=False,
initialization='he_normal',
hidden_activation='elu')
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer=SGD(),
metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=1e-2)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_val, y_val),
callbacks=[onecycle])
###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/25
40000/40000 [==============================] - 8s 199us/sample - loss: 1.9359 - accuracy: 0.2977 - val_loss: 1.7862 - val_accuracy: 0.3573
Epoch 2/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.6839 - accuracy: 0.3961 - val_loss: 1.6741 - val_accuracy: 0.4022
Epoch 3/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.5663 - accuracy: 0.4415 - val_loss: 1.6091 - val_accuracy: 0.4245
Epoch 4/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.4854 - accuracy: 0.4708 - val_loss: 1.6187 - val_accuracy: 0.4230
Epoch 5/25
40000/40000 [==============================] - 8s 189us/sample - loss: 1.4186 - accuracy: 0.4965 - val_loss: 1.5697 - val_accuracy: 0.4398
Epoch 6/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.3539 - accuracy: 0.5212 - val_loss: 1.5529 - val_accuracy: 0.4473
Epoch 7/25
40000/40000 [==============================] - 8s 190us/sample - loss: 1.2966 - accuracy: 0.5390 - val_loss: 1.5513 - val_accuracy: 0.4511
Epoch 8/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.2434 - accuracy: 0.5598 - val_loss: 1.5513 - val_accuracy: 0.4634
Epoch 9/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.1878 - accuracy: 0.5814 - val_loss: 1.5701 - val_accuracy: 0.4601
Epoch 10/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.1360 - accuracy: 0.6003 - val_loss: 1.5776 - val_accuracy: 0.4650
Epoch 11/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0830 - accuracy: 0.6190 - val_loss: 1.5947 - val_accuracy: 0.4650
Epoch 12/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0451 - accuracy: 0.6310 - val_loss: 1.6071 - val_accuracy: 0.4615
Epoch 13/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0549 - accuracy: 0.6294 - val_loss: 1.6198 - val_accuracy: 0.4561
Epoch 14/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0660 - accuracy: 0.6270 - val_loss: 1.6284 - val_accuracy: 0.4539
Epoch 15/25
40000/40000 [==============================] - 8s 192us/sample - loss: 1.0781 - accuracy: 0.6209 - val_loss: 1.6473 - val_accuracy: 0.4570
Epoch 16/25
40000/40000 [==============================] - 8s 191us/sample - loss: 1.0855 - accuracy: 0.6175 - val_loss: 1.6479 - val_accuracy: 0.4502
Epoch 17/25
40000/40000 [==============================] - 8s 190us/sample - loss: 1.1014 - accuracy: 0.6110 - val_loss: 1.6548 - val_accuracy: 0.4451
Epoch 18/25
40000/40000 [==============================] - 8s 193us/sample - loss: 1.1064 - accuracy: 0.6084 - val_loss: 1.6474 - val_accuracy: 0.4491
Epoch 19/25
40000/40000 [==============================] - 8s 195us/sample - loss: 1.1065 - accuracy: 0.6090 - val_loss: 1.6511 - val_accuracy: 0.4523
Epoch 20/25
40000/40000 [==============================] - 8s 197us/sample - loss: 1.1175 - accuracy: 0.6029 - val_loss: 1.6577 - val_accuracy: 0.4509
Epoch 21/25
40000/40000 [==============================] - 8s 197us/sample - loss: 1.1148 - accuracy: 0.6042 - val_loss: 1.6570 - val_accuracy: 0.4558
Epoch 22/25
40000/40000 [==============================] - 8s 198us/sample - loss: 1.1082 - accuracy: 0.6090 - val_loss: 1.6363 - val_accuracy: 0.4595
Epoch 23/25
40000/40000 [==============================] - 8s 197us/sample - loss: 1.0846 - accuracy: 0.6204 - val_loss: 1.6134 - val_accuracy: 0.4677
Epoch 24/25
40000/40000 [==============================] - 8s 199us/sample - loss: 0.8926 - accuracy: 0.6906 - val_loss: 1.6364 - val_accuracy: 0.4766
Epoch 25/25
40000/40000 [==============================] - 8s 197us/sample - loss: 0.8218 - accuracy: 0.7157 - val_loss: 1.6736 - val_accuracy: 0.4821
###Markdown
Accuracy with lecun_normal initialization and SELU activation = 0.5. Accuracy with he_normal initialization and ELU activation = 0.48
###Code
model = create_model(with_bn=False,
initialization='he_normal',
hidden_activation='elu')
model.summary()
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate(model, X_train, y_train, epochs=1, batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
n_epochs = 25
onecycle = OneCycleScheduler(len(X_train) // batch_size * n_epochs, max_rate=3e-3)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_val, y_val),
callbacks=[onecycle])
###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/25
40000/40000 [==============================] - 14s 339us/sample - loss: 2.1133 - accuracy: 0.2258 - val_loss: 2.0406 - val_accuracy: 0.2417
Epoch 2/25
40000/40000 [==============================] - 13s 333us/sample - loss: 1.9822 - accuracy: 0.2693 - val_loss: 2.0095 - val_accuracy: 0.2601
Epoch 3/25
40000/40000 [==============================] - 13s 334us/sample - loss: 1.9424 - accuracy: 0.2866 - val_loss: 1.9505 - val_accuracy: 0.2855
Epoch 4/25
40000/40000 [==============================] - 13s 332us/sample - loss: 1.9127 - accuracy: 0.2995 - val_loss: 1.9405 - val_accuracy: 0.2900
Epoch 5/25
40000/40000 [==============================] - 13s 336us/sample - loss: 1.8880 - accuracy: 0.3131 - val_loss: 1.9094 - val_accuracy: 0.2998
Epoch 6/25
40000/40000 [==============================] - 13s 332us/sample - loss: 1.8657 - accuracy: 0.3228 - val_loss: 1.9069 - val_accuracy: 0.3069
Epoch 7/25
40000/40000 [==============================] - 13s 332us/sample - loss: 1.8434 - accuracy: 0.3280 - val_loss: 1.8869 - val_accuracy: 0.3135
Epoch 8/25
40000/40000 [==============================] - 13s 331us/sample - loss: 1.8225 - accuracy: 0.3403 - val_loss: 1.8950 - val_accuracy: 0.3106
Epoch 9/25
40000/40000 [==============================] - 13s 329us/sample - loss: 1.8022 - accuracy: 0.3464 - val_loss: 1.8822 - val_accuracy: 0.3210
Epoch 10/25
40000/40000 [==============================] - 13s 331us/sample - loss: 1.7780 - accuracy: 0.3568 - val_loss: 1.8823 - val_accuracy: 0.3157
Epoch 11/25
40000/40000 [==============================] - 13s 333us/sample - loss: 1.7518 - accuracy: 0.3674 - val_loss: 1.8831 - val_accuracy: 0.3208
Epoch 12/25
40000/40000 [==============================] - 13s 335us/sample - loss: 1.7305 - accuracy: 0.3758 - val_loss: 1.8946 - val_accuracy: 0.3177
Epoch 13/25
40000/40000 [==============================] - 13s 334us/sample - loss: 1.7425 - accuracy: 0.3717 - val_loss: 1.8937 - val_accuracy: 0.3195
Epoch 14/25
40000/40000 [==============================] - 13s 328us/sample - loss: 1.7537 - accuracy: 0.3645 - val_loss: 1.9076 - val_accuracy: 0.3128
Epoch 15/25
40000/40000 [==============================] - 13s 330us/sample - loss: 1.7734 - accuracy: 0.3595 - val_loss: 1.9130 - val_accuracy: 0.3129
Epoch 16/25
40000/40000 [==============================] - 13s 331us/sample - loss: 1.7917 - accuracy: 0.3530 - val_loss: 1.9033 - val_accuracy: 0.3151
Epoch 17/25
40000/40000 [==============================] - 13s 331us/sample - loss: 1.8107 - accuracy: 0.3419 - val_loss: 1.8962 - val_accuracy: 0.3110
Epoch 18/25
40000/40000 [==============================] - 13s 336us/sample - loss: 1.8204 - accuracy: 0.3410 - val_loss: 1.8952 - val_accuracy: 0.3110
Epoch 19/25
40000/40000 [==============================] - 14s 342us/sample - loss: 1.8242 - accuracy: 0.3397 - val_loss: 1.8963 - val_accuracy: 0.3165
Epoch 20/25
40000/40000 [==============================] - 14s 340us/sample - loss: 1.8216 - accuracy: 0.3391 - val_loss: 1.8516 - val_accuracy: 0.3403
Epoch 21/25
40000/40000 [==============================] - 13s 337us/sample - loss: 1.8013 - accuracy: 0.3480 - val_loss: 1.8293 - val_accuracy: 0.3382
Epoch 22/25
40000/40000 [==============================] - 14s 339us/sample - loss: 1.7748 - accuracy: 0.3640 - val_loss: 1.7980 - val_accuracy: 0.3611
Epoch 23/25
40000/40000 [==============================] - 14s 338us/sample - loss: 1.7314 - accuracy: 0.3799 - val_loss: 1.7387 - val_accuracy: 0.3799
Epoch 24/25
40000/40000 [==============================] - 14s 339us/sample - loss: 1.6158 - accuracy: 0.4224 - val_loss: 1.7086 - val_accuracy: 0.3953
Epoch 25/25
40000/40000 [==============================] - 14s 338us/sample - loss: 1.5818 - accuracy: 0.4337 - val_loss: 1.6981 - val_accuracy: 0.3978
###Markdown
Since the RMSProp optimizer supports momentum, let's try decreasing the momentum as we increase the learning rate, and increasing the momentum as we decrease the learning rate
###Code
model = create_model(with_bn=False,
initialization='he_normal',
hidden_activation='elu')
model.compile(loss='sparse_categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
batch_size = 32
rates, losses = find_learning_rate_with_momentum(model,
X_train, y_train, epochs=1,
batch_size=batch_size)
plot_lr_vs_loss(rates, losses)
n_epochs = 25
onecycle = OneCycleSchedulerWithMomentum(len(X_train) // batch_size * n_epochs,
max_rate=1e-3)
history = model.fit(X_train, y_train, epochs=n_epochs, batch_size=batch_size,
validation_data=(X_val, y_val),
callbacks=[onecycle])
min(onecycle.momentums), max(onecycle.momentums), min(onecycle.rates), max(onecycle.rates)
plt.scatter(onecycle.rates, onecycle.momentums);
plt.gca().set_xlabel('rates');
plt.gca().set_ylabel('momentums');
###Output
_____no_output_____
###Markdown
You can cycle beta_1 and beta_2 for the Nadam optimizer by adapting the 1cycle code above to schedule those hyperparameters, just as momentum was scheduled for RMSProp; a minimal sketch is included at the end of the next code cell
###Code
model = create_model(with_bn=False,
initialization='he_normal',
hidden_activation='elu')
model.compile(loss='sparse_categorical_crossentropy',
optimizer='nadam',
metrics=['accuracy'])
K.get_value(model.optimizer.beta_1)
K.get_value(model.optimizer.beta_2)
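# A minimal sketch of cycling beta_1 the way the momentum scheduler above cycles
# momentum. It assumes beta_1 can be written with K.set_value, mirroring how
# K.get_value reads it in this cell (beta_2 could be handled identically).
class OneCycleBeta1(keras.callbacks.Callback):
    def __init__(self, iterations, start_beta=0.95, end_beta=0.85):
        self.iterations = iterations
        self.half_iteration = iterations // 2
        self.start_beta = start_beta
        self.end_beta = end_beta
        self.iteration = 0
    def _interpolate(self, iter1, iter2, b1, b2):
        # Linear interpolation between b1 (at iter1) and b2 (at iter2)
        return (b2 - b1) * (self.iteration - iter1) / (iter2 - iter1) + b1
    def on_batch_begin(self, batch, logs=None):
        if self.iteration < self.half_iteration:
            beta = self._interpolate(0, self.half_iteration, self.start_beta, self.end_beta)
        else:
            beta = self._interpolate(self.half_iteration, self.iterations, self.end_beta, self.start_beta)
        self.iteration += 1
        K.set_value(self.model.optimizer.beta_1, beta)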
###Output
_____no_output_____
###Markdown
**INITIALIZATION:**- I use these three lines of code at the top of each of my notebooks because they help prevent problems while reloading the same project. The third line of code enables visualization within the notebook.
###Code
#@ INITIALIZATION:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
**DOWNLOADING THE DEPENDENCIES:**- I have imported all the libraries and dependencies required for the project in one particular cell.
###Code
#@ DOWNLOADING THE LIBRARIES AND DEPENDENCIES:
# !pip install -U d2l
# !apt-get install p7zip-full
import os, collections, math
import shutil
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torchvision
from torch import nn
from d2l import torch as d2l
PROJECT_ROOT_DIR = "."
ID = "RECOG"
IMAGE_PATH = os.path.join(PROJECT_ROOT_DIR, "Images", ID)
if not os.path.isdir(IMAGE_PATH):
os.makedirs(IMAGE_PATH)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGE_PATH, fig_id + "." + fig_extension)
print("Saving Figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
**OBTAINING AND ORGANIZING THE DATASET:**- I have used Google Colab for this project, so the process of downloading and reading the data might be different on other platforms. I will use [**CIFAR-10 Object Recognition in Images**](https://www.kaggle.com/c/cifar-10) for this project. The dataset is divided into a training set and a test set. The training set contains 50,000 images. The images contain categories such as planes, cars, birds, cats, deer, dogs, frogs, horses, boats and trucks.
###Code
#@ ORGANIZING THE DATASET: UNCOMMENT BELOW:
# os.environ['KAGGLE_CONFIG_DIR'] = "/content/drive/MyDrive/Kaggle"
# %cd /content/drive/MyDrive/Kaggle
# !kaggle competitions download -c cifar-10
#@ OBTAINING THE DATASET:
d2l.DATA_HUB["CIFAR10"] = (d2l.DATA_URL + "kaggle_cifar10_tiny.zip",
'2068874e4b9a9f0fb07ebe0ad2b29754449ccacd') # Initializing the Dataset.
demo = True # Initialization.
if demo: data_dir = d2l.download_extract("CIFAR10") # Initialization.
else: data_dir = "../Data/CIFAR10/"                                     # Initialization.
###Output
_____no_output_____
###Markdown
**ORGANIZING THE DATASET:**- I will organize the datasets to facilitate model training and testing.
###Code
#@ ORGANIZING THE DATASET:
def read_csv_labels(fname): # Returning names to Labels.
with open(fname, "r") as f:
lines = f.readlines()[1:] # Reading Lines.
tokens = [l.rstrip().split(",") for l in lines]
return dict(((name, label) for name, label in tokens))
labels = read_csv_labels(os.path.join(data_dir, "trainLabels.csv")) # Implementation.
print(f"Training Examples: {len(labels)}") # Number of Training Examples.
print(f"Classes: {len(set(labels.values()))}") # Number of Classes.
#@ ORGANIZING THE DATASET:
def copyfile(filename, target_dir): # Copying File into Target Directory.
os.makedirs(target_dir, exist_ok=True)
shutil.copy(filename, target_dir)
#@ ORGANIZING THE DATASET:
def reorg_train_valid(data_dir, labels, valid_ratio):
n = collections.Counter(labels.values()).most_common()[-1][1] # Number of examples in the class with the fewest samples.
n_valid_per_label = max(1, math.floor(n * valid_ratio))
label_count = {}
for train_file in os.listdir(os.path.join(data_dir, "train")):
label = labels[train_file.split(".")[0]]
fname = os.path.join(data_dir, "train", train_file)
copyfile(fname, os.path.join(data_dir, "train_valid_test", "train_valid", label)) # Copy to Train Valid.
if label not in label_count or label_count[label] < n_valid_per_label:
copyfile(fname, os.path.join(data_dir, "train_valid_test", "valid", label)) # Copy to Valid.
label_count[label] = label_count.get(label, 0) + 1
else:
copyfile(fname, os.path.join(data_dir, "train_valid_test", "train", label)) # Copy to Train.
return n_valid_per_label
###Output
_____no_output_____
###Markdown
- The reorg_test function is used to organize the testing set to facilitate reading during prediction.
###Code
#@ ORGANIZING THE DATASET:
def reorg_test(data_dir): # Initialization.
for test_file in os.listdir(os.path.join(data_dir, "test")):
copyfile(os.path.join(data_dir, "test", test_file),
os.path.join(data_dir, "train_valid_test", "test", "unknown")) # Implementation of Function.
#@ OBTAINING AND ORGANIZING THE DATASET:
def reorg_cifar10_data(data_dir, valid_ratio): # Obtaining and Organizing the Dataset.
labels = read_csv_labels(os.path.join(data_dir, "trainLabels.csv")) # Implementation of Function.
reorg_train_valid(data_dir, labels, valid_ratio) # Implementation of Function.
reorg_test(data_dir) # Implementation of Function.
#@ INITIALIZING THE PARAMETERS:
batch_size = 4 if demo else 128 # Initializing Batchsize.
valid_ratio = 0.1 # Initialization.
reorg_cifar10_data(data_dir, valid_ratio) # Obtaining and Organizing the Dataset.
###Output
_____no_output_____
###Markdown
**IMAGE AUGMENTATION:**- I will use image augmentation to cope with overfitting. The images are flipped at random and normalized.
###Code
#@ IMPLEMENTATION OF IMAGE AUGMENTATION: TRAINING DATASET:
transform_train = torchvision.transforms.Compose([ # Initialization.
torchvision.transforms.Resize(40), # Resizing both Height and Width.
torchvision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
ratio=(1.0, 1.0)), # Cropping and Resizing.
torchvision.transforms.RandomHorizontalFlip(), # Randomly Flipping Image.
torchvision.transforms.ToTensor(), # Converting into Tensors.
torchvision.transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
std=[0.2023, 0.1994, 0.2010])]) # Normalization of RGB Channels.
#@ IMPLEMENTATION OF IMAGE AUGMENTATION: TEST DATASET:
transform_test = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(), # Converting into Tensors.
torchvision.transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
std=[0.2023, 0.1994, 0.2010])]) # Normalization of RGB Channels.
###Output
_____no_output_____
###Markdown
**READING THE DATASET:**- I will create ImageFolder dataset instances to read the organized dataset of original image files, where each example includes the image and its label.
###Code
#@ READING THE DATASET:
train_ds, train_valid_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, "train_valid_test", folder),
transform = transform_train) for folder in ["train", "train_valid"]] # Initializing Training Dataset.
#@ READING THE DATASET:
valid_ds, test_ds = [torchvision.datasets.ImageFolder(
os.path.join(data_dir, "train_valid_test", folder),
transform = transform_test) for folder in ["valid", "test"]] # Initializing Test Dataset.
#@ IMPLEMENTATION OF DATALOADER:
train_iter, train_valid_iter = [torch.utils.data.DataLoader(
dataset, batch_size, shuffle=True, drop_last=True) for dataset in (train_ds,
train_valid_ds)] # Implementation of DataLoader.
valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=True, drop_last=True) # Implementation of DataLoader.
test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=False, drop_last=False) # Implementation of DataLoader (no shuffling, so predictions keep the file order).
###Output
_____no_output_____
###Markdown
**DEFINING THE MODEL:**- I will define a ResNet-18 model. Xavier random initialization can be applied to the model before training begins; a minimal sketch is included after the model definition below.
###Code
#@ DEFINING THE MODEL:
def get_net(): # Function for Initializing the Model.
num_classes = 10 # Number of Classes.
net = d2l.resnet18(num_classes, 3) # Initializing the RESNET Model.
return net
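#@ OPTIONAL XAVIER INITIALIZATION:
# A minimal sketch (d2l.resnet18 above does not apply Xavier initialization by itself;
# apply it explicitly if desired):
def init_weights(m): # Applying Xavier Initialization.
    if type(m) in (nn.Conv2d, nn.Linear):
        nn.init.xavier_uniform_(m.weight) # Xavier Uniform on Conv and Linear Weights.
# Usage: net = get_net(); net.apply(init_weights)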
#@ DEFINING THE LOSS FUNCTION:
loss = nn.CrossEntropyLoss(reduction="none") # Initializing Cross Entropy Loss Function.
###Output
_____no_output_____
###Markdown
**DEFINING TRAINING FUNCTION:**- I will define the model training function train here. I will record the training time of each epoch, which helps to compare the costs of different models.
###Code
#@ DEFINING TRAINING FUNCTIONS:
def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices,
lr_period, lr_decay): # Defining Training Function.
trainer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,
weight_decay=wd) # Initializing the SGD Optimizer.
scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay) # Initializing Learning Rate Scheduler.
num_batches, timer = len(train_iter), d2l.Timer() # Initializing the Parameters.
animator = d2l.Animator(xlabel="epoch", xlim=[1, num_epochs],
legend=["train loss", "train acc", "valid acc"]) # Initializing the Animation.
net = nn.DataParallel(net, device_ids=devices).to(devices[0]) # Implementation of Parallelism on Model.
for epoch in range(num_epochs):
net.train() # Initializing the Training Mode.
metric = d2l.Accumulator(3) # Initializing the Accumulator.
for i, (features, labels) in enumerate(train_iter):
timer.start() # Starting the Timer.
l, acc = d2l.train_batch_ch13(net, features, labels, loss, trainer,
devices) # Initializing the Training.
metric.add(l, acc, labels.shape[0]) # Accumulating the Metrics.
timer.stop() # Stopping the Timer.
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches, (
metric[0] / metric[2], metric[1] / metric[2], None)) # Implementation of Animation.
if valid_iter is not None:
valid_acc = d2l.evaluate_accuracy_gpu(net, valid_iter) # Evaluating Validation Accuracy.
animator.add(epoch + 1, (None, None, valid_acc)) # Implementation of Animation.
scheduler.step() # Optimization of the Model.
if valid_iter is not None:
print(f"Loss {metric[0] / metric[2]:.3f}," # Inspecting Loss.
f"Train acc {metric[1] / metric[2]:.3f}," # Inspecting Training Accuracy.
f"Valid acc {valid_acc:.3f}") # Inspecting Validation Accuracy.
else:
print(f"Loss {metric[0] / metric[2]:.3f}," # Inspecting Loss.
f"Train acc {metric[1] / metric[2]:.3f}") # Inspecting Training Accuracy.
print(f"{metric[2]*num_epochs / timer.sum():.1f} examples/sec"
f"on {str(devices)}") # Inspecting Time Taken.
###Output
_____no_output_____
###Markdown
**TRAINING AND VALIDATING THE MODEL:**- I will train and validate the model here.
###Code
#@ TRAINING AND VALIDATING THE MODEL:
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 5, 0.1, 5e-4 # Initializing the Parameters.
lr_period, lr_decay, net = 50, 0.1, get_net() # Initializing the Neural Network Model.
train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay) # Training the Model.
###Output
Loss nan,Train acc 0.102,Valid acc 0.100
283.7 examples/secon [device(type='cuda', index=0)]
###Markdown
**CLASSIFYING THE TESTING SET:**
###Code
#@ CLASSIFYING THE TESTING SET:
net, preds = get_net(), [] # Initializing the Parameters.
train(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period,
lr_decay) # Training the Model.
for X, _ in test_iter:
y_hat = net(X.to(devices[0]))
preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy())
sorted_ids = list(range(1, len(test_ds) + 1))
sorted_ids.sort(key=lambda x: str(x))
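# ImageFolder lists the test files in lexicographic order, so the numeric ids are
# sorted as strings to line up with the order of the predictions.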
df = pd.DataFrame({"id": sorted_ids, "label": preds})
df["label"] = df["label"].apply(lambda x: train_valid_ds.classes[x])
df.to_csv("result.csv", index=False)
###Output
Loss 2.520,Train acc 0.100
291.0 examples/secon [device(type='cuda', index=0)]
###Markdown
Finetuning PyTorch vision models to work with CIFAR-10 dataset Author: Huy Phan Github: https://github.com/huyvnphan/PyTorch-CIFAR10 1. Import required libraries
###Code
import copy
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from tqdm import tqdm as pbar
from torch.utils.tensorboard import SummaryWriter
from models import *
###Output
_____no_output_____
###Markdown
2. Prepare datasets
###Code
def make_dataloaders(params):
"""
Make PyTorch dataloader objects that can be used for training and validation
Input:
- params dict with key 'path' (string): path of the dataset folder
- params dict with key 'batch_size' (int): mini-batch size
- params dict with key 'num_workers' (int): number of workers for dataloader
Output:
- trainloader and testloader (pytorch dataloader object)
"""
transform_train = transforms.Compose([transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
transform_validation = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
trainset = torchvision.datasets.CIFAR10(root=params['path'], train=True, transform=transform_train)
testset = torchvision.datasets.CIFAR10(root=params['path'], train=False, transform=transform_validation)
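# Note: pass download=True to torchvision.datasets.CIFAR10 above if the dataset is not already stored under params['path'].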
trainloader = torch.utils.data.DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, num_workers=params['num_workers'])
testloader = torch.utils.data.DataLoader(testset, batch_size=params['batch_size'], shuffle=False, num_workers=params['num_workers'])
return trainloader, testloader
###Output
_____no_output_____
###Markdown
3. Train model
###Code
def train_model(model, params):
writer = SummaryWriter('runs/' + params['description'])
model = model.to(params['device'])
optimizer = optim.SGD(model.parameters(), lr=params['learning_rate'],
weight_decay=params['weight_decay'], momentum=0.9, nesterov=True)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=params['reduce_learning_rate'], gamma=0.1)
criterion = nn.CrossEntropyLoss()
best_accuracy = test_model(model, params)
best_model = copy.deepcopy(model.state_dict())
for epoch in pbar(range(params['num_epochs'])):
scheduler.step()
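# Note: newer PyTorch versions expect scheduler.step() to be called after the epoch's
# optimizer updates; calling it at the top of the loop shifts the decay schedule by one
# epoch and raises a UserWarning.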
# Each epoch has a training and validation phase
for phase in ['train', 'validation']:
# Loss accumulator for each epoch
logs = {'Loss': 0.0,
'Accuracy': 0.0}
# Set the model to the correct phase
model.train() if phase == 'train' else model.eval()
# Iterate over data
for image, label in params[phase+'_loader']:
image = image.to(params['device'])
label = label.to(params['device'])
# Zero gradient
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
# Forward pass
prediction = model(image)
loss = criterion(prediction, label)
accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item()
# Update log
logs['Loss'] += image.shape[0]*loss.detach().item()
logs['Accuracy'] += accuracy
# Backward pass
if phase == 'train':
loss.backward()
optimizer.step()
# Normalize and write the data to TensorBoard
logs['Loss'] /= len(params[phase+'_loader'].dataset)
logs['Accuracy'] /= len(params[phase+'_loader'].dataset)
writer.add_scalars('Loss', {phase: logs['Loss']}, epoch)
writer.add_scalars('Accuracy', {phase: logs['Accuracy']}, epoch)
# Save the best weights
if phase == 'validation' and logs['Accuracy'] > best_accuracy:
best_accuracy = logs['Accuracy']
best_model = copy.deepcopy(model.state_dict())
# Write best weights to disk
if epoch % params['check_point'] == 0 or epoch == params['num_epochs']-1:
torch.save(best_model, params['state_dict_path'] + params['description'] + '.pt')
final_accuracy = test_model(model, params)
writer.add_text('Final_Accuracy', str(final_accuracy), 0)
writer.close()
###Output
_____no_output_____
###Markdown
4. Test model
###Code
def test_model(model, params):
model = model.to(params['device']).eval()
phase = 'validation'
logs = {'Accuracy': 0.0}
# Iterate over data
for image, label in pbar(params[phase+'_loader']):
image = image.to(params['device'])
label = label.to(params['device'])
with torch.no_grad():
prediction = model(image)
accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item()
logs['Accuracy'] += accuracy
logs['Accuracy'] /= len(params[phase+'_loader'].dataset)
return logs['Accuracy']
###Output
_____no_output_____
###Markdown
5. Create PyTorch models
###Code
model = densenet169(pretrained=True)
###Output
_____no_output_____
###Markdown
6. Put everything together
###Code
# Train on cuda if available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print("Using", device)
data_params = {'path': '/raid/data/pytorch_dataset/cifar10',
'batch_size': 256, 'num_workers': 4}
train_loader, validation_loader = make_dataloaders(data_params)
train_params = {'description': 'densenet169',
'num_epochs': 600,
'reduce_learning_rate': [200, 400],
'learning_rate': 5e-2, 'weight_decay': 1e-3,
'check_point': 100, 'device': device,
'state_dict_path': 'trained_models/',
'train_loader': train_loader, 'validation_loader': validation_loader}
# train_model(model, train_params)
test_model(model, train_params)
###Output
_____no_output_____
###Markdown
Finetuning PyTorch vision models to work with CIFAR-10 dataset Author: Huy Phan Github: https://github.com/huyvnphan/PyTorch-CIFAR10 1. Import required libraries
###Code
import copy
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from tqdm import tqdm as pbar
from torch.utils.tensorboard import SummaryWriter
from cifar10_models import *
###Output
_____no_output_____
###Markdown
2. Prepare datasets
###Code
def make_dataloaders(params):
"""
Make PyTorch dataloader objects that can be used for training and validation
Input:
- params dict with key 'path' (string): path of the dataset folder
- params dict with key 'batch_size' (int): mini-batch size
- params dict with key 'num_workers' (int): number of workers for dataloader
Output:
- trainloader and testloader (pytorch dataloader object)
"""
transform_train = transforms.Compose([transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
transform_validation = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010])])
transform_validation = transforms.Compose([transforms.ToTensor()])
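# Note: this second definition overrides the normalized transform_validation above,
# so validation images are only converted to tensors here.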
trainset = torchvision.datasets.CIFAR10(root=params['path'], train=True, transform=transform_train)
testset = torchvision.datasets.CIFAR10(root=params['path'], train=False, transform=transform_validation)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=params['batch_size'], shuffle=True, num_workers=4)
testloader = torch.utils.data.DataLoader(testset, batch_size=params['batch_size'], shuffle=False, num_workers=4)
return trainloader, testloader
###Output
_____no_output_____
###Markdown
3. Train model
###Code
def train_model(model, params):
writer = SummaryWriter('runs/' + params['description'])
model = model.to(params['device'])
optimizer = optim.AdamW(model.parameters())
total_updates = params['num_epochs']*len(params['train_loader'])
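# total_updates is computed but not used below; it could, for example, drive a per-batch learning-rate schedule.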
criterion = nn.CrossEntropyLoss()
best_accuracy = test_model(model, params)
best_model = copy.deepcopy(model.state_dict())
for epoch in pbar(range(params['num_epochs'])):
# Each epoch has a training and validation phase
for phase in ['train', 'validation']:
# Loss accumulator for each epoch
logs = {'Loss': 0.0,
'Accuracy': 0.0}
# Set the model to the correct phase
model.train() if phase == 'train' else model.eval()
# Iterate over data
for image, label in params[phase+'_loader']:
image = image.to(params['device'])
label = label.to(params['device'])
# Zero gradient
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
# Forward pass
prediction = model(image)
loss = criterion(prediction, label)
accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item()
# Update log
logs['Loss'] += image.shape[0]*loss.detach().item()
logs['Accuracy'] += accuracy
# Backward pass
if phase == 'train':
loss.backward()
optimizer.step()
# Normalize and write the data to TensorBoard
logs['Loss'] /= len(params[phase+'_loader'].dataset)
logs['Accuracy'] /= len(params[phase+'_loader'].dataset)
writer.add_scalars('Loss', {phase: logs['Loss']}, epoch)
writer.add_scalars('Accuracy', {phase: logs['Accuracy']}, epoch)
# Save the best weights
if phase == 'validation' and logs['Accuracy'] > best_accuracy:
best_accuracy = logs['Accuracy']
best_model = copy.deepcopy(model.state_dict())
# Write best weights to disk
if epoch % params['check_point'] == 0 or epoch == params['num_epochs']-1:
torch.save(best_model, params['description'] + '.pt')
final_accuracy = test_model(model, params)
writer.add_text('Final_Accuracy', str(final_accuracy), 0)
writer.close()
###Output
_____no_output_____
###Markdown
4. Test model
###Code
def test_model(model, params):
model = model.to(params['device']).eval()
phase = 'validation'
logs = {'Accuracy': 0.0}
# Iterate over data
for image, label in pbar(params[phase+'_loader']):
image = image.to(params['device'])
label = label.to(params['device'])
with torch.no_grad():
prediction = model(image)
accuracy = torch.sum(torch.max(prediction, 1)[1] == label.data).item()
logs['Accuracy'] += accuracy
logs['Accuracy'] /= len(params[phase+'_loader'].dataset)
return logs['Accuracy']
###Output
_____no_output_____
###Markdown
5. Create PyTorch models
###Code
model = resnet18()
###Output
_____no_output_____
###Markdown
6. Put everything together
###Code
# Train on cuda if available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print("Using", device)
data_params = {'path': '/raid/data/pytorch_dataset/cifar10', 'batch_size': 256}
train_loader, validation_loader = make_dataloaders(data_params)
train_params = {'description': 'Test',
'num_epochs': 300,
'check_point': 50, 'device': device,
'train_loader': train_loader, 'validation_loader': validation_loader}
train_model(model, train_params)
test_model(model, train_params)
###Output
_____no_output_____ |
src/predictions/Load_PredictedMask_And_Image.ipynb | ###Markdown
Load the pickled predicted mask and original image; the pickled file is created by the "UNET_Prediction_EntireScan" script. 1. Create a folder ../data/luna16/ 2. Create a folder ../data/luna16/subset2 and download the pickled prediction file (it has been created for this one scan) "entire_predictions_1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405.dat" from https://drive.google.com/drive/u/1/folders/13wmubTgm-7sh3MxPGxqmVZuoqi0G3ufW
###Code
import pandas as pd
import numpy as np
import h5py
import argparse
import SimpleITK as sitk
from PIL import Image
import os, glob
import os, os.path
import tensorflow as tf
import keras
from ipywidgets import interact
import pickle
import matplotlib.pyplot as plt
%matplotlib inline
# HOLDOUT = 5
# HO_dir = 'HO{}/'.format(HOLDOUT)
data_dir = '../data/luna16/'
prediction_file = 'subset2/entire_predictions_1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405.dat'
size_file = 'subset2/entire_size_1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405.dat'
pkl_file = open(data_dir+prediction_file, 'rb')
predictions_dict = pickle.load(pkl_file)
# predictions_dict {seriesuid : (img.shape, padded_img, predicted_mask)}
value = predictions_dict['1.3.6.1.4.1.14519.5.2.1.6279.6001.100621383016233746780170740405']
img_shape = value[0]
padded_img = value[1]
predicted_mask = value[2]
print ("\n Predicted mask sum : {}".format(np.sum(predicted_mask)))
def displaySlice(sliceNo):
plt.figure(figsize=[20,20]);
plt.subplot(121)
plt.title("True Image")
plt.imshow(padded_img[:, :, sliceNo], cmap='bone');
plt.subplot(122)
plt.title("Predicted Mask")
plt.imshow(predicted_mask[:, :, sliceNo], cmap='bone');
plt.show()
interact(displaySlice, sliceNo=(1,img_shape[2],1));
# print ("\n Predicted mask sum : {}".format(np.sum(predicted_mask)))
# Predicted mask sum : 119040.40901441715
###Output
_____no_output_____ |
KNN/Classification - K Nearest Neighbors.ipynb | ###Markdown
Regression: the output variable takes continuous values. Classification: the output variable takes class labels. f: x → y. If y is a discrete/categorical variable, then this is a classification problem. If y is a real number/continuous, then this is a regression problem.
###Code
import numpy as np
from sklearn import preprocessing, neighbors
from sklearn.model_selection import train_test_split
import pandas as pd
###Output
_____no_output_____
###Markdown
About The Data Set : https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.names
###Code
df = pd.read_csv('data/breast-cancer-wisconsin.data')
df.replace('?', -99999, inplace=True) # Treat missing values ('?') as extreme outliers
df.drop(columns=['id'], inplace=True) # The id column carries no predictive information
print(df.describe())
# Defining Features and Labels
X = np.array(df.drop(columns=['class']))
y = np.array(df['class'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = neighbors.KNeighborsClassifier(n_neighbors = 5)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(accuracy)
#Testing With Sample Data
example_data = np.array([[4,2,1,1,1,2,3,2,1],[4,2,1,3,2,2,3,2,1]])
example_data = example_data.reshape(len(example_data), -1)
prediction = clf.predict(example_data)
print(prediction)
###Output
[2 2]
|
nbs/train.ipynb | ###Markdown
Training (Legacy Version)> Notebook to train deep learning models or ensembles for segmentation of fluorescent labels in microscopy images. This notebook is optimized to be executed on [Google Colab](https://colab.research.google.com).* If you're new to _Google Colab_, try out the [tutorial](https://colab.research.google.com/notebooks/intro.ipynb).* Use Firefox or Google Chrome if you want to upload and download files
###Code
#@title Set up environment
#@markdown Please run this cell to get started.
%load_ext autoreload
%autoreload 2
try:
from google.colab import files, drive
except ImportError:
pass
try:
import deepflash2
except ImportError:
!pip install -q deepflash2==0.0.14
import zipfile
import shutil
import imageio
from sklearn.model_selection import KFold, train_test_split
from fastai.vision.all import *
from deepflash2.all import *
from deepflash2.data import _read_msk
from scipy.stats import entropy
###Output
_____no_output_____
###Markdown
Provide Training Data __Required data structure__- __One folder for training images__- __One folder for segmentation masks__ - We highly recommend using [ground truth estimation](https://matjesg.github.io/deepflash2/gt_estimation.html)_Example structure: see [naming conventions](https://matjesg.github.io/deepflash2/add_information.htmlNaming)_* [folder] images * [file] 0001.tif * [file] 0002.tif* [folder] masks * [file] 0001_mask.png * [file] 0002_mask.png Option A: Upload via _Google Drive_ (recommended, Colab only) - The folder in your drive must contain all files and the correct folder structure. - See [here](https://support.google.com/drive/answer/2375091?co=GENIE.Platform%3DDesktop&hl=en) how to organize your files in _Google Drive_.- See this [stackoverflow post](https://stackoverflow.com/questions/46986398/import-data-into-google-colaboratory) for browsing files with the file browser
###Code
#@markdown Provide the path to the folder on your _Google Drive_
try:
drive.mount('/content/drive')
path = "/content/drive/My Drive/data" #@param {type:"string"}
path = Path(path)
print('Path contains the following files and folders: \n', L(os.listdir(path)))
#@markdown Follow the instructions and press Enter after copying and pasting the key.
except:
print("Warning: Connecting to Google Drive only works on Google Colab.")
pass
###Output
_____no_output_____
###Markdown
Option B: Upload via _zip_ file (Colab only) - The *zip* file must contain all images and segmentations and correct folder structure. - See [here](https://www.hellotech.com/guide/for/how-to-zip-a-file-mac-windows-pc) how to _zip_ files on Windows or Mac.
###Code
#@markdown Run to upload a *zip* file
path = Path('data')
try:
u_dict = files.upload()
for key in u_dict.keys():
unzip(path, key)
print('Path contains the following files and folders: \n', L(os.listdir(path)))
except:
print("Warning: File upload only works on Google Colab.")
pass
###Output
_____no_output_____
###Markdown
Option C: Provide path (Local installation) If you're working on your local machine or server, provide a path to the correct folder.
###Code
#@markdown Provide path (either relative to notebook or absolute) and run cell
path = "" #@param {type:"string"}
path = Path(path)
print('Path contains the following files and folders: \n', L(os.listdir(path)))
###Output
_____no_output_____
###Markdown
Option D: Try with sample data (Testing only) If you don't have any data available yet, try our sample data
###Code
#@markdown Run to use sample files
path = Path('sample_data_cFOS')
url = "https://github.com/matjesg/deepflash2/releases/download/model_library/wue1_cFOS_small.zip"
import urllib.request
urllib.request.urlretrieve(url, 'sample_data_cFOS.zip')
unzip(path, 'sample_data_cFOS.zip')
###Output
_____no_output_____
###Markdown
Check and load data
###Code
#@markdown Provide your parameters according to your provided data
image_folder = "images" #@param {type:"string"}
mask_folder = "masks" #@param {type:"string"}
mask_suffix = "_mask.png" #@param {type:"string"}
#@markdown Number of classes: e.g., 2 for binary segmentation (foreground and background class)
n_classes = 2 #@param {type:"integer"}
#@markdown Check if you are providing instance labels (class-aware and instance-aware)
instance_labels = False #@param {type:"boolean"}
f_names = get_image_files(path/image_folder)
label_fn = lambda o: path/mask_folder/f'{o.stem}{mask_suffix}'
#Check if corresponding masks exist
mask_check = [os.path.isfile(label_fn(x)) for x in f_names]
if len(f_names)==sum(mask_check) and len(f_names)>0:
print(f'Found {len(f_names)} images and {sum(mask_check)} masks in "{path}".')
else:
print(f'IMAGE/MASK MISMATCH! Found {len(f_names)} images and {sum(mask_check)} masks in "{path}".')
print('Please check the steps above.')
###Output
_____no_output_____
###Markdown
Customize [mask weights](https://matjesg.github.io/deepflash2/data.html#Weight-Calculation) (optional)- Default values should work for most of the data. - However, this choice can significantly change the model performance later on.
###Code
#@title { run: "auto" }
#@markdown Run to set weight parameters
border_weight_sigma=10 #@param {type:"slider", min:1, max:20, step:1}
foreground_dist_sigma=10 #@param {type:"slider", min:1, max:20, step:1}
border_weight_factor=10 #@param {type:"slider", min:1, max:50, step:1}
foreground_background_ratio= 0.1 #@param {type:"slider", min:0.1, max:1, step:0.1}
#@markdown Check if you want to plot the resulting weights of one mask
plot_weights = False #@param {type:"boolean"}
#@markdown Check `reset_to_defaults` to reset your parameters.
reset_to_defaults = False #@param {type:"boolean"}
mw_dict = {'bws': 10 if reset_to_defaults else border_weight_sigma ,
'fds': 10 if reset_to_defaults else foreground_dist_sigma,
'bwf': 10 if reset_to_defaults else border_weight_factor,
'fbr' : 0.1 if reset_to_defaults else foreground_background_ratio}
#@markdown Select image number
image_number = 0 #@param {type:"slider", min:0, max:100, step:1}
if plot_weights:
idx = np.minimum(len(f_names) - 1, image_number)  # clamp to a valid index
print('Plotting mask for image', f_names[idx].name, '- Please wait.')
msk = _read_msk(label_fn(f_names[idx]))
_, w, _ = calculate_weights(msk, n_dims=n_classes, **mw_dict)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,12))
axes[0].imshow(msk)
axes[0].set_axis_off()
axes[0].set_title('Mask')
axes[1].imshow(w)
axes[1].set_axis_off()
axes[1].set_title('Weights')
###Output
_____no_output_____
###Markdown
Create mask weights
###Code
#@markdown Run to create mask weights for the whole dataset.
try:
mw_dict=mw_dict
except:
mw_dict = {'bws': 10,'fds': 10, 'bwf': 10,'fbr' : 0.1}
ds = RandomTileDataset(f_names, label_fn, n_classes=n_classes, instance_labels=instance_labels, **mw_dict)
#@title { run: "auto" }
#@markdown Run to show data.
#@markdown Use the slider to control the number of displayed images
first_n = 3 #@param {type:"slider", min:1, max:100, step:1}
ds.show_data(max_n = first_n, figsize=(15,15), overlay=False)
###Output
_____no_output_____
###Markdown
Model Definition Select one of the available [model architectures](https://matjesg.github.io/deepflash2/models.html#U-Net-architectures).
###Code
#@title { run: "auto" }
model_arch = 'unet_deepflash2' #@param ["unet_deepflash2", "unet_falk2019", "unet_ronnberger2015"]
###Output
_____no_output_____
###Markdown
Pretrained weights - Select 'new' to use an untrained model (no pretrained weights)- Or select [pretrained](https://matjesg.github.io/deepflash2/model_library.html) model weights from the dropdown menu
###Code
pretrained_weights = "wue_cFOS" #@param ["new", "wue_cFOS", "wue_Parv", "wue_GFAP", "wue_GFP", "wue_OPN3"]
pre = False if pretrained_weights=="new" else True
n_channels = ds.get_data(max_n=1)[0].shape[-1]
model = torch.hub.load('matjesg/deepflash2', model_arch, pretrained=pre, dataset=pretrained_weights, n_classes=ds.c, in_channels=n_channels)
if pretrained_weights=="new": apply_init(model)
###Output
_____no_output_____
###Markdown
Setting model hyperparameters (optional) - *mixed_precision_training*: enables [Mixed precision training](https://docs.fast.ai/callback.fp16#A-little-bit-of-theory) - decreases memory usage and speeds up training - may affect model accuracy- *batch_size*: the number of samples that will be propagated through the network during one iteration - 4 works best in our experiments - 4-8 works well for [mixed precision training](https://docs.fast.ai/callback.fp16#A-little-bit-of-theory)
###Code
mixed_precision_training = False #@param {type:"boolean"}
batch_size = 4 #@param {type:"slider", min:2, max:8, step:2}
loss_fn = WeightedSoftmaxCrossEntropy(axis=1)
cbs = [ElasticDeformCallback]
dls = DataLoaders.from_dsets(ds,ds, bs=batch_size)
if torch.cuda.is_available(): dls.cuda(), model.cuda()
learn = Learner(dls, model, wd=0.001, loss_func=loss_fn, cbs=cbs)
if mixed_precision_training: learn.to_fp16()
###Output
_____no_output_____
###Markdown
- `max_lr`: The learning rate controls how quickly or slowly a neural network model learns. - We found that a maximum learning rate of 5e-4 (i.e., 0.0005) yielded the best results across experiments. - `learning_rate_finder`: Check only if you want to use the [Learning Rate Finder](https://matjesg.github.io/deepflash2/add_information.html#Learning-Rate-Finder) on your dataset.
###Code
#@markdown Check and run to use learning rate finder
learning_rate_finder = False #@param {type:"boolean"}
if learning_rate_finder:
lr_min,lr_steep = learn.lr_find()
print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}")
max_lr = 5e-4 #@param {type:"number"}
###Output
_____no_output_____
###Markdown
Model Training Setting training parameters - `n_models`: Number of models to train. - If you're experimenting with parameters, try only one model first. - Depending on the data, ensembles should comprise 3-5 models. - _Note: The number of models affects the [Train-validation-split](https://matjesg.github.io/deepflash2/add_information.html#Train-validation-split)._
###Code
#@title { run: "auto" }
try:
batch_size=batch_size
except:
batch_size=4
mixed_precision_training = False
loss_fn = WeightedSoftmaxCrossEntropy(axis=1)
try:
max_lr=max_lr
except:
max_lr = 5e-4
metrics = [Dice_f1(), Iou()]
n_models = 1 #@param {type:"slider", min:1, max:5, step:1}
print("Suggested epochs for 1000 iterations:", calc_iterations(len(ds), batch_size, n_models))
###Output
_____no_output_____
###Markdown
- `epochs`: One epoch is when an entire (augmented) dataset is passed through the model for training. - Epochs need to be adjusted depending on the size and number of images - We found that choosing the number of epochs such that the network parameters are updated about 1000 times (iterations) leads to satisfying results in most cases.
###Code
epochs = 30 #@param {type:"slider", min:1, max:200, step:1}
###Output
_____no_output_____
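###Markdown
As a minimal sketch of the heuristic above (deepflash2 provides `calc_iterations` for this; the simple formula below is an assumption for illustration only):
###Code
# Pick epochs so that (batches per epoch) * epochs is roughly 1000 weight updates.
import math

def rough_epoch_suggestion(n_samples, batch_size, target_iterations=1000):
    batches_per_epoch = max(1, n_samples // batch_size)
    return math.ceil(target_iterations / batches_per_epoch)

# e.g., a dataset that yields 100 training samples per epoch with batch size 4
print(rough_epoch_suggestion(n_samples=100, batch_size=4))  # -> 40 epochs
###Output
_____no_output_____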
###Markdown
Train models
###Code
#@markdown Run to train model(s).<br/> **THIS CAN TAKE A FEW HOURS FOR MULTIPLE MODELS!**
kf = KFold(n_splits=max(n_models,2))
model_path = path/'models'
model_path.mkdir(parents=True, exist_ok=True)
res, res_mc = {}, {}
fold = 0
for train_idx, val_idx in kf.split(f_names):
fold += 1
name = f'model{fold}'
print('Train', name)
if n_models==1:
files_train, files_val = train_test_split(f_names)
else:
files_train, files_val = f_names[train_idx], f_names[val_idx]
print(f'Validation Images: {files_val}')
train_ds = RandomTileDataset(files_train, label_fn, **mw_dict)
valid_ds = TileDataset(files_val, label_fn, **mw_dict)
dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=batch_size)
dls_valid = DataLoaders.from_dsets(valid_ds, batch_size=batch_size ,shuffle=False, drop_last=False)
model = torch.hub.load('matjesg/deepflash2', model_arch, pretrained=pre,
dataset=pretrained_weights, n_classes=ds.c, in_channels=n_channels)
if pretrained_weights=="new": apply_init(model)
if torch.cuda.is_available(): dls.cuda(), model.cuda(), dls_valid.cuda()
cbs = [SaveModelCallback(monitor='iou'), ElasticDeformCallback]
metrics = [Dice_f1(), Iou()]
learn = Learner(dls, model, metrics = metrics, wd=0.001, loss_func=loss_fn, cbs=cbs)
if mixed_precision_training: learn.to_fp16()
learn.fit_one_cycle(epochs, max_lr)
# save_model(model_path/f'{name}.pth', learn.model, opt=None)
torch.save(learn.model.state_dict(), model_path/f'{name}.pth', _use_new_zipfile_serialization=False)
smxs, segs, _ = learn.predict_tiles(dl=dls_valid.train)
smxs_mc, segs_mc, std = learn.predict_tiles(dl=dls_valid.train, mc_dropout=True, n_times=10)
for i, file in enumerate(files_val):
res[(name, file)] = smxs[i], segs[i]
res_mc[(name, file)] = smxs_mc[i], segs_mc[i], std[i]
if n_models==1:
break
###Output
_____no_output_____
###Markdown
Validate models Here you can validate your models. To avoid information leakage, predictions are only made on each model's respective validation set.
###Code
#@markdown Create folders to save the results. They will be created at your provided 'path'.
pred_dir = 'val_preds' #@param {type:"string"}
pred_path = path/pred_dir/'ensemble'
pred_path.mkdir(parents=True, exist_ok=True)
uncertainty_dir = 'val_uncertainties' #@param {type:"string"}
uncertainty_path = path/uncertainty_dir/'ensemble'
uncertainty_path.mkdir(parents=True, exist_ok=True)
result_path = path/'results'
result_path.mkdir(exist_ok=True)
#@markdown Define `filetype` to save the predictions and uncertainties. All common [file formats](https://imageio.readthedocs.io/en/stable/formats.html) are supported.
filetype = 'png' #@param {type:"string"}
#@markdown Show and save results
res_list = []
for model_number in range(1,n_models+1):
model_name = f'model{model_number}'
val_files = [f for mod , f in res.keys() if mod == model_name]
print(f'Validating {model_name}')
pred_path = path/pred_dir/model_name
pred_path.mkdir(parents=True, exist_ok=True)
uncertainty_path = path/uncertainty_dir/model_name
uncertainty_path.mkdir(parents=True, exist_ok=True)
for file in val_files:
img = ds.get_data(file)[0]
msk = ds.get_data(file, mask=True)[0]
pred = res[(model_name,file)][1]
pred_std = res_mc[(model_name,file)][2][...,0]
df_tmp = pd.Series({'file' : file.name,
'model' : model_name,
'iou': iou(msk, pred),
'entropy': entropy(pred_std, axis=None)})
plot_results(img, msk, pred, pred_std, df=df_tmp)
res_list.append(df_tmp)
imageio.imsave(pred_path/f'{file.stem}_pred.{filetype}', pred.astype(np.uint8) if np.max(pred)>1 else pred.astype(np.uint8)*255)
imageio.imsave(uncertainty_path/f'{file.stem}_uncertainty.{filetype}', (pred_std*255).astype(np.uint8))  # scale float uncertainties to 0-255 before casting
df_res = pd.DataFrame(res_list)
df_res.to_csv(result_path/f'val_results.csv', index=False)
###Output
_____no_output_____
###Markdown
Download Section - The models will always be the _last_ version trained in section _Model Training_- To download validation predictions and uncertainties, you first need to execute section _Validate models_._Note: If you're connected to *Google Drive*, the models are automatically saved to your drive._
###Code
#@title Download models { run: "auto" }
model_number = "1" #@param ["1", "2", "3", "4", "5"]
model_path = path/'models'/f'model{model_number}.pth'
try:
files.download(model_path)
except:
print("Warning: File download only works on Google Colab.")
print(f"Models are saved at {model_path.parent}")
pass
#@markdown Download validation predictions { run: "auto" }
out_name = 'val_predictions'
shutil.make_archive(path/out_name, 'zip', path/pred_dir)
try:
files.download(path/f'{out_name}.zip')
except:
print("Warning: File download only works on Google Colab.")
pass
#@markdown Download validation uncertainties
out_name = 'val_uncertainties'
shutil.make_archive(path/out_name, 'zip', path/uncertainty_dir)
try:
files.download(path/f'{out_name}.zip')
except:
print("Warning: File download only works on Google Colab.")
pass
#@markdown Download result analysis '.csv' files
try:
files.download(result_path/f'val_results.csv')
except:
print("Warning: File download only works on Google Colab.")
pass
###Output
_____no_output_____
###Markdown
We use the forward fill method in pandas to fill all the NaNs for each sentence in the `Sentence #` column.
###Code
#hide
df['Sentence #'].fillna(method='ffill')
#export
df['Sentence #'] = df['Sentence #'].fillna(method='ffill')
###Output
_____no_output_____
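###Markdown
A minimal toy example of what the forward fill does here (assumed toy values, not the real dataset):
###Code
#hide
import pandas as pd
toy = pd.Series(['Sentence: 1', None, None, 'Sentence: 2', None])
toy.fillna(method='ffill').tolist()
# -> ['Sentence: 1', 'Sentence: 1', 'Sentence: 1', 'Sentence: 2', 'Sentence: 2']
###Output
_____no_output_____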
###Markdown
In total we can see that there are 47,959 sentences in our dataset
###Code
#hide
len(df['Sentence #'].unique())
###Output
_____no_output_____
###Markdown
Now let us encode all the labels for every word in every sentence
###Code
#hide
le_pos = LabelEncoder()
le_tag = LabelEncoder()
#export
utils.save_label_encoders(le_tag=le_tag, le_pos=le_pos)
#export
le_pos, le_tag = utils.load_label_encoders()
#hide
df["encoded_POS"] = le_pos.fit_transform(df.POS)
df["encoded_Tag"] = le_tag.fit_transform(df.Tag)
#export
sentences, tags, pos = utils.process_data(df)
#hide
len(sentences), len(tags), len(pos)
###Output
_____no_output_____
###Markdown
Data Split I'll be using a simple train-test split
###Code
#export
train_sentences, valid_sentences, train_tag, valid_tag, train_pos, valid_pos = train_test_split(sentences, tags, pos, test_size=0.2)
#export
train_dl = utils.create_loader(train_sentences, train_tag, train_pos, bs=config.TRAIN_BATCH_SIZE)
valid_dl = utils.create_loader(valid_sentences, valid_tag, valid_pos, bs=config.VALID_BATCH_SIZE)
#export
modeller = model.EntityModel(num_tag=len(le_tag.classes_), num_pos=len(le_pos.classes_))
# #export
model_params = list(modeller.named_parameters())
#export
# we don't want weight decay for these
no_decay = ['bias', 'LayerNorm.weight', 'LayerNorm.bias']
optimizer_params = [
{'params': [p for n, p in model_params if not any(nd in n for nd in no_decay)],
'weight_decay':0.001},
# no weight decay should be applied
{'params': [p for n, p in model_params if any(nd in n for nd in no_decay)],
'weight_decay':0.0}
]
#export
lr = config.LR
#hide
lr
#export
optimizer = AdamW(optimizer_params, lr=lr)
#export
num_train_steps = int(len(sentences) / config.TRAIN_BATCH_SIZE * config.NUM_EPOCHS)
#export
scheduler = get_linear_schedule_with_warmup(optimizer=optimizer,
num_warmup_steps=0,
num_training_steps=num_train_steps)
#export
learn = engine.BertFitter(modeller, (train_dl, valid_dl), optimizer, [accuracy_score, partial(f1_score, average='macro')], config.DEVICE, scheduler=scheduler, log_file='training_log.txt')
#hide
config.NUM_EPOCHS
#export
NUM_EPOCHS = config.NUM_EPOCHS + 2
learn.fit(NUM_EPOCHS, model_path=config.MODEL_PATH/'entity_model.bin')
###Output
_____no_output_____
###Markdown
Train> API details.
###Code
%load_ext autoreload
%autoreload 2
import matplotlib as mpl
%matplotlib inline
#export
import warnings
import re
from functools import partial
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models
import pytorch_lightning as pl
from pytorch_lightning.core import LightningModule
from pytorch_lightning.metrics import functional as FM
#export
from isic.dataset import SkinDataModule, from_label_idx_to_key
from isic.layers import LabelSmoothingCrossEntropy
from isic.callback.hyperlogger import HyperparamsLogger
from isic.callback.logtable import LogTableMetricsCallback
from isic.callback.mixup import MixupDict
from isic.callback.cutmix import CutmixDict
from isic.callback.freeze import FreezeCallback, UnfreezeCallback
from isic.utils.core import reduce_loss, generate_val_steps
from isic.utils.model import apply_init, get_bias_batchnorm_params, apply_leaf, check_attrib_module, create_body, create_head, lr_find, freeze, unfreeze, log_metrics_per_key
from isic.model import BaselineModel, Model
message_formater = "You have set {0} number of classes if different from predicted {0} and target {0} number of classes"
warnings.filterwarnings("ignore", message_formater.format("(.*)"), category=UserWarning)
dm = SkinDataModule()
dm.prepare_data()
dm.setup('fit')
F_EPOCHS = 1
U_EPOCHS = 1
LR = 1e-2
###Output
_____no_output_____
###Markdown
Baseline
###Code
model = BaselineModel('resnet18')
trainer = pl.Trainer(fast_dev_run=True, callbacks=[LogTableMetricsCallback()])
trainer.fit(model, dm)
dm.setup('test')
a = trainer.test(model, dm.val_dataloader())
torch.load('preds.pt').shape
###Output
_____no_output_____
###Markdown
Real
###Code
# init model
model = Model(LR, arch='resnet18')
check_attrib_module(model)
lr_find(model, dm,lr_find=False,verbose=True)
cbs = [LogTableMetricsCallback(), HyperparamsLogger()]
trainer = fit_one_cycle(F_EPOCHS, model, dm, max_lr=LR, callbacks=cbs, fast_dev_run=False, limit_val_batches=0, limit_train_batches=0.01)
unfreeze(model, 3)
# Unfreeze training
trainer = fit_one_cycle(callbacks=cbs, fast_dev_run=False, limit_val_batches=0, limit_train_batches=0.01)
trainer.fit(model, dm)
###Output
| Name | Type | Params
-----------------------------------------------
0 | model | Sequential | 25 M
1 | loss_func | CrossEntropyLoss | 0
###Markdown
Tensorboard
###Code
%load_ext tensorboard
%tensorboard --logdir=lightning_logs/
###Output
_____no_output_____ |
Tutorial-11/TUT11-1-graph-processing.ipynb | ###Markdown
TUT11-1 Graph Processing **Graph representation** **Graph Structure**Mathematically, a graph $\mathcal{G}$ is defined as a tuple of a set of nodes/vertices $V$, and a set of edges/links $E$: $\mathcal{G}=(V,E)$. Each edge is a pair of two vertices, and represents a connection between them as shown in the figure. The vertices are $V=\{1,2,3,4\}$, and edges $E=\{(1,2), (2,3), (2,4), (3,4)\}$. Note that for simplicity, we assume the graph to be undirected and hence don't add mirrored pairs like $(2,1)$. In application, vertices and edges can often have specific attributes, and edges can even be directed. **Adjacency Matrix**The **adjacency matrix** $A$ is a square matrix whose elements indicate whether pairs of vertices are adjacent, i.e. connected, or not. In the simplest case, $A_{ij}$ is 1 if there is a connection from node $i$ to $j$, and otherwise 0. If we have edge attributes or different categories of edges in a graph, this information can be added to the matrix as well. For an undirected graph, keep in mind that $A$ is a symmetric matrix ($A_{ij}=A_{ji}$). For the example graph above, we have the following adjacency matrix:$$A = \begin{bmatrix} 0 & 1 & 0 & 0\\ 1 & 0 & 1 & 1\\ 0 & 1 & 0 & 1\\ 0 & 1 & 1 & 0\end{bmatrix}$$Alternatively, we could also define a sparse adjacency matrix with which we can work as if it were a dense matrix, but which allows more memory-efficient operations. PyTorch supports this with the sub-package `torch.sparse` ([documentation](https://pytorch.org/docs/stable/sparse.html)). 1. Import libraries
###Code
import numpy as np
import scipy.sparse as sp
import torch
###Output
_____no_output_____
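###Markdown
As a quick sanity check of the adjacency-matrix idea above, here is the toy 4-node graph from the introduction built both as a dense NumPy array and as a sparse COO tensor (illustration only, using the `numpy`/`torch` imports above; the assignment data is loaded below):
###Code
# Toy graph from the introduction: V = {1,2,3,4}, E = {(1,2),(2,3),(2,4),(3,4)}
edges_toy = np.array([[0, 1], [1, 2], [1, 3], [2, 3]])  # 0-based node indices
A = np.zeros((4, 4), dtype=np.float32)
A[edges_toy[:, 0], edges_toy[:, 1]] = 1
A = A + A.T  # undirected graph -> symmetric adjacency matrix
print(A)
# The same matrix as a sparse COO tensor (torch.sparse)
indices = torch.tensor(np.vstack(np.nonzero(A)))
values = torch.ones(indices.shape[1])
A_sparse = torch.sparse_coo_tensor(indices, values, size=(4, 4))
print(A_sparse)
###Output
_____no_output_____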
###Markdown
2. Load features
###Code
path = '../input/aist4010-spring2022-a3/data/'
idx_features = np.loadtxt(path + "features.txt", dtype=np.dtype(str))
features = idx_features[:, 1:]
idx_features, features
idx_features.shape, features.shape
# Compressed Sparse Row matrix
features = sp.csr_matrix(features, dtype=np.float32) # features 2707 * 1433
###Output
_____no_output_____
###Markdown
3. Load Labels 1) Load train and val data
###Code
train_data = np.loadtxt(path + "train_labels.csv", delimiter=",", dtype=np.dtype(str))
train_idx, train_labels = train_data[1:, 0], train_data[1:, 1]
val_data = np.loadtxt(path + "val_labels.csv", delimiter=",", dtype=np.dtype(str))
val_idx, val_labels = val_data[1:, 0], val_data[1:, 1] # one-hot encoding labels 2708 * 7
train_idx[:10], train_labels[:10]
###Output
_____no_output_____
###Markdown
2) Load test idx
###Code
test_idx, _ = np.loadtxt(path + "test_idx.csv", delimiter=",", dtype=np.dtype(str), unpack = True)
test_idx = test_idx[1:]
all_idx = np.concatenate((train_idx, val_idx, test_idx), axis = 0)
test_idx.shape, all_idx.shape
###Output
_____no_output_____
###Markdown
3) One-hot encoding
###Code
def encode_onehot(labels):
classes = set(labels)
class_dict = {c:i for i, c in enumerate(classes)}
classes_onehot_dict = {c: np.identity(len(classes))[i, :] for i, c in
enumerate(classes)}
labels_onehot = np.array(list(map(classes_onehot_dict.get, labels)),
dtype=np.int32)
return labels_onehot, class_dict
train_labels, class_dict = encode_onehot(train_labels)
val_labels, _ = encode_onehot(val_labels)
class_dict
###Output
_____no_output_____
###Markdown
4. Build graph 1) Load nodes
###Code
idx = np.array(idx_features[:, 0], dtype=np.int32) # nodes names 2707
idx_map = {j: i for i, j in enumerate(idx)} # nodes mapping 'names' : 'idx'
dict(list(idx_map.items())[:10])
###Output
_____no_output_____
###Markdown
2) Load edges
###Code
edges_unordered = np.genfromtxt(path + "edges.txt", dtype=np.int32) # node1, node2
edges = np.array(list(map(idx_map.get, edges_unordered.flatten())),
dtype=np.int32).reshape(edges_unordered.shape) # node_idx1, node_idx2 5427 * 2
edges.shape, edges[:10]
###Output
_____no_output_____
###Markdown
3) Build adjacency matrix
###Code
# build graph
# A sparse matrix in COOrdinate format.
adj = sp.coo_matrix((np.ones(edges.shape[0]), (edges[:, 0], edges[:, 1])),
shape=(all_idx.shape[0], all_idx.shape[0]),
dtype=np.float32) # adjacency matrix 2707 * 2707
# build symmetric adjacency matrix
adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj) # symmetric adjacency matrix
adj
###Output
_____no_output_____
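###Markdown
The expression `adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)` takes the elementwise maximum of $A$ and $A^T$: wherever $A^T_{ij} > A_{ij}$, the mirrored entry is added and the smaller original entry removed, so every directed edge gains its reverse counterpart and the resulting adjacency matrix is symmetric.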
###Markdown
4) Normalize
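Row-normalization divides each row by its sum: with added self-loops this computes $\tilde{A} = \tilde{D}^{-1}(A + I)$, where $\tilde{D}_{ii} = \sum_j (A + I)_{ij}$ is the degree matrix of the graph with self-loops. The same row-normalization is applied to the feature matrix so that each nonzero row sums to one.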
###Code
def normalize(mx):
"""Row-normalize sparse matrix"""
rowsum = np.array(mx.sum(1))
r_inv = np.power(rowsum, -1).flatten()
r_inv[np.isinf(r_inv)] = 0.
r_mat_inv = sp.diags(r_inv)
mx = r_mat_inv.dot(mx)
return mx
# normalize
features_n = normalize(features)
adj_n = normalize(adj + sp.eye(adj.shape[0]))
###Output
_____no_output_____ |
Datasets/.ipynb_checkpoints/Data Cleaning(19-20)-checkpoint.ipynb | ###Markdown
Creating the OPPORTUNITIES Table from LEADS
###Code
Leads = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Leads(2019-20).csv")
Opp = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Opportunities.csv")
# Leads_1 = Leads.dropna(how='all', axis='columns')
Opp['Lead_ID'] = Leads['Lead_ID']
Opp['Product_Name'] = Leads['Product_Name']
Opp['Product_ID'] = Leads['Product_ID']
Opp['Email_address'] = Leads['Email_address']
Opp['Product_Name'].unique()
Opp.drop(['Product_ID'], axis = 1, inplace = True)
Opp.head(10)
# def random_dates(start, end, n=10):
# start_u = start.value//10**9
# end_u = end.value//10**9
# return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
# start = pd.to_datetime('2019-01-01')
# end = pd.to_datetime('2020-01-01')
# random_dates(start, end)
###Output
c:\users\jaswinder singh\appdata\local\programs\python\python38\lib\site-packages\pandas\core\frame.py:4163: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().drop(
###Markdown
Product ID Issue
###Code
# df = pd.DataFrame({'Type':list('ABBC'), 'Set':list('ZZXY')})
# conditions = [
# (df['Set'] == 'Z') & (df['Type'] == 'A'),
# (df['Set'] == 'Z') & (df['Type'] == 'B'),
# (df['Type'] == 'B')]
# choices = ['yellow', 'blue', 'purple']
# df['color'] = np.select(conditions, choices, default='black')
# print(df)
Opp['Product_Name'].unique()
def product_id(row):
if row["Product_Name"] == "Proxima-C":
return "PRO-23-0493"
elif row["Product_Name"] == "Kits Dragon":
return "KTD-32-3231"
elif row["Product_Name"] == "Phoenix":
return "PHO-52-1928"
elif row["Product_Name"] == "Sirius":
return "SIR-10-0293"
elif row["Product_Name"] == "Aurora":
return "AUR-67-4989"
elif row["Product_Name"] == "Apollo":
return "APO-09-8723"
elif row["Product_Name"] == "Agyrap-S":
return "AGY-90-2818"
else:
return "ANH-02-0987"
Opp = Opp.assign(Product_ID = Opp.apply(product_id, axis = 1))
Opp.head(10)
Leads['Product_ID'] = Opp['Product_ID']
Leads.head(10)
Leads.head(10)
def random_dates(start, end, n=1000):
start_u = start.value//10**9
end_u = end.value//10**9
return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
start = pd.to_datetime('2019-10-31')
end = pd.to_datetime('2020-11-01')
Leads['Lead_Created_on'] = random_dates(start, end)
Leads.head(10)
Leads.info()
Opp['Created_on'] = Leads['Lead_Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,20), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
#Opp['Created_on'] = Opp['Created_on'].dt.strftime("%d/%m/%Y")
Opp.info()
Opp['Days_Diff'] = Opp['Created_on'] - Leads['Lead_Created_on']
Opp.head(10)
#The opportunity close date should be 5-10 days after it is created
Opp['Close_Date'] = Opp['Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,10), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Opp.head(10)
Opp.rename(columns = {'Total Price(Euros)':'Sales_Price(EUR)',}, inplace = True)
Opp.head(10)
Opp['Actual_Revenue'] = randint(0,100)
Opp.head(10)
#Actual Revenue Column: Actual revenue is equal to the quantity*price of the product
Proxima_C = 50
Kits_Dragon = 40
Phoenix = 48
Sirius = 60
Aurora = 65
Apollo = 70
Agyrap_S = 75
Anhee_C = 65
Opp['Actual_Revenue'] = Opp['Actual_Revenue'].map(lambda a: (Proxima_C*randint(0,5))+(Kits_Dragon*randint(0,5))+(Phoenix*randint(0,5))+(Sirius*randint(0,5))+(Aurora*randint(0,2))+(Apollo*randint(0,2))+(Agyrap_S*randint(0,5))+(Anhee_C*randint(0,5)))
# product_names = Opp['Product_Name'].unique()
# print(product_names)
# Opp1.set_index('Product_Name', inplace=True)
# Opp1.head()
# Opp1.loc[['Proxima-C']]
# def actual_rev():
# if Opp1.loc[['Proxima-C']] :
# Opp['Actual_Revenue'] = 50*randint(0,5)
# elif Opp1.loc[['Kits Dragon']]:
# Opp['Actual_Revenue']= 40*randint(0,5)
# elif Opp1.loc[['Phoenix']]:
# Opp['Actual_Revenue'] = 48*randint(0,5)
# elif Opp1.loc[['Sirius']]:
# Opp['Actual_Revenue'] = 60*randint(0,5)
# elif Opp.loc[['Aurora']]:
# Opp['Actual_Revenue'] = 65*randint(0,5)
# elif Opp1.loc[['Apollo']]:
# Opp['Actual_Revenue'] = 70*randint(0,5)
# elif Opp1.loc[['Agyrap-S']]:
# Opp['Actual_Revenue'] = 75*randint(0,5)
# elif Opp1.loc[['Anhee-C']]:
# Opp['Actual_Revenue'] = 65*randint(0,5)
# Opp['Actual_Revenue'] = actual_rev()
# Opp.head(10)
# Opp['Actual_Revenue'] = actual_rev(product_names)
Opp.head(20)
# Estimated Revenue: 0.75 to 1.5 times Actual Revenue
# Need to think about this value
Opp['Estimated_Revenue'] = Opp['Actual_Revenue'].map(lambda a: int(a*uniform(0.75, 1.5)))
Opp.head(10)
Desc = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Description.csv')
Desc.head(10)
Opp['Description'] = Desc['Description']
#Rating: Status Won => Hot
# Status Open => 50% Hot, 25% Warm, 25% Cold
# Status Lost => Cold
Opp['Rating'] = Opp['Description'].map(lambda a: 'Hot' if str(a)=='Won' else ('Warm' if str(a)=='Open' else 'Cold'))
# Opp['Rating'] = Opp['Description'].map(lambda a: 'Cold' if str(a)=='Lost' else "")
Opp.head(10)
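# A sketch of the "Open => 50% Hot / 25% Warm / 25% Cold" rule from the comment
# above; the lambda above instead maps every 'Open' row to 'Warm'. The helper
# 'rating_from_status' is hypothetical and left unused so the generated data is unchanged.
def rating_from_status(status):
    if status == 'Won':
        return 'Hot'
    if status == 'Lost':
        return 'Cold'
    return np.random.choice(['Hot', 'Warm', 'Cold'], p=[0.5, 0.25, 0.25])
# Opp['Rating'] = Opp['Description'].map(rating_from_status)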
#Probability Column
prob = [0.95,0.90,0.85,0.80,0.75,0.70,0.65,0.60,0.55,0.50,0.45,0.40,0.35,0.30,0.25,0.20,0.15,0.10,0.05,0]
def ProbImpute(Status, Rating):
if Status=='Won':
return prob[randint(0,3)]
elif Status=='Open' and Rating=='Hot':
return prob[randint(4,9)]
elif Status=='Open' and Rating=='Warm':
return prob[randint(10,13)]
elif Status=='Open' and Rating=='Cold':
return prob[randint(14,17)]
else:
return prob[randint(18,19)]
Opp['Probability'] = Opp.apply(lambda a: ProbImpute(a['Description'],a['Rating']),axis=1)
Opp.head(10)
Opp['Product_Name'] = Leads['Product_Name']
Opp.head(10)
Opp.columns.values
Opp = Opp[['Lead_ID', 'Opportunity_ID', 'Product_Name', 'Product_ID', 'Email_address', 'Created_on', 'Close_Date', 'Estimated_Revenue', 'Actual_Revenue', 'Description', 'Rating', 'Probability', 'Last_Modified_By']]
Opp.head(10)
Opp.to_csv("Opportunities-Final.csv")
###Output
_____no_output_____
###Markdown
ACCOUNTS Table
###Code
Acc = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Accounts(2019-20).csv', delimiter = ',')
# Acc['City'].unique()
Acc['Lead_ID'] = Opp['Lead_ID']
Acc['Opportunity_ID'] = Opp['Opportunity_ID']
Acc['Full_Name'] = Leads['Full_Name']
Acc['Email_address'] = Opp['Email_address']
Acc.head(10)
Acc = Acc[['Lead_ID', 'Opportunity_ID', 'Account_ID', 'Full_Name', 'City', 'Phone', 'Email_address', 'Status']]
Acc1 = pd.read_excel("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Final final datasets/Accounts_Final(2019-20).xlsx")
Acc1.head(10)
###Output
_____no_output_____
###Markdown
QUOTES Table
###Code
Quo = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Quotes.csv')
Quo['Lead_ID'] = Acc['Lead_ID']
Quo['Opportunity_ID'] = Acc['Opportunity_ID']
Quo['Account_ID'] = Acc['Account_ID']
Quo['Product_Name'] = Opp['Product_Name']
Quo['Product_ID'] = Opp['Product_ID']
Quo['Product_Category'] = Leads['Product_Category']
Quo['Actual_Revenue'] = Opp['Actual_Revenue']
Quo['Email_address'] = Acc['Email_address']
Quo['Status'] = Acc['Status']
Quo['Created_On'] = Opp['Close_Date'].map(lambda a: a + pd.DateOffset(days=randint(10,25), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Quo.head(10)
Quo = Quo[['Lead_ID', 'Opportunity_ID', 'Account_ID', 'Quote_ID', 'Product_Name', 'Product_ID', 'Product_Category', 'Actual_Revenue', 'Email_address', 'Status' ]]
Quo.to_csv('Quotes_Final(2019-20).csv')
###Output
_____no_output_____
###Markdown
ORDERS Table
###Code
Od = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Orders.csv')
Od['Lead_ID'] = Quo['Lead_ID']
Od['Opportunity_ID'] = Quo['Opportunity_ID']
Od['Account_ID'] = Quo['Account_ID']
Od['Quote_ID'] = Quo['Quote_ID']
Od['Product_Name'] = Quo['Product_Name']
Od['Product_Category'] = Quo['Product_Category']
Od['Actual_Revenue'] = Quo['Actual_Revenue']
Od['Email_address'] = Quo['Email_address']
Od.head(10)
Od.to_csv('Orders_final(2019-20).csv')
print(Od['Product_Name'].unique())
print(Od['Product_Category'].unique())
###Output
['Proxima-C' 'Kits Dragon' 'Phoenix' 'Sirius' 'Aurora' 'Apollo' 'Agyrap-S'
'Anhee-C']
['Tech' 'Kitchen' 'Christmas' 'Knitting' 'Painting' 'Mystery Kit'
'Science' 'Craft']
###Markdown
Invoice Table
###Code
Inv = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/Invoices.csv")
Inv['Lead_ID'] = Od['Lead_ID']
Inv['Opportunity_ID'] = Od['Opportunity_ID']
Inv['Account_ID'] = Od['Account_ID']
Inv['Quote_ID'] = Od['Quote_ID']
Inv['Order_ID'] = Od['Order_ID']
Inv['Product_Name'] = Od['Product_Name']
Inv['Product_ID'] = Opp['Product_ID']
Inv['Actual_Revenue'] = Od['Actual_Revenue']
Inv['Email_address'] = Od['Email_address']
Inv['Phone_No'] = Acc1['Phone_No']
Inv.head(10)
###Output
_____no_output_____ |
Lectures/Lecture-09/FindingDisplacement.ipynb | ###Markdown
Testing displacement estimates using correlation between two images Import some libs
###Code
import numpy as np
import skimage as ski
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the data
###Code
img = plt.imread('2_S_day5.jpg');
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Locating a small ROI around a screw
###Code
plt.imshow(img[1600:1700,1100:1200])
###Output
_____no_output_____
###Markdown
Making two ROI images Extracting two images with a displacement of $d_{row}=50$ and $d_{col}=10$ and showing the result.
###Code
a=img[1600:1700,1100:1200]
b=img[1650:1750,1110:1210]
plt.subplot(1,2,1), plt.imshow(a)
plt.subplot(1,2,2), plt.imshow(b)
###Output
_____no_output_____
###Markdown
Correlation calculation- Compute the 2D FFT of the two images (they have to be the same size)- Compute $\mathcal{F}\{corr\}=\mathcal{F}\{a\} * \mathcal{F}\{b\}^*$- Compute corr=$|\mathcal{F}^{-1}\{\mathcal{F}\{corr\}\}|$
###Code
fa=np.fft.fft2(a);
fb=np.fft.fft2(b);
f=fa*np.conjugate(fb);
co=np.abs(np.fft.ifft2(f));
plt.imshow(np.abs(co))
plt.title('Correlation image between a and b');
###Output
_____no_output_____
###Markdown
Find the displacementLocate the max location in $corr$.
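Keep in mind that the FFT-based correlation is circular: a peak at index $r$ along an axis of length $N$ corresponds to a shift of $r$ if $r < N/2$, and to $r - N$ otherwise, so peak positions in the upper half of the range are usually read as negative displacements.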
###Code
pos = np.where(co == np.amax(co))
pos
###Output
_____no_output_____ |
_site/lectures/Week 03 - Functions, Loops, Comprehensions and Generators/05 - Python Generators.ipynb | ###Markdown
Python Generators[Source](https://realpython.com/introduction-to-python-generators/)todo : add content
###Code
def infinite_sequence():
num = 0
while True:
num += 1
return num
x = infinite_sequence()
print(x)
def infinite_sequence():
num = 0
while True:
yield num
num += 1
seq = infinite_sequence()
print(next(seq))
for i in range(0, 10):
print(next(seq))
for index, value in enumerate(seq):
print(value)
if index > 10:
break
print(next(seq))
import random
def my_sequence():
num = 0
while True:
yield num
num += random.randint(0, 10)
if num > 20:
break
seq = my_sequence()
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
print(next(seq))
seq2 = my_sequence()
print(next(seq2))
# Two sorted lists of patient IDs; we want the patients present in both.
image_db = [1, 2, 5, 7, 10]
meds_db = [3, 5, 7, 9]

def get_next_image_patient():
    yield from image_db

def get_next_med_patient():
    yield from meds_db

def get_next_patient():
    # Advance both sorted streams in lockstep, yielding IDs found in both.
    images, meds = get_next_image_patient(), get_next_med_patient()
    try:
        i, m = next(images), next(meds)
        while True:
            if i == m:
                yield i
                i, m = next(images), next(meds)
            elif i < m:
                i = next(images)
            else:
                m = next(meds)
    except StopIteration:
        return

for patient in get_next_patient():
    print(patient)  # do something with each matching patient (5 and 7 here)
###Output
_____no_output_____ |
02A_TensorFlow-Slim.ipynb | ###Markdown
TensorFlow-Slim [TensorFlow-Slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim) is a high-level API for building TensorFlow models. TF-Slim makes defining models in TensorFlow easier, cutting down on the number of lines required to define models and reducing overall clutter. In particular, TF-Slim shines in image domain problems, and weights pre-trained on the [ImageNet dataset](http://www.image-net.org/) for many famous CNN architectures are provided for [download](https://github.com/tensorflow/models/tree/master/slimpre-trained-models).*Note: Unlike previous notebooks, not every cell here is necessarily meant to run. Some are just for illustration.* VGG-16 To show these benefits, this tutorial will focus on [VGG-16](https://arxiv.org/abs/1409.1556). This style of architecture came in 2nd during the 2014 ImageNet Large Scale Visual Recognition Challenge and is famous for its simplicity and depth. The model looks like this:The architecture is pretty straight-forward: simply stack multiple 3x3 convolutional filters one after another, interleave with 2x2 maxpools, double the number of convolutional filters after each maxpool, flatten, and finish with fully connected layers. A couple ideas behind this model:- Instead of using larger filters, VGG notes that the receptive field of two stacked layers of 3x3 filters is 5x5, and with 3 layers, 7x7. Using 3x3's allows VGG to insert additional non-linearities and requires fewer weight parameters to learn.- Doubling the width of the network every time the features are spatially downsampled (maxpooled) gives the model more representational capacity while achieving spatial compression. TensorFlow Core In code, setting up the computation graph for prediction with just TensorFlow Core API is kind of a lot:
###Code
import tensorflow as tf
# Set up the data loading:
images, labels = ...
# Define the model
with tf.name_scope('conv1_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv1 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv1_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv1 = tf.nn.relu(bias, name=scope)
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1')
with tf.name_scope('conv2_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv2 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv2_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv2, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv2 = tf.nn.relu(bias, name=scope)
pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2')
with tf.name_scope('conv3_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv3 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv3_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv3 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv3_3') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv3 = tf.nn.relu(bias, name=scope)
pool3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3')
with tf.name_scope('conv4_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(pool3, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv4 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv4_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv4 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv4_3') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv4 = tf.nn.relu(bias, name=scope)
pool4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4')
with tf.name_scope('conv5_1') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(pool4, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv5 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv5_2') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv5 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv5_3') as scope:
kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME')
biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(conv, biases)
conv5 = tf.nn.relu(bias, name=scope)
pool5 = tf.nn.max_pool(conv5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5')
with tf.name_scope('fc_6') as scope:
flat = tf.reshape(pool5, [-1, 7*7*512])
weights = tf.Variable(tf.truncated_normal([7*7*512, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
mat = tf.matmul(flat, weights)
biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(mat, biases)
fc6 = tf.nn.relu(bias, name=scope)
fc6_drop = tf.nn.dropout(fc6, keep_prob=0.5, name='dropout')
with tf.name_scope('fc_7') as scope:
weights = tf.Variable(tf.truncated_normal([4096, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
mat = tf.matmul(fc6_drop, weights)
biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(mat, biases)
fc7 = tf.nn.relu(bias, name=scope)
fc7_drop = tf.nn.dropout(fc7, keep_prob=0.5, name='dropout')
with tf.name_scope('fc_8') as scope:
weights = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32, stddev=1e-1), name='weights')
mat = tf.matmul(fc7_drop, weights)
biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases')
bias = tf.nn.bias_add(mat, biases)
predictions = bias
###Output
_____no_output_____
###Markdown
Understanding every line of this model isn't important. The main point to notice is how much space this takes up. Several of the above lines (conv2d, bias_add, relu, maxpool) can obviously be combined to cut down on the size a bit, and you could also try to compress the code with some clever `for` looping, but all at the cost of sacrificing readability. With this much code, there is high potential for bugs or typos (to be honest, there are probably a few up there^), and modifying or refactoring the code becomes a huge pain.By the way, although VGG-16's paper was titled "Very Deep Convolutional Networks for Large-Scale Image Recognition", it isn't even considered a particularly deep network by today's standards. [Residual Networks](https://arxiv.org/abs/1512.03385) (2015) started beating state-of-the-art results with 50, 101, and 152 layers in their first incarnation, before really going off the deep end and getting up to 1001 layers and beyond. I'll spare you from me typing out the uncompressed TensorFlow Core code for that. TF-Slim Enter TF-Slim. The same VGG-16 model can be expressed as follows:
###Code
import tensorflow as tf
slim = tf.contrib.slim
# Set up the data loading:
images, labels = ...
# Define the model:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
weights_regularizer=slim.l2_regularizer(0.0005)):
net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')
net = slim.max_pool2d(net, [2, 2], scope='pool1')
net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool3')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
net = slim.max_pool2d(net, [2, 2], scope='pool4')
net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
net = slim.max_pool2d(net, [2, 2], scope='pool5')
net = slim.fully_connected(net, 4096, scope='fc6')
net = slim.dropout(net, 0.5, scope='dropout6')
net = slim.fully_connected(net, 4096, scope='fc7')
net = slim.dropout(net, 0.5, scope='dropout7')
net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
predictions = net
###Output
_____no_output_____
###Markdown
Much cleaner. For the TF-Slim version, it's much more obvious what the network is doing, writing it is faster, and typos and bugs are much less likely. Things to notice:- Weight and bias variables for every layer are automatically generated and tracked. Also, the "in_channel" parameter for determining weight dimension is automatically inferred from the input. This allows you to focus on what layers you want to add to the model, without worrying as much about boilerplate code. - The repeat() function allows you to add the same layer multiple times. In terms of variable scoping, repeat() will add "_" to the scope to distinguish the layers, so we'll still have layers of scope "`conv1_1`, `conv1_2`, `conv2_1`, etc..." (a hand-expanded sketch of this call follows the shorter example below).- The non-linear activation function (here: ReLU) is wrapped directly into the layer. In more advanced architectures with batch normalization, that's included as well.- With slim.arg_scope(), we're able to specify defaults for common parameter arguments, such as the type of activation function or weights_initializer. Of course, these defaults can still be overridden in any individual layer, as demonstrated in the final fully connected layer (fc8). If you're reusing one of the famous architectures (like VGG-16), TF-Slim already has them defined, so it becomes even easier:
###Code
import tensorflow as tf
slim = tf.contrib.slim
vgg = tf.contrib.slim.nets.vgg
# Set up the data loading:
images, labels = ...
# Define the model:
predictions = vgg.vgg16(images)
###Output
_____no_output_____
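###Markdown
To make the `slim.repeat` shorthand used above concrete, here is a rough hand-expansion (an illustrative sketch, not taken from the TF-Slim source; the exact variable scope names repeat() generates may differ, and `images` stands for any NHWC input tensor):
###Code
# Rough hand-expansion of: slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')
# slim.repeat threads the output of each call into the next one:
net = slim.conv2d(images, 64, [3, 3], scope='conv1_a')
net = slim.conv2d(net, 64, [3, 3], scope='conv1_b')
###Output
_____no_output_____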
###Markdown
Pre-Trained Weights TF-Slim provides weights pre-trained on the ImageNet dataset available for [download](https://github.com/tensorflow/models/tree/master/slim#pre-trained-models). First a quick tutorial on saving and restoring models: Saving and Restoring One of the nice features of modern machine learning frameworks is the ability to save model parameters in a clean way. While this may not have been a big deal for the MNIST logistic regression model because training only took a few seconds, it's easy to see why you wouldn't want to have to re-train a model from scratch every time you wanted to do inference or make a small change if training takes days or weeks. TensorFlow provides this functionality with its [Saver()](https://www.tensorflow.org/programmers_guide/variables#saving_and_restoring) class. While I just said that saving the weights for the MNIST logistic regression model isn't necessary because of how easy it is to train, let's do it anyway for illustrative purposes:
###Code
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create the model
x = tf.placeholder(tf.float32, [None, 784], name='x')
W = tf.Variable(tf.zeros([784, 10]), name='W')
b = tf.Variable(tf.zeros([10]), name='b')
y = tf.nn.bias_add(tf.matmul(x, W), b, name='y')
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10], name='y_')
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Variable Initializer
init_op = tf.global_variables_initializer()
# Create a Saver object for saving weights
saver = tf.train.Saver()
# Create a Session object, initialize all variables
sess = tf.Session()
sess.run(init_op)
# Train
for _ in trange(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Save model
save_path = saver.save(sess, "./log_reg_model.ckpt")
print("Model saved in file: %s" % save_path)
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
sess.close()
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
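###Markdown
Side note: the `saver.save(...)` call above writes a small set of files rather than a single one. With the standard TF1 `Saver` format you should see a `checkpoint` file plus `log_reg_model.ckpt.meta` (the graph definition), `log_reg_model.ckpt.index`, and `log_reg_model.ckpt.data-00000-of-00001` (the variable values); exact shard names may vary.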
###Markdown
Note the differences from what we worked with yesterday:- In lines 9-12, 15, there are now 'names' properties attached to certain ops and variables of the graph. There are many reasons to do this, but here, it will help us identify which variables are which when restoring. - In line 23, we create a Saver() object, and in line 35, we save the variables of the model to a checkpoint file. This will create a series of files containing our saved model. Otherwise, the code is more or less the same. To restore the model:
###Code
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create a Session object, initialize all variables
sess = tf.Session()
# Restore weights
saver = tf.train.import_meta_graph('./log_reg_model.ckpt.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
print("Model restored.")
graph = tf.get_default_graph()
x = graph.get_tensor_by_name("x:0")
y = graph.get_tensor_by_name("y:0")
y_ = graph.get_tensor_by_name("y_:0")
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
sess.close()
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Restoring parameters from ./log_reg_model.ckpt
Model restored.
Test accuracy: 0.916700005531311
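###Markdown
A small detail in the restore code: `get_tensor_by_name` expects a tensor name rather than an op name, which is why the names end in `:0`. The `:0` suffix selects the first output of the op with that name (an op can have several outputs, numbered `:0`, `:1`, and so on).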
###Markdown
Importantly, notice that we didn't have to retrain the model. Instead, the graph and all variable values were loaded directly from our checkpoint files. In this example, this probably takes just as long, but for more complex models, the utility of saving/restoring is immense. TF-Slim Model Zoo One of the biggest and most surprising unintended benefits of the ImageNet competition was deep networks' transfer learning properties: CNNs trained on ImageNet classification could be re-used as general purpose feature extractors for other tasks, such as object detection. Training on ImageNet is very intensive and expensive in both time and computation, and requires a good deal of set-up. As such, the availability of weights already pre-trained on ImageNet has significantly accelerated and democratized deep learning research.Pre-trained models of several famous architectures are listed in the TF Slim portion of the [TensorFlow repository](https://github.com/tensorflow/models/tree/master/slimpre-trained-models). Also included are the papers that proposed them and their respective performances on ImageNet. Side note: remember though that accuracy is not the only consideration when picking a network; memory and speed are important to keep in mind as well.Each entry has a link that allows you to download the checkpoint file of the pre-trained network. Alternatively, you can download the weights as part of your program. A tutorial can be found [here](https://github.com/tensorflow/models/blob/master/slim/slim_walkthrough.ipynb), but the general idea:
###Code
from datasets import dataset_utils
import tensorflow as tf
url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
checkpoints_dir = './checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
import os
import tensorflow as tf
from nets import vgg
slim = tf.contrib.slim
# Load images
images = ...
# Pre-process
processed_images = ...
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(vgg.vgg_arg_scope()):
logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False)
probabilities = tf.nn.softmax(logits)
# Load checkpoint values
init_fn = slim.assign_from_checkpoint_fn(
os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
slim.get_model_variables('vgg_16'))
###Output
_____no_output_____
###Markdown
TensorFlow-Slim [TensorFlow-Slim](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim) is a high-level API for building TensorFlow models. TF-Slim makes defining models in TensorFlow easier, cutting down on the number of lines required to define models and reducing overall clutter. In particular, TF-Slim shines in image domain problems, and weights pre-trained on the [ImageNet dataset](http://www.image-net.org/) for many famous CNN architectures are provided for [download](https://github.com/tensorflow/models/tree/master/slimpre-trained-models).*Note: Unlike previous notebooks, not every cell here is necessarily meant to run. Some are just for illustration.* VGG-16 To show these benefits, this tutorial will focus on [VGG-16](https://arxiv.org/abs/1409.1556). This style of architecture came in 2nd during the 2014 ImageNet Large Scale Visual Recognition Challenge and is famous for its simplicity and depth. The model looks like this:The architecture is pretty straight-forward: simply stack multiple 3x3 convolutional filters one after another, interleave with 2x2 maxpools, double the number of convolutional filters after each maxpool, flatten, and finish with fully connected layers. A couple ideas behind this model:- Instead of using larger filters, VGG notes that the receptive field of two stacked layers of 3x3 filters is 5x5, and with 3 layers, 7x7. Using 3x3's allows VGG to insert additional non-linearities and requires fewer weight parameters to learn.- Doubling the width of the network every time the features are spatially downsampled (maxpooled) gives the model more representational capacity while achieving spatial compression. TensorFlow Core In code, setting up the computation graph for prediction with just TensorFlow Core API is kind of a lot:
###Code
import tensorflow as tf
# Set up the data loading:
images, labels = ...
# Define the model
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 3, 64], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv1_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 64], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
pool1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1')
with tf.name_scope('conv2_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv2_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 128], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv2, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(bias, name=scope)
pool2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2')
with tf.name_scope('conv3_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 128, 256], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv3 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv3_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv3 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv3_3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv3 = tf.nn.relu(bias, name=scope)
pool3 = tf.nn.max_pool(conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool3')
with tf.name_scope('conv4_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool3, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv4_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv4_3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)
pool4 = tf.nn.max_pool(conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool4')
with tf.name_scope('conv5_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(pool4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv5_2') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)
with tf.name_scope('conv5_3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 512, 512], dtype=tf.float32, stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(conv5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[512], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)
pool5 = tf.nn.max_pool(conv5, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool5')
with tf.name_scope('fc_6') as scope:
    flat = tf.reshape(pool5, [-1, 7*7*512])
    weights = tf.Variable(tf.truncated_normal([7*7*512, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
    mat = tf.matmul(flat, weights)
    biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(mat, biases)
    fc6 = tf.nn.relu(bias, name=scope)
    fc6_drop = tf.nn.dropout(fc6, keep_prob=0.5, name='dropout')
with tf.name_scope('fc_7') as scope:
    weights = tf.Variable(tf.truncated_normal([4096, 4096], dtype=tf.float32, stddev=1e-1), name='weights')
    mat = tf.matmul(fc6_drop, weights)  # use the dropout output from fc_6
    biases = tf.Variable(tf.constant(0.0, shape=[4096], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(mat, biases)
    fc7 = tf.nn.relu(bias, name=scope)
    fc7_drop = tf.nn.dropout(fc7, keep_prob=0.5, name='dropout')
with tf.name_scope('fc_8') as scope:
    weights = tf.Variable(tf.truncated_normal([4096, 1000], dtype=tf.float32, stddev=1e-1), name='weights')
    mat = tf.matmul(fc7_drop, weights)  # use the dropout output from fc_7
    biases = tf.Variable(tf.constant(0.0, shape=[1000], dtype=tf.float32), trainable=True, name='biases')
    bias = tf.nn.bias_add(mat, biases)
    predictions = bias
###Output
_____no_output_____
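###Markdown
As a quick aside, the parameter-count claim from the introduction (stacked 3x3 filters vs. one larger filter) is easy to verify with a little arithmetic. Below is a minimal sketch in plain Python; the channel count `C` is just an illustrative assumption, not a value taken from the VGG paper:
###Code
# Learnable weights for stacked 3x3 convs vs. a single larger filter,
# assuming C input channels and C output channels everywhere (biases ignored).
C = 256
stacked_two_3x3 = 2 * (3 * 3 * C * C)    # two 3x3 layers -> 5x5 receptive field
single_5x5 = 5 * 5 * C * C
stacked_three_3x3 = 3 * (3 * 3 * C * C)  # three 3x3 layers -> 7x7 receptive field
single_7x7 = 7 * 7 * C * C
print(stacked_two_3x3, "vs", single_5x5)      # 1179648 vs 1638400
print(stacked_three_3x3, "vs", single_7x7)    # 1769472 vs 3211264
###Output
_____no_output_____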
###Markdown
Understanding every line of the VGG model above isn't important. The main point to notice is how much space it takes up. Several of the lines above (conv2d, bias_add, relu, maxpool) can obviously be combined to cut down on the size a bit, and you could also try to compress the code with some clever `for` looping, but only at the cost of sacrificing readability. With this much code, there is high potential for bugs or typos (to be honest, there are probably a few up there^), and modifying or refactoring the code becomes a huge pain.By the way, although VGG-16's paper was titled "Very Deep Convolutional Networks for Large-Scale Image Recognition", it isn't even considered a particularly deep network by today's standards. [Residual Networks](https://arxiv.org/abs/1512.03385) (2015) started beating state-of-the-art results with 50, 101, and 152 layers in their first incarnation, before really going off the deep end and getting up to 1001 layers and beyond. I'll spare you the uncompressed TensorFlow Core code for that. TF-Slim Enter TF-Slim. The same VGG-16 model can be expressed as follows:
###Code
import tensorflow as tf
slim = tf.contrib.slim
# Set up the data loading:
images, labels = ...
# Define the model:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')
    net = slim.max_pool2d(net, [2, 2], scope='pool1')
    net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
    net = slim.max_pool2d(net, [2, 2], scope='pool4')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
    net = slim.max_pool2d(net, [2, 2], scope='pool5')
    net = slim.fully_connected(net, 4096, scope='fc6')
    net = slim.dropout(net, 0.5, scope='dropout6')
    net = slim.fully_connected(net, 4096, scope='fc7')
    net = slim.dropout(net, 0.5, scope='dropout7')
    net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
predictions = net
###Output
_____no_output_____
###Markdown
Much cleaner. For the TF-Slim version, it's much more obvious what the network is doing, writing it is faster, and typos and bugs are much less likely.Things to notice:- Weight and bias variables for every layer are automatically generated and tracked. Also, the "in_channel" parameter for determining weight dimension is automatically inferred from the input. This allows you to focus on what layers you want to add to the model, without worrying as much about boilerplate code. - The repeat() function allows you to add the same layer multiple times. In terms of variable scoping, repeat() will add "_" to the scope to distinguish the layers, so we'll still have layers of scope "`conv1_1`, `conv1_2`, `conv2_1`, etc...".- The non-linear activation function (here: ReLU) is wrapped directly into the layer. In more advanced architectures with batch normalization, that's included as well.- With slim.arg_scope(), we're able to specify defaults for common parameter arguments, such as the type of activation function or weights_initializer. Of course, these defaults can still be overridden in any individual layer, as demonstrated in the final fully connected layer (fc8).If you're reusing one of the famous architectures (like VGG-16), TF-Slim already has them defined, so it becomes even easier:
###Code
import tensorflow as tf
slim = tf.contrib.slim
vgg = tf.contrib.slim.nets.vgg
# Set up the data loading:
images, labels = ...
# Define the model:
predictions, _ = vgg.vgg_16(images)
###Output
_____no_output_____
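###Markdown
One detail worth flagging about the `weights_regularizer` default set in the `arg_scope` example above: slim doesn't silently add the L2 penalties to your loss; each layer registers its penalty in a graph collection, and the training objective has to pick them up explicitly. A minimal sketch of how that might look (the `cross_entropy` term is a stand-in for whatever task loss you define):
###Code
import tensorflow as tf
slim = tf.contrib.slim
# Task loss (placeholder -- define it from your own labels/logits)
cross_entropy = ...
# Every slim layer created with a weights_regularizer added its penalty to this collection:
reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
total_loss = cross_entropy + tf.add_n(reg_losses)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(total_loss)
###Output
_____no_output_____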
###Markdown
Pre-Trained Weights TF-Slim provides weights pre-trained on the ImageNet dataset available for [download](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). First a quick tutorial on saving and restoring models: Saving and Restoring One of the nice features of modern machine learning frameworks is the ability to save model parameters in a clean way. While this may not have been a big deal for the MNIST logistic regression model because training only took a few seconds, it's easy to see why you wouldn't want to have to re-train a model from scratch every time you wanted to do inference or make a small change if training takes days or weeks.TensorFlow provides this functionality with its [Saver()](https://www.tensorflow.org/programmers_guide/variables#saving_and_restoring) class. While I just said that saving the weights for the MNIST logistic regression model isn't necessary because of how easy it is to train, let's do it anyway for illustrative purposes:
###Code
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create the model
x = tf.placeholder(tf.float32, [None, 784], name='x')
W = tf.Variable(tf.zeros([784, 10]), name='W')
b = tf.Variable(tf.zeros([10]), name='b')
y = tf.nn.bias_add(tf.matmul(x, W), b, name='y')
# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10], name='y_')
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Variable Initializer
init_op = tf.global_variables_initializer()
# Create a Saver object for saving weights
saver = tf.train.Saver()
# Create a Session object, initialize all variables
sess = tf.Session()
sess.run(init_op)
# Train
for _ in trange(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# Save model
save_path = saver.save(sess, "./log_reg_model.ckpt")
print("Model saved in file: %s" % save_path)
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
sess.close()
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
###Markdown
Note the differences from what we worked with yesterday:- In lines 9-12, 15, there are now `name` properties attached to certain ops and variables of the graph. There are many reasons to do this, but here, it will help us identify which variables are which when restoring. - In line 23, we create a Saver() object, and in line 35, we save the variables of the model to a checkpoint file. This will create a series of files containing our saved model.Otherwise, the code is more or less the same.To restore the model:
###Code
import tensorflow as tf
from tqdm import trange
from tensorflow.examples.tutorials.mnist import input_data
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Create a Session object, initialize all variables
sess = tf.Session()
# Restore weights
saver = tf.train.import_meta_graph('./log_reg_model.ckpt.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
print("Model restored.")
graph = tf.get_default_graph()
x = graph.get_tensor_by_name("x:0")
y = graph.get_tensor_by_name("y:0")
y_ = graph.get_tensor_by_name("y_:0")
# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print('Test accuracy: {0}'.format(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
sess.close()
###Output
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
INFO:tensorflow:Restoring parameters from ./log_reg_model.ckpt
Model restored.
Test accuracy: 0.916700005531311
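###Markdown
Beyond restoring the whole graph, it's sometimes useful to peek inside a checkpoint and see exactly what was saved. A small sketch using the TF 1.x checkpoint reader, pointed at the checkpoint written above:
###Code
import tensorflow as tf
# List every variable stored in the checkpoint along with its shape
reader = tf.train.NewCheckpointReader('./log_reg_model.ckpt')
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)   # e.g. W [784, 10], b [10]
###Output
_____no_output_____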
###Markdown
Importantly, notice that we didn't have to retrain the model. Instead, the graph and all variable values were loaded directly from our checkpoint files. In this example, this probably takes just as long, but for more complex models, the utility of saving/restoring is immense. TF-Slim Model Zoo One of the biggest and most surprising unintended benefits of the ImageNet competition was deep networks' transfer learning properties: CNNs trained on ImageNet classification could be re-used as general purpose feature extractors for other tasks, such as object detection. Training on ImageNet is very intensive and expensive in both time and computation, and requires a good deal of set-up. As such, the availability of weights already pre-trained on ImageNet has significantly accelerated and democratized deep learning research.Pre-trained models of several famous architectures are listed in the TF-Slim portion of the [TensorFlow models repository](https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models). Also included are the papers that proposed them and their respective performances on ImageNet. Side note: remember though that accuracy is not the only consideration when picking a network; memory and speed are important to keep in mind as well.Each entry has a link that allows you to download the checkpoint file of the pre-trained network. Alternatively, you can download the weights as part of your program. A tutorial can be found [here](https://github.com/tensorflow/models/blob/master/research/slim/slim_walkthrough.ipynb), but the general idea:
###Code
from datasets import dataset_utils
import tensorflow as tf
url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz"
checkpoints_dir = './checkpoints'
if not tf.gfile.Exists(checkpoints_dir):
    tf.gfile.MakeDirs(checkpoints_dir)
dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir)
import os
import tensorflow as tf
from nets import vgg
slim = tf.contrib.slim
# Load images
images = ...
# Pre-process
processed_images = ...
# Create the model, use the default arg scope to configure the batch norm parameters.
with slim.arg_scope(vgg.vgg_arg_scope()):
    logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False)
probabilities = tf.nn.softmax(logits)
# Load checkpoint values
init_fn = slim.assign_from_checkpoint_fn(
    os.path.join(checkpoints_dir, 'vgg_16.ckpt'),
    slim.get_model_variables('vgg_16'))
###Output
_____no_output_____
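###Markdown
The `init_fn` returned by `slim.assign_from_checkpoint_fn` above is just a function that takes a session and assigns the downloaded weights to the model variables. A minimal sketch of how it might be used for inference, assuming `processed_images` and `probabilities` are defined as in the previous cell:
###Code
import tensorflow as tf
with tf.Session() as sess:
    init_fn(sess)                    # load the pre-trained VGG-16 weights from vgg_16.ckpt
    probs = sess.run(probabilities)  # forward pass; in practice, feed real image batches
    print(probs.shape)               # e.g. (batch_size, 1000) ImageNet class probabilities
###Output
_____no_output_____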
Data Science Academy/Cap06/Notebooks/DSA-Python-Cap06-02-Insert no SQLite.ipynb
###Markdown
Data Science Academy - Python Fundamentos - Chapter 6 Download: http://github.com/dsacademybr
###Code
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
###Output
Python version used in this Jupyter Notebook: 3.7.6
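###Markdown
Since the next cells work with SQLite, it can also be handy to check the version of the `sqlite3` module and of the underlying SQLite library (a small optional sketch):
###Code
import sqlite3
# DB-API module version and the version of the SQLite library it wraps
print('sqlite3 module version:', sqlite3.version)
print('SQLite library version:', sqlite3.sqlite_version)
###Output
_____no_output_____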
###Markdown
Creating the Database and Inserting Data
###Code
# Remove the SQLite database file (if it already exists)
import os
os.remove("dsa.db") if os.path.exists("dsa.db") else None
import sqlite3
# Create a connection
conn = sqlite3.connect('dsa.db')
# Create a cursor
c = conn.cursor()
# Function to create a table
def create_table():
    c.execute('CREATE TABLE IF NOT EXISTS produtos(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, date TEXT, '\
              'prod_name TEXT, valor REAL)')
# Function to insert a row
def data_insert():
    c.execute("INSERT INTO produtos VALUES(10, '2020-05-02 14:32:11', 'Teclado', 90)")
    conn.commit()
    c.close()
    conn.close()
# Create the table
create_table()
# Insert the data
data_insert()
###Output
_____no_output_____
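###Markdown
To confirm the insert worked, a short follow-up sketch that reopens the database created above and reads the row back:
###Code
import sqlite3
# Reopen the database and fetch the inserted row
conn = sqlite3.connect('dsa.db')
c = conn.cursor()
c.execute("SELECT * FROM produtos")
for row in c.fetchall():
    print(row)   # e.g. (10, '2020-05-02 14:32:11', 'Teclado', 90.0)
c.close()
conn.close()
###Output
_____no_output_____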